Measurement and Statistics on Science and Technology
How do we objectively measure scientific activities? What proportion of economic activities should a society devote to research and development? How can public-sector and private-sector research best be directed to achieve social goals? Governments and researchers from industrial countries have been trying to answer these questions for more than eighty years. This book provides the first comprehensive account of the attempts to measure science and technology activities in Western countries, and of the successes and shortcomings of the measurement systems devised along the way.

Godin guides readers through the historical moments that led to the development of statistics on science and technology, examines the socio-political dynamics behind the activities of science measurement and the philosophical and ideological conceptions that drove it, and provides a thorough account of which statistics and indicators were developed and how they were used.

The first portion of Measurement and Statistics on Science and Technology concentrates on the construction and development of the statistics from 1920 to the present, the principles at work, and the vested interests and forces behind that construction. The second part analyzes the end uses of these statistics and how users deployed them to document their case or to promote their political agenda. Godin's enlightening account and thoughtful analysis will be of interest to students and academics investigating science measurement as well as policy-makers working in this burgeoning field.

Benoît Godin is professor at INRS in Montreal, Canada. He holds a DPhil in science policy from the University of Sussex (UK) and has written extensively on science policy, research evaluation, science indicators and statistics, publishing many articles in the major international journals.
Routledge Studies in the History of Science, Technology and Medicine
Edited by John Krige
Georgia Institute of Technology, Atlanta, USA
Routledge Studies in the History of Science, Technology and Medicine aims to stimulate research in the field, concentrating on the twentieth century. It seeks to contribute to our understanding of science, technology and medicine as they are embedded in society, exploring the links between the subjects on the one hand and the cultural, economic, political, and institutional contexts of their genesis and development on the other. Within this framework, and while not favoring any particular methodological approach, the series welcomes studies which examine relations between science, technology, medicine, and society in new ways, for example, the social construction of technologies, large technical systems, etc.

1 Technological Change
Methods and themes in the history of technology
Edited by Robert Fox

2 Technology Transfer out of Germany after 1945
Edited by Matthias Judt and Burghard Ciesla

3 Entomology, Ecology and Agriculture
The making of scientific careers in North America, 1885–1985
Paolo Palladino

4 The Historiography of Contemporary Science and Technology
Edited by Thomas Söderquist

5 Science and Spectacle
The work of Jodrell Bank in post-war British culture
Jon Agar

6 Molecularizing Biology and Medicine
New practices and alliances, 1910s–1970s
Edited by Soraya de Chadarevian and Harmke Kamminga
7 Cold War, Hot Science
Applied research in Britain's defence laboratories 1945–1990
Edited by Robert Bud and Philip Gummett

8 Planning Armageddon
Britain, the United States and the command of Western Nuclear Forces 1945–1964
Stephen Twigge and Len Scott

9 Cultures of Control
Edited by Miriam R. Levin

10 Science, Cold War and the American State
Lloyd V. Berkner and the balance of professional ideals
Alan A. Needell

11 Reconsidering Sputnik
Forty years since the Soviet satellite
Edited by Roger D. Launius

12 Crossing Boundaries, Building Bridges
Comparing the history of women engineers, 1870s–1990s
Edited by Annie Canel, Ruth Oldenziel and Karin Zachmann

13 Changing Images in Mathematics
From the French Revolution to the new millennium
Edited by Umberto Bottazzini and Amy Dahan Dalmedico

14 Heredity and Infection
The history of disease transmission
Edited by Jean-Paul Gaudillière and Ilana Löwy

15 The Analogue Alternative
The electric analogue computer in Britain and the USA, 1930–1975
James S. Small

16 Instruments, Travel and Science
Itineraries of precision from the seventeenth to the twentieth century
Edited by Marie-Noëlle Bourguet, Christian Licoppe and H. Otto Sibum

17 The Fight Against Cancer
France, 1890–1940
Patrice Pinell
18 Collaboration in the Pharmaceutical Industry
Changing relationships in Britain and France, 1935–1965
Viviane Quirke

19 Classical Genetic Research and Its Legacy
The mapping cultures of twentieth-century genetics
Edited by Hans-Jörg Rheinberger and Jean-Paul Gaudillière

20 From Molecular Genetics to Genomics
The mapping cultures of twentieth-century genetics
Edited by Jean-Paul Gaudillière and Hans-Jörg Rheinberger

21 Interferon
The science and selling of a miracle drug
Toine Pieters

22 Measurement and Statistics on Science and Technology
1920 to the present
Benoît Godin

Also published by Routledge in hardback and paperback:

Science and Ideology
A comparative history
Mark Walker
Measurement and Statistics on Science and Technology 1920 to the present Benoît Godin
First published 2005 by Routledge
2 Park Square, Milton Park, Abingdon, Oxon OX14 4RN
Simultaneously published in the USA and Canada by Routledge
270 Madison Ave, New York, NY 10016
Routledge is an imprint of the Taylor & Francis Group

This edition published in the Taylor & Francis e-Library, 2005.
"To purchase your own copy of this or any of Taylor & Francis or Routledge's collection of thousands of eBooks please go to www.eBookstore.tandf.co.uk."

© 2005 Benoît Godin

All rights reserved. No part of this book may be reprinted or reproduced or utilised in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library

Library of Congress Cataloging in Publication Data
A catalog record for this book has been requested
ISBN 0-203-48152-6 Master e-book ISBN
ISBN 0-203-67208-9 (Adobe eReader Format) ISBN 0–415–32849–7 (Print Edition)
In memoriam Keith Pavitt
If we take in our hand any volume of divinity or school metaphysics, for instance, let us ask: does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact or existence? No. Commit it then to the flames, for it can contain nothing but sophistry and illusion. (D. Hume (1748), An Enquiry Concerning Human Understanding)
Contents
Foreword xiv
Acknowledgments xxi

Introduction 1

PART I
Constructing science and technology statistics 13
  Naming and defining 13
  Classifying 15
  Measuring 16

SECTION I
The number makers 19

1 Eighty years of science and technology statistics 21
  The forerunners 21
  The OECD 32
  The European Commission 42
  UNESCO 44
  Conclusion 45

2 Taking demand seriously: NESTI and the role of national statisticians 47
  The NESTI group 47
  Ad hoc review groups 49
  Just-in-time numbers 51
  Sharing work between leading countries 53
  Conclusion 54

SECTION II
Defining science and technology 55

3 Is research always systematic? 57
  Institutionalized research 58
  The semantics of "systematic" 60
  Industrialized research 65
  What do R&D surveys count? 68
  Conclusion 70

4 Neglected scientific activities: the (non-)measurement of related scientific activities 72
  Defining R&D 74
  What are scientific activities? 76
  The international politics of numbers 82
  The "autonomization" of RSA 86
  Conclusion 88

5 What's so difficult about international statistics? UNESCO and the measurement of scientific and technological activities 90
  The road toward international statistics 91
  The view from nowhere 92
  Facing the OECD monopoly 97
  The end of a dream 99
  Conclusion 101

SECTION III
Imagining new measurements 103

6 The emergence of science and technology indicators: why did governments supplement statistics with indicators? 105
  Indicators as policy tools 106
  Indicators under pressure 108
  Following SI through the OECD 113
  Over NSF's shoulders 115
  Conclusion 118

7 Measuring output: when economics drives science and technology measurement 120
  Economically speaking 121
  Controlling the instrument 132
  Conclusion 135

8 The rise of innovation surveys: measuring a fuzzy concept 138
  R&D as a legitimate proxy 139
  Measuring innovation proper 141
  Internationalizing the official approach 145
  Conclusion 153

SECTION IV
Dealing with methodological problems 155

9 Metadata: how footnotes make for doubtful numbers 157
  Living with differences 158
  Comparing the incomparable 164
  Metadata 176
  Conclusion 179

10 Tradition and innovation: the historical contingency of science and technology statistical classifications 182
  The system of national accounts 183
  Classification problems 191
  Conclusion 194

PART II
Using science and technology statistics 197

11 The most cherished indicator: Gross Domestic Expenditures on R&D (GERD) 201
  The first exercises on a national budget 202
  An accounting system for R&D 204
  The mystique of ranking 207
  Conclusion 215

12 Technological gaps: between quantitative evidence and qualitative arguments 218
  The productivity gap 219
  French ambitions 223
  The OECD study on technological gaps 225
  The American reaction 229
  The official response 233
  Conclusion 237

13 Highly qualified personnel: should we really believe in shortages? 239
  Reminiscences of war 240
  Internationalizing the discourses 249
  Conclusion 258

14 Is there basic research without statistics? 262
  Emergence 263
  Crystallization 266
  Contested boundaries 272
  Conclusion 285

15 Are statistics really useful? Myths and politics of science and technology indicators 287
  Rationalizing science and technology policy 288
  Controlling research 296
  Managing industrial R&D 303
  A lobby in action 305
  Conclusion 311

Conclusion 314

Appendices 324
  Appendix 1: major OEEC/OECD science policy documents 324
  Appendix 2: early experiments in official measurement of R&D 325
  Appendix 3: early directories on S&T 326
  Appendix 4: early NSF statistical publications (1950–1960) 327
  Appendix 5: OEEC/OECD committees 328
  Appendix 6: DSTI seminars, workshops, and conferences on S&T statistics 329
  Appendix 7: DSA/DSTI publications on S&T statistics 330
  Appendix 8: mandates of the ad hoc review group 332
  Appendix 9: some definitions of research 333
  Appendix 10: activities to be excluded from R&D 335
  Appendix 11: OECD/ICCP publications 338
  Appendix 12: UNESCO conferences and meetings on S&T statistics 340
  Appendix 13: UNESCO documents on S&T statistics 341
  Appendix 14: NSF committee's choice of indicators (1971) 343
  Appendix 15: coverage of activities in early R&D surveys 346
  Appendix 16: differences in definition and coverage according to Freeman and Young (1965) 347
  Appendix 17: OECD standard footnotes 348
  Appendix 18: GERD and its footnotes (millions of current $) 349
  Appendix 19: industries (ISIC) 350
  Appendix 20: fields of science (FOS) 352
  Appendix 21: socioeconomic objectives (SEO) 354
  Appendix 22: taxonomies of research 355

Index 357
Foreword
For anyone who has experienced and participated closely in this history, the remarkable inquiry which Benoît Godin has conducted has a certain fascination: in summary, it is a part, if not of my own history, at least of my career, both professional and intellectual, that is evoked here. In fact, I not only lived intimately through the attention paid by the OECD member countries to the birth and growth of research and development statistics; the Science and Technology Policy Division, for which I was responsible within the Directorate of Scientific Affairs, was itself a major consumer, even demander, of these statistics for all of our work—studies, examinations by country, policy evaluations or reports intended for the ministers responsible for these issues. As a consequence, none of the conclusions at which Benoît Godin arrives can leave me indifferent, given the image I can today envision, or evoke retrospectively, of our activities, of the functions that they have been able to exercise, and thus of the strictly political usage to which they have led, whether with the OECD as such, with the member governments, or with the various participants and research institutions within the member countries. In fact, how many questions are raised here that require us personally to question the meaning of the actions that were undertaken? This is certainly the case for scientific activities—the realm of measurement and thus, in principle, of certainty—as we aim to measure them through the means, the concepts, the methods, and the advances of the science of statistics: are we still on the same ground of objectivity and of scientific demonstration, with competing conclusions about the certainties of the science? The first question we should ask is: what is the validity of these indicators and of the evaluations to which they give rise? But there is a second question, which must be addressed just as seriously: what is the true role of these evaluations in the decision-making process? And therefore a third question: what exactly is the objective pursued here, what form of knowledge do we intend, or hope, to attain? The pursuit and extension of knowledge for its own sake? Or that of oriented research? Or that of applied research, and if applied, in what instances and for whose interests?
So many questions, which signal that this measurement of scientific activity, whose growth dates from the public policies dealing with scientific and technical research since the end of World War II, cannot be approached as an innocent or neutral scientific activity, if indeed innocence and neutrality can be said to characterize scientific activity. It is obvious here that we are in the realm of politics, that is, in a field where the scientific approach inevitably comes face to face with the politics of different interests and values, and this is basically the entire demonstration of Benoît Godin's investigative work—both a detective and an examining magistrate, caring about identifying, appreciating, and understanding whom these indicators could possibly benefit, as well as how and why, and in what historical context. It is the work of a historian and a sociologist, meticulous almost to the point of obsession in his rummaging through and reading of the texts (and God knows the OECD, and our Directorate in particular, rolled them out), but also a rigorous and critical vision that puts the references into perspective, does not trust appearances, and deepens the inquiry until he can put his finger on the good and the bad uses of statistics, on the intentions we give them, rightly or wrongly, just as one may show certain cards and keep others hidden; as Benoît Godin has eloquently displayed, it is a function that is both rhetorical and ideological. I began as a consultant in 1962 at what was then, under the impetus and dynamism of Alexander King, the Directorate of Education and Science at the OECD. I spent six months grappling with the mythical prospect of expansion of the Turkish universities, as it was conceived by the planners in Ankara: on the basis of statistical projections extrapolated ad absurdum, they dreamed of doubling the number of professors and researchers within ten years! In April 1963, I was recruited as an administrator to assist Emmanuel Mesthene, an American philosopher from the RAND Corporation, in the preparation of the first Ministerial Conference on Science. The Frascati manual had already been developed, the first international comparisons of research/development statistics had already begun within the unit led by Yvan Fabian and Alison Young, and the study prepared by Christopher Freeman, comparing the research efforts of Europe, the United States and the Soviet Union, destined to be one of the central reports of the ministerial conference, was already in the works. I therefore discovered—as did all Europeans interested in scientific activities, with the sole exception of Christopher Freeman in England and François Perroux in France, true pioneers and promoters of the economics of science and innovation—the concepts and the use of R&D statistics as a tool for evaluation, for comparisons and for alert signals directed at European governments to denounce their backwardness and their deficiencies in the efforts to confront economic reconstruction and growth through science and innovation. Ever since the inception of the Productivity Agency, created as part of the Marshall Plan, the objective of Europe catching up to the United States was defined as dependent on technical capacity and thus on the level of competence of manpower, in particular on training large numbers of highly qualified scientific and technical personnel.
And it was immediately after the reconstruction period and the European economic renaissance, with the OECD succeeding the OEEC, and within the context of the Cold War—the "lutte-concours" competition, as François Perroux called it—between the capitalist system and the communist system, liberal economies and administered economies, that education, technical training, and scientific research became priorities, these three sectors defining the famous "residual factor," as indispensable to the process of growth as labour and capital, and the weight of which, nevertheless, resisted, and continues to resist, all efforts of measurement, to the point where it has been defined as "the very measure of our ignorance."1 Such are the limits of statistical compilations, but also of the exploitation to which they may give rise: you cannot apply the heavenly theories of macroeconomics as ready-made recipes to the earthly world of nations, nor even of industries; quite the contrary, in the wake of the economic growth of the 1960s, the actual developments of macroeconomic theory were invoked to evaluate technical change as though it were a causal and linear process by which increasing investment in research and development would automatically lead to an increase in the rate of growth of productivity, and thus of economic growth in general— so many mirages to which the fanaticism (or scientism) of statistical compilations can lead. But these mirages also had the benefit of awakening people's minds to the importance and the necessity of increasing investment to foster the training of technical skills and public support for scientific activities. As has often been said, the politics of science and technology were primarily the product of World War II, and of the days without peace that followed, which perpetuated its strategic objectives: the learning in Europe of the economics of research and innovation took place within the context of the bipolar rivalry between the two blocs, and any comparison with the ideological adversary was dedicated to reinforcing the options of one's own side, even by denouncing its shortcomings. The kick-off was given by Christopher Freeman's report comparing the research efforts of the United States, Europe, and the Soviet Union, which already suggested the notion of a gap that must be closed at all costs.2 It suggested that the Western world had been surpassed by the communist world in its capacity to produce and increase the number of scientists and engineers, a theme with next to no basis in fact, just like the missile gap of the same period, but how motivating! As Benoît Godin clearly demonstrates, there is no more consistency in the theme of the technology gap put forward by Pierre Cognard of the Délégation générale à la recherche scientifique et technique in France, with the help of the OECD's Directorate for Education and Science and the support of the majority (with the exception of the United States) of the Committee for Science and Technology Policy (CSTP):
1 P. N. Rasmussen (1975), The Economics of Technological Change. Wiksell Lectures, The Stockholm School of Economics, “Residual entered the scene and has since then occupied a major role in a play which could be given various titles—like ‘What we don’t know’ or ‘As you like it’. ” 2 C. Freeman and Alison Young (1965), The Research and Development Effort in Western Europe, North America and the Soviet Union, OECD, Paris.
certain countries and the Brussels Commission were more than happy to draw from this the lesson that it is essential to invest more in research and development activities. But the technology gap quickly gave way, in people's minds and in the media, to the managerial gap, a gap more soundly based, and one that European business schools would rapidly try to fill by taking inspiration from the best American business schools. In brief, with wrong ideas, you can always make good policy . . . Placed in charge of the international scientific cooperation dossier in 1963 for this first ministerial conference, and immediately afterward named head of the newly created Science and Technology Policy Division, I directed and guided studies, in particular the series of examinations by country and the three volumes of The Research System, which were filled with international statistical comparisons, while aiming to transcend such comparisons. My training as a philosopher and a historian of science was not for nothing, evidently, in this conception and practice of the studies intended for the Science and Technology Policy Committee. And indeed, between these works and those of Yvan Fabian's unit, there was a genuine competition (although, it should be noted, it was always a friendly rivalry): on the one hand, the bias in favor of qualitative analyses of a historical and sociological character, conducted using consultants from outside the administrations charged with scientific affairs, and on the other hand, the mathematical weight of the inquiries, compilations, and statistical projections, whose sources depended on contributions coming directly from the member countries. As Benoît Godin has clearly brought to light, the stumbling block in the debate was—and still remains—the impossibility of measuring the profitability of fundamental research and, more generally, the productivity of university institutions. How to establish, in this field, a mathematical relationship between the input (investments and structural adaptation measures) and the output (scientific discoveries and innovations) was obviously the objective dreamed of by the bureaucrats in charge of managing scientific affairs—the inaccessible dream of planner-technocrats. The members of the CSTP echoed the tensions of this debate: those concerned about profitability coming from the ministries of financial or economic affairs, and those from the universities concerned with seeing researchers flourish in the most favorable working conditions possible. Overall, it was the perspective of the managers and the policy-makers against that of the researchers and the laboratories. I showed in Science and Politics how, in the United States in the sixties, people did not give up hope of mastering this issue: for the National Science Foundation with the Traces project, and for the Department of Defense with the Hindsight project, statistical data were called to the rescue to attempt (but in vain!) to prove the cause-and-effect relationship between the support granted to the pursuit of knowledge and the aptitude of entrepreneurs, industrialists, and managers to convert scientific discoveries into practical applications.3
3 J.-J. Salomon (1973), Science and Politics, Chapter 4: “The Scientific Research System”, Cambridge, MA: MIT Press.
Moreover, there was almost a paradox in seeing the American public institutions, by definition sensitive to the dogma of the liberal economy, given over so strictly to the Marxist idea of fundamental research as subjected to the performance criteria to which the entire productive system must submit! In fact, the statistical work in matters of research/development, like the progress which the sociology of science has witnessed for more than a quarter-century, has enabled us to escape from being condemned to the alternative of seeing in fundamental research either an entirely autonomous activity, independent of any constraint or social pressure, absent from any intent to operate within the realm of economics, and thus a stranger to the conflicts of interest and of values which feed political or financial affairs, or, on the contrary, in the Marxist sense, a sub-product of production activities which in no way escapes the constraints of the economy and of politics. The realities of the "hard core" of fundamental science require navigation between these two extreme interpretations, internalism and externalism: we will never fail to recognize the cognitive dimensions of a discovery that is truly scientific, no more than we could in the future ignore its involvement, come what may, in social processes. As Robert Merton, the founding father of the sociology of the sciences, long maintained, the theme of autonomy has nourished the ideology of science, amounting, on the part of scientists, "to repudiate the application of utilitarian standards to their work, and it has as its main function to avoid the risk of a too strictly exercised control by the agencies that fund them." And he adds, with a good dose of irony: "Tacit recognition of this function may be the source of the no-doubt apocryphal toast of Cambridge scientists: 'To pure mathematics, and may they never be of any use to anyone!' "4 But the utilitarian theme inspired by research/development statistics, likening fundamental research to any production activity, on the condition it does not refer to another form of ideology, is revealed to be a constant impasse. We understand that university researchers, who assert the cognitive privileges of fundamental research, have always been reluctant to see their work evaluated strictly on the basis of "objective" criteria, notably those of bibliometric data: such evaluation clearly comes across as a dispossession, and thus as a threat of being placed under guardianship. Playing on every sense of the word "indicator," one of my students, in a brilliant DEA-level paper (the diploma required in the preparation of a doctoral thesis), went so far as to speak of the surveillance of researchers that the recourse to indicators implies, as the work of an "informer," that is, a variant of the General Information file inquiring into "who has published in which journal, who has mentioned whose name and in what context . . . But an indicator can be turned upside down by those who are observed, and become in fact a double-agent." This is what he illustrated with regard to peer evaluations and bibliometrics, while emphasizing the uncertainty of that which is informed on:
4 R. K. Merton (1957), Social Theory and Social Structure, The Free Press of Glencoe, Chapter XV, p. 556.
"an indicator does not only transmit what one wished it to convey, and however little the persons examined by it may be aware of the process that is occurring, they will quickly see to filtering the information to the best of their own interests."5 It is true that if one is content to pay attention only to these apparently objective data, one can only obtain from the reality of research and its results a vision that is as naïve as it is skewed. So we must be careful of indicators as informers which capture intimate secrets in order to report them to their sponsors, which must disguise themselves in order to adopt the particular language of the population they spy on, and which might just as easily transmit rumours, if not false information! Overall, this is the warning that Robert Merton himself, in the preface to Eugene Garfield's famous Citation Index, believed it necessary to send out, despite his penchant for a utilitarian interpretation of scientific activity: "The forensic use of citation counts to compare the impact of scientific contributions by individuals only provides an extreme type of occasion for subjecting such practices to the organized scepticism that is one of the fundamental characteristics of science."6 Even more so when it concerns comparisons between research institutions or national research efforts: the impact of their activities must be set out as "an extreme type of occasion" for such "organized scepticism." It is surely the great merit of Benoît Godin's book that it questions the meaning of these limitations of research/development indicators, and produces answers that are as context-sensitive as they are convincing: how can it be that the statistical authorities, known for their objective work, could be so "plugged into politics" as to act like "lobbies in action," among others? "The view of statistics and indicators as information for decision-making," he says, "derives its power and legitimacy from economic theory: the belief that people will act rationally, if they have perfect information on which to base their decisions." But, on the one hand, there is never perfect information, and it is not because we are dealing with scientific activity that decision-making processes are required to be more "scientific" than others; and on the other hand, we should never underestimate the legitimizing function played by statistics aimed at enlightening the management of business. Must we remind the reader? The coining of the term "statistics" is attributed to a professor from Göttingen, G. Achenwall, who in 1746 created the word Statistik, derived from the concept of Staatskunde: that is to say that, beyond the original and descriptive function of censuses of population or of production, which themselves date back to the age of the first States, the new functions of evaluation and of projection, eminently normative functions, can never be dissociated from the political and ideological aims of nation-states. Research/development indicators, far from escaping this rule (or this fate), illustrate most particularly this drift (or this ambition) of statistical series to serve as a mobilizing rhetoric in political stakes which often transcend them,

5 M. Waintreter (1992), La mesure de la science: L'évaluation de la recherche, Un essai de présentation, Mémoire de DEA STS, Centre Science, Technologie et Société, Conservatoire National des Arts et Métiers, Paris, January 22 (mimeographed, in the CNAM library), pp. 103 et seq. 6 R. K. Merton (1979), Foreword to E. Garfield, Citation Index: Its Theory and Application in Science, Technology and Humanities, New York: John Wiley and Sons, p. x.
with bureaucratic authorities who use them as propaganda or as an alarm, and in which the administrations in charge of collecting, classifying and analyzing them are used to construct measurements that do not really measure everything they pretend to examine. So, as Benoît Godin underlines in the chapter devoted to these "missing links," research/development indicators are far from including everything that makes up the daily routine of scientific and technical research: training, information, design, etc., all essential ingredients, however, in the process of innovation. Nevertheless, in societies in which science plays the role of religion, these indicators are most often encountered as articles of faith. Statistics is the activity which consists of bringing together the data concerning, in particular, knowledge of the situation of States, what Napoleon called "the budget for things." But this knowledge is never free of presuppositions nor of normative intentions: the budget for things deals, in fact, with the management of men and societies, and it has not remained silent; on the contrary, it has a lot to say about the fantasies of management. Even more so when we consider postindustrial societies, where there is no longer simply, in the sense of Auguste Comte, the application of science to production, but a systematic organization of all social structures in view of scientific production. From the start of the game, the budget for things has been overwhelmed by the weight of ideas, interests and values, on which the political landscape and the management of relationships of force feed, in the competition among institutions, as among States, for power, fortune, or glory.

Jean-Jacques Salomon
Honorary Professor, Chair of Technology and Society
Conservatoire National des Arts et Métiers
Paris
Acknowledgments
Four chapters in this book have been previously published in the following journals. I sincerely thank the editors for permission to use the material published in their pages.

Chapter 1: B. Godin (2002), The Number Makers: Fifty Years of Official Statistics on Science and Technology, Minerva, 40(4), pp. 375–397; with kind permission of Kluwer Academic Publishers.

Chapter 6: B. Godin (2003), The Emergence of Science and Technology Indicators: Why Did Governments Supplement Statistics with Indicators?, Research Policy, 32(4), April, pp. 679–691; with kind permission of Elsevier.

Chapter 12: B. Godin (2002), Technological Gaps: An Important Episode in the Construction of S&T Statistics, Technology in Society, 24, pp. 387–413; with kind permission of Elsevier.

Chapter 14: B. Godin (2003), Measuring Science: Is There Basic Research Without Statistics?, Social Science Information, 42(1), March, pp. 57–90; with kind permission of Sage Publications Ltd and the Foundation of the Maison des Sciences de l'Homme.
Introduction
Modernity today is measured by the yardstick of scientific and technological development. Formerly a nation’s strength depended on its military power and social prestige. The Royal Court therefore developed various tools to collect money to finance its activities.1 In the last two centuries, however, the strength and health of the population have become the signs of civilization. Governments thus started collecting statistics on their subjects to protect or manipulate them, or simply as a symbol of national identity.2 Then in the nineteenth century, countries began comparing themselves in economic terms. Economic performance indicators like the Gross National Product (GNP)3 were adopted by almost every country as evidence of national achievement.4 Today, modern societies display their science and technology (S&T) performance: “the strength, progress and prestige of countries are today measured in part by their achievements in S&T, scientific excellence is more and more becoming an important national goal. National resources are therefore increasingly devoted in research and development,”5 and research and development (R&D) expenditure as a percentage of GNP is the main indicator for measuring these efforts.
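The indicator invoked here is a simple ratio, and a minimal worked illustration may help fix it in mind; the figures below are hypothetical and not taken from the book:

\[
\text{R\&D intensity} \;=\; \frac{\text{GERD}}{\text{GNP (or GDP)}} \times 100\%
\]

so a country spending, say, $20 billion on R&D against a GNP of $1,000 billion has an R&D intensity of 2 per cent. It is this single ratio, later formalized as Gross Domestic Expenditures on R&D (GERD) relative to GNP or GDP and the subject of Chapter 11, by which countries compare and rank themselves.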
1 J. B. Henneman (1971), Taxation in Fourteenth Century France: The Development of War Financing 1322–1356, Princeton: Princeton University Press; G. L. Harris (1975), King, Parliament, and Public Finance in Medieval England, Oxford: Clarendon Press. 2 D. V. Glass (1973), Numbering the People: The 18th Century Population Controversy and the Development of Census and Vital Statistics in Britain, Farnborough: D. C. Heath; P. C. Cohen (1982), A Calculating People: The Spread of Numeracy in Early America, Chicago: University of Chicago Press; J.-C. Perrot and S. J. Woolf (1984), State and Statistics in France, 1789–1815, Glasgow: Harwood Academic Publishers; A. A. Rusnock (1995), Quantification, Precision, and Accuracy: Determinations of Population in the Ancien Régime, in N. Wise (ed.), The Values of Precision, Princeton: Princeton University Press, pp. 17–38; S. Patriarca (1996), Numbers and Nationhood: Writing Statistics in 19th Century Italy, Cambridge: Cambridge University Press; B. Curtis (2001), The Politics of Population: State Formation, Statistics and the Census of Canada 1840–1875, Toronto: University of Toronto Press. 3 Or Gross Domestic Product (GDP). 4 P. Studenski (1958), The Income of Nations: Theory, Measurement, and Analysis: Past and Present, New York: New York University Press; F. Fourquet (1980), Les comptes de la puissance: histoire de la comptabilité nationale et du plan, Paris: Encres; M. Perlman (1987), Political Purpose and the National Accounts, in W. Alonso and P. Starr, The Politics of Numbers, New York: Russell Sage Foundation, pp. 133–151; A. Vanoli (2002), Une histoire de la comptabilité nationale, Paris: La Découverte. 5 OECD (1963), Science and the Policies of Government, Paris, p. 15.
Industrial countries' governments and researchers have measured S&T for more than eighty years. The statistics now used derive from two sources. First, the wide availability of S&T statistics is largely due to groundwork by government organizations like the US National Science Foundation (NSF) in the 1950s, and intergovernmental organizations like the Organization for Economic Co-operation and Development (OECD) in the 1960s. There were doubtless highly systematic attempts at measuring S&T before the 1950s, but these were confined to Eastern Europe.6 Second, much is owed to Joseph Schmookler7 and Derek J. de Solla Price,8 who in the 1950s and 1960s directed the attention of university researchers to measuring S&T. Following this work, the fields of scientometrics, and especially bibliometrics (counting publications and citations), united many researchers globally, yielding various data for countless users. Given the centrality of S&T statistics in science studies (economics, policy, sociology) and government discourses, it is surprising that there has been no historical examination of their construction. Many manuals summarize the field, and a voluminous literature of articles discusses or criticizes S&T indicators, but there is nothing approaching a true history.9 The measurement of S&T may not exactly fit the categories of social statistics studied by historians. First, it does not concern the general population and its socioeconomic characteristics, but rather researchers and the knowledge or innovations they produce. Second, S&T measurement does not constitute economic statistics, since it is not concerned, at least historically, with measuring "economic goods" produced by science, but rather with the activities of producers of knowledge, such as the universities. Despite these differences, the measurement of S&T is, like social and economic statistics, a statistic produced by the state (among others). Examining it in light of the social statistics literature would thus make it possible to distinguish the specific characteristics of science measurement from those of other government-conducted measurements.10
6 C. Freeman and A. Young (1965), The Research and Development Effort in Western Europe, North America and the Soviet Union: An Experimental International Comparison of Research Expenditures and Manpower in 1962, Paris: OECD, pp. 27–30, 99–152. 7 J. Schmookler (1950), The Interpretation of Patent Statistics, Journal of the Patent Office Society, 32 (2), pp. 123–146; J. Schmookler (1953), The Utility of Patent Statistics, Journal of the Patent Office Society, 34 (6), pp. 407–412; J. Schmookler (1953), Patent Application Statistics as an Index of Inventive Activity, Journal of the Patent Office Society, 35 (7), pp. 539–550; J. Schmookler (1954), The Level of Inventive Activity, Review of Economics and Statistics, pp. 183–190. 8 D. D. S. Price (1963), Little Science, Big Science, New York: Columbia University Press; D. D. S. Price (1961), Science since Babylon, New Haven: Yale University Press; D. D. S. Price (1956), The Exponential Curve of Science, Discovery, 17, pp. 240–243; D. D. S. Price (1951), Quantitative Measures of the Development of Science, Archives internationales d'histoire des sciences, 5, pp. 85–93. 9 For an overview of statistics currently in use, see: E. Geisler (2000), The Metrics of Science and Technology, Westport: Quorum Books. 10 B. Godin (2002), Outlines for a History of Science Measurement, Science, Technology, and Human Values, 27 (1), pp. 3–27.
At least two central characteristics define the development of government S&T statistics. First, and contrary to what the history of social statistics would suggest on the basis of political arithmetic11 or biometrics,12 the development of S&T measurement was not originally motivated by some desired social control of researchers or laboratories. The purpose of the first science policies and their statistics was to fund, without much intervention, ever more research, in the hope that this would serve the economy. Only later would orienting (not controlling) publicly funded research toward specific national needs become another goal. I take seriously Ian Hacking's remark that statistical offices were established more for counting than for controlling: governments did not succeed in putting a moral economy into practice; instead, statistics departments developed according to their own internal logic.13 Second, S&T measurement was an exercise defined almost simultaneously both internationally and nationally. It was not really subject to the passage from contested diversity to universality and standardization of definitions and methods that characterizes other branches of statistics (and many technological standards).14 When the OECD published the first international methodology manual in 1963, few countries collected S&T statistics. Most countries conducted their first surveys according to OECD standards. We owe most of the quantitative analysis of S&T to economists. However, economists were for years uninterested in S&T, the latter being simply a residual in econometric models: the two fundamental variables (capital and labour) explained economic progress.15 Today, S&T are recognized as being at the very heart of the knowledge-based and new economies.16 Governments now rely on S&T for the well-being of industry and the economy. But measuring the contribution of S&T to the economy creates important difficulties.17 The most recent example of this is called the productivity paradox: new technology investments do not seem to translate into measurable productivity gains. So we still do not know how to assess the contribution of S&T to economic progress, although S&T statistics have existed for more than eighty years.
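To make the notion of the "residual" concrete, here is a minimal sketch of the growth-accounting identity underlying the econometric models cited below; the notation is mine, not the book's. Under constant returns to scale, output growth decomposes as

\[
\frac{\dot{Y}}{Y} \;=\; \alpha\,\frac{\dot{K}}{K} \;+\; (1-\alpha)\,\frac{\dot{L}}{L} \;+\; \frac{\dot{A}}{A},
\]

where \(Y\) is output, \(K\) capital, \(L\) labour, \(\alpha\) the capital share of income, and \(\dot{A}/A\) the residual: the part of growth left unexplained by capital and labour, which Abramovitz and Solow read largely as technical change and which S&T statistics were later asked to account for.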
11 P. Buck (1977), Seventeenth-Century Political Arithmetic: Civil Strife and Vital Statistics, ISIS, 68 (241), pp. 67–84; P. Buck (1982), People Who Counted: Political Arithmetic in the 18th Century, ISIS, 73 (266), pp. 28–45; J. Mykkanen (1994), To Methodize and Regulate Them: William Petty’s Governmental Science of Statistics, History of the Human Sciences, 7 (3), pp. 65–88; J. Hoppit (1996), Political Arithmetic in 18th Century England, Economic History Review, 49 (3), pp. 516–540. 12 D. MacKenzie (1981), Statistics in Britain, 1865–1930, Edinburgh: Edinburgh University Press. 13 I. Hacking (1982), “Biopower and the Avalanche of Numbers,” Humanities in Society, 5 (3–4), pp. 279–295. See also: H. Le Bras (1986), La statistique générale de la France, in P. Nora (ed.), Les lieux de mémoire: la Nation, Tome 2, Paris: Gallimard, pp. 317–353. 14 I do not mean that national statistics were all alike, but that all OECD countries shared the primary concepts and methods. 15 M. Abramovitz (1956), Resource and Output Trends in the United States Since 1870, American Economic Review, 46, May, pp. 5–23; R. M. Solow (1957), Technical Change and the Aggregate Production Function, Review of Economics and Statistics, 39, August, pp. 312–320. 16 OECD (2001), The New Economy: Beyond the Hype, Paris; OECD (2001), Drivers of Growth: Information Technology, Innovation and Entrepreneurship, Paris. 17 P. Howitt (1996), On Some Problems in Measuring Knowledge-Based Growth, in P. Howitt, The Implications of Knowledge-Based Growth for Micro-Economic Policies, Calgary: University of Calgary Press, pp. 9–29; A. P. Carter (1996), Measuring the Performance of a Knowledge-Based Economy, in OECD, Employment and Growth in the Knowledge-Based Economy, Paris, pp. 61–68.
This book is about the history of official S&T statistics and indicators in Western countries. Statistical collection started in the 1920s, but only in the 1950s did the first systematic national surveys appear. Thereafter, S&T statistics spread to most countries via the OECD Frascati manual18 and, less prominently, UNESCO's efforts. Three factors explain the emergence of these statistics and the kind of measurement performed. First, World War II severely limited the availability of scientists and engineers, just as the Cold War brought widespread recognition of the USSR as a great power. In Science: The Endless Frontier, Vannevar Bush commented: "We have drawn too heavily for nonscientific purposes upon the great natural resource which resides in our young trained scientists and engineers. For the general good of the country too many such men have gone into uniform . . . There is thus an accumulating deficit of trained research personnel which will continue for many years."19 Governments and academics' "lobbies" then devoted significant efforts toward inventorying scientists and engineers, to forecast the number of highly specialized people needed and to compare the two blocs by the qualifications of their personnel. Second, S&T statistics emerged concurrently with science policy, since fact-finding was explicitly a condition for informed policy decisions: "The tasks of a Science and Policy Office will naturally divide into information gathering on the one hand, and advisory and co-ordinating activities on the other. The latter will require a sound factual basis," recommended the Organization for European Economic Co-Operation (OEEC)—the predecessor of the OECD—in 1960.20 Economics was central throughout all these developments, and is the third factor explaining the uniqueness of S&T statistics. As early as 1962, the OECD Committee on Scientific Research (CSR) decided "to give more emphasis in the future to the economic aspects of scientific research and technology."21 In fact, the organization suggested governments assess the contribution of S&T to economic growth and productivity, so official measurement was definitely oriented toward economics. Although this emphasis would be questioned ten years later,22 it would never entirely disappear—quite the contrary. Although policy-motivated, S&T statistics did not precede, but followed, these policies. It is often said that the development of official statistics is motivated by the will to guide government action. In fact, the rhetoric of statisticians, economists and policy-makers should not be taken at face value. Certainly, the historical development and institutionalization of statistics was intimately linked to the state.23

18 OECD (1963), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Development, DAS/PD/62.47. 19 V. Bush (1945), Science: The Endless Frontier, North Stratford: Ayer Co., 1995, p. 24. 20 OEEC (1960), Co-operation in Scientific and Technical Research, Paris, p. 39. 21 OECD (1962), Minutes of the 4th Session, Committee for Scientific Research, SR/M (62) 2, p. 17. 22 OECD (1971), Science, Growth and Society: A New Perspective, Paris. 23 T. M. Porter (1997), The Management of Society by Numbers, in J. Krige and D. Pestre, Science in the 20th Century, Amsterdam: Harwood Academic Publisher, pp. 97–110; E. Brian (1994), La mesure de l'État: administrateurs et géomètres au XVIIIe siècle, Paris: Albin Michel; A. Desrosières (1993), The Politics of Large Numbers: A History of Statistical Reasoning, Cambridge, MA: Harvard University Press, 1998.
The modern state has always been an important patron, and is now a producer, of statistics. Its apparent objectivity24 made the science of statistics highly useful to governments. First, statistics could serve the primary function of science according to Max Weber—to enlighten and inform. Governments could use science—and statistics—for instrumental purposes. Expanding upon this, Michel Foucault suggested we live in a "biopower" era where statistics are used to control populations in complex and often unpredictable ways.25 The history of social statistics devotes considerable attention to this idea: medical data, civil registers and population censuses have been called "technologies" of human control.26 But most often, governments constructed statistics to serve symbolic, ideological and political ends. Statistics certify the user's credibility and serious-mindedness. They objectify political choices, that is, they eliminate the arbitrariness and subjectivity inherent in political action by portraying decisions as transparent and lending legitimacy to political discourse.27 S&T statistics are no exception. Economists have postulated a need for (perfect) information—generally quantitative—to assist rational decision-making, and held the same discourses while participating in government efforts to develop S&T statistics. Chris Freeman, author of the first OECD Frascati manual, suggested:

Trying to follow a science policy, to choose objectives and to count the costs of alternative objectives, without such statistics is equivalent to trying to follow a full employment policy in the economy without statistics of investment or employment. It is an almost impossible undertaking. The chances of getting rational decision-making are very low without such statistics.
(C. Freeman (1968), Science and Economy at the National Level, in OECD, Problems of Science Policy, Paris, p. 58)

24 T. M. Porter (1995), Trust in Numbers: The Pursuit of Objectivity in Science and Public Life, Princeton: Princeton University Press. 25 M. Foucault (1980), Power/Knowledge, New York: Pantheon Books. 26 I. Hacking (1982), Biopower and the Avalanche of Numbers, Humanities in Society, op. cit.; I. Hacking, Making Up People, in T. C. Heller (ed.) (1986), Reconstructing Individualism, Stanford: Stanford University Press, pp. 222–236; N. Rose (1988), Calculable Minds and Manageable Individuals, History of the Human Sciences, 1 (2), pp. 179–200; A. Hanson (1993), Testing Testing: Social Consequences of the Examined Life, University of California Press; P. Carroll (1996), Science, Power, Bodies: The Mobilization of Nature as State Formation, Journal of Historical Sociology, 9 (2), pp. 139–167; M. Donnelly (1998), From Political Arithmetic to Social Statistics: How Some 19th Century Roots of the Social Sciences Were Implanted, in J. Heilbron, The Rise of the Social Sciences and the Formation of Modernity, Kluwer Academic Publishers, pp. 225–239; A. Firth (1998), From Oeconomy to the Economy: Population and Self-Interest in Discourses on Government, History of the Human Sciences, 11 (3), pp. 19–35; A. A. Rusnock (1999), Biopolitics: Political Arithmetic in the Enlightenment, in W. Clark, J. Golinski, S. Schaffer (eds), The Sciences in Enlightened Europe, Chicago: University of Chicago Press, pp. 49–68. 27 Y. Ezrahi (1990), The Descent of Icarus: Science and the Transformation of Contemporary Democracy, Cambridge, MA: Harvard University Press; K. M. Baker (1990), Science and Politics at the End of the Old Regime, in Inventing the French Revolution, Cambridge: Cambridge University Press, pp. 153–166; K. Prewitt, Public Statistics and Democratic Politics, in W. Alonso and P. Starr (eds) (1987), The Politics of Numbers, New York: Russell Sage Foundation, pp. 261–274; T. M. Porter (1992), Quantification and the Accounting Ideal in Science, op. cit.; T. M. Porter (1992), Objectivity as Standardization: The Rhetoric of Impersonality in Measurement, Statistics, and Cost-Benefit Analysis, Annals of Scholarship, 9, pp. 19–59.
However, this belief did not apply perfectly to S&T statistics, because policies often appeared before specific and systematic statistics became available. The dream persisted, however, because, as John R. Searle noted, people make two persistent mistakes about rationality:

First, many people believe that (. . .) rationality should provide them with an algorithm for rational decision making. They think they would not be getting their money's worth out of a book on rationality unless it gave them a concrete method for deciding whether or not to divorce their spouse, which investments to make in the stock market, and which candidate to vote for in the next election. (. . .) A second mistake that people make about rationality is to suppose that if standards of rationality were universal and if we were all perfectly rational agents, then we would have no disagreements (. . .). One of the deepest mistakes in our social background assumptions is the idea that unresolvable conflicts are a sign that someone must be behaving irrationally or worse still, that rationality itself is in question.
(J. R. Searle (2001), Rationality in Action, Cambridge, MA: MIT Press, pp. xiv–xvi)

This book suggests that official statistics reflect already-held views, rather than offering information for choosing among alternatives. As T. M. Porter suggested: "The key purpose of calculation is to expound and justify choices, rather than to make them."28 This is why statistics are used and "useful," despite considerable limitations: they help construct discourses on how we want things to be. Between statistics and action, then, there is discourse: statistics are a rhetorical resource used to convince others of a course of action already selected. Statistics crystallize choices and the concepts that support them. In the last eighty years, what view—or rhetoric—was held by governments on S&T? Primarily that S&T should contribute to economic progress, and particularly to productivity. It is commonplace to argue that World War II suggested to governments that science could be used to achieve specific national objectives,29 or to argue that the Cold War and Sputnik were important catalysts for increased government funding.30 I suggest that productivity issues and economic growth were probably as important as these two factors. In fact, governments never really believed that S&T generate automatic benefits to society and the economy. What they believed, following the United States, was that S&T could be deliberately used to solve productivity issues. It was in the context of transatlantic productivity gaps that European S&T were widely discussed in the 1950s. As we will see, the economic orientation of these statistics, and the debates in which they have been used, clearly confirm this thesis.

28 T. M. Porter (1991), Objectivity and Authority: How French Engineers Reduced Public Utility to Numbers, Poetics Today, 12 (2), p. 252. 29 B. L. R. Smith (1990), American Science Policy Since World War II, Washington: Brookings Institution. 30 R. L. Geiger (1997), What Happened after Sputnik? Shaping University Research in the United States, Minerva, 35, pp. 349–367; A. J. Levine (1994), The Missile and Space Race, Westport: Praeger, pp. 57–72.
This book highlights the impact of the OECD as the organization responsible for the "international" standardization of S&T statistics, which considerably influenced their measurement in all Western countries. In studying OECD statistics, however, we also look at national activities, since the OECD played an influential role within member countries on several levels. First, the organization was the catalyst for the development of national science policies in the 1960s. It acted as a think-tank on science policy issues, and produced several documents that oriented and continue to influence national S&T policies (see Appendix 1). The OECD annually produces over 13,000 reports, 300 books, and 25 journals, read worldwide and widely cited by bureaucrats. The history of the OECD's contribution to science policy has yet to be written, however. Second, the organization remains the main producer of internationally comparable statistics, to the point where "if the OECD were to close its doors tomorrow, the drying up of its statistics would probably make a quicker and bigger impact on the outside world than would the abandonment of any of its other activities."31 To support this work, the OECD developed several methodological manuals to guide national statisticians in measuring S&T. To understand the OECD's role, however, we have to look at the few national experiences that preceded and influenced its work. Three countries were pioneers in the field of S&T measurement: the United States, Canada, and the United Kingdom. The statistical activities of these three countries are studied to the extent that they help explain developments toward the standardization of statistics. Two other international organizations are also examined. First, UNESCO played an important role from 1968 to 1984, spreading OECD norms to less-developed countries, and developing new methodologies. Second, the European Commission started producing its own statistics in the 1990s. We can identify three stages in the measurement of S&T, from autonomous statistics to problem-oriented indicators. From the early 1920s onward, governments conducted sectoral surveys to find out what was happening in S&T: what was the stock of scientists and engineers, who performed R&D, how much, etc. Input statistics, as these statistics came to be called, were standardized in 1963 through the OECD Frascati manual. The United States had an enormous influence on its concepts and methodologies: in fact, the Frascati manual was largely inspired by methodological thinking conducted at the NSF in the 1950s. A second stage began in the 1970s, when work on indicators appeared alongside R&D statistics. Several indicators were developed to assess the economic output of S&T. The OECD was the instigator, but it was the NSF that would develop the concept of S&T indicators further. These two stages reflect corresponding stages in science policy development. The first, often called "policy for science,"32 was concerned with developing scientific activities, while the second—"policy through science"—centered on using science to achieve national objectives.
31 OECD (1994), Statistics and Indicators for Innovation and Technology, DSTI/STP/TIP (94) 2, p. 3. 32 OECD (1963), Science and the Policies of Government, Paris, p. 18.
science to achieve national objectives. Despite their fit with policy, however, statistics and indicators rapidly gained a certain autonomy, as reflected in special and periodic statistical repertories. So a third stage saw the development of more policy-oriented indicators in the 1990s, due, among other things, to the OECD Technology/Economy Program (TEP).

This book addresses four questions: What main historical events led to the development of S&T statistics? What were the main sociopolitical stakes behind science measurement? What philosophical and ideological conceptions drove this measurement? What statistics and indicators were constructed, and how?

The book has two parts. The first concentrates on the development of S&T statistics from 1920 to 2000, the principles involved, and the agendas and issues affecting that development. It concerns first the steps leading to the production of numbers, namely choosing and naming an entity, defining its components, and measuring. The thesis of Part I is that governments defined S&T in a specific way, and imposed their vision via their monopoly on national surveys, and their rejection of alternative measurements. Furthermore, from the start, official statistics on S&T were driven by economic considerations. If there was ever a “policy for science” epoch, a period when science was considered a cultural good, no such priorities or values existed among statisticians, economists and policy-makers, including those in international organizations.

The first section of Part I is historical, introducing the main events and issues. It demonstrates that standardization of S&T statistics was a crucial step in institutionalizing the survey as an instrument for measuring S&T. This standardization built on the United States’ experience, but also on Canada’s and the United Kingdom’s (Chapter 1). It also owes its existence to the National Experts on Science and Technology Indicators (NESTI), which assisted the OECD Secretariat throughout the period, and developed a relative consensus concerning what to measure (Chapter 2). Section 2 examines the choices and values behind S&T measurements, explaining how the concept of R&D arose, and why it became the main measurement of S&T (Chapter 3). It explains that the industrial R&D survey was the model used to measure S&T, and analyzes the semantics of R&D to document the historical drift in our modern understanding of research caused by the way it is measured. Chapter 4 explains why scientific activities other than R&D were extensively defined in methodological manuals and surveys: not for measurement, but to better separate and eliminate them from S&T statistics. Only UNESCO took on, with difficulty, the measurement of these activities, because of their importance for developing countries. UNESCO failed, however, because the task was too ambitious (Chapter 5). Section 3 goes beyond R&D statistics, documenting the development of new S&T indicators in the 1970s and 1980s. Chapter 6 documents the pioneering role of the NSF and the OECD on output indicators, and analyzes the dialectic between the two organizations. Chapter 7 concerns output indicators, discussing the economic orientation of official statistics, culminating, as discussed in Chapter 8, in the innovation survey. These two chapters also clarify the conceptual distinction between the output and the impact (or outcome) of S&T, and between innovation as a product and as a process, to show how
conceptual confusion between the two led to the dismissal of some statistics as official indicators.

Section 4 concludes Part I, reflecting on the limitations of S&T statistics. Statistics are said to be about facts, and are usually presented as “hard” facts themselves. According to the main producers of S&T statistics themselves, however, their numbers concern trends only. As Statistics Canada suggested recently:

The GERD [Gross Domestic Expenditures on R&D], like any other social or economic statistic, can only be approximately true (. . .). Sector estimates probably vary from 5 to 15 per cent in accuracy. The GERD serves as a general indicator of S&T and not as a detailed inventory of R&D (. . .). It is an estimate and as such can show trends (. . .). (Statistics Canada (2002), Estimates of Total Expenditures on R&D in the Health Fields in Canada, 1988–2001, 88F0006XIE2002007)

This confession is unique in official statistics. Certainly, there are many real problems in measuring S&T: conceptual difficulties in identifying and separating research activities, the intangible nature of scientific output, etc. But official practices often complicate the matter further: the use of proxies instead of direct indicators, estimating numbers instead of direct measurement. As D. Bosworth et al. stated about the GERD: it is not a national budget but “a total constructed from the results of several surveys each with its own questionnaire and slightly different specifications.”33 To document and analyze the limitations of S&T statistics, Chapter 9 shows how footnotes were used to make national statistics comparable, but how, at the same time, they made a “black box” of statistics. Chapter 10 analyzes the classification system used in S&T measurement, and its inadequacies, since it is not S&T oriented, but borrowed from the System of National Accounts (SNA) and other existing classifications. It will become clear that the limitations of statistics never prevented governments from using them without qualification.

Part II analyzes how statistics were used, and the confidence with which participants used statistics to document their case—any case—or to promote their agenda. The main thesis is that political uses predominated, although the rhetoric of rationalizing public action prevailed: governments used statistics to display their performance, and scientists to lobby for increased funding. Although we sometimes read that “many indicators do not warrant a strong (normative) interpretation, that is, to suggest that high or low values of the indicator are good or bad,”34 they were generally read that way. Chapter 11 starts the demonstration, analyzing the development of the main S&T indicator (GERD) and the studies the OECD conducted with it. It shows how ranking countries by GERD became a leitmotif and how the best performer—the United States—became the OECD benchmark. Chapter 12 continues this analysis, examining an important public debate, the first in history to use S&T indicators

33 D. L. Bosworth, R. A. Wilson and A. Young (1993), Research and Development, Reviews of United Kingdom Statistical Sources Series, Vol. XXVI, London: Chapman and Hall, p. 29.
34 CEC (2000), Structural Indicators, COM(2000) 594, p. 6.
extensively, on the technological gaps between Europe and the United States. It documents how one side used numbers to support its case, while the other turned to qualitative arguments. Chapter 13 analyzes the use of statistics in constructing discourses on shortages of qualified manpower and the “brain drain.” What characterized these discourses was that numbers greatly contributed to creating the problem. Although far from accurate, statistics were used without qualification by participants because, as Robert McNamara once suggested: “All reality can be reasoned about. Not to quantify, classify, measure what can be dealt with in this way is only to be content with something less than the full range of reason.”35 Chapter 14 shows the links between political discourses on basic research and the role statistics played in them. It suggests that statistics helped crystallize the concept of basic research, but only temporarily. When governments’ interest in basic research per se faded, the statistics were progressively contested and abandoned. Finally, Chapter 15 asks to what extent governments used S&T statistics mechanically, concluding that official statistics are mainly contextual information, information presenting a very general portrait of S&T. Such statistics could rarely answer policy-makers’ fundamental questions, being used mainly for rhetorical purposes.

The research behind this book is based on archival material. The OECD kindly permitted me to consult all documents produced by the Directorate for Scientific Affairs (DSA) and its successor—the Directorate for Science, Technology and Industry (DSTI)—over the entire OECD period 1961–2000. Three distinct sub-periods should be distinguished. First, for 1961–1969, all material deposited by the OECD at the Institut universitaire européen in Florence—the official depository for European organizations’ archives—was consulted. For 1970–1989, the OECD Records Management and Archives Service sent all declassified material to Florence specifically for this research project. After 1990, I was granted temporary access to OLISnet, the OECD’s electronic information network.

The material is of two types. First, over 500 reports and notes by the OECD Secretariat—published and unpublished—were analyzed. These were produced for either NESTI or the Committee for Scientific and Technological Policy (CSTP). The archives cover 1970–1989 (although missing 1971 and some documents submitted to the CSTP between 1970 and 1975). The other two periods were less well covered. Many documents are missing for 1961–1969, particularly for 1961, 1964, and 1967–1969. The early 1990s are poorly covered because of the transition from paper to electronic format. I had to rely on personal contacts for many missing documents. The second type of document consulted was minutes and summary records of NESTI meetings. These were sporadic before the mid-1980s, but began to appear yearly afterward. Correspondence with national authorities and memoranda were completely missing from the archives, so I interviewed some of the then-main

35 R. McNamara, Extract from a Seminar given in Jackson (Mississippi), in J.-J. Servan-Schreiber (1968), The American Challenge, translated from the French by R. Steel, New York: Athenaeum House, p. 80.
participants, and created an electronic network allowing them to exchange views on the book in progress, and on the working papers that form its basis. I sincerely thank the 21 people who assisted with this project: K. Arnow, J. Bond, H. Brooks, J. Dryden, C. Falk, C. Freeman, D. Gass, P. Hemily, A. King, B. Martin, G. McColm, G. Muzart, K. Pavitt, I. Perry, A. Seymour, G. Sirilli, H. Stead, G. Westholm, A. Wycoff, and A. Young. I owe a particular debt to Jean-Jacques Salomon for reading the chapters as they were produced, and for providing the inside story on OECD science policy. Finally, three institutions greatly facilitated my work. First, the UNESCO Institute for Statistics (UIS) and the Division of Information Services were able to supply all UNESCO material needed for the research. I sincerely thank J. Boel and the staff of the UIS for their help. Second, the NSF in Washington agreed to open its files, and helped recover documents from the US National Archives. I sincerely thank S. Bianci, L. Jensen, and R. Lehming. Finally, J.-M. Paleyret and M. Carr from the Institut universitaire européen in Florence deserve special thanks for their support and hospitality.
Part I
Constructing science and technology statistics

From the beginning, statisticians measuring S&T faced choices. A statistic is never neutral. It involves at least three decisions: (1) what to measure; (2) how to categorize the reality and its entities; and (3) which instrument to use. In recent history, some of these choices have simply reflected current conceptions of science, or past practices, while others have brought forth totally new perspectives.
Naming and defining

One of the most difficult choices confronting statisticians was deciding and defining what S&T is. How to define science is an old question dating back to the last century. Science in the Anglo-Saxon (including French) sense differs from the German sense of Wissenschaft (a much broader notion of knowledge, one that includes the humanities). If the social sciences now seem to be acceptable as sciences,1 they are still irregularly included in R&D surveys, and in several countries, most of the humanities are neither accepted as science nor measured.2 Early on, official statisticians opted for a specific definition of science, concentrating on measuring research.3 But again, what defined research, or R&D as it is commonly called today, demanded choices. These were threefold. First, statisticians decided to concentrate on research activities, neglecting related scientific activities (RSA), among them innovation. We had to wait for UNESCO in the late 1970s for a positive assessment and the beginning of RSA measurement, and for the 1990s to see systematic surveys of innovation activities appear. This late arrival was due to values and hierarchies in types of activities. As Nelson

1 For the history of the funding of the social disciplines among the sciences, see for example: O. N. Larsen (1992), Milestones and Millstones: Social Science at the NSF, 1945–1991, New Brunswick: Transaction Publishers; G. M. Lyons (1969), The Uneasy Partnership: Social Science and the Federal Government in the 20th Century, New York: Sage.
2 For a time, a similar debate occurred concerning engineering. Engineering had a status second to that of science, and for this reason did not get the same level of funding. See: D. O. Belanger (1998), Enabling American Innovation: Engineering and the NSF, West Lafayette: Purdue University Press.
3 Certainly, as we will see later, measurements of the stock of scientists and engineers were conducted in the 1950s and 1960s, but soon abandoned to statisticians concerned with education, only to come back in the 1990s.
Goodman once remarked: “we dismiss as illusory or negligible what cannot be fitted into the architecture of the world we are building.”4 Second, statisticians opted to measure R&D using expenditures and human resource numbers. These two sets of statistics are now called input measures of S&T. Although new statistics would appear in the 1980s, input measures remain the most widely used by governments. Third, statisticians concentrated on formal R&D, namely institutional research. This was for methodological reasons, as discussed below, but also due to a drift in the definition of scientific research toward “systematic” research. While scientific research is usually understood as systematic, being methodological, statisticians nevertheless equated systematic research with institutions having specific laboratories dedicated to R&D. Small and medium-sized enterprises (SMEs) were therefore poorly surveyed, and the social sciences and humanities excluded until 1975, qualified as “not organized” research.

Defining innovation was another difficult exercise for statisticians, yet the choices made were based on the same philosophy used to define R&D. In early attempts to measure innovation, academic researchers referred to products and processes (outputs) resulting from R&D and related activities. To officials, however, R&D meant the activities themselves, and over time, national and international surveys focused on innovation as an activity. This was only one of many choices made. Another dealt with whether an innovation is new on the domestic or world scale, or from the point of view of the company. For methodological reasons, the latter option prevailed officially. A further choice was whether to consider only the production or also the use of technologies. A company is usually described as innovative for inventing new products or processes, yet some argue it could be so qualified for its use of new technologies (i.e. to improve its operations). Overall, innovation remains a fuzzy concept.

Statistical definitions thus “created” objects by way of representations, but contrary to what is argued by A. Desrosières, the statistics did not necessarily give these objects permanent solidity,5 because imposing a world vision is a constant struggle6 and, above all, a matter of credibility. I prefer T. M. Porter’s suggestion that “no matter how rigorous its methods, a discipline cannot make convincing objectivity claims when it has strong rivals.”7 For example, in the 1980s, statisticians began developing new statistics and indicators, this time concerned with output. These numbers covered patents, technological balance of payments, and high-technology trade. Unlike statistics from R&D surveys, however, the numbers were contested by official statisticians. What characterized these numbers, and explained the criticisms, was that they derived from non-survey instruments, administrative databases over which official statisticians had no control.

4 N. Goodman (1978), Ways of Worldmaking, Indianapolis: Hackett, p. 15.
5 A. Desrosières (1990), How to Make Things Which Hold Together: Social Science, Statistics and the State, in P. Wagner, B. Wittrock and R. Whitley (eds), Discourses on Society, Kluwer Academic Publishing, pp. 195–218.
6 P. Bourdieu (1985), The Social Space and the Genesis of Groups, Theory and Society, 14, pp. 723–744.
7 T. M. Porter (1995), Trust in Numbers: The Pursuit of Objectivity in Science and Public Life, op. cit., p. 215.
Classifying

A second choice statisticians had to make concerned constructing statistics that allow discussion of current issues and problems in S&T. This necessitated classifying R&D into categories representing relevant user dimensions. Nowhere else than in R&D statistics can the principles of classification defined by Goodman be seen at work:8 categories separate the world into different entities that are subsequently aggregated to give indicators. These reduce reality to its dominant dimensions, frequently neglecting its heterogeneity. The OECD Frascati manual, a wonderful work of classification, provided the main choices made by national statisticians over the years. Three characteristics are noteworthy. First, the manual abounds in taxonomies. Some are specifically dedicated to (the measurement of) science. An example is the types of research activities: basic, applied and development. Others are adapted from existing taxonomies generally used in other contexts, such as the System of National Accounts (SNA) economic sectors for which R&D is tabulated: government, industry and non-profit (the manual adds the university sector). Other taxonomies are borrowed virtually unmodified, such as the industrial classification of firms. Second, the manual’s classification work also defines concepts as opposites, one side possibly negative: R&D is not RSA, research not development, basic research not applied research. Dichotomies reign, and no hybrids are accepted because “danger lies in transitional states, simply because transition is neither one state nor the next, it is undefinable.”9 For example, no category (such as “oriented research”) is measured between basic and applied research in most national statistics, despite many efforts. Finally, several definitions are constructed by exclusion. For example, until the third edition of the Frascati manual, social sciences were excluded from the definition of R&D. Even when included, “some deviations from the standards may still have to be accepted,” stated the manual.10 Another example is related scientific activities (RSA). In 1963, R&D was defined negatively, by excluding RSA, which was considered routine activity. While later editions would better define scientific and technological activities (STA), the exclusion of RSA from S&T measurement persists today.
8 The five principles of Goodman are: Consolidation and decomposition, weighting, ordering, deletion and supplementation, deformation. N. Goodman (1978), op. cit., pp. 7–17. See also: M. Douglas and D. Hull (1992), How Classification Works: Nelson Goodman Among the Social Sciences, Edinburgh: Edinburgh University Press. 9 M. Douglas (1966), Purity and Danger, London: Routledge, 2001, p. 97. See also: H. Ritvo (1997), The Platypus and the Mermaid and Other Figments of the Classifying Imagination, Cambridge, MA: Harvard University Press. 10 OECD (1981), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, Paris, p. 17.
Measuring

Once concepts are defined and categories constructed, usually concurrently, the measurement instrument has to be developed. When S&T measurement began, statisticians used existing data sources not specifically designed for the task. For example, industrial laboratory repertories were the first sources—and remain the basis for conducting surveys. Similarly, proxies often supplanted direct measurement: for decades, innovation was measured by R&D and patents, for example. From the 1930s and 1940s onward, dedicated surveys considerably improved these measurements. They conducted systematic measurements with specific definitions of S&T that respondents could rely on. From then on, statistics became (more) comparable between institutional sectors of the SNA. A further improvement in the data was the survey’s international standardization. Before the OECD manuals, no valid comparisons were possible among national data—and few were conducted. The OECD introduced two kinds of methodological manual to standardize the collection of S&T statistics. First, the Frascati (R&D) and Oslo (innovation) manuals presented new definitions and new methodological norms. Second, manuals were developed defining standards for using existing statistics on education, patents and technological payments. The road toward standardization, however, did not mean the data became totally “accurate” or comparable. If it seemed so, it was because methodological limitations and countries’ individualities had been “black-boxed.” The tools developed for this were the methodological appendix, which isolated limitations of tabular statistical data, and, above all, the footnote.

How were these choices made? Part I of this book postulates that the OECD was the real catalyst for the choices made by national statisticians. The OECD Secretariat, however, did not dictate these choices. Rather, following the organization’s rules, it institutionalized choices made together by member countries. One specific mechanism worked throughout the period 1961–2000 to that end: the National Experts on Science and Technology Indicators (NESTI). This group of experts united official producers and users of statistics, who defined standards and submitted them to the Committee for Scientific and Technological Policy (CSTP) for approval. However, some countries were more influential than others. The United States was one. American standards and indicators became the OECD’s and, thereafter, other countries’. In fact, before the OECD, it was the United States that had developed the most precise methodological thinking concerning S&T statistics. This was because of one mandate specifically given to the NSF by the government as early as 1951: assess the state of S&T in the country. The United Kingdom was another very influential country at the OEEC, which dealt primarily with issues concerning the stock of scientists and engineers. In fact, the first measurement of S&T in international organizations began with education, not R&D.

What united OECD countries were their economic ideologies. Consensus rapidly emerged about orientation: the international organization developed economically oriented statistics. From the start, the OECD Directorate for Scientific Affairs (DSA) gave its policy and statistical programs the objective of finding ways
for research to contribute to the targeted 50 per cent GNP increase over the next 10 years. In the 1960s, however, not all countries were convinced. Germany, for example, “expressed doubts as to the value of attempting to assess the economic repercussions of scientific research.”11 It also reminded other countries that: “the convention of the OECD puts special emphasis on the importance of S&T to economic development [but] this does not imply any primacy of economy but only stresses the importance.”12 Similar doubts were expressed by the United Kingdom about work on technological gaps, and by Belgium about work on innovation, both questioning the appropriateness of such preoccupations in a directorate concerned with science.13 But over time, every country joined the club. In measuring S&T activities, monetary expenditures became the norm at the OECD—although statistics on human resources were often said to be more accurate. And when trying to assess the output and impacts (or outcomes) of S&T, statisticians chose to measure the economic output and impacts. By the end of Part I, it should become clear that choices made by statisticians on labeling, defining, categorizing, and measuring S&T intimately meshed with questions of methodology, politics, and ideology.
11 OECD (1962), Minutes of the 4th Session, Committee for Scientific Research SR/M (62) 2, p. 17. 12 OECD (1963), The OECD Ministerial Meeting on Science (Item III): Science and Economic Growth, DAS/PD/63.61, Annex III, p. 17. 13 Letter from the UK delegate E. A. Cohen, OECD (1968), OECD’s Work on Science and Technology, C (68) 81, p. 2; Letter to the Secretary General, OECD (1968), Summary of Discussions of the Steering Group of the CSP at its Meeting on 27th and 28th May 1968, C (68) 91, Annex, p. 5.
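To make the arithmetic behind this headline measure explicit, the following is a minimal formalization of the GERD aggregate and of the GERD/GNP ratio referred to throughout this book. It is an illustrative sketch based only on the definitions already cited (the four institutional sectors named in the Classifying section and the survey-based construction of GERD described in the Introduction); it is not a formula reproduced from the Frascati manual or any other OECD document.

% Illustrative only: GERD as the sum of the R&D expenditures reported
% by the four sector surveys, and the ratio to GNP as the standard
% indicator of national R&D effort.
\[
\mathrm{GERD} \;=\; E_{\text{industry}} + E_{\text{government}} + E_{\text{university}} + E_{\text{non-profit}},
\qquad
\text{R\&D effort} \;=\; \frac{\mathrm{GERD}}{\mathrm{GNP}} \times 100\ \text{per cent}.
\]

Written this way, the point made above is easy to see: the aggregate is only as reliable as the sector surveys feeding each term, and the ratio invites exactly the kind of cross-country ranking discussed in Part II.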
Section I

The number makers
1
Eighty years of science and technology statistics
We owe much of the development of S&T measurement in Western countries to the United States. It was there that the first experiments emerged in the 1930s. Two factors explained this phenomenon: the need to manage industrial laboratories, and the need to plan government scientific activities, particularly if they might be needed for war (mobilizing scientists). Canada followed a decade later, with the same objectives, and Great Britain in the following decade. It seems that before the early 1960s, collecting S&T statistics was mainly an Anglo-Saxon phenomenon,1 one that spread elsewhere principally through OECD involvement in standardizing definitions and methods. This chapter concerns these early experiments (1920–1960) and the OECD’s subsequent involvement in S&T statistics (1961–2000). The first part presents early experiments in the measurement of S&T—before 1961—in the United States, Canada and Great Britain (for a brief overview of the surveys conducted during this period, see Appendix 2). The second part discusses an international organization that played a key role: the OECD. The reason the OECD constructed statistics on S&T has to do with science policy. From the beginning, the OECD aligned its thinking on science policy toward economics. To better understand relationships between science and the economy, it developed a study program on the economy of research in which the main tool was statistics. The chapter ends with a brief discussion of the recent entry of the European Commission into the field, and of the disappearance of UNESCO from the field.
The forerunners

The development of S&T statistics in Western countries began in the United States, and breaks down into three periods. The first, before World War II, focused on
1 At least among non-communist countries. France only entered the field in 1961. Sporadic measurement existed before the late 1960s in the Netherlands and Japan, for example, but rarely in a systematic way. Only Germany had (partial) annual industrial R&D statistics back to 1948, but other sectors were surveyed only after the 1960s. C. Freeman listed similar forerunners in the 1960s, but forgot Canada among pioneers: C. Freeman (1966), Research, Technical Change and Manpower, in B. G. Roberts and J. H. Harold (eds), Manpower Policy and Employment Trends, London: Bell, p. 53.
measuring industrial R&D. These measurements were a spin-off of the National Research Council (NRC) campaign promoting research in industry. The second, from World War II to 1953, saw several US government departments get involved in data collection for their own purposes. In addition to industrial R&D, government R&D was measured more frequently. The third period began when the National Science Foundation (NSF) entered the field. From then on, regular surveys of all economic sectors (industry, government, university, non-profit) were conducted.

The NRC’s research information service

During World War I, the US National Academy of Sciences (NAS) convinced the federal government to give scientists a voice in the war effort. Thus the NRC was created in 1916 as an advisory body to the government.2 A research information committee, which became the Research Information Service (RIS), was rapidly implemented. RIS handled the inter-allied exchange of scientific information.3 RIS began “as a vehicle for the exchange of scientific information through diplomatic channels with the counterparts of the Research Council abroad.”4 RIS looked, via scientific attachés in London, Paris, and Rome, at research progress in various countries, disseminating the information in the United States. After the war, however, these activities terminated, and RIS reoriented its work, becoming

a national center of information concerning American research work and research workers, engaged in preparing a series of comprehensive card catalogs of research laboratories in this country, of current investigations, research personnel, sources of research information, scientific and technical societies, and of data in the foreign reports it received. (R. C. Cochrane (1978), The National Academy of Sciences: The First Hundred Years 1863–1963, op. cit., p. 250)

As part of these activities, RIS constructed the first R&D directories. Social researchers have shown how social statistics began with calculations performed on population registers. Similarly, measurement of S&T also began with directories (see Appendix 3). Beginning in 1920, RIS regularly compiled five types of directories, raw data from which were published extensively in the NRC Bulletin, sometimes with statistical distributions. One such concerned industrial R&D laboratories.5 The first edition listed approximately 300 laboratories, with information

2 A. H. Dupree (1957), Science in the Federal Government: A History of Policies and Activities to 1940, New York: Harper and Row.
3 R. C. Cochrane (1978), The National Academy of Sciences: The First Hundred Years 1863–1963, Washington: National Academy of Sciences, pp. 240–241; R. MacLeod (1999), Secrets among Friends: The Research Information Service and the Special Relationship in Allied Scientific Information and Intelligence, 1916–1918, Minerva, 37 (3), pp. 201–233.
4 R. C. Cochrane (1978), The National Academy of Sciences: The First Hundred Years 1863–1963, op. cit., p. 250.
5 National Research Council (1920), Research Laboratories in Industrial Establishments of the United States of America, Bulletin of the NRC, Vol. 1, Part 2, March.
on fields of work and research personnel. A second catalogued PhD holders.6 A third examined available sources of research funds,7 a fourth fellowships and scholarships,8 and a fifth societies, associations, and institutions (universities), covering the United States and Canada.9

NRC directories were used to conduct the first R&D surveys, particularly industrial R&D surveys. The NRC itself conducted two such surveys, one in 1933 by the Division of Engineering and Industrial Research, which assessed the effect of the Great Depression on industrial laboratories,10 and another in 1941 for the National Resources Planning Board (NRPB).11 The latter, in cooperation with the National Association of Manufacturers, reported that “the median expenditure of the companies for industrial research was (. . .) two percent of gross sales income.”12 This was the only R&D expenditure number provided, however, because the questionnaire concentrated on personnel data (man-years), which were easier to obtain.

The Government’s advisers

Besides the NRC itself, government departments and institutions also used NRC industrial directories to survey research; among them the Works Progress Administration (WPA), which examined the employment impact of new industrial technologies,13 the President’s Scientific Research Board,14 the Office of Education,15 and the NSF.16 However, the US government quickly began conducting its own surveys, all but one appearing after World War II. These measurements were also preceded by directories. At the NRC’s suggestion, a roster of scientific and specialized personnel had been created during the war. It aimed to facilitate recruitment of specialists for war research. The NRPB established the national roster plan in 1940.

6 National Research Council (1920), Doctorates Conferred in the Sciences in 1920 by American Universities, Reprint and Circular Series, November.
7 National Research Council (1921), Funds Available in 1920 in the United States of America for the Encouragement of Scientific Research, Bulletin of the NRC, Vol. 2, Part I, No. 9.
8 National Research Council (1923), Fellowships and scholarships for Advanced Work in Science and Technology, Bulletin of the NRC, November.
9 National Research Council (1927), Handbook of Scientific and Technical Societies and Institutions of the United States and Canada, Bulletin of the NRC, No. 58, May.
10 M. Holland and W. Spraragen (1933), Research in Hard Times, Division of Engineering and Industrial Research, National Research Council, Washington.
11 National Resources Planning Board (1941), Research: A National Resource (II): Industrial Research, Washington: USGPO.
12 Ibid., p. 124.
13 G. Perazich and P. M. Field (1940), Industrial Research and Changing Technology, Work Projects Administration, National Research Project, Report No. M-4, Pennsylvania: Philadelphia.
14 President’s Scientific Research Board (1947), Science and Public Policy, President’s Scientific Research Board, Washington: USGPO.
15 National Research Council (1951), Research and Development Personnel in Industrial Laboratories—1950, Report to the National Scientific Register, Office of Education.
16 NSF (1957), Growth of Scientific Research in Industry, 1945–1960, Report prepared by Galaxy Inc., Washington.
The task was an enormous one—to compile a list of all Americans with special technical competence, to record what those qualifications were, and to keep a current address and occupation for each person. (. . .) Questionnaires were sent out, using the membership lists of professional societies and subscription lists of technical journals, and the data were coded and placed on punched cards for quick reference. (C. Pursell (1979), Science Agencies in World War II: The OSRD and its Challenges, in N. Reingold (ed.), The Sciences in the American Context, Washington: Smithsonian, pp. 367–368)
By 1944, the roster had detailed data on 690,000 individuals.17 Considered of little use by many,18 the roster was transferred to the NSF in 1953 and abandoned in 1971.19 By then, surveys had begun systematically replacing repertories. The federal government R&D survey effort began in 1938, when the National Resources Committee, the National Resources Board’s successor, published the first systematic analysis of government research intended to document planning and coordination of government scientific activities.20 The report, based on a survey of government R&D, including universities, concluded that research, particularly academic research, could help the nation emerge from the Depression. For the first time, a research survey included the social sciences, which would later become the norm for government R&D surveys in several OECD countries.21 Not until 1945 were there new research measurements in the United States. Two of these deserve mention. First, V. Bush offered some data in Science: The Endless Frontier, the blueprint for United States science policy.22 But these were based on previously published numbers, like the NRC’s, or numbers of dubious quality, like his basic research estimates.23 Slightly better were numbers from a second experiment, the President’s Scientific Research Board (PSRB) report. In 1946, President Truman named economist J. R. Steelman, then Director of the Office of War Mobilization and Reconstruction, chairman of the PSRB, asking him to report on what to do for science in the country. Ten months later, the PSRB’s Science and Public Policy became the basis for S&T policy in the 17 R. C. Cochrane (1978), The National Academy of Sciences: The First Hundred Years 1863–1963, op. cit., p. 406. 18 “Those charged with recruiting chemists and physicists for OSRD and its contractors knew the outstanding men in each field already and through them got in touch with many young men of brilliant promise.” J. P. Baxter (1946), Scientists Against Time, Boston: Little, Brown and Co., p. 127. 19 National Science Board (1971), Minutes of the 142nd meeting, 14–15 November. 20 National Resources Committee (1938), Research: A National Resource (I): Relation of the Federal Government to Research, Washington: USGPO. 21 Two years later, the National Resources Committee—now the National Resources Planning Board—published a Social Science Research Council (SSRC) study examining social research in industry—but without statistics. See: National Resources Planning Board (1941), Research: A National Resources (III): Business Research, Washington: USGPO. 22 V. Bush (1945), Science: The Endless Frontier, North Stratford, Ayer Company Publishers (1995), pp. 85–89. 23 See: Chapter 14.
United States.24 The PSRB tried to measure R&D in every economic sector: industry, government, and universities. To estimate the importance of research in the economy at large, it collected statistics wherever it could—and whatever their quality—adding very few numbers of its own, as Bush had done.25 With no time for an original survey, since the report had to be delivered ten months after the President’s order, the PSRB nevertheless innovated on several fronts: definition of research categories, GERD/GNP as an indicator of R&D effort,26 and original manpower estimates for discussing shortages. It also suggested numerical science policy targets for the next ten years. We return to these issues in Part II.

Other compilations were better, but limited to government R&D. Senator H. M. Kilgore estimated the government’s wartime research effort for a Congressional Committee,27 and the Office of Scientific Research and Development (OSRD) measured its own activities for 1940–1946.28 Finally, the Bureau of Budget (BoB) started compiling a government “research and development budget” in 1950.29

The DoD’s R&D Board

The US Department of Defense (DoD), and its predecessors, engaged in several data collections over the twentieth century: the Naval Consulting Board (NCB) and Council of National Defense (CND), for example, developed inventories and indexes of industrial researchers and scientific personnel in the mid-1910s.30 But after World War II, the DoD started conducting more regular measurements, contracting out inventories and surveys of scientists to various groups.31 It also estimated national resources invested in science using concepts (R&D sources and performers) that would soon influence the NSF.32 Its major contribution, however, was to the development of the
24 President’s Scientific Research Board (1947), Science and Public Policy, op. cit. 25 Most of the new numbers concern university research. See also: V. Bush (1945), Science: The Endless Frontier, op. cit., pp. 122–134. 26 Gross Expenditures on R&D as a percentage of GNP. 27 H. M. Kilgore (1945), The Government’s Wartime Research and Development, 1940–44: Survey of Government Agencies, Subcommittee on War Mobilization, Committee on Military Affairs, Washington. 28 OSRD (1947), Cost Analysis of R&D Work and Related Fiscal Information, Budget and Finance Office, Washington. 29 Bureau of Budget (1950), R&D Estimated Obligations and Expenditures, 1951 Budget ( January 9, 1950), Washington. Data for 1940–1949 also appear in The Annual Report of the Secretary on the State of the Finances for the Fiscal Year ended June 30, 1951, Washington, p. 687. 30 D. F. Noble (1977), America by Design: Science, Technology and the Rise of Corporate Capitalism, Oxford: Oxford University Press, pp. 149–150. 31 Bureau of Labor Statistics (1951), Employment, Education, and Earnings of American Men of Science, Bulletin No. 1027, Washington: USGPO; Engineering College Research Council (1951), University Research Potential: A Survey of the Resources for Scientific and Engineering Research in American Colleges and Universities, American Society for Engineering Education. 32 Department of Defense (1953), The Growth of Scientific R&D, Office of the Secretary of Defense (R&D), RDB 114/34, Washington.
modern R&D survey. In the early 1950s, its Research and Development Board (RDB) asked Harvard Business School and the Bureau of Labor Statistics to survey industrial R&D. The institutions coordinated their efforts, conducting three surveys published in 1953.33 The surveys collected R&D numbers and information on effects of military affairs on personnel, factors affecting R&D, and rate of return on investments. There were two rationales behind these surveys. First, industries were entering a phase of R&D expense growth and needed to know how to manage the new research laboratories. Corporate associates of Harvard University, and the Industrial Research Institute (IRI), were actually partners in the Harvard Business School survey. C. E. K. Mees and J. A. Leermakers, in an influential book, summarized the problem: In the early days of industrial research, these laboratories were organized into departments on the model of the factory organization. (. . .) The departments themselves were analogous to the departments of a university (. . .). As the organization of industrial research has developed, however, the academic department system has been largely displaced by an organization based on function. The departments include men trained in different branches of science but applying their knowledge so that their work converges upon one field of work. (C. E. K. Mees and J. A. Leermakers (1950), The Organization of Industrial Scientific Research, New York: McGraw-Hill, p. 35) The authors then distinguished problems of industrial research organization from academic ones (team versus individual work, applied versus fundamental research) and examined factors that should determine organizational form, such as objectives, time, and costs. For R. N. Anthony, author of the study, statistics were a tool for managing by comparison: “There seems to be no way for measuring quantitatively the performance of a research laboratory,” he wrote. But “a comparison of figures for one laboratory with figures for some other laboratory (. . .) may lead the laboratory administrator to ask questions about his own laboratory.”34 While Anthony discussed the limitations of such comparisons and made several caveats, he nevertheless suggested a series of ratios and data breakdowns as yardsticks for laboratory performance assessment. The second rationale behind industrial surveys was to locate available resources in case of war, to “assist the military departments in locating possible contractors
33 D. C. Dearborn, R. W. Kneznek, and R. N. Anthony (1953), Spending for Industrial Research, 1951–1952, Division of Research, Graduate School of Business Administration, Harvard University; US Department of Labor, Bureau of Labor Statistics, Department of Defense (1953), Scientific R&D in American Industry: A Study of Manpower and Costs, Bulletin No. 1148, Washington. 34 R. N. Anthony and J. S. Day (1952), Management Controls in Industrial Research Organizations, Boston: Harvard University Press, p. 288.
Eighty years of S&T statistics 27 35
for R&D projects.”35 After the 1944 roster, this objective was reiterated every year. But the statistics never served the purpose for which they were designed.36 The real challenge was knowing what was coming out of government R&D investments. Indeed, the Bureau of Labor Statistics survey showed that 50 percent of industrial R&D was financed by the federal government.37 Assessing the return on these investments would be assigned to the NSF.

The NSF’s office of special studies

The peculiarity of S&T statistics in the United States since 1950 is that these measurements were not produced by a statistical agency. It was the NSF, responsible for funding university research, which conducted or sponsored most surveys within its Office of Special Research (OSR).38 When the NSF appeared in the early 1950s, R&D statistics had been available in the United States for three decades.39 But it became increasingly difficult to compare data from different sources or to develop historical series.40 Research definitions differed, as did data-collection methodologies.41 The NSF standardized R&D surveys by monopolizing official R&D measurement and imposing its own criteria. The Harvard Business School and the Bureau of Labor Statistics surveys were influential here. They developed concepts and definitions that the NSF reproduced—like research, basic research, and non-research activities—plus methodologies.

The NSF began measuring S&T in 1953 (see Table 1.1 and Appendix 4).42 At first, it used existing expertise from the Bureau of Labor Statistics, Bureau of Budget or Office of Education. Eventually, however, the NSF chose partners, like the Bureau of Census instead of Labor Statistics,43 or developed its own expertise. By 1956, it had surveyed all economic sectors at least once: government, (federal funds to) universities and non-profit institutions, and industry. Surveys of doctorate recipients were added in the late 1950s. The decade ended with two NSF exercises. First, the NSF invited the United States, Canada, and United Kingdom to attend a session on R&D statistics at

35 Bureau of Labor Statistics (1953), Scientific R&D in American Industry: A Study of Manpower and Costs, op. cit., pp. 1, 51–52.
36 See: J. P. Baxter (1946), Scientists Against Time, op. cit., pp. 126–135.
37 Bureau of Labor Statistics (1953), Scientific R&D in American Industry: A Study of Manpower and Costs, op. cit., p. 3.
38 Renamed the Division of Science Resources Studies in the early 1960s.
39 K. Sanow (1963), Survey of Industrial R&D in the United States: Its History, Character, Problems, and Analytical Uses of Data, OECD, DAS/PD/63.38, p. 2.
40 US Department of Commerce and Bureau of Census (1957), Research and Development: 1940 to 1957, in Historical Statistics of the United States, pp. 609–614.
41 See: Chapter 9.
42 National Science Foundation (1953), Federal Funds for Science: Federal Funds for Scientific R&D at Nonprofit Institutions 1950–1951 and 1951–1952, Washington.
43 This change was motivated, according to K. Sanow, by the need to relate R&D to other economic statistics. K. Sanow (1959), Development of Statistics Relating to R&D Activities in Private Industry, in NSF (1959), The Methodology of Statistics on R&D (NSF 59-36), Washington, p. 22.
Table 1.1 NSF surveys and censuses
(Survey; frequency; first year for which data are available)

Survey of federal support to universities, colleges, and non-profit institutions; Annual; 1950
Survey of industrial research and development; Annual; 1953
Survey of federal funds for research and development; Annual; 1955
Survey of earned doctorates; Annual; 1957
National survey of college graduates; Biennial; 1962
Survey of graduate students and post-doctorates in science and engineering; Annual; 1966
Integrated post-secondary education data system completions survey; Annual; 1966
Immigrant scientists and engineers; Annual; 1968
Survey of scientific and engineering expenditures at universities and colleges (R&D expenditures); Annual; 1972
Survey of doctorate recipients; Biennial; 1973
National survey of recent college graduates; Biennial; 1976
Occupational employment statistics survey; Triennial; 1977
Survey of public attitudes; Biennial; 1979
National survey of academic research instruments and instrumentation needs (a); Triennial; 1983
Survey of academic research facilities; Biennial; 1988

(a) This survey and the following one were abandoned in the early 1990s.
the 1958 Meeting of the American Statistical Association.44 The topic was measurement limitations and difficulties, and future methodological work for improving statistics. Second, the NSF produced its first policy-oriented document: Basic Research: A National Resource.45 The report used R&D statistics for policy purposes for the first time, and pointedly disclosed NSF philosophy for the decades to come. The document made a plea for basic research in the name of balance (and explicit contrast) between applied and basic science. The NSF said the numbers showed that “basic research [was] underemphasized in the United States” (p. 47) and should therefore be better funded by the federal government and industry. This was part one of the philosophy. The second was that “the returns [on basic research] are so large that it is hardly necessary to justify or evaluate the investment” (p. 61). Thereafter, numbers for the NSF would become a rhetorical tool for lobbying for university funding, but not for evaluating research. Why was measurement of S&T in the United States located within the NSF? This was not planned, as S&T measurement would likely otherwise have gone to the Bureau of Labor Statistics or of Census. Indeed, these two organizations 44 National Science Foundation (1959), Methodological Aspects of Statistics on R&D: Costs and Manpower, Papers presented at a session of the American Statistical Association Meetings, December 1958, NSF 59-36, Washington. 45 National Science Foundation (1957), Basic Research: A National Resource, Washington.
conducted, and still conduct, regular surveys for the NSF. The localization of S&T measurement was in fact the result of a Bureau of Budget (BoB) compromise. The BoB had always been skeptical of federal government S&T funding, particularly basic research funding.46 President Truman’s adviser Harold Smith, director of the BoB, once argued that the title of Bush’s Science: The Endless Frontier should be Science: The Endless Expenditure.47 In order to accept the autonomy requested by the NSF, the BoB required regular evaluations of money spent. According to the BoB’s W. H. Shapley, it was mainly interested in identifying overlap among agencies and programs.48 In 1950, therefore, the law creating the NSF charged the organization with funding basic research, but also gave it a role in science measurement. The NSF was directed

to evaluate scientific research programs undertaken by the Federal Government (. . .) [and] to maintain a current register of scientific and technical personnel, and in other ways provide a central clearinghouse for the collection, interpretation, and analysis of data on scientific and technical resources in the United States. (Public Law 507 (1950))

In 1954, an executive order further specified that the NSF should “make comprehensive studies and recommendations regarding the Nation’s scientific research effort and its resources for scientific activities” and “study the effects upon educational institutions of Federal policies and the administration of contracts and grants for scientific R&D.”49 Despite these demands, the NSF remained autonomous and guided by its own interests. S&T measurement became partly measurement for the NSF to lobby for university funds. This explained, as we document later, the emphasis on measurement of basic research, its biased statistical view of the science system (focused mainly on university research), and its discourses on shortages of scientists and engineers. This also explained the NSF’s involvement in comparative statistics. In 1955, in collaboration with the NRC, the NSF started measuring eastern countries’ investments in science, showing that the Soviet Union produced two to three times the scientific and technical graduates of the United States.50

46 J. M. England (1982), A Patron for Pure Science: The NSF’s Formative Years, 1945–1957, Washington: NSF, p. 82; H. M. Sapolsky (1990), Science and the Navy: The History of the Office of Naval Research, Princeton: Princeton University Press, Chapter 4, pp. 43, 52; L. Owens (1994), The counterproductive management of science in the second world war; Vannevar Bush and the OSRD, Business History Review, 68: pp. 533–537; National Resources Committee (1938), op. cit., pp. 18, 74.
47 C. E. Barfield (1997), Science for the 21st Century: The Bush Report Revisited, Washington: AEI Press, p. 4.
48 W. H. Shapley (1959), Problems of Definition, Concept, and Interpretation of R&D Statistics, op. cit., p. 8.
49 Executive Order 10521 (1954).
50 N. De Witt (1955), Soviet Professional Manpower: Its Education, Training, and Supply, Washington: NSF; N. De Witt (1961), Education and Professional Employment in the USSR, NSF 61-40, Washington: NSF; L. A. Orleans (1961), Professional Education in Communist China, NSF 61-3, Washington: NSF; C Y. Cheng (1965), Scientific and Engineering Manpower in Communist China, 1949–1963, NSF 65-14, Washington: NSF.
The impact of the study was, according to A. T. Waterman, first director of the NSF, enormous:

One result of these findings was that the Congress sharply increased Foundation funds for education in the sciences. The Foundation appropriation for fiscal year 1957, $40 million, more than doubled that of the preceding year. The next large increment came in 1959 when $130 million was appropriated in the wake of intense national concern over the Russian sputnik and all that it implied. Funds available for fiscal year 1960 total more than $159 million. (A. T. Waterman (1960), Introduction, in V. Bush, Science: The Endless Frontier, North Stratford, Ayer Company Publishers (1995), p. xxv)

The Dominion Bureau of Statistics (Canada)

Canada was second in measuring S&T before the 1960s. In 1917, the Canadian NRC conducted an influential survey of research in the country in collaboration with five organizations: the Canadian Manufacturers’ Association, Canadian Society of Civil Engineers, Canadian Mining Institute, Society of the Chemical Industry, and Toronto Joint Committee of Technical Organizations.51 Four questionnaires were prepared: (1) universities, colleges, and technical institutions; (2) government departments (federal and provincial); (3) industries; and (4) scientific, professional, and technical societies. Few archives exist, but we know that the survey determined science policy in Canada for the next 50 years: 2,400 questionnaires were returned, of which 37 reported research activity and a further 83 reported technical activity, largely routine quality control.52 Briefly, the survey showed there was very little research in the country, either in industry or universities. The obvious lesson, thought the NRC, was to fund universities to produce graduates who would then be hired in industry. As Bruce Doern has shown, the NRC thus developed programs to fund university research, not industrial research as in its original mandate.53

The first real survey of R&D in Canada was by the NRC in 1939, in a Dominion Bureau of Statistics publication.54 As in the United States, it concerned industrial R&D. The aim was “to mobilize the resources of the Dominion for the prosecution of the war,” to build a directory of potential contractors.55 The survey asked for data on personnel and expenditures for research and testing. The report included a directory of laboratories classified by province, sector, and research field.

51 Advisory Council for Scientific and Industrial Research, Annual Report, 1918, pp. 20–28; M. Thistle (1966), The Inner Ring: The Early History of the National Research Council of Canada, Toronto: University of Toronto Press, pp. 159–160.
52 J. P. Hull and P. C. Enros (1988), Demythologizing Canadian Science and Technology: The History of Industrial R&D, in P. K. Kresl (ed.), Topics on Canadian Business, Vol. X (3), Association for Canadian Studies, pp. 1–21.
53 B. Doern (1982), Science and Politics in Canada, Toronto: Queens University Press.
54 Dominion Bureau of Statistics (1941), Survey of Scientific and Industrial Laboratories in Canada, Ottawa.
55 Ibid. p. 1.
This was followed in 1947 by a Department of Reconstruction and Supply survey on government R&D.56 Three compilations were published: two for federal government activities (1938–1946 and 1946–1947) and one for provincial activities (1946–1947). Unlike the US National Resources Committee, its survey did not include the social sciences, but did include “data collection and dissemination of information.” This allowed it to develop the concept of “scientific activities,” which would be appropriated by the NSF in the 1960s and UNESCO in the 1970s.57

Regular and periodic surveys by the Dominion Bureau of Statistics on industrial R&D resumed in 1956,58 and systematic government R&D surveys followed in 1960.59 The industrial survey was conducted because of industry complaints (the Financial Post was the main vehicle for criticisms) that the NRC spent too much in-house.60 In all the bureau’s efforts, the NRC was a valuable collaborator on, and sometimes the instigator of, the R&D surveys. From 1957, the NRC specifically dedicated G. T. McColm to advise the bureau, among other things, on correcting defects from previous surveys. Only after the OECD Frascati manual appeared in 1963 did the bureau continue on its own, without the NRC.

The ACSP (Great Britain)

As in the United States, British S&T measurement began with directories, and the American roster was inspired by a British Royal Society experiment.61 By 1939, the British register had collected 80,000 scientists’ names, and would soon be taken over by the Ministry of Labour. A few years previously, the Association of Scientific Workers (ASW) had created a directory of over 120 industrial laboratories, based on the NRC model,62 including details on disciplines, research character, personnel, floor space, publications, and patents.

The British government, for its part, was from the start involved in estimating R&D. The Department of Scientific and Industrial Research’s (DSIR’s) annual reports from 1930 onward measured its own research contribution. Starting in 1953, the Advisory Council on Scientific Policy (ACSP) published annual data on government civil R&D funding, and starting in 1956 it undertook triennial national R&D expenditure surveys.63 The “national” statistics aggregated numbers from
56 Department of Reconstruction and Supply (1947), Research and Scientific Activity: Canadian Federal Expenditures 1938–1946, Government of Canada: Ottawa. 57 See: Chapters 4 and 5. 58 Dominion Bureau of Statistics (1956), Industrial Research–Development Expenditures in Canada, 1955, Ottawa. 59 Dominion Bureau of Statistics (1960), Federal Government Expenditures on Scientific Activities, Fiscal Year 1958–1959, Ottawa. 60 G. T. McColm, Personal conversation, October 13, 2000. 61 C. Pursell, Science Agencies in World War II: The OSRD and its Challenges, op. cit., p. 367. 62 Association of Scientific Workers (1936), Industrial Research Laboratories, London: George Allen and Unwin. 63 Published in Annual Reports of the ACSP 1956–1957 to 1963–1964, London: HMSO.
different sources. Government R&D data came from budget information of what at the time constituted the four “research councils”: the DSIR, the Medical Research Council (MRC), the Agricultural Research Council (ARC), and the Nature Conservancy. Industrial R&D expenditures came from the DSIR, which conducted the first official industrial R&D survey in 1955, modeled on the NSF’s survey.64 The ACSP, through its committee on scientific manpower, also pioneered collection of statistics on British supplies of scientists and engineers, including estimated demand.65 Such numbers appeared yearly until 1963–1964, and would be vehemently criticized,66 but the studies would considerably influence OECD work on the subject via Alexander King, the first ACSP committee secretary and later director of OECD’s Directorate of Scientific Affairs.
All these measurements were preceded by those of the Federation of British Industries (FBI), which thrice surveyed industrial R&D in the 1940s.67 In 1958, the Federation conducted a fourth survey.68 Christopher Freeman, from the National Institute of Economic and Social Research (London), had been assigned to the survey when E. Rudd, from the DSIR, sent him to the OECD to write what would become the Frascati manual.
The OECD

National surveys conducted before the 1960s collected few international statistics. Countries usually surveyed their own R&D efforts, not yet interested in benchmarking.69 We owe to the OECD the standardization of national statistics, the diffusion of methodologies to other countries, and the development of comparative national R&D statistics. Four types of statistics would soon be produced: on scientific and technical manpower; R&D; technology; and S&T indicators.

The office of scientific and technical personnel
Measurement of science began at the European Productivity Agency (EPA), created in 1953 as part of the OEEC, the predecessor to the OECD. One of the EPA’s self-appointed tasks was measurement of productivity and improvement
64 DSIR (1958), Estimates of Resources Devoted to Scientific and Engineering R&D in British Manufacturing Industry, 1955, London; DSIR (1960), Industrial R&D Expenditures, London: HMSO. 65 ACSP (1955), Report on the Recruitment of Scientists and Engineers by the Engineering Industry, Committee on Scientific Manpower, London: HMSO. 66 See: Chapter 13. 67 Federation of British Industries (1943), Research and Industry, London; Federation of British Industries (1947), Scientific and Technical Research in British Industry, London; Federation of British Industries (1952), Research and Development in British Industry, London. 68 Federation of British Industries (1961), Industrial Research in Manufacturing Industry: 1959–1960, London. 69 The only effort in the United States before the 1970s was comparisons with Communist countries (see: Chapter 13, footnote 69) and a table in NSF (1969), National Patterns of R&D Resources, 1953–1970, NSF 69-30, p. 4.
of methodologies to that end. In doing so, the EPA conducted inter-firm surveys in several industrial sectors, participated in the development of methodological manuals on productivity, operated a small measurement advisory service and, from 1955 to 1965, published the Productivity Measurement Review quarterly.
In 1957, the EPA Committee of Applied Research (CAR) began meetings to discuss methodological problems concerning R&D statistics.70 An ad hoc group of experts was set up to study existing R&D surveys. The secretary, J. C. Gerritsen (consultant to the OEEC), prepared two case studies on definitions and methods, one in 1961 (United Kingdom and France),71 and the other in 1962 (United States and Canada).72
But everything started with the measurement of qualified human resources, and their shortages, since human resources are at the heart of productivity issues. Spurred by the United States, recently shaken by Sputnik, the OEEC created the Office of Scientific and Technical Personnel (OSTP) in 1958 as part of the EPA. The OSTP, pursuing the work of its predecessor, the Scientific and Technical Personnel Committee, conducted three large surveys of S&T personnel in member countries.73 The third found a growing gap between the United States and Canada on one hand, and European countries on the other, and projected bigger discrepancies for 1970. These surveys were the first systematic international S&T measurements, and were guided by what would become a recurring complaint about the lacunae of current statistics:74

Few member nations had adequate statistics on current manpower supply; fewer still on future manpower requirements. Furthermore, there were no international standards with regard to the statistical procedures required to produce such data.
(OEEC (1960), Forecasting Manpower Needs for the Age of Science, Paris, p. 7)

In the 1960s, the OECD’s Committee of Scientific and Technical Personnel (CSTP) continued the work of the OSTP—now abolished. Parallel to surveys on shortages of qualified resources, the CSTP measured, for the first time, the migration of scientists and engineers between member countries, the United States and Canada.75 Brain drain was in fact a popular topic in the 1960s.76 It took five years before such a survey, first suggested in 1964, became reality.
70 Two meetings were held: one in June 1957 and a second one in March 1960. 71 OEEC (1961), Government Expenditures on R&D in France and the United Kingdom, EPA/AR/4209. Lost. 72 OEEC (1963), Government Expenditures on R&D in the United States of America and Canada, DAS/PD/63.23. 73 OECD (1955), Shortages and Surpluses of Highly Qualified Scientists and Engineers in Western Europe, Paris; OECD (1957), The Problem of Scientific and Technical Manpower in Western Europe, Canada and the United States, Paris; OECD (1963), Resources of Scientific and Technical Personnel in the OECD Area, Paris. 74 OEEC (1960), Forecasting Manpower Needs for the Age of Science, Paris, p. 7. 75 OECD (1969), The International Movement of Scientists and Engineers, Paris, STP (69) 3. 76 See: Chapter 13.
Besides conducting surveys, the OEEC (and OECD) also got involved in forecasting S&T, a direct relic (and dream) of operational research.77 Several forecasting exercises were conducted from 1960 to the mid-1980s, covering scientific and technical information, technological assessment, and human resources. The latter was motivated by the realization that policy makers are better guided by a comprehensive and strategic approach than by mere numbers on shortages of scientists and engineers. The OEEC consequently organized different symposia on methods of forecasting specialized manpower resource needs.78
With numbers generated within the OSTP and CSTP, the OECD developed a discourse that became its trademark: gaps between Europe and the rest of the world. The organization identified a gap between the United States and Europe in specialized personnel, as mentioned earlier, and gaps between Europe and the USSR in science and engineering graduates.79 These were the second such gaps measured—the first being the productivity gap. More would follow.
The science resources unit
With the 1961 creation of the OECD, the organization increasingly turned to policy. Science was now recognized as a factor in economic growth, at least by OECD bureaucrats. In order that science might optimally contribute to progress, however, S&T policies were invented. And to inform these, statistics were essential, or so thought the OECD, which then launched a program on the economics of research.80 From the beginning, the OECD’s S&T statistical activities were located within a policy division, the Directorate of Scientific Affairs (DSA). This was because S&T measurement at the OECD developed as a tool for S&T policies: “Informed policy decisions (. . .) must be based on accurate information about the extent and forms of investment in research, technological development, and scientific education,” argued the Piganiol report.81
77 According to the OECD, operational research would provide information to government officials in reaching decisions in specific areas. See OECD (1964), Committee for Scientific Research: Draft Programme of Work for 1967, SR (66) 4, p. 20. 78 See: Chapter 1, footnote 78. 79 OEEC (1960), Producing Scientists and Engineers: A Report on the Number of Graduate Scientists and Engineers produced in the OEEC member countries, Canada, the United States and the Soviet Union, Paris, OSTP/60/414. 80 The field was emerging mainly in the United States, at RAND and the National Bureau of Economic Research (NBER); See D. A. Hounshell, The Medium is the Message, or How Context Matters: the RAND Corporation Builds an Economics of Innovation, 1946–1962, in A. C. Hughes and T. P. Hughes (2000), op. cit., pp. 255–310; NBER (1962), The Rate and Direction of Inventive Activity: Economic and Social Factors, Princeton: Princeton University Press. 81 OECD (1963), Science and the Policies of Government, Paris, p. 24.
The OECD initiated western reflections on science policy,82 which were, from the start, aligned with economic issues. In 1962, the Committee for Scientific Research (CSR) recommended that the Secretariat “give considerable emphasis in its future program to the economic aspects of scientific research and technology.”83 This was in line with the OECD’s 50 percent economic growth target for the decade. The CSR recommendation was reiterated during the first ministerial conference in 196384 and the second in 1966.85
The CSR proposal assumed that there “is an increasing recognition of the role played by the so-called third factor [innovation] in explaining increases in GNP.”86 But, the CSR continued, “the economist is unable to integrate scientific considerations into his concepts and policies because science is based largely on a culture which is anti-economic.”87 Thus, the OECD gave itself the task of filling the gap. The CSR document identified a series of policy questions:
● Through what political and administrative mechanisms can harmonization of science and economic policies be achieved?
● By what criteria can decisions concerning allocation of resources to research be made?
● In what ways can private sector research be stimulated and monitored?
● What should be the public sector priorities?
The document recommended three actions.88 First, that the OECD produce a statement on S&T in relation to economic growth for a ministerial conference. The document was produced as a background document for the first ministerial conference held in 1963.89 It contained one of the first international comparisons of R&D efforts in several countries based on existing statistics, conducted by C. Freeman et al.90 Second, it recommended the OECD assist countries in development of
82 See: B. Godin (2002), Outlines for a History of Science Measurement, op. cit.; J.-J. Salomon (2000), L’OCDE et les politiques scientifiques, Revue pour l’histoire du CNRS, 3, pp. 40–58. 83 OECD (1962), Economics of Research and Technology, SR (62) 15, p. 1. 84 OECD (1963), Ministers Talk about Science, Paris: La Documentation française; OECD (1963), Science, Economic Growth and Government Policy, Paris. 85 See OECD (1966), The Technological Gap, SP(66) 4. 86 OECD (1962), Economics of Research and Technology, op. cit., p. 2. 87 Ibid., p. 5. 88 See also: OECD (1962), 1964 Policy Paper: Note by the Secretariat, SR (62) 39; OECD (1962), Ad Hoc Consultative Group on Science Policy: Preliminary Recommendations, C (62) 29; OECD (1963), Economics of Science and Technology, SR (63) 33; OECD (1964), Meeting of the Ad Hoc Group on the Economics of Science and Technology, SR (64) 12. 89 OECD (1963), Science, Economic Growth and Government Policy, op. cit. 90 Two other international statistical comparisons, again based on existing statistics, would soon follow: A. Kramish (1963), Research and Development in the Common Market vis-à-vis the UK, US, and USSR, report prepared by the RAND Corporation for the Action Committee for a United Europe (under the chairmanship of J. Monnet); C. Freeman et al. (1965), The Research and Development Effort in Western Europe, North America and the Soviet Union: An Experimental International Comparison of Research Expenditures and Manpower in 1962, Paris: OECD.
science policies through annual reviews. The first national review appeared in 1962 (Sweden), and pilot teams were created for less-developed countries in Europe.91 Third, the CSR suggested the OECD conduct studies on the relationships between investment in R&D and economic growth. Indeed, “comprehensive and comparable information on R&D activity are the key to [1] a clearer understanding of the links between science, technology and economic growth, [2] a more rational formulation of policy in government, industry and the universities, [3] useful comparisons, exchange of experience, and policy formation internationally.”92 The main obstacle to the last suggestion, however, was the inadequacy of available data.93 To support policies, the CSR thus recommended developing a methodological manual:

The main obstacle to a systematic study of the relationship between scientific research, innovation and economic growth is the inadequacy of available statistical data in member countries on various aspects of scientific research and development (. . .). The Secretariat is now preparing a draft manual containing recommendations defining the type of statistical data which should be collected, and suggesting methods by which it can be obtained.
(OECD (1962), Draft 1963 Programme and Budget, SR (62) 26, p. 19)

The manual was prepared by C. Freeman, adopted in 1962, and discussed by member countries at a 1963 meeting in Frascati (Italy).94 It proposed standardized definitions, concepts, and methodologies for R&D surveys.95 It also conventionalized an indicator that would be used for over twenty years in assessing R&D efforts: Gross Domestic Expenditures on R&D as a percentage of GNP (GERD/GNP). With time, the indicator acquired two functions: descriptive and prescriptive. On the one hand, the indicator allowed comparison of R&D effort between countries. On the other, it was a statistical tool used by every science department in every science policy document to help make the case for increased funding. A country not investing the “normal” or average percentage of GERD/GNP always set the highest ratios as targets, generally those of the best performing country—the United States.96
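For concreteness, the indicator is simply the ratio of a country's gross domestic expenditures on R&D to its gross national product, expressed as a percentage; the figures in the following illustration are hypothetical and drawn from no survey discussed here:

\[
\frac{\mathrm{GERD}}{\mathrm{GNP}} \times 100 = \frac{\$2\ \text{billion}}{\$100\ \text{billion}} \times 100 = 2\ \text{percent}
\]

Read descriptively, such a figure situates a country among its peers; read prescriptively, it becomes a target to be raised toward the ratio of the best performing country.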
91 Pilot teams were teams of economists and technologists charged “to examine the role of scientific research and technological changes in the context of long term economic development and to help less industrialized countries to assimilate the scientific knowledge from more developed countries” (OECD (1963), CSR: Minutes of the 8th Session, SR/M (63) 3, p. 11). See also: OECD (1963), Pilot Teams on the Development of Scientific Resources in Relation to Economic Growth, SR (63) 34; OECD (1965), Science Planning and Policy for Economic Development, DAS/SPR/65.5. 92 OECD (1963), A Progress and Policy Report, SR (63) 33, pp. 4–5. 93 OECD (1962), Economics of Research and Technology, op. cit., p. 10. 94 OECD (1962), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, DAS/PD/62.47. 95 The manual was restricted to the natural and engineering sciences until the third edition in 1976, which included the social and human sciences for the first time. 96 See: Chapters 11 and 12.
In line with the manual’s conventions, the first international R&D survey was conducted in 1963. Sixteen countries participated.97 The results were published in 1967 and 1968,98 and were a catalyst for the continuation of OECD work in measurement. The survey documented, for the third time in twenty years, gaps between America and Europe, this time in R&D: “There is a great difference between the amount of resources devoted to R&D in the United States and in other individual member countries. None of the latter spent more than one-tenth of the United States expenditure on R&D.”99
The survey—along with an important follow-up study on technological gaps between the United States and European countries100—convinced member countries of the usefulness of statistical data in policy issues. In 1967, the Committee for Science Policy (CSP) of the DSA thus recommended that “research and development accounting should be established within OECD on a regular basis.”101 This meant that international R&D surveys would henceforth be conducted periodically, and that the Frascati manual would be revised regularly in light of survey performance.102
Despite the OECD’s success in these early applications of statistics to policy questions, the DSA proposed budget cuts to the Science Resources Unit (SRU) in 1972. Total resources employed in the SRU were about 112 man-months; the 1973 budget proposal would have reduced this to slightly more than 55 man-months.103 The proposed cuts were based on the assumption that SRU efforts were self-promoted for the sole needs of statisticians, with little value to science officials.104 But there was also a completely different argument: there was some reluctance in the United States to see the SRU get too involved in controversial comparative analysis between the United States and Europe105 rather than simply collecting data, especially new data on outputs of S&T instead of only inputs.106 Consequently, an ad hoc review group was established in 1972 following “reservations expressed by some member countries about the suggestions that substantial cuts should be made in the budget for R&D statistics work in 1973 to free resources for new work.”107 British delegate Cyril Silver chaired the group, and was probably the source of the controversial decision, according to people I interviewed. The hypothesis that the United Kingdom was at the center of the
97 Remember that the OECD did not, then or now, itself collect data, but relied on member countries’ surveys for national statistics. 98 OECD (1967), A Study of Resources Devoted to R&D in OECD Member Countries in 1963/64: The Overall Level and Structure of R&D Efforts in OECD Member Countries, Paris; OECD (1968), A Study of Resources Devoted to R&D in OECD member countries in 1963/64: Statistical Tables and Notes, Paris. 99 Ibid., p. 19. 100 OECD (1968), Gaps in Technology: General Report, Paris. 101 OECD (1967), Compte-rendu de la 4e session: 27–28 juin 1967, SP/M (67) 2, p. 4. 102 OECD (1967), Future Work on R&D Statistics, SR (67) 16. 103 OECD (1973), Report of the Ad Hoc Review Group on R&D Statistics, STP (73) 14, p. 12. 104 Personal conversations with P. Hemily (March 10, 2001) and J.-J. Salomon (March 12, 2001). 105 See: Chapter 12. 106 Personal conversation with J.-J. Salomon, September 25, 2000. 107 OECD (1973), Report of the Ad Hoc Review Group on R&D Statistics, p. 7.
proposed cuts is probably true, since Silver wrote, in the introductory remarks to the group’s report:

I started my task as a skeptic and completed it converted—converted that is, to the view that policy makers use and even depend on R&D statistics and particularly on those giving comparisons of national efforts in particular fields. What I beg leave to question now is whether perhaps too much reliance is placed on these all-too-fallible statistics.
(OECD (1973), Report of the Ad Hoc Review Group on R&D Statistics, p. 6)

Before arriving at this conclusion, however, the group studied different options, among them:

The emphasis of the work of the Science Resources Unit has shifted from providing support to the remainder of the Science Affairs Directorate to providing much valued service to member countries. We considered whether within OECD itself it might not in consequence now be more appropriate administratively for the Science Resources Unit to be associated with the general statistical services of the Organization.

The group finally decided that “on balance, the Science Resources Unit was best left administratively within the Science Affairs Directorate,” but that “the liaison between the Science Resources Unit and the Divisions of the Scientific Affairs Directorate be improved by appointing each of the members of the Science Resources Unit as a liaison officer for one or more specialist activities within the Secretariat.”108

The Science and Technology Indicators Unit
With the future of the statistical unit confirmed, OECD work on the measurement of S&T expanded. Following the successful publication of the NSF’s Science Indicators (SI) in 1973,109 a second ad hoc review group, proposed by the NSF’s C. Falk, was created to consider expanding the statistics used for measuring S&T.110 Indeed, in the 1970s, the SRU limited itself to R&D statistics, whereas the gaps study and the NSF’s SI suggested using several indicators, and the CSP had recommended new indicators as early as 1967.111
One major discussion focused on output indicators. Not long after the NSF published the second edition of SI (1975), the US Congress held hearings concerning the document.112 The debates were entirely devoted to outputs.
108 Ibid., p. 11.
109 Discussed at length in Chapter 6.
110 OECD (1978), Report of the Second Ad Hoc Review Group on R&D Statistics, STP (78) 6.
111 OECD (1967), Future Work on R&D Statistics, op. cit., p. 5.
112 USGPO (1976), Measuring and Evaluating the Results of Federally Supported R&D: Science Output Indicators, Hearings Before the Committee of Congress on Science and Technology, Washington.
Congressmen asked the NSF what the links were between inputs and outputs, that is, whether the outputs measured using available indicators were really outcomes of the inputs. Clearly, politicians wanted output indicators, while statisticians were satisfied with input indicators, and with hypotheses and broad correlations between the two. Before the Committee of Congress, Robert Parke, from the Social Science Research Council (SSRC), summarized the issue as follows:

(. . .) we’re asking questions about what happens as a result of all this activity [research] in terms of the output of science. We are going to be asking those questions in the year 2000, not just 1976. They are pervasive public concerns.
(USGPO (1976), Measuring and Evaluating the Results of Federally Supported R&D: Science Output Indicators, Hearings Before the Committee of Congress on Science and Technology, Washington, p. 72)

A similar demand for output indicators existed at the OECD. As early as 1963, measuring outputs was on the agenda.113 Thereafter, the statistical unit, mainly through its director Y. Fabian, tried every year to convince national governments and their statisticians to extend the measurement of S&T to outputs. They experienced some success, but also much frustration. National statisticians always offered arguments about methodological difficulties or resource constraints. But a major factor underlying this hesitation was that outputs challenged the state statisticians’ monopoly: the data proposed were not their own and were not based on surveys proper, but came from other institutions’ administrative or bibliographic databases.114
Nonetheless, while continuing its work on inputs, the OECD gradually entered the field of output measurement in the 1980s:115 it organized workshops and conferences that led to the development of new methodological manuals in the following decade (see Appendices 6 and 7). About the same time, the OECD started publication of a regular series of statistics that owed their existence to a new database on indicators begun in 1981.116 Thereafter, statistics produced by the SRU, renamed the Science and Technology Indicators Unit (STIU) in 1977, had two audiences: the traditional one, the policy-makers, to whom statistics would now be distributed more rapidly,117 and the general public, which could purchase paper or electronic versions of the databases.
113 Y. Fabian, Note on the Measurement of the Output of R&D Activities, DAS/PD/63.48. 114 See: Chapter 7. 115 OECD (1983), State of Work on R&D Output Indicators, STP (83) 12; OECD (1984), Secretariat Work on Output Indicators, STP (84) 8. 116 OECD (1978), General Background Document for the 1978 Meeting of NESTI, DSTI/SPR/78.39; OECD (1981), The Science and Technology Indicators Data Bank, DSTI/SPR/81.38; OECD (1983), The Science and Technology Indicators Data Bank: Progress Report, DSTI/SPR/83.17; OECD (1994), STAN Databases and Associated Analytical Work, DSTI/EAS/STP/NESTI (94) 7. 117 OECD (1976), Methods of Accelerating the Collection and Circulation of R&D Data, DSTI/SPR/76.52.
Faithful to the mission the DSA gave itself at the beginning of the 1960s, the STIU concentrated on specific types of output indicators: patents, technological balance of payments, and international trade in high-tech industries.118 What was peculiar about these indicators was the dimension of S&T they measured: all were concerned with economics. This all happened after the DSA became the Directorate of Science, Technology and Industry (DSTI) in 1975.

The Science, Technology, and Industry Indicators Division
In the 1980s, people began asking the STIU for more policy analyses and interpretations of the data collected. In the 1960s, the statistical unit’s work had been primarily methodological. In the following decade, the STIU began producing analyses of the data it collected, but with few real policy concerns addressed. The analyses centered on the STIU’s own interests: results of the biennial R&D surveys (see Appendix 7). With its new database, the STIU began producing more analyses in the 1980s. It published three reports on S&T indicators with texts and analyses, for example, but national delegates, grouped under NESTI,119 continually stressed the “importance of reports, not only because of the trends they revealed, but because their preparation highlighted problems with the quality and comparability of the data.”120
The situation changed considerably in the 1990s. While the work on indicators had mostly been framed, until then, in terms of inputs and outputs, categorizing indicators by the dimensions of the S&T activities they measured, the statistical unit increasingly contributed to discussion of policy problems with whatever indicators it had or could develop. This was the result of two events.
First, in 1987, as a contribution to the secretary-general’s technology initiative, the DSTI launched the Technology/Economy Project (TEP). The TEP aimed to define an integrated approach to S&T. Ten conferences were held, resulting in Technology in a Changing World (1991) and Technology and the Economy: The Key Relationships (1992). Among these was a conference on indicators that brought users and producers together to discuss needs for new indicators and set priorities.121 The conference also saw the statistical unit review its work for the fourth time in twenty years—the first three were ad hoc review groups like the Silver group.
118 See: Chapter 7.
119 OECD work on S&T statistics was undertaken by the statistical unit together with the DSTI’s Group of National Experts on Science and Technology Indicators (NESTI), created in 1962. NESTI is a subsidiary body of the OECD Committee for Scientific and Technological Policy (CSTP), representing both users and producers of statistics, with half of its principal delegates from ministries of science and technology or associated bodies, and half from central statistical offices or similar producer agencies. It holds annual meetings with each OECD country represented, plus some observers. See: Chapter 2 for more details.
120 OECD (1987), Summary of the Meeting of NESTI, STP (87) 8, p. 5.
121 OECD (1991), Summary Record of the Meeting of Experts on the Consequences of the TEP Indicators Conference, DSTI/STII/IND/STP (91) 2.
The director of the DSTI stated that “TEP could be judged a success if it resulted in new indicators.”122 The new goal of the statistical unit should be to “anticipate policy makers priorities rather than merely responding to them retrospectively.”123 Indeed, an ambitious plan of work covered eight new topics:124
● Technology and economic growth
● Globalization
● Competitiveness and structural adjustment
● Investment, innovation, and diffusion of technology
● Technology and human resources
● Innovation-related networks
● Knowledge base for technological innovation
● Technology and the environment.
Second, and to better align STIU work with policy, the DSTI made certain organizational changes. The statistical unit became a division. A Science, Technology and Industry Indicators Division (STIID) was created in 1986 (renamed the Economic Analysis and Statistics Division, or EAS, in 1993). The statisticians125 finally received formal recognition for their work, but a constrained recognition: industry concerns were now included that would indeed guide their future efforts. NESTI would still counsel the division on S&T indicators, but would now have to collaborate more closely with the Industry Committee and the Group of Experts on ICC (Informatics, Computers, and Communication) statistics.126 The explicit aim of the restructuring was to improve the policy relevance of statistics and the capability of the DSTI to perform deeper quantitative analyses: “New activities should be planned in line with emerging policy needs, e.g.: the outcome of the meeting of the CSTP at Ministerial level.”127 At the same time, it gave more visibility and relevance to the statistical division: “in consequence [so said NESTI], the work of the group was becoming increasingly visible in policy terms.”128 From then on, the EAS division got involved in several economic and policy exercises with the DSTI and other OECD directorates, including: the growth project (new economy), the knowledge-based economy, the information society,
122 OECD (1991), Summary Record of the Meeting of Experts on the Consequences of the TEP Indicators Conference, DSTI/STII/IND/STP (91) 2, p. 2. 123 OECD (1993), Summary of the Meeting of NESTI, STP (93) 2, p. 4. 124 OECD (1990), A Draft Medium Term Plan for the Work of the STIID, DSTI/IP (90) 22; OECD (1990), Demand for New and Improved Indicators: Summary of Suggestions for new Work for the STIID, DSTI/IP (90) 30; OECD (1992), Technology and the Economy: The Key Relationships, Paris, Annex. 125 I call people working with statistics at the OECD “statisticians” although few of them were trained as such. 126 OECD (1986), Proposal for a Combined Statistical Working Party on Scientific, Technological, Industrial and ICC Indicators, STP (86) 10. 127 OECD (1988), Summary of the Meeting of NESTI, STP (88) 2, p. 3. 128 OECD (1995), Summary Record of the NESTI Meeting Held on 24–25 June 1995, DSTI/EAS/STP/NESTI/M (95) 1, p. 4.
intangible investments, globalization, national innovation systems, new technologies (biotechnology, information and communication technologies), and highly qualified manpower (stocks and flows).
The European Commission
One of the most important outputs of the 1990s for the OECD statistical division was unquestionably the Oslo manual on innovation (1992), revised jointly with the European Commission in 1997.129 Innovation surveys were first suggested, at least for the OECD, in 1976,130 but it was the 1990s before they were widely conducted. The manual idea was strongly supported by the Scandinavian countries, which had a “project to organize coordinated surveys of innovation activities in four Nordic countries and to develop a conceptual framework for the development of indicators of innovation.”131
The Oslo manual was an important output of the statistical division for four reasons. First, the manual was one of the first concrete examples of the alignment of the statistical division toward new policy priorities: technology, industry, and innovation. Second, the manual for the first time extended the measurement of S&T beyond R&D. Third, surveys conducted using the manual posed important methodological challenges: numbers from the innovation survey and those from the R&D survey differed with respect to a common variable—expenditures on R&D.132 Last, the manual was one of the first steps toward increased OECD collaboration with other players: the manual’s second edition (1997) was developed in collaboration with the European Union, and the first international surveys were coordinated, and their results widely used, by the European Commission.
As early as the 1960s, Europeans from various quarters were ardent promoters of the thesis of technological gaps between Europe and the United States.133 In fact, at the request of Jean Monnet’s Action Committee for a United States of Europe, a RAND Corporation study was commissioned on research organization and financing in the six EEC countries in the early 1960s. The study concluded that although R&D expenditures in every nation of the European Community were increasing at several times the rate of growth of GNP, “the effort in the Common Market countries is still about half of that of the United States or the USSR” (p. vi).134 These fears, still present today, probably explain the interest of the European Commission in measuring innovation: to track its own technological innovation efforts vis-à-vis those of the United States.
129 OECD/Eurostat, Proposed Guidelines for Collecting and Interpreting Technological Innovation Data (Oslo manual), Paris, 1997. 130 See: Chapter 8. 131 OECD (1989), Summary Record of the Meeting of NESTI, STP (89) 27, p. 2. 132 See: Chapter 8. 133 See: Chapter 12. 134 A. Kramish (1963), R&D in the Common Market vis-à-vis the UK, the US and the USSR, Institut de la Communauté européenne pour les etudes universitaires, RAND, P-2742.
Table 1.2 European Commission series in science and technology statistics
Statistics in brief
Statistics in focus
Key figures
R&D: annual statistics
Statistics on S&T in Europe
European report on S&T Indicators
Innovation scoreboard
Eurobarometer
Innobarometer
It was not until the 1990s, however, that the European Commission systematically got into the field of S&T statistics with several initiatives (Table 1.2). Its Enterprise Directorate contributed to the Oslo manual. The Research Directorate (DG XIII) followed, producing jointly with the OECD the Canberra manual on measuring human resources in S&T.135 The Research Directorate also produced a regional manual on R&D and innovation statistics,136 and started publishing, with Eurostat, a biennial report on S&T, inspired by the NSF’s Science and Engineering Indicators.137 More recently, the Commission developed an Innovation Scoreboard aimed at following the progress made towards achieving the objective of devoting 3 percent of GDP to R&D.138 According to the Commission, the European Union “can now be regarded as one of the world’s leading institutions in the field of statistics on R&D and innovation.”139
Despite these efforts, and despite recent legislation for the collection of S&T statistics at the European Community level,140 the OECD entered the third millennium having established a quasi-monopoly. The European Union would like a share of it, to be sure, and financial pressures at the OECD are certainly helping bring this objective nearer to fruition. The DSTI (and the DSA before it), under budget constraints for 30 years now, urged greater collaboration with non-OECD organizations as early as 1972,141 and looked for new ways of financing their work and
135 OECD (1995), Manual on the Measurement of Human Resources in Science and Technology, Paris.
136 Eurostat (1996), The Regional Dimension of R&D and Innovation Statistics, Brussels.
137 European Union (1994), European Report on S&T Indicators, Brussels.
138 CEC (2000), Innovation in a Knowledge-Driven Economy, COM(2000) 567.
139 CEC (1996), Interim Report According to Article 8 of the Council Decision Establishing a Multi-annual Programme for the Development of Community Statistics on R&D and Innovation, Brussels, COM(96) 42, p. 3.
140 Commission des Communautés Européennes (2001), Proposal for a Decision of the European Parliament and of the Council Concerning the Production and Development of Community Statistics on Science and Technology, COM (2001) 490.
141 OECD (1972), Meeting of the Ad Hoc Review Group on R&D Statistics, DAS/SPR/72.46; OECD (1973), Results of the Meeting of the Ad Hoc Group of Experts on R&D Statistics, DAS/SPR/73.61.
expanding their measurements.142 As a result, a third of the EAS division’s work is now performed using external contributions,143 with the European Union as a major contributor. The fact remains that, in NESTI’s own terms, while Eurostat certainly has achievements to its credit, the OECD has the leading role.144
UNESCO
In the OECD’s view, it also predominates over UNESCO, which the same document qualified as having “some” experience. In fact, UNESCO’s activities in S&T statistics started in the early 1960s. The organization set up a Division of Statistics in 1961, and a section devoted to science statistics in 1965. As with other statistical offices, UNESCO first developed repertories of institutions active in R&D, then collected statistical information wherever it could get it to develop a picture of the world’s efforts in S&T. Its very first R&D questionnaire (1964) was developed at the same time as the OECD’s, and dealt with Latin American countries.
What always characterized UNESCO’s work was its international character. UNESCO aimed to cover every country in the world. In fact, UNESCO can be credited with having conducted the first “worldwide” S&T survey in 1968, covering Eastern and Western Europe.145 Problems quickly appeared, however, when the organization tried to extend the measurement to developing countries: few had the infrastructure to collect data and produce statistics. UNESCO therefore developed several documents aimed at guiding statisticians in collecting data. A manual146—based on the OECD standards—and a guide147 were published in 1968–1969. UNESCO also worked to better measure the specifics of developing countries: in 1978, member countries adopted a recommendation on scientific and technological activities (STA), suggesting broadening statistics on R&D to include scientific and technical education and training (STET), and
142 Today, the OECD prefers to measure new dimensions of science and technology by way of links between existing data rather than by producing new data, partly because of budget constraints— linking existing data is far less expensive than developing totally new indicators. See: OECD (1996), Conference on New S&T Indicators for a Knowledge-Based Economy: Summary Record of the Conference Held on 19–21 June 1996, DSTI/STP/NESTI/GSS/TIP (96) 5; OECD (1996), New Indicators for the Knowledge-Based Economy: Proposals for Future Work, DSTI/STP/NESTI/GSS/TIP (96) 6. 143 OECD (1999), A Strategic Vision for Work on S&T Indicators by NESTI, A. Wyckoff, DSTI/EAS/STP/NESTI (99) 11. 144 OECD (1997), Some Basic Considerations on the Future Co-Operation Between the OECD Secretariat and Eurostat with UNESCO in the Field of Science and Technology Statistics, DSTI/EAS/STP/NESTI (97) 12, p. 2. 145 UNESCO (1969), An Evaluation of the Preliminary Results of a UNESCO Survey on R&D Effort in European member countries in 1967, COM/CONF.22/3; UNESCO (1970), Statistiques sur les activités de R&D, 1967, UNESCO/MINESPOL 5; UNESCO (1972), Recent Statistical Data on European R&D, SC.72/CONF.3/6. 146 C. Freeman (1969), The Measurement of Scientific and Technical Activities, ST/S/15, Paris: UNESCO. 147 UNESCO (1968), Provisional Guide to the Collection of Science Statistics, COM/MD/3, Paris.
science and technology services (STS)—often called related scientific activities (RSA).148 A manual followed, precisely defining these components of STA.149
Measurement of RSA specifically drove important efforts at UNESCO. In fact, RSA were considered at UNESCO to be essential to S&T development. Although not strictly R&D but routine activities, they remained necessary to a nation’s R&D infrastructure. UNESCO conducted studies defining these activities, developed a guide on a specific kind of RSA—Scientific and Technical Information and Documentation (STID)—and conducted a pilot survey. The efforts failed, however, because few countries seemed interested in measuring these activities. They preferred to concentrate on core activities, like R&D, which were also easier to measure.
UNESCO’s intensive activities in science statistics lasted until the early 1980s. Since then, only the survey on R&D has been conducted, and at irregular intervals. Recently, the organization has faced at least two major difficulties. First, member countries have varying levels of competence in statistics, and of socioeconomic development: to produce statistics at the world standard was, for many, a huge effort. For others, it required adapting their own standards, an effort few member countries were prepared to make. The second difficulty concerned financial resources. In 1984, the United States left the organization, and the division’s innovative activities stopped entirely due to financial constraints.
Conclusion
The OECD was responsible for a major achievement in S&T measurement: standardizing, somewhat, heterogeneous national practices. It succeeded in this without any opposition from member countries. This is quite different from the history of other standards and statistics. Dissemination of the French meter outside France, for example, has not been easy, and it is still not universally used.150 Similarly, the standardization of time units for a while saw its English proponents opposed to the French.151
At least three factors contributed to the easy acceptance of the Frascati manual among OECD countries. First, few countries collected data on S&T in the early 1960s. The OECD offered a ready-made model for those who had not developed the necessary instruments. For the few countries that already collected data, mainly Anglo-Saxon countries, the manual reflected their own practices fairly well: it carried views they already shared. Second, the standardization was proposed by an international organization and not by one country, unlike the case of the meter or the time unit. This was perceived as evidence of neutrality, although the United States exercised overwhelming influence. Third, the OECD introduced
148 UNESCO (1978), Recommendation Concerning the International Standardization of Statistics on Science and Technology, Paris. 149 UNESCO (1980), Manual for Statistics on Scientific and Technological Activities, ST-80/WS/38, Paris. 150 D. Guedj (2000), Le mètre du monde, Paris: Seuil. 151 E. Zerubavel (1982), The Standardization of Time: A Socio-Historical Perspective, American Journal of Sociology, 88 (1), pp. 1–23.
the manual with a step-by-step strategy. First step: the manual began as an internal document (1962); it would not be published officially before the third edition (1976). Second step: the manual was tested (1963–1964) in numerous countries. Third step: it was revised in light of experience gained from the surveys. Regular revisions followed, and the manual is now in its sixth edition. The philosophy of the OECD was explicitly stated in 1962, in the following terms:

It would be unrealistic and unwise to expect certain Member governments to adapt completely and immediately their present system of definition and classification of research and development activity to a proposed standard system of the OECD. However, it should perhaps be possible for governments to present the results of their surveys following a proposed OECD system, in addition to following their own national systems. Furthermore, governments could take account of a proposed OECD system when they are considering changes in their own system. Finally, those government who have yet to undertake statistical surveys of R&D activity could take account of, and even adopt, a proposed OECD system.
(OECD (1962), Measurement of Scientific and Technical Activities: The Possibilities for a Standard Practice for Statistical Surveys of R&D Activity, SR (62) 37, p. 2)

Despite member consensus on the Frascati manual, however, there were and still are “conflicts” on certain issues, practical difficulties in implementing the guidelines, and differences vis-à-vis other international bodies, so some recommendations are poorly followed. To take a few examples, basic research is a concept that more and more countries are dissatisfied with, and several have stopped measuring it.152 Certain countries (like Canada) preferred to measure government funding of industrial R&D using data from the funder, rather than from the performer as recommended by the OECD.153 Last, until recently, countries like the United States and Japan never wanted to adapt their statistics to OECD standards—they thought the OECD should adapt to theirs.
In sum, the OECD manuals have never been imperative documents. They suggested conventions, but each country was free to apply them or not. The OECD Secretariat itself, in collaboration with the national experts, had to strive to harmonize or estimate national data. Certainly, the construction of international comparisons generally means, for a specific country, abandoning national specifics. In the case of S&T statistics, however, standardization was facilitated by the fact that national statistics were already “international”: a select group (of Anglo-Saxon countries) had already defined how others should collect their data.
152 See: Chapter 14. 153 See: Chapter 9.
2
Taking demand seriously NESTI and the role of national statisticians
Before going into a detailed analysis of statistics developed from 1920 to 2000, it is necessary to understand the mechanisms that helped OECD member countries develop a consensus on the statistics and indicators. Chapter 1 proposed three factors to explain why OECD member countries had little difficulty in accepting the Frascati manual. First, since few countries had begun to collect data on S&T in the early 1960s, the OECD offered a ready-made model. Second, standardization was perceived to be relatively neutral since it was proposed by an international organization, not a single country. Third, the OECD introduced the manual using a step-by-step strategy.
Here, I offer an additional factor explaining the relative consensus of member countries toward standardization of S&T statistics: national statisticians’ involvement in the construction of OECD statistics and methodological manuals. This took three forms. First, a group of National Experts on Science and Technology Indicators (NESTI) was created to guide OECD activities. Second, ad hoc review groups were set up to align OECD statistical work with national statisticians’ and users’ needs. Third, member countries actively collaborated in developing specific indicators. I first discuss each mechanism, then present the ways in which the OECD responded to challenges identified by the ad hoc review groups. Although the OECD Secretariat had already started work toward improving the situation before the ad hoc review groups arrived at their conclusions, the latter often served as a catalyst. Finally, I conclude with brief reflections on the nature and role of national statistics producers in OECD activities.
The NESTI group
OECD activities are organized around a threefold work structure. The OECD Secretariat is responsible for day-to-day work. It is divided into directorates that are in turn divided into divisions. The Secretariat’s work is supported by committees of national delegates from member countries. Each directorate has its own committee(s), which advise the Secretariat on the program of work and report to the OECD Council of Ministers. Committees are in turn advised by two kinds of groups, again composed of national delegates. The first consists of working groups usually
[Figure 2.1 Evolution of OECD structures in science and technology. The figure traces the succession of Secretariat units (Directorate of Scientific Affairs, 1961–1976; Directorate for Science, Technology and Industry, 1975; Science Resources Unit, 1965–1977; Science and Technology Indicators Unit, 1977–1986; Science, Technology and Industry Indicators Division, 1986–1993; Economic Analysis and Statistics Division, 1993), committees (Committee on Scientific Research, 1961–1966; Committee on Science Policy, 1966–1970; Committee for Scientific and Technological Policy, 1970), and expert groups (Group of National Experts on R&D Statistics, 1962–1983; NESTI, 1983).]
set up on a temporary basis to deal with a specific issue. The second consists of advisory groups that work with the Secretariat and report to the committees. Both groups sometimes develop combined work programs.
In the case of S&T, the directorate of the OECD Secretariat responsible for statistics is the Directorate for Science, Technology and Industry (DSTI). It includes a division specifically dedicated to statistical work—the Economic Analysis and Statistics Division (EAS). The Committee for Scientific and Technological Policy (CSTP) deals with S&T policy, as well as statistics and indicators. NESTI is a subsidiary body of that committee (Figure 2.1).1
NESTI2 was established in 1962, essentially to finalize the Frascati manual and organize the first international R&D survey. Until 1988, its mandate “was merely a compilation of extracts from past decisions of the CSTP”3 concerning organization of the first Frascati meeting and the surveys based thereon,4 and included extension of membership and competence to cover output as well as input indicators. The mandate was first explicitly defined in 1988 (and slightly updated in 1993) as follows:5
● To ensure continued improvement of collection methodology for internationally comparable R&D data as specified in the Frascati manual, encourage its use in member countries and prepare similar methodologies for measuring the output of S&T.
● To ensure the continued timely availability of internationally comparable R&D data, notably via biennial OECD surveys, and promote the development of data collection and dissemination systems for S&T output indicators.
● To assist in interpreting S&T indicators in light of policy changes or other special member-country characteristics and advise the committee on the technical validity of reports based on such indicators.
● To pursue any other work needed to provide the Committee for Scientific and Technological Policy or its subsidiary bodies with requested S&T indicators.
1 For committees involved directly or indirectly in science and technology over the OEEC/OECD period, see: Appendix 5.
2 It got its name in 1983.
3 OECD (1988), Summary of the Meeting of the Group of NESTI, STP (88) 2, p. 4.
4 OECD (1963), Committee for Scientific Research: Minutes of the 6th Session, SR/M (63) 1.
5 OECD (1988), Revised Mandate for the Group of NESTI, STP (88) 5, p. 5; OECD (1993), The Revised Mandate of the Group of NESTI, DSTI/EAS/STP/NESTI (93) 9.
To fulfill this mandate, the group met annually for two or three days to discuss work in progress and plan activities. It also met at irregular intervals to conduct revisions to the Frascati manual. Five revisions have been conducted so far: 1970, 1976, 1981, 1993, and 2002. Full members of NESTI include delegates from all OECD countries, the European Commission and Korea, plus observers from Israel, South Africa, Eastern European countries, and UNESCO. For some time, most countries have sent two principal delegates to NESTI, one from a science and/or technology agency (representing data users) and the other from a survey agency, usually the central statistics office (representing data producers). Today, about half its principal delegates come from ministries of science and technology or associated bodies, and the other half from central statistics offices or similar statistics-producing agencies.
Over the last forty years, NESTI has overseen the conduct of regular international R&D surveys and regular methodological improvements for collecting internationally comparable S&T data. It assisted in developing and interpreting indicators in light of policy changes, and advised the CSTP on the technical validity of reports on such indicators. Finally, it acted as a clearinghouse through which member countries could exchange information and experience on methods of collecting, compiling, analyzing, and interpreting data. Over the period 1961–2000, NESTI and the OECD Secretariat produced impressive work: seven regularly updated methodological manuals; more than twenty workshops and conferences; biannual and biennial statistical series; and several documents and policy studies (see Appendices 6 and 7). But above all, NESTI was a forum where national experts exchanged ideas, took decisions, and reached consensus.
Ad hoc review groups
NESTI was only one of the mechanisms through which national statistics producers were involved in OECD work. A second was ad hoc review groups. Over the period 1970–1990, the CSTP created three such groups to orient activities of the OECD statistical unit. Each group based its recommendations on responses
50 NESTI and the role of national statisticians of national producers and users to a questionnaire, and responses of the Secretariat to questions regarding needs, priorities, and future work.6 The first group’s mandate was, among other things, to “make a realistic assessment of the needs of the main users of R&D statistics in member countries and in OECD itself, [and] to consider the extent to which the fulfillment of these needs would be prejudiced by the proposed cuts (. . .).”7 The Directorate of Scientific Affairs (DSA) had in fact proposed cuts to the Science Resources Unit (SRU) in 1972. As discussed in Chapter 1, the review group helped stabilize the statistical unit’s existence. Three years later, a second group was established, chaired by Canadian J. Mullin. The financial context had not really changed: “the group should assume that there will be no net increase in the resources available for the compilation of R&D statistics within the Secretariat or within member countries,” stated the CSTP.8 But the real issue was new indicators: “the statistical information provided by OECD was considered to be necessary background information for those making decisions; in no case [however] was it considered to be a sufficient basis for such decisions,” reported the group.9 And it continued: “It is obvious to the group that one cannot forever expect to continue consideration of policy measures whose output are unmeasured.”10 By that time, the Secretariat had already chosen the NSF experience as the model: Science indicators are a relatively new concept following in the wake of longestablished economic indicators and more recent social indicators. So far, the main work on this topic has been done in the United States, where the National Science Board has published two reports: Science Indicators 1972 (issued 1973) and Science Indicators 1974 (issued 1975). (OECD (1976), Science and Technology Indicators, DSTI/SPR/76.43, p. 3) The Secretariat analyzed indicators contained in Science Indicators in depth, comparing them to available statistics, to what could be collected, and at what cost.11 The ad hoc review group was asked “to draw some lessons for future work in member countries and possibly at OECD.” In line with the Secretariat’s views,12 the group’s final report suggested a threestage program for new indicator development.13 The recommendations, as well as those of the following group, led to the launching of a whole program on new indicators. This work would again be expanded following two further review 6 7 8 9 10 11 12 13
6 For the mandates of each group, see Appendix 8.
7 OECD (1973), Report of the Ad Hoc Review Group on R&D Statistics, STP (73) 14, p. 4.
8 OECD (1976), Summary Record of the 13th Session of the CSTP, STP/M (76) 3.
9 OECD (1978), Report of the Second Ad Hoc Review Group on R&D Statistics, STP (78) 6, p. 11.
10 Ibid., p. 12.
11 See particularly the annex of OECD (1976), Science and Technology Indicators, op. cit.
12 OECD (1977), Response by the Secretariat to the Questions of the Ad Hoc Group, DSTI/SPR/77.52.
13 See: Chapter 7.
exercises in the 1990s: the Technology/Economy Program (TEP)14 and the Blue Sky project on indicators for the knowledge-based economy.15

A third ad hoc review group (chairman: N. Hurst, Australia) was established in 1984. It dealt with the same issues as the previous two. First, it preferred not “to see the STIU [Science and Technology Indicators Unit] pushed into new areas of responsibility without the guarantee of necessary resources.”16 Second, it recommended establishing a regular schedule for producing output indicators and publishing manuals based thereon. Priority should be given to “output indicators, especially those with an economic context and notably measures of different aspects of the innovation process.”17
Just-in-time numbers

The timeliness of the statistical unit’s information was a recurring concern in the three ad hoc reviews. The first group argued that “comparative R&D statistics were indeed a much valued and widely used tool directly used by policymakers themselves in many countries,” but “there was criticism that data are frequently and unnecessarily out-of-date.”18 The second group also discussed the problem of timeliness, concluding that: “tradeoffs have to be made between timeliness and accuracy of data. An acceptable balance has to be struck,”19 while the third group felt that “STIU output was not reaching the widest range of potential users.”20

Why the delays? According to the Secretariat, there were two reasons.21 First, delays in member countries’ responses: “it is rare that more than four countries respond in time to the International Statistical Year (ISY) surveys.” Second, delays at the Secretariat itself in data processing, documenting work done and service activities (like typing). The OECD concluded, “(. . .) improvements in the rapidity with which all the ISY results are issued cannot be hoped for if the present format of five volumes of data, each containing footnoted figures for the majority of OECD countries and accompanied by country notes, etc. is retained.”22

Over time, the Secretariat came up with three solutions to the problem. First, it would rearrange publication of R&D data from the ISY survey. Data would be “arranged country by country with only the main indicators in an international
14 OECD (1991), Summary Record of the Meeting of Experts on the Consequences of the TEP Indicators Conference, DSTI/STII/IND/STP (91) 2. 15 OECD (1995), The Implications of the Knowledge-Based Economy for Future Science and Technology Policies, OECD/GD (95) 136. 16 OECD (1985), Report of the Third Ad Hoc Review on Science and Technology Indicators, STP (85) 3, p. 14. 17 Ibid., p. 12. 18 OECD (1973), Report of the Ad Hoc Review Group on R&D Statistics, op. cit., p. 9. 19 OECD (1978), Report of the Second Ad Hoc Review Group on R&D Statistics, op. cit., p. 16. 20 OECD (1985), Report of the Third Ad Hoc Review Group on Science and Technology Indicators, op. cit., p. 9. 21 OECD (1976), Methods of Accelerating the Collection and Circulation of R&D Data, DSTI/SPR/76.52; See also: OECD (1977), Response by the Secretariat, op. cit. 22 OECD (1976), Methods of Accelerating, op. cit., p. 4.
format,”23 as was the case elsewhere in the OECD (notably in national accounts and labour force data). This solution sped up publication, since international tables could now be produced without waiting for countries to provide complete data. Second, it would publish a newsletter containing the most recent data,24 with “Rapid Results” made available as soon as national data arrived. Third, it would gradually create databases from which to issue its basic international statistical series. These three decisions led to the publication of several official series in the following decade (see Appendix 7).

Besides early publication of results, another OECD task regarding timeliness was forecasting R&D expenditures. From the beginning, the OECD, in collaboration with national authorities, estimated missing data from national statistics. The most notable corrections concerned business R&D.25 But there was also the substantial time lag between production of data and publication of results in the OECD series. The average lag was two to three years, and with several variables combined in the early 1990s to create new databases, the problem of timeliness was compounded—sometimes data were up to six years late.26 The objective, then, was to reduce the lag to one year from the current time period. This would simultaneously, according to some, protect users from themselves: “users are often not so very particular about the quality of the data, they are prepared to use any information which is available.”27

Most member countries had forecasting procedures. Some countries based their projections on the past, using straight-line extrapolations (based on regression or exponential models). Several problems plagued these techniques, however: R&D time series were relatively short, and in several cases there were breaks in the series.28 Other countries, like Canada, based estimates on respondents’ spending intentions.29 But the opinions of respondents (R&D managers) varied considerably with the economic climate: estimates were less accurate, for example, during economic recessions. Finally, there was “nowcasting”: extending time series on the basis of other relevant statistical data collected elsewhere for the period.30 Mostly, however, the methods countries used were a mystery: only a third published their methods, restricting any evaluation of data quality.31
23 Ibid., pp. 5–6.
24 The newsletter was issued biannually between 1976 and 1988.
25 See: Chapter 9.
26 OECD (1994), Updating the STAN Industrial Database Using Short Term Indicators, DSTI/EAS/IND/WP9 (94) 13, p. 2.
27 Eurostat (1995), Nowcasting R&D Series: Basic Methodological Considerations: Part A, DSTI/EAS/STP/NESTI (95) 8, p. 2.
28 Ibid., p. 3.
29 Statistics Canada (1995), Nowcasting: Comments From Statistics Canada, DSTI/EAS/STP/NESTI (95) 4.
30 OECD (1995), Nowcasting R&D Series: Basic Methodological Considerations: Part B, DSTI/EAS/STP/NESTI (95) 28; OECD (1995), Nowcasting R&D Expenditures and Personnel for MSTI, DSTI/EAS/STP/NESTI (95) 20; Eurostat (1995), Eurostat’s Experience with Nowcasting in the Field of R&D Statistics, DSTI/EAS/STP/NESTI (95) 19.
31 OECD (1995), Nowcasting R&D Series, op. cit.
To improve the situation, the OECD Secretariat began thinking about forecasting techniques in the early 1980s.32 But it was 1993 before a general framework was introduced as an appendix to the Frascati manual. The appendix aimed to sensitize countries to forecasting techniques, suggesting broad data estimation principles for recent and current years. Meanwhile, the OECD increased its efforts to estimate missing national data, even creating entire databases based on estimates, such as STAN (Structural Analysis) and ANBERD (Analytical Business Enterprise R&D).33
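To give a concrete flavour of the estimation techniques mentioned above, the following is a minimal sketch, not drawn from any OECD document, of the two simplest approaches: a straight-line (regression) extrapolation and an exponential-trend extrapolation of a short annual R&D expenditure series. The series and the function names are invented for illustration.

```python
import numpy as np

# Hypothetical annual GERD series (millions, current prices); the two most
# recent survey years are missing and must be estimated ("nowcast").
years = np.array([1988, 1989, 1990, 1991, 1992, 1993])
gerd = np.array([10.2, 10.9, 11.5, 12.4, 13.1, 13.9])

def linear_forecast(x, y, target_year):
    """Straight-line extrapolation: least-squares fit of y = a*x + b."""
    a, b = np.polyfit(x, y, 1)
    return float(a * target_year + b)

def exponential_forecast(x, y, target_year):
    """Exponential trend: least-squares fit of log(y) = a*x + b."""
    a, b = np.polyfit(x, np.log(y), 1)
    return float(np.exp(a * target_year + b))

for target in (1994, 1995):
    print(target,
          round(linear_forecast(years, gerd, target), 1),
          round(exponential_forecast(years, gerd, target), 1))
```

Both methods share the weaknesses noted in the text: they assume that the short historical series contains no breaks and that past growth carries over into the estimated years.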
Sharing work between leading countries

Besides NESTI and ad hoc review groups, there was another means by which national producers got involved in OECD work: new indicator development. The model used came partly from the Frascati manual revisions. The interesting thing about these revisions was the division of work: a national expert took the lead for a specific topic, produced a discussion document, and suggested corresponding modifications to the manual. This approach was recently extended to new indicator development. OECD budget constraints were partly responsible for this choice:34 leading countries would agree to start a new series or conduct pilot surveys, to build initial momentum for new topics that might be embraced by other countries. According to NESTI, “these ad hoc arrangements are likely to become the norm for new work.”35

This approach was applied to the Blue Sky project’s knowledge-based economy indicators in the late 1990s. The Secretariat believed the willingness of member countries to assume leadership was essential to the project’s success: “the success of this [ project] is largely dependent on a strong involvement of countries (. . .). It would be highly desirable that leading countries be ready to commit resources to the projects in which they would be more particularly involved.”36 Thus six countries and organizations took the lead in specific projects: Italy and Eurostat on innovative capacity of firms, Sweden on mobility of human resources, Germany and France on internationalization of industrial R&D, and Australia and Canada on government support for industrial R&D and innovation.37 Two projects were canceled for lack of a lead country or a shortage of Secretariat resources.
32 OECD (1981), Problems of Forecasting R&D Expenditure in Selected Member Countries, DSTI/SPR/81.50. 33 OECD (1994), STAN Databases and Associated Analytical Work, DSTI/EAS/STP/NESTI (94) 7. 34 A. Wycoff (1999), A Strategic Vision for Work on S&T Indicators by NESTI, DSTI/EAS/STP/NESTI (99) 11, p. 8. 35 OECD (2001), Report on the Activities of the Working Party of NESTI, DSTI/STP (2001) 37, p. 3. 36 OECD (1996), New Indicators for the Knowledge-Based Economy: Proposals for Future Work, OECD/STP/NESTI/GSS/TIP (96) 6, p. 8. 37 OECD (1997), Progress Report on the New S&T Indicators for the Knowledge-Based Economy Activity, DSTI/EAS/STP/NESTI (97) 6.
Conclusion

Unlike several episodes in the history of official statistics, like the census,38 S&T measurement was not really an arena of conflict at the international level. From the beginning, the OECD, via NESTI and committees, included both national statisticians and policy-makers in planning its activities. Over the years, these players developed a relative consensus that averted major controversies. In the period 1961–2000, only three debates occurred pitting some countries against others or against the Secretariat. We already discussed the proposed cuts to the statistical unit in 1972, and we will deal later with the Gaps study on technological disparities in the late 1960s, and the measurement of strategic or oriented research in the early 1990s. Each was resolved fairly rapidly:

● The first ad hoc review group confirmed the value and importance of the statistical unit’s work;
● The Gaps study has been the only one of its kind;
● Specifications on basic research (concerning a distinction between pure and oriented research) were added in the 1993 edition of the Frascati manual, requested by Australia and Great Britain.
Helping achieve this relative consensus was the fact that delegates constituted a specific demographic: most were officials from statistical agencies or government departments. At times, member countries invited academics or consultants to NESTI meetings, but always under the “superintendence” of the national delegate. To the OECD, experts meant “official experts.” Neither the institutions surveyed (or their representatives) nor the academics working on the statistics were consulted during ad hoc reviews—although academics drafted several methodological manuals, and were invited to present papers during workshops and conferences. One reason for this practice was probably that users’ needs were known to the delegates, who dealt with them regularly at the national level. More probably, it was because official statistics had always been considered the preserve of government and its agencies. Official statistics were originally developed specifically for government uses: sufficient incentive for taking official producers and users seriously.
38 M. J. Anderson and S. E. Fienberg (1999), Who Counts? The Politics of Census-Taking in Contemporary America, New York: Russell Sage; B. Curtis (2001), The Politics of Population: State Formation, Statistics and the Census of Canada 1840–1875, Toronto: University of Toronto Press.
Section II Defining science and technology
3
Is research always systematic?
S&T measurement is based upon a model, often implicit, of inputs and outputs (Figure 3.1).1 Investments (inputs) are directed at research activities which produce results (outputs) and, ultimately, impacts (or outcomes). It is an idealized model: it identifies principal dimensions of S&T, but the statistics do not measure all of these in the same way. Indeed, until the early 1980s, official S&T statistics rarely measured outputs (the “goods” produced) and impacts. Measurements were chiefly of inputs, of investments in S&T. This was the dimension of S&T for which governments had produced the earliest and longest time series of data, and the only dimension for which there was an international methodological manual before the 1990s.

Official statistics made two measurements of S&T investments: financial resources invested in research, which enable calculation of the Gross Domestic Expenditure on R&D (GERD)—the sum of R&D expenditures in the business, university, government, and non-profit sectors2—and human resources devoted to these activities. Each of these measures was, as per the Frascati manual, analyzed in terms of the following dimensions. First, the nature of the research: basic, applied, or concerned with the development of products and processes. Second, monetary and human resources were classified by discipline for universities, by industrial sector or product for firms, and by function or socioeconomic objective for government departments. More recently, official statistics have extended to output indicators, but most of the effort over the period covered in this book was directed at measurement of inputs.
Figure 3.1 The input/output model (input → activities → output).
1 OECD (1993), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, op. cit., p. 18. 2 See: Chapter 11.
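As a purely illustrative aside, here is a minimal sketch of how the GERD aggregate described above is built up from the four performing sectors; the figures are invented and do not refer to any particular country or year.

```python
# Hypothetical intramural R&D expenditures by performing sector
# (millions of national currency) for a single reference year.
rd_expenditure = {
    "business": 5_400,
    "higher_education": 2_100,
    "government": 1_300,
    "private_non_profit": 200,
}

# GERD is the sum of R&D performed in the four sectors.
gerd = sum(rd_expenditure.values())
print(f"GERD = {gerd} million")  # GERD = 9000 million

# A commonly derived indicator: GERD as a percentage of GDP (GDP also invented).
gdp = 450_000
print(f"GERD/GDP = {100 * gerd / gdp:.2f}%")  # GERD/GDP = 2.00%
```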
A glance at the set of indicators used reveals that the farther we move from inputs toward outputs and impacts (or outcomes), the fewer indicators are available.

In this chapter, we consider the most central concept of S&T statistics, that of research, or R&D. The Oxford Dictionary says the term “research” has French origins dating to the sixteenth century.3 It is rooted in the term “search,” which appeared in the fourteenth century and was defined as “examine thoroughly.” Research meant an “act of searching closely and carefully,” or “intensive searching.” The term was first applied to science in 1639, defined as “scientific inquiry,” but it was rarely used in that context before the end of the nineteenth century. These definitions all included the essential idea of systematicness. Current definitions also focus on this. Whether in twentieth-century dictionaries or international conventions on R&D (OECD, UNESCO), definitions of research always contain this idea of systematicness. The 1939 edition of the Webster dictionary, for example, defined research as “diligent inquiry or examination in seeking facts or principles,”4 while more recent definitions often specify “diligent and systematic.” Similarly, the OECD’s definition of R&D uses the word “systematic” explicitly: R&D is “creative work undertaken on a systematic basis to increase the stock of scientific and technical knowledge, including knowledge of man, culture and society and the use of this stock of knowledge to devise new applications.”5

Despite these parallels, the current conception of “systematic” research has changed radically. This chapter discusses a change in the notion of research in the twentieth century resulting from the use of a specific sense of the term “systematic.” There are three parts to the thesis. First, the meaning of systematic in definitions of research—and in the statistics based thereon—has drifted from an emphasis on the scientific method to an emphasis on institutionalized research. Second, this drift was closely related to the (modern) research measurement instrument, the survey, and its limitations. Third, the definition had important consequences for the numbers generated, the most important being the undercounting of R&D.
Institutionalized research

The OECD international R&D standards suggest that governments survey institutional research: institutions are counted, not individual researchers.6 Measurement
3 C. T. Onions (ed.) (1966), Oxford Dictionary of English Etymology, Oxford: Clarendon Press; W. Little, H. M. Fowler, and J. Coulson (1959), The Shorter Oxford English Dictionary, Oxford: Clarendon Press. 4 Webster’s 20th Century Dictionary of English Language (1939), New York: Guild Inc. 5 OECD (1993), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, op. cit., p. 29. 6 During discussions in 2000 about revision of the Frascati Manual, the OECD began, however, to examine how to measure R&D performed by consultants, which amounts to 13 percent of all current costs for R&D in the business sector in Sweden, for example. See: A. Sundstrom (2000), Ad Hoc Meeting of the Revision on the Frascati Manual: How to Report R&D Performed by Consultants?, DAS/EAS/STP/NESTI (2000) 17; A. Sundstrom (2001), Improve the Quality of R&D Personnel Data, Especially in Respect to Consultants, DSTI/EAS/STP/NESTI (2001) 14/PART 15.
of research is thus based on classification of institutions by economic sector as per the System of National Accounts (SNA). The original sectors included government, industry, and households (individuals) but, for R&D statistics, individuals are eliminated and the university sector added. The OECD Frascati manual suggests two approaches for surveying R&D: one only surveys a few known R&D performers (or institutions), as most countries do, while the other surveys a random sample of all R&D performers:

There are at least two feasible approaches for establishing the survey population of the business enterprise sector. One is to survey a sample drawn from the entire sector, choosing the sample on the basis of the company data available to the methodologists, such as employees and sales, by industry and region. The other is to try and survey only firms supporting R&D.
(OECD (1993), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, op. cit., p. 107)

Until recently, there was no standard on which firms to include in a survey. Therefore, member countries interpreted the text differently, making international comparisons difficult. This also meant small and medium-sized enterprises (SMEs) were usually poorly surveyed, R&D being thought of as “a statistically rare event in smaller units,” that is, not systematic.7 In fact, the OECD distinguishes R&D as continuous or ad hoc:8

R&D by business enterprises may be organized in a number of ways. Core R&D may be carried out in units attached to establishments or in central units serving several establishments of an enterprise. In some cases, separate legal entities may be established to provide R&D services for one or more related legal entities. Ad hoc R&D, on the other hand, is usually carried out in an operational department of a business such as the design, quality or production department.
(OECD (1993), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, op. cit., p. 51)

The manual recommends concentrating on continuous R&D only:9

R&D has two elements. R&D carried out in formal R&D departments and R&D of an informal nature carried out in units for which it is not the main
7 OECD (1981), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, Paris, p. 72. 8 OECD (1993), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, op. cit., p. 51. 9 Ibid., pp. 105–106.
activity. In theory, surveys should identify and measure all financial and personnel resources devoted to all R&D activities. It is recognized that in practice it may not be possible to survey all R&D activities and that it may be necessary to make a distinction between “significant” R&D activities which are surveyed regularly and “marginal” ones which are too small and/or dispersed to be included in R&D surveys. (. . .) This is mainly a problem in the business enterprise sector where it may be difficult or costly to break out all the ad hoc R&D of small companies.
(OECD (1993), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, op. cit., pp. 105–106)
From these citations, one may conclude that systematic R&D is understood as research performed on an institutionalized and, above all, continuous basis. The main requirements to that end appeared in the 1993 edition of the Frascati manual, but they clearly reflected statisticians’ past and current practices.
The semantics of “systematic”

The definition of research as an organized, formal, and continuous activity was an important drift in the standard conception. One historical use of “systematic” research was associated with positivism, which defined science as a search for comprehensive regularity and general law.10 Inductivism was closely associated with this definition. For example, here is the understanding of the Canadian Department of Reconstruction and Supply in its 1947 survey of government R&D: “(. . .) with the growth of modern scientific methods (. . .) which proceed by observation and experiment, and by the systematizing of the resulting facts and relationships into truth or laws, the search for new knowledge, especially in the scientific and technical fields has become more and more institutionalized and professionalized.”11 This meaning gave rise to, and is incorporated into, the institutional definition of pure research as the search for general knowledge of nature and its laws: science is an activity beginning with observations and ending in truth and general laws.12

This meaning of systematic is closely related to a second, that of scientific method, stated explicitly in UNESCO documents, for example. The first edition of Guide to the Collection of Statistics on Science and Technology defined scientific research with four elements, among them “the use of scientific methods, or work in
10 C. Hempel and P. Oppenheim (1948), Studies in the Logic of Explanation, Philosophy of Science, 15, pp. 135–175. 11 Department of Reconstruction and Supply (1947), Research and Scientific Activity: Canadian Federal Expenditures 1938–1946, Government of Canada: Ottawa, p. 5. 12 See, for example: V. Bush (1945), Science: the Endless Frontier, op. cit., p. 81.
a systematic way.”13 Or: “An activity can be said to be scientific, in fact, when it is based on a network of logical relationships which make it possible to obtain reproducible and measurable results. The methods used to obtain these results may be considered as techniques when the skills they employ are also systematic, when these skills are based on numerical measurements, and when the results which these measurements give are reliable.”14 The model behind this understanding of research is, of course, the natural sciences, which proceed via (laboratory) experimentation.15 The model was so pervasive that “E” (experimentation) sometimes replaced “D” in R&D.16 The model also suggested, for some time, excluding the social sciences and humanities from the definition of research on the grounds that they were not “organized” but individual research.

Despite these meanings, UNESCO documents also contained the third and newest meaning of systematic found in OECD documents, but stated it in more explicit terms:

An activity to be considered at the international level of science statistics must be properly structured, i.e.: it must meet the minimum requirements of a systematic activity such as: the person(s) exercising this activity must work during a significant number of hours per year; there must exist a program of work; a certain amount of financial resources must be specifically allocated to the work. This means that diffused, discontinued or scattered S&T activities, i.e.: activities carried out sporadically, or from time to time, within the various services of an institution, thus not meeting the above-mentioned minimum requirements of a systematic activity, should not be taken into account. There follows, therefore, that non-institutionalized, individual and/or discontinued, diffused or scattered activities are to be excluded for the presentation of international statistics.
(K. Messman (1977), A Study of Key Concepts and Norms for the International Collection and Presentation of Science Statistics, op. cit., p. 10)

Where did this third meaning of systematic research come from? To my knowledge, it came from industrialists, assisted by the US National Research Council.
13 UNESCO (1977), Guide to the Collection of Statistics in Science and Technology, Paris, p. 18; see also: K. Messman (1977), A Study of Key Concepts and Norms for the International Collection and Presentation of Science Statistics, Paris: UNESCO, p. 20. 14 J.-C. Bochet (1974), The Quantitative Measurement of Scientific and Technological Activities Related to R&D Development, CSR-S-2, Paris: UNESCO, p. 1. 15 Since the 1970 edition of the Frascati Manual, the adjective “experimental” is added to “development” to avoid confusion between development, a phase of R&D, and the same term in economics, and to be consistent with Eastern Europe and UNESCO. 16 Such as in Canadian and United States tax legislation. For the latter, see: H. R. Hertzfeld (1988), Definitions of R&D for Tax Purposes, in O. D. Hensley (ed.), The Classification of Research, Lubbock (Texas): Texas Tech University Press, pp. 136–137.
The more developed argumentation, however, came from the US Work Projects Administration (WPA), created in 1935 with a mandate of economic recovery, reemployment, and national planning.

Industrial research expanded after World War I. Most big firms were persuaded to invest in research, and began building laboratories for conducting R&D:17 R&D had to be “organized and systematized.” Organizing industrial research “systematically” was on every manager’s lips: The Organization of Industrial Scientific Research (C. E. K. Mees, Kodak), The Organization of Scientific Research in Industry (F. B. Jewett, ATT), Organized Industrial Research (C. D. Coolidge, General Electric), and Organized Knowledge and National Welfare (P. G. Nutting, Westinghouse) are only some of the titles published by industrialists between 1915 and 1935. The US NRC was part of this “movement.”18 Numerous similar discourses were published in the NRC Reprint and Circular Series in the 1920s–30s. In 1932, for example, the NRC organized a conference in which industrialists, among them W. R. Whitney from General Electric, talked of science as systematized knowledge and research as systematized search,19 urging that “America must be foremost in systematic, organized research, or we shall be outdistanced by other countries.”20 The following year, M. Holland, from NRC’s Division of Engineering and Industrial Research, analyzing the latest biennial NRC industrial research laboratory survey, concluded: “scientific research has made of invention a systematic, highly efficient process.”21 The NRC was here echoing the new interest of industrialists in organizing research within their firms, and took on the task of promoting these ideas.
17 On the emergence of industrial research, see: M. A. Dennis (1987), Accounting for Research: New Histories of Corporate Laboratories and the Social History of American Science, Social Studies of Science, 17, pp. 479–518; J. K. Smith (1990), The Scientific Tradition in American Industrial Research, Technology and Culture, 31, pp. 121–131; D. F. Noble (1977), America by Design: Science, Technology and the Rise of Corporate Capitalism, op. cit. On statistics on industrial R&D for the beginning of the 20th century, see: D. E. H. Edgerton (1987), Science and Technology in British Business History, Business History, 29, pp. 84–103; D. E. H. Edgerton and S. M. Horrocks (1994), British Industrial R&D Before 1945, Economic History Review, 47, pp. 213–238; D. C. Mowery (1983), Industrial Research and Firm Size, Survival, and Growth in American Manufacturing, 1921–1946: An Assessment, Journal of Economic History, 43, pp. 953–980. 18 For the movement or “propaganda” campaign in Great Britain, especially the support of industrial research associations by the DSIR, see: Committee on Industry and Trade (1927), Factors in Industrial and Commercial Efficiency, Part I, Chapter 4, London: Majesty’s Stationery Office; D. E. H. Edgerton and S. M. Horrocks (1994), British Industrial R&D Before 1945, op. cit., pp. 215–216. 19 W. R. Whitney and L. A. Hawkins (1932), Research in Pure Science, in M. Ross, M. Holland and W. Spraragen, Profitable Practice in Industrial Research: Tested Principles of Research Laboratory Organization, Administration, and Operation, New York: Harper and Brothers Publishers, p. 245. Whitney and Hawkins seem to oscillate between two meanings of systematic. One is the meaning of generic facts and principles (p. 245) discovered by experiments (p. 249); the other, that of a system, mainly the European system of free men devoting their entire time to research with the assistance of students (pp. 247–248). 20 Ibid., p. 253. 21 M. Holland and W. Spraragen (1933), Research in Hard Times, Division of Engineering and Industrial Research, National Research Council, Washington, p. 13.
The close links between the NRC and industry go back to the preparedness effort and the creation of the council (1916). Industrialists were called on in World War I research efforts coordinated by the NRC. After the war, the NRC, “impressed by the great importance of promoting the application of science to industry (. . .), took up the question of the organization of industrial research, (. . .) and inaugurated an Industrial Research Section to consider the best methods of achieving such organization (. . .).”22 “In the 1920s, the division had been a hotbed of activity, preaching to corporations the benefits of funding their own research. The campaign contributed to a fivefold increase from 1920 to 1931 in the number of US industrial labs.”23 The Division conducted special studies on industrial research, arranged executive visits to industrial research laboratories, organized industrial research conferences, helped establish the Industrial Research Institute—which still exists today24—and compiled a biennial directory of laboratories from 1920 until the mid-1950s.25

We also owe to the NRC one of the first historical analyses of industrial research in the United States. In its voluminous study on industrial research, published in 1941 by the National Resources Planning Board (NRPB), the NRC (and MIT historian H. R. Bartlett) narrated the development of industrial research as follows:

until the twentieth century, industrial research remained largely a matter of the unorganized effort of individuals. Early in the 1900s, a few companies organized separate research departments and began a systematic search not only for the solution of immediate problems of development and production, but also for new knowledge that would point the way to the future.26
(H. R. Bartlett (1941), The Development of Industrial Research in the United States, in NRPB (1941), Research: A National Resource II: Industrial Research, op. cit., p. 19)

However, it was the WPA that developed the most fully elaborated argument, defining research as systematic in the third sense. In 1935, the WPA started a project on Reemployment Opportunities and Recent Changes in Industrial Techniques “to inquire, with the cooperation of industry, labour, and government, into the extent of recent changes in industrial techniques and to evaluate the effects of these changes on
22 NRC 1918–1919 report to the Council of National Defense; cited in A. L. Barrows (1941), The Relationship of the NRC to Industrial Research, in National Resources Planning Board (NRPB), Research: A National Resource II: Industrial Research, Washington: USGPO, p. 367. 23 G. P. Zachary (1997), Endless Frontier: Vannevar Bush, Engineer of the American Century, Cambridge, MA: MIT Press, 1999, p. 81. 24 It was launched in 1938 as the National Industrial Research Laboratories Institute, renamed in 1939 the Industrial Research Institute. It became an independent organization in 1945. 25 See A. L. Barrows (1941), The Relationship of the NRC to Industrial Research, op. cit.; R. C. Cochrane (1978), The National Academy of Sciences: The First Hundred Years 1863–1963, op. cit., pp. 227–228, 288–291, 338–346. 26 A similar argument appeared in NRC’s study of 1933 (M. Holland and W. Spraragen, Research in Hard Times, op. cit., pp. 12–13), but it was far less developed and articulated.
employment and unemployment.”27 Out of this project came, among some 60 studies, some measures of R&D in industry. The WPA used NRC directories of industrial laboratories to assess the scope of industrial R&D and innovation in the country, publishing its analysis in 1940.28 The report began with the following: “The systematic application of scientific knowledge and methods to research in the production problems of industry has in the last two decades assumed major proportions” (p. xi). The authors contrasted colonial times, when research was random, haphazard, and unorganized, carried out by independent inventors (pp. 46–47), with modern times when, between 1927 and 1938 for example, “the number of organizations reporting research laboratories has grown from about 900 to more than 1,700 affording employment to nearly 50,000 workers” (p. 40). And the report continued: “Industry can no longer rely on random discoveries, and it became necessary to organize the systematic accumulation and flow of new knowledge. This prerequisite for the rise of industrial research to its present proportions was being met by the formation of large corporations with ample funds available for investment in research” (p. 41). “The facilities available in these laboratories make it possible for the scientist to devote his time exclusively to work of a professional caliber. He is not required to perform routine tasks of testing and experimentation but is provided with clerical and laboratory assistants who carry on this work” (p. 43). Two years later, J. Schumpeter published Capitalism, Socialism and Democracy (1942), in which he contrasted individual entrepreneurs with large corporations.

This is the rationale behind the current meaning of systematic in R&D definitions: research is organized research, and organized research is laboratory research. The meaning spread rapidly through R&D surveys. For example, the first industrial R&D survey in the United States conducted by the National Research Council in 1941 described industrial research as “organized and systematic research for new scientific facts and principles (. . .) and presupposes the employment of men educated in the various scientific disciplines.”29 Similarly, the second survey of industrial R&D by the Federation of British Industries (FBI) defined research as “organized experimental investigations,”30 while the influential Harvard Business School survey talked of “planned search for new knowledge.”31
27 On this project and the debate on technological unemployment, see A. S. Bix (2000), Inventing Ourselves Out of Jobs? America’s Debate over Technological Unemployment, 1929–1981, Baltimore: Johns Hopkins University Press, pp. 56–74. 28 G. Perazich and P. M. Field (1940), Industrial Research and Changing Technology, Work Projects Administration, National Research Project, Report No. M-4, Pennsylvania: Philadelphia. 29 NRPB (1941), Research: A National Resource II: Industrial Research, op. cit., p. 6. 30 Federation of British Industries (FBI) (1947), Scientific and Technical Research in British Industry, London, p. 4. 31 D. C. Dearborn, R. W. Kneznek, and R. N. Anthony (1953), Spending for Industrial Research, 1951–1952, Division of Research, Graduate School of Business Administration, Harvard University, p. 44.
The National Science Foundation (NSF) and the OECD generalized the concept. As early as its first R&D survey in 1953 (on non-profit institutions), the NSF defined research and development as “systematic, intensive study directed toward fuller knowledge of the subject studied and the systematic use of that knowledge for the production of useful materials, systems, methods, or processes.”32 The OECD followed with the 1970 edition of the Frascati manual.33
Industrialized research

Why did the third meaning of systematic prevail over the others? Why focus on research organization rather than method (or content)? Certainly in dictionaries, “systematic” involves a system, and when the system concerns intellectual matters, systematic means deduction and logic. The everyday use, on the other hand, means to proceed with method. “Organized” and “sustained” are mentioned as pejorative meanings only.

I offer three factors to explain the appearance of the term systematic in the definition of research. The first is the influence of industrial surveys on R&D surveys. Industrial surveys influenced the whole methodology of questionnaires, including those surveying government and university R&D, equating the notion of research with systematized research and large organizations. Again, the NRC was the main link. Its Research Information Service regularly surveyed industrial research laboratories starting in 1920, and its directories of laboratories were used to compile the first statistics on industrial R&D, whether in NRC surveys—the first in America34—or in those of the WPA,35 the Bush report,36 the President’s Scientific Research Board report,37 the Harvard Business School,38 or the Bureau of Labor Statistics.39 Then the NSF took over, developing its surveys from a questionnaire modeled on the industrial R&D survey contracted out in 1952 by the Department of Defense (DoD) to researchers at the Harvard University Business School and the Bureau of Labor Statistics.40 The Harvard Business School study aimed explicitly “to recommend a definition of R&D that can be used for statistical and accounting
32 National Science Foundation (1953), Federal Funds for Science, Washington, p. 3. 33 The OECD definition, with its reference to systematicness, only appeared in the second edition of the OECD Frascati Manual (1970), and not the first (1963). In fact, the first edition did not include any definition of research. 34 M. Holland and W. Spraragen (1933), Research in Hard Times, op. cit.; NRPB (1941), Research: A National Resource II: Industrial Research, op. cit. 35 G. Perazich and P. M. Field (1940), Industrial Research and Changing Technology, op. cit. 36 V. Bush (1945), Science: The Endless Frontier, op. cit. 37 President’s Scientific Research Board (1947), Science and Public Policy, op. cit. 38 D. C. Dearborn, R. W. Kneznek, and R. N. Anthony (1953), Spending for Industrial Research, 1951–1952, op. cit.; R. N. Anthony and J. S. Day (1952), Management Controls in Industrial Research Organizations, Boston: Harvard University Press. 39 US Department of Labor, Bureau of Labor Statistics, Department of Defense (1953), Scientific R&D in American Industry: A Study of Manpower and Costs, Bulletin No. 1148, Washington. 40 Surprisingly, the NSF never included the term “systematic” in its industrial survey.
purposes.”41 The study was an outgrowth of another on research control practices in industrial laboratories conducted by the same author (R. N. Anthony).42 The survey showed that firm size was a primary variable explaining R&D investment. Consequently, the authors suggested:

The fact that there are almost 3,000 industrial research organizations can be misleading. Most of them are small. (. . .) Over half employ less than 15 persons each, counting both technical and non-technical personnel. Many of these small laboratories are engaged primarily in activities, such as quality control, which are not research or development. [ Therefore] this report is primarily concerned with industrial laboratories employing somewhat more than 15 persons.
(R. N. Anthony and J. S. Day (1952), Management Controls in Industrial Research Organizations, op. cit., pp. 6–7)

The second factor specifying research as systematic relates to the fact that statisticians recognized from the beginning that the activities surveyed and included in definitions of research varied considerably. Delimitations had to be drawn and conventions defined in order to measure research activities properly. One way this was done was to exclude routine work (like testing) from research, defining the latter as a systematic activity.43

The third factor relates to the cost of conducting surveys. Because there are tens of thousands of firms in a country, the units surveyed must be limited to manageable proportions. This was done in a way that introduced a bias into industrial surveys: they identified and surveyed all major R&D performers, big firms with laboratories (“organized” research), but selected only a sample of smaller performers, if any. This decision was supported by the fact that only big firms had precise bookkeeping practices on R&D, the activity being located in a distinct, formal entity, the laboratory.44

Industrial research also led to a semantic innovation: the addition of “development” to “research” created the modern acronym R&D. Before the twentieth century, people simply spoke of science, sometimes of inquiry or investigation. The term research became generalized through regular use in industry, whereas “science” was often a contested term when applied to industry: industrial research blurred the boundaries between pure and applied. The term was rapidly incorporated into the names of
41 D. C. Dearborn, R. W. Kneznek, and R. N. Anthony (1953), Spending for Industrial Research, 1951–1952, op. cit. 42 R. N. Anthony and J. S. Day (1952), Management Controls in Industrial Research Organizations, op. cit. According to Anthony and Day (p. ix), the idea for this survey came from industrialists, among them J. M. Knox, Vice President, Research Corporation, New York. 43 See Chapter 4. 44 O. S. Gellein and M. S. Newman (1973), Accounting for R&D Expenditures, American Institute of Certified Accountants, New York; S. Fabricant, M. Schiff, J. G. San Miguel, and S. L. Ansari (1975), Accounting by Business Firms for Investments in R&D, Report submitted to the NSF, New York University.
Is research always systematic? 67 public institutions like the National Research Council (US and Canada), and the Department of Scientific and Industrial Research (UK). Development is a term first introduced into research taxonomies by industry. The first NRC directory of industrial laboratories (1920) distinguished research and development (without breakdown because, it argued, “no sharp boundary can be traced between them”).45 The inspiration was clearly industrial, the whole inquiry being conducted with the aid of industrialists and professional societies, the NRC specifying that it did not innovate, accepting the data provided by companies. Soon, every manager talked of research and development: in the 1930s, many companies’ annual reports brought both terms together.46 The US government imitated industry with the creation of the Office of Scientific Research and Development (OSRD) in 1941. The category of development was coupled to that of research for two reasons. First, there were problems during the war getting innovations rapidly into production.47 As Stewart (1948) noted: Between completion of research and initiation of a procurement program was a substantial gap the armed services were slow to fill. It was becoming increasingly apparent that for research sponsored by NDRC [OSRD’s predecessor] to become effective, the research group must carry projects through the intermediate phase represented by engineering development. (Organizing Scientific Research for War, New York: Arno Press, 1980, p. 35) In fact, firms experienced many problems with production, and universities were often called on for development help (pilot plants, large-scale testing).48 Then, in 1943, as a partial response to those promoting operational research,49 the OSRD created the Office of Field Service to bring research closer to military users.50 With the OSRD, V. Bush succeeded in obtaining greater responsibilities than with its predecessor, the National Defense Research Committee (NDRC), namely responsibilities for development, procurement, and liaison with the army, besides research activities,51 without getting involved in production per se, with respect for the frontiers between research and production.
45 NRC (1920), Research Laboratories in Industrial Establishments of the United States of America, Bulletin No. 2, p. 2. 46 For examples, see: M. Holland and W. Spraragen (1933), Research in Hard Time, op. cit., pp. 9–11. 47 C. Pursell (1979), Science Agencies in World War II: The OSRD and its Challenges, in N. Reingold, The Sciences in the American Context: New Perspectives, Washington: Smithsonian Inst. Press, p. 363. 48 L. Owens (1994), The Counterproductive Management of Science in the Second World War: Vannevar Bush and the OSRD, Business History Review, 68, pp. 553–555. 49 E. P. Rau (2000), The Adoption of Operations Research in the United States During World War II, in A. C. Hughes and T. P. Hughes, Systems, Experts, and Computers: The Systems Approach in Management and Engineering, World War II and After, Cambridge, MA: MIT Press, pp. 57–92. 50 I. Stewart (1948), Organizing Scientific Research for War, op. cit., p. 128. 51 Owens (1994), The Counterproductive Management of Science in the Second World War: Vannevar Bush and the OSRD, op. cit., p. 527.
Thanks to the OSRD, the “R&D” acronym spread to other organizations: the Department of Defense’s Research and Development Board (1946); the RAND project (1948), which gave the present organization its name; the Air Force R&D Command (1950); and the position of Assistant Secretary of Defense for R&D (1953). The 1946 Atomic Energy Act also used the term, the Senate measured the government’s wartime effort in R&D terms in 1945,52 and the Bureau of Labor Statistics integrated the acronym into its surveys in 1953,53 as did the NSF the same year.54 The concept also spread rapidly into the academic world,55 then to other countries56 and international organizations (OECD, UNESCO).

The second reason development became associated with research in statistics was that it constitutes the bulk of industrial R&D and so was identified as such.57 This is still the main purpose of the category.
What do R&D surveys count?

Government statisticians consider the survey to be the only valid instrument for measuring science. As we discuss later, any other instrument (bibliometrics, patents) is instantly discredited, while the surveys’ own limitations are conveniently overlooked, since acknowledging them would undermine government statisticians’ monopoly on science measurement. In the 1980s, A. Kleinknecht studied the quality of the measures produced by official R&D surveys. He designed an industrial R&D survey and compared his results to those of a government survey. He found large differences between the two, mainly for SMEs. The author measured four times as many man-years devoted to R&D in SMEs as shown in the government survey (see Table 3.1—SEO estimates). The official survey underestimated R&D by up to 33 percent.58
52 H. M. Kilgore (1945), The Government’s Wartime Research and Development, 1940–44: Survey of Government Agencies, Subcommittee on War Mobilization, Committee on Military Affairs, Washington. 53 Department of Labor, Bureau of Labor Statistics, and Department of Defense (Research and Development Board) (1953), Scientific Research and Development in American Industry: A Study of Manpower and Costs, Washington. 54 The acronym was so new that the NSF felt obliged to specify, in its first survey of industrial R&D: “The abbreviation ‘R&D’ is frequently used in this report to denote research and development (. . .).” NSF (1956), Science and Engineering in American Industry: Final Report on a 1953–1954 Survey, Washington, NSF 56-15, p. 1. 55 D. C. Dearborn, R. W. Kneznek, and R. N. Anthony (1953), Spending for Industrial Research, 1951–1952, op. cit. 56 Dominion Bureau of Statistics (1956), Industrial Research–Development Expenditures in Canada, 1955, Ottawa; DSIR (1958), Estimates of Resources Devoted to Scientific and Engineering R&D in British Manufacturing Industry, 1955, London. The DSIR did not separate applied research and development, but considered them as one category. In that sense, it followed V. Bush (1945), Science: The Endless Frontier, op. cit. and OSRD (1947), Cost Analysis of R&D Work and Related Fiscal Information, Budget and Finance Office, Washington. 57 D. Novick (1960), What Do We Mean by Research and Development, Air Force Magazine, October, pp. 114–118; D. Novick (1965), The ABC of R&D, Challenge, June, pp. 9–13. 58 A. Kleinknecht (1987), Measuring R&D in Small Firms: How Much Are We Missing?, The Journal of Industrial Economics, 36 (2), pp. 253–256; A. Kleinknecht and J. O. N. Reijnen (1991), More evidence on the undercounting of Small Firm R&D, Research Policy, 20, pp. 579–587. For similar numbers in France, see: S. Lhuillery and P. Templé (1994), L’organisation de la R&D dans les PMI-PME, Économie et Statistique, 271–272, pp. 77–85.
Is research always systematic? 69 Table 3.1 Number of Dutch firms performing R&D according to three sources, shown by size and by manufacturing and services Size (employees) 10–19
20–49
Estimate by central statistical office (CBS) Manufacturing 182 329 Services 312 245 Total 494 574 Estimate based on R&D subsidy record Manufacturing 565 940 Services 897 688 Total 1,462 1,628 SEO “medium” estimate Manufacturing 640 1,087 Services 1,400 1,044 Total 2,040 2,131
100–199
200–499
⬎500
Total
129 67 196
123 37 160
117 50 167
170 55 225
1,050 766 1,816
818 567 1,385
546 267 813
264 258 522
246 152 398
3,379 2,829 6,208
697 522 1,219
434 280 714
262 152 414
101 44 145
3,221 3,442 6,663
50–99
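Before turning to the explanation, here is a minimal sketch of the arithmetic implied by Table 3.1, comparing the SEO and CBS firm counts; the figures are copied from the table, while the man-year comparison reported in the text is a separate calculation.

```python
# Totals of Dutch R&D-performing firms from Table 3.1, by source.
cbs = {"manufacturing": 1_050, "services": 766, "all": 1_816}
seo = {"manufacturing": 3_221, "services": 3_442, "all": 6_663}

# Smallest size class (10-19 employees), where the gap is widest.
cbs_small, seo_small = 494, 2_040

print("SEO/CBS ratio, all firms:", round(seo["all"] / cbs["all"], 1))      # ~3.7
print("SEO/CBS ratio, 10-19 employees:", round(seo_small / cbs_small, 1))  # ~4.1
```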
The reason offered was that SMEs conduct R&D informally and not continuously, rather than in a department devoted to R&D. Non-budgeted R&D is the rule: “in small firms, development work is often mixed with other activities.” Kleinknecht estimated that 33 percent of the firms devoted less than one man-year to R&D; the figure rises to 50 percent in the service industry. Later studies confirmed these results using R&D tax credit59 or innovation survey data.60

How did Kleinknecht find the missing R&D in SMEs? He simply included a question specifically for firms with no formal R&D department.61 As a result, SMEs reported more small-scale R&D work than they would have in the official survey: “if your enterprise does not have an R&D department, R&D activities might be carried out by other departments within your enterprise. For example: the sales department might develop a new product, or the production department might
59 M. S. Lipsett and R. G. Lipsey (1995), Benchmarks, Yardsticks and New Places to Look for Industrial Innovation and Growth, Science and Public Policy, 22 (4), pp. 259–265. 60 D. Francoz (2000), Measuring R&D in R&D and Innovation Surveys: Analysis of Causes of Divergence in Nine OECD Countries, DSTI/EAS/STP/NESTI(2000)26REV1, and C. Grenzmann (2000), Measuring R&D in Germany: Differences in the Results of the R&D Survey and Innovation Survey, DSTI/ EAS/STP/NESTI/RD(2000)24; D. Francoz, Achieving Reliable Results From Innovation Surveys: Methodological Lessons Learned From Experience in OECD member countries, Communication presented to the Conference on Innovation and Enterprise Creation: Statistics and Indicators, Sophia Antipolis, November 23–24, 2000. 61 The NSF had already identified the problem in the 1950s. See: NSF (1956), Science and Engineering in American Industry: Final Report on a 1953–1954 Survey, NSF 56-16, Washington, p. 89, which presented a questionnaire sent specifically to firms conducting negligible R&D activities; and NSF (1960), Research and Development in Industry, 1957, NSF 60-49, Washington, pp. 97–98, which discussed informal R&D in small companies.
introduce improvements to a production process. Have any R&D activities been carried out within your enterprise even though you do not have a formal R&D department?”62

The OECD listened to these criticisms, agreeing to discuss the issue during the fourth revision of the Frascati manual after France suggested certain modifications.63 Two options were discussed. One was to omit the reference to “systematic” in the definition of R&D. This was rejected, as the term was useful in excluding non-R&D activities. The other option was to qualify systematic as “permanent and organized” in defining R&D. In fact, the word systematic was never defined explicitly in any edition of the Frascati manual. This option was also rejected. However, a precise number was proposed and adopted for defining (core) R&D: a minimum of one FTE (full-time equivalent) person working on R&D per year. Smaller efforts would have to be surveyed via other sources (i.e. innovation surveys).64

Certain conventions, however, still prevent full consideration of R&D expenditures. One is the definition of a full-time researcher. The OECD says a full-time researcher is a person devoting at least 90 percent of their activity to research.65 Conversely, people whose research activity is less than 10 percent of their work never figure in official research calculations, although they spend time on research. This restriction has the effect of eliminating the proportion of scattered research incidental to professional activity which, added together, could represent a non-negligible sum.
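As a purely illustrative aside, here is a minimal sketch of how these conventions play out for a hypothetical small firm; the staff list is invented, the 90 percent and 10 percent cut-offs are those cited above, and the pro rata treatment of intermediate shares is an assumption made for the example.

```python
# Hypothetical staff of a small firm, with the share of working time spent on R&D.
staff_rd_share = {
    "engineer": 1.00,            # full-time on R&D
    "developer": 0.95,           # above the 90% threshold: counted as 1 FTE
    "technician": 0.40,          # counted pro rata (assumption for this sketch)
    "production_manager": 0.08,  # below the 10% threshold: not counted at all
    "sales_manager": 0.05,       # below the 10% threshold: not counted at all
}

def rd_fte(share, lower=0.10, upper=0.90):
    """Apply the cut-offs described in the text; intermediate shares count pro rata."""
    if share >= upper:
        return 1.0
    if share < lower:
        return 0.0
    return share

counted = sum(rd_fte(s) for s in staff_rd_share.values())
actual = sum(staff_rd_share.values())
print(f"Counted R&D personnel: {counted:.2f} FTE")    # 2.40 FTE
print(f"Actual time spent on R&D: {actual:.2f} FTE")  # 2.48 FTE
print("Passes the one-FTE threshold for core R&D:", counted >= 1.0)
```

The small shares dropped below the 10 percent cut-off are exactly the kind of scattered, incidental research that the text describes as disappearing from official calculations.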
Conclusion

Government and intergovernmental organizations developed a specific definition of research centered on systematic and institutionally organized research. They did so to accommodate industry’s definition of research and the inherent limitations of their own survey instruments. Before this definition of research as systematic was standardized internationally, two situations prevailed. First, definitions differed depending on which government agency performed the survey (see Appendix 9). Second, and more often, research was “defined” using categories only (basic, applied, development) or merely by activity lists, as in V. Bush’s Science: The Endless Frontier (1945) and the President’s Scientific Research Board report (1947).66 The same was true for the UK Department of Scientific and Industrial Research’s first survey (1955) and the first edition of the Frascati manual (1963).
62 Kleinknecht (1987), op. cit., p. 254. 63 OECD (1991), R&D and Innovation Surveys: Formal and Informal R&D, DSTI/STII/(91)5 and annex 1. 64 OECD (1993), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, op. cit., p. 106. 65 Ibid., p. 84. 66 See: Chapter 14.
Qualifying research as systematic had important consequences. By the mid-1950s, this new conception of research had spread to almost all institutions and countries, had modified the language (R&D) used to talk about research and, above all, had limited the scope of measurements to a specific set of research performers. These decisions were only the first to limit the scope of S&T statistics. More were to follow. In the 1970s, the Frascati manual excluded related scientific activities (RSA) from S&T, and national governments followed along, despite some short-lived attempts at measuring these activities (see Chapter 4). In the 1980s, UNESCO tried extending S&T statistics to activities other than R&D, but without success (see Chapter 5). In all, R&D as defined in this chapter would remain the quintessence of S&T statistics.
4
Neglected scientific activities
The (non-)measurement of related scientific activities
In every country, S&T measurement is generally limited to its research and development aspect. However, scientific and technological activities (STA) comprise more than just R&D. In fact, in 1978, UNESCO drafted an international recommendation defining STA as being composed of three broad types: R&D; scientific and technical training and education (STET); and scientific and technological services (STS).1 A few years later, in a new chapter of the Frascati manual, the OECD appropriated the concept of scientific and technological activities (STA).2 The purpose, however, was not to measure STA, but “to distinguish R&D, which is being measured, from STET and STS which are not” (p. 15). This chapter focuses on STS, often called related scientific activities (RSA), in order to understand why these are rarely measured. RSA includes important scientific and technological activities. These concern the generation, dissemination, and application of scientific and technical knowledge.3 Without these activities, several R&D activities would not be possible, at least not in their current form: “the optimal use of scientific and technological information depends on the way it is generated, processed, stored, disseminated, and used.”4 In a country like Canada, RSA amounts to over one-third of government S&T activity. As early as 1963, the first edition of the Frascati manual recognized the centrality of these activities to any country: R&D activities are only one part of a broad spectrum of scientific activities which include scientific information activities, training and education, general purpose data collection, and (general purpose) testing and standardization.
1 UNESCO (1978), Recommendation Concerning the International Standardization of Statistics on Science and Technology, Paris, p. 2. 2 OECD (1981), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, Paris, chapter 1. 3 K. Messman (1975), A Study of Key Concepts and Norms for the International Collection and Presentation of Science Statistics, COM-75/WS/26, UNESCO, pp. 33–34. 4 UNESCO (1984), Guide to Statistics on Scientific and Technological Information and Documentation (STID), ST-84/WS/18, Paris, p. 5.
Indeed, in some countries one or more of these related scientific activities may claim a larger share of material and human resources than R&D. It may well be desirable for such countries to begin their statistical inquiries by surveying one or more of these areas rather than R&D. (OECD (1963), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Development, Paris, p. 13) However, OECD statistical series contain no numbers on RSA, instead concentrating on R&D. In fact, such numbers are almost unavailable because few countries collect them. Besides Canada and Ireland among OECD members—and some developing countries, mainly Latin American—no other country measures RSA. Yet these activities are lengthily discussed in each edition of the Frascati manual since 1963. This chapter attempts to explain why there is so little interest in measuring RSA. Discussions about RSA rarely concern measurement of these activities per se, but rather how to better exclude them from R&D. This is a classic example of boundary work: erecting boundaries to exclude things considered outside the field.5 Two theses are discussed. First, except for methodological difficulties, one main reason RSA were excluded from R&D was ideology: R&D was perceived as a higher-order activity. No argument was needed to convince people of this. It was taken for granted almost universally that “soft” activities like market studies or design were not part of S&T. This partly explains why social sciences and humanities were not included in the Frascati manual before 1976. The second thesis of this chapter is that the little interest there was in RSA was generally motivated by political considerations, such as presenting improved S&T performance, or displaying methodological competence in S&T statistics. The first part deals with how RSA were introduced in the first edition of the Frascati manual. It shows that RSA were considered an integral aspect of what
5 R. G. A. Dolby (1982), On The Autonomy of Pure Science: The Construction and Maintenance of Barriers Between Scientific Establishments and Popular Culture, in N. Elias, H. Martins, and R. Whitley (eds), Scientific Establishments and Hierarchies, Dordrecht: Reidel Publishing, pp. 267–292; D. Fisher (1990), Boundary Work and Science: The Relation Between Power and Knowledge, in S. Cozzens, T. F. Gieryn (eds), Theories of Science in Society, Bloomington: Indiana University Press, pp. 98–119; A. Holmquest (1990), The Rhetorical Strategy of Boundary-Work, Argumentation, 4, pp. 235–258; T. F. Gieryn (1983), Boundary-Work and the Demarcation of Science From NonScience: Strains and Interests in Professional Ideologies of Scientists, American Sociological Review, 48, pp. 781–795; T. F. Gieryn (1999), Cultural Boundaries of Science: Credibility on the Line, Chicago: University of Chicago Press; L. Laudan (1996), The Demise of the Demarcation Problem, in Beyond Positivism and Relativism, Boulder: Westview Press; C. A. Taylor (1996), Defining Science: A Rhetoric of Demarcation, Madison: University of Wisconsin Press; B. Barnes, D. Bloor, J. Henry (1996), Drawing Boundaries, in Scientific Knowledge: A Sociological Analysis, Chicago: University of Chicago Press; S. G. Kohlstedt (1976), The Nineteenth-Century Amateur Tradition: The Case of the Boston Society of Natural History, in G. Holton and A. Blanpied (eds), Science and Its Public, Dordrecht: Reidel Publishing, pp. 173–190.
ought to be measured in S&T statistics. The second part explains the origins of the concept. Canada and the US National Science Foundation (NSF) were pioneers in its development, but had few imitators among other OECD countries. The third part argues that UNESCO replaced the OECD in the early 1980s in developing standards for collecting RSA statistics. UNESCO’s efforts were short-lived, however. Finally, the last part describes how a specific kind of RSA, that performed by firms, recently gained political attention, becoming a measurement priority under the banner of innovation activities.
Defining R&D
One surprising aspect of the first edition of the Frascati manual is the absence of a specific definition of research.6 Categories of research activities were defined precisely (basic, applied and development), but the following definition of R&D appeared only in the second edition of the manual (1970): “creative work undertaken on a systematic basis to increase the stock of scientific and technical knowledge and to use this stock of knowledge to devise new applications” (p. 8). In the 1963 edition, research was rather contrasted with routine work: The guiding line to distinguish R&D activity from non-research activity is the presence or absence of an element of novelty or innovation. Insofar as the activity follows an established routine pattern it is not R&D. Insofar as it departs from routine and breaks new ground, it qualifies as R&D. (OECD (1963), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Development, Paris, p. 16) As early as 1963, the manual dealt extensively with boundaries between routine work and R&D: “Definitions are not sufficient in themselves. It is necessary to amplify them by standard conventions, which demarcate precisely the borders between research and non-research activities” (p. 12). The manual distinguished R&D from two other types of activities: RSA and non-scientific activities (notably industrial production). This was where the main differences were said to exist between member countries. According to the 1963 Frascati manual, RSA falls into four classes: (1) scientific information (including publications); (2) training and education; (3) data collection; and (4) testing and standardization (p. 15). Non-scientific activities are of three kinds: (1) legal and administrative work for patents; (2) testing and analysis; and (3) other technical services (p. 16).
6 This was standard practice in the UK and France at the time. See: J. C. Gerritsen et al. (1963), Government Expenditures on R&D in the United States of America and Canada: Comparisons with France and the United Kingdom on Definitions Scope and Methods Concerning Measurement, OECD, DAS/PD/63.23.
The manual stated that RSA must be excluded from R&D unless they serve R&D directly (p. 16), adding that It is not possible here to make a detailed standard recommendation for related scientific activities (. . .). The objective of this manual is to attain international comparability in the narrower field of R&D (. . .). Arising from this experience, further international standards can be elaborated by the OECD for related activities. (pp. 14–15) The manual nevertheless recommended that All calculation of deductions for non-research activities of research organizations, and of additions for R&D activities of non-research organizations should be made explicit, that is to say, recorded both by individual respondents and by those compiling national totals from the data furnished by individual respondents. Furthermore, whenever possible, related scientific activities such as documentation and routine testing, should be measured simultaneously with R&D and reported separately. (OECD (1963), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Development, Paris, p. 14) The recommendation was soon abandoned, despite talks about extending the Frascati manual to RSA as early as 1964.7 In 1967, the OECD concluded that “these activities necessitate the formation of an ad hoc study group to elucidate the main problems which arise in measuring these activities.”8 Consequently, the suggestion to measure RSA was dropped. The second edition of the Frascati manual (1970) concentrated on R&D, and no study group was created: “We are not concerned here with the problem of measuring R&D related activities but with the conventions to be used to exclude them when measuring R&D activities” (p. 14). The second edition of the manual was in fact the first of many steps in boundary work. In 1970, the RSA excluded from R&D were extended to seven classes: (1) scientific education; (2) scientific and technical information (subdivided into six sub-classes, eight in 1976); (3) general purpose data collection; (4) testing and standardization; (5) feasibility studies for engineering projects; (6) specialized medical care; and (7) patent and license work. Policy-related studies were added in 1976, and routine software development in 1993 (Appendix 10 presents the list of activities and boundaries in the five editions of the Frascati manual). Over this period, the OECD nevertheless adopted the UNESCO concept of STA. The 1981 edition of the Frascati manual evoked the UNESCO
7 OECD (1964), Committee for Scientific Research: Programme of Work for 1965, SR (64) 33, p. 12 and 18; OECD (1964), Committee for Scientific Research: Programme of Work for 1966, SR (65) 42, p. 23. 8 OECD (1967), Future Work on R&D Statistics, SP(67)16, p. 9.
recommendation on STA, and suggested modifications: the OECD’s list of STAs had seven classes, instead of UNESCO’s nine (Table 4.1).9 However, the 1981 adoption of STA in the manual appeared only in the introductory chapter “addressed principally to non-experts and (. . .) designed to put them in the picture” (p. 13). It had correspondingly few consequences on measurement.
Table 4.1 Related scientific activities according to UNESCO and OECD
UNESCO (1978): Libraries; Translation, editing; Data collection; Testing and standardization; Patent and license activities; Surveying; Prospecting; Counseling; Museums
OECD (1981): S&T information services; General purpose data collection; Testing and standardization; Patent and license work; Policy-related studies; Feasibility studies; Specialized medical care
What are scientific activities?
The concept of research, or scientific activities, dates from 1938, whereas the precursor to the concept of RSA—background research—appeared in 1947. Previously, people spoke simply of research or science, or, increasingly, R&D. In 1938, the US National Resources Committee introduced the concept of “research activities” in its government science report Research: A National Resource.10 The report defined research activities as “investigations in both the natural and social sciences, and their applications, including the collection, compilation, and analysis of statistical, mapping, and other data that will probably result in new knowledge of wider usefulness” (p. 62). The report recognized that “the principal conflicts of opinion about the definition used in this study have revolved around the inclusion of the following activities as research” (p. 62): collection and tabulation of basic data, economic and social studies, mapping and surveying, library and archival services. But it concluded that “part of the difficulty with the adopted definition of research is due to attempts to distinguish between what might be designated as the ‘higher’ and ‘lower’ orders of research without admitting the use of those concepts” (p. 62). It added, “it would probably be instructive to obtain separate estimates for these two ‘orders’ (. . .). However, such a separation has proven
9 The manual includes museums in R&D rather than in RSA, for example, because they often perform R&D (p. 15). 10 National Resources Committee (1938), Research: A National Resource, op. cit.
impractical because of the budgetary indivisibility of the two types of research processes” (p. 62).11 Ten years later, the US President’s Scientific Research Board report Science and Public Policy borrowed the term “background research” from J. Huxley12 to define these activities identified by the National Resources Committee: “background research is the systematic observation, collection, organization, and presentation of facts, using known principles to reach objectives that are clearly defined before the research is undertaken, to provide a foundation for subsequent research or to provide standard reference data.”13 This type of activity was identified because the survey concerned government research: background activities are “proper fields for Government action” (p. 312), as already observed in the Bush report.14 Since then, RSA have been measured, on the few occasions they are measured at all, for government activities only. For both the National Resources Committee and the President’s Scientific Research Board, identification of specific activities in addition to R&D served only to define what to include or exclude in measuring research. There was no breakdown of data by activity type. We owe to Canada and the NSF the first measurements of RSA. As early as 1947, the Canadian Department of Reconstruction and Supply, in a government research survey conducted with the Canadian NRC, defined “scientific activities” as the sum of three broad types of activities: research (composed of pure, background, and applied), development, and analysis and testing.15 Again, as in the President’s Scientific Research Board report, the background category served only to specify what is included in research expenditures. No specific numbers were produced “because of the close inter-relationship of the various types of research undertaken by the Dominion Government” (p. 16), that is, because of the difficulty of separating R&D and RSA in available statistics. However, separate numbers were produced for a new activity category: it reported that 12 percent of Canadian scientific activity was devoted to (routine) analysis and testing (p. 25), activities usually excluded from R&D surveys.16 The NSF continued to innovate, while Canada performed no further government R&D surveys until 1960, by which time Canada’s Dominion Bureau of
11 Concerning the difficulties of separating activities before the OECD standard, see for example: National Resources Committee (1938), Research: A National Resource, New York: Arno Press, 1980, Vol. 1, pp. 6, 61–65; Vol. 2, pp. 5–8, 173; President’s Scientific Research Board (1947), Science and Public Policy, op. cit., pp. 73, 300–302; National Science Foundation (1959), Methodological Aspects of Statistics on R&D Costs and Manpower, op. cit. 12 J. S. Huxley (1934), Scientific Research and Social Needs, London: Watts and Co. 13 President’s Scientific Research Board (1947), Science and Public Policy, op. cit., p. 300. 14 V. Bush (1945), Science: The Endless Frontier, op. cit., p. 82. 15 Department of Reconstruction and Supply (1947), Research and Scientific Activity: Canadian Federal Expenditures 1938–1946, Ottawa: Government of Canada, p. 13. 16 See for example: Work Projects Administration (1940), Reemployment Opportunities and Recent Changes in Industrial Techniques, op. cit., p. 2.
Statistics had assimilated the NSF definitions. From the early 1950s, the NSF conducted regular surveys of government research. The results were published in the series Federal Funds for Science.17 R&D data included “other scientific activities”, as did most concurrent surveys in other countries.18 But these were not separated from R&D. Then in 1958, the NSF published Funds for Scientific Activities in the Federal Government.19 The publication, among other things, reanalyzed the 1953–1954 data. Scientific activities were discussed and defined as “creation of new knowledge, new applications of knowledge to useful purposes, or the furtherance of the creation of new knowledge or new applications” (no page number). Activities were divided into seven classes, three defining R&D and four defining “other scientific activities”: 20 R&D, planning and administration, expansion of R&D plant, data collection, dissemination of scientific information, training, and testing and standardization. It was estimated that “other scientific activities” amounted to $199 million, or 7.8 percent of all scientific activities. Of these, data collection was responsible for nearly 70 percent, while dissemination of scientific information (6.5 percent) was said to be underestimated at least threefold. Subsequent editions of Federal Funds for Science (after 1964, Federal Funds for R&D and Other Scientific Activities) included data on “other scientific activities.” But these were restricted to two categories: dissemination of scientific and technical information, and, for a brief period of time, general-purpose data collection. Over time, detailed sub-classes were developed for each category, peaking in 1978 when scientific and technical information (STI) alone had four classes, which were subdivided into eleven subclasses ( p. 43) (see Table 4.2).21 The NSF stopped publishing data on “other scientific activities” with the 1978 Federal Funds. It measured them for the last time in the three-volume report Statistical Indicators for Scientific and Technical Communication, written by King Research Inc. and published by the NSF’s Division of Scientific Information.22 That was NSF’s last work on the subject, although the research was initially contracted “to develop and initiate a system of statistical indicators of scientific and technical communication” ( p. V).23 Why did the NSF abandon measurement of RSA?
17 National Science Foundation (1953), Federal Funds for Science, Washington: Government Printing Office. 18 See: J. C. Gerritsen (1963), Government Expenditures on R&D in the United States of America and Canada: Comparisons with France and the United Kingdom on Definitions Scope and Methods Concerning Measurement op. cit. 19 National Science Foundation (1958), Funds for Scientific Activities in the Federal Government, Fiscal Years 1953 and 1954, NSF-58-14, Washington. 20 These four activities were later included in the first edition of the Frascati manual. 21 National Science Foundation (1978), Federal Funds for R&D and Other Scientific Activities: Fiscal Years 1976, 1977, 1978, pp. 78–300, Washington. 22 National Science Foundation (1976), Statistical Indicators of Scientific and Technical Communication: 1960–1980, three volumes, Washington. 23 Some of the statistics from the report were included in NSF, Science and Engineering Indicators (1977), Washington, pp. 59–63.
Table 4.2 Scientific and technical information (STI) according to NSF (1978)
Publication and distribution: Primary publication; Patent examination; Secondary and tertiary publication; Support of publication
Documentation, reference and information services: Library and reference; Networking for libraries; Specialized information centers; Networking for specialized information centers; Translations
Symposia and audiovisual media: Symposia; Audiovisual media
R&D in information sciences
First, because of the magnitude of the activities. Over the period 1958–1978, surveys reported that information dissemination and data collection represented only about 1–2 percent of federally funded scientific activities. Such a low activity volume was considered not worth the effort.24 Not worth the effort, since, second, the NSF began publishing Science Indicators (SI ) in 1973.25 Everyone applauded the publication, including Congress and the press. Among the indicators soon appearing in SI were supposedly good statistics on scientific information—at least in the US view: bibliometric indicators. For fifteen years, the United States was the only regular producer of such statistics.26 For the NSF, counting publications became the main indicator for measuring scientific information. Third, over time, people became more interested in technologies associated with information and communication activities. Despite work by Bell, Porat, and Machlup on the information society,27 surveys increasingly focused on infrastructure
24 A survey on scientific and technical information (STI) in industry was planned in 1964, but never, to my knowledge, conducted. In 1961, however, the NSF conducted the first survey on industry publication practices, but it focused on measuring basic research, not RSA. See: NSF (1961), Publication of Basic Research Findings in Industry, 1957–59, NSF 61–62, Washington. 25 National Science Foundation (1973), Science Indicators: 1972, Washington. 26 F. Narin et al. (2000), The Development of Science Indicators in the United States, in B. Cronin and H. B. Atkins, The Web of Knowledge: A Festschrift in Honor of Eugene Garfield, Medford: Information Today Inc., pp. 337–360. 27 F. Machlup (1962), The Production and Distribution of Knowledge in the United States, Princeton: Princeton University Press; D. Bell (1973), The Coming of the Post-Industrial Society, New York: Basic Books; US Department of Commerce (1977), The Information Economy, Washington: USGPO; M. R. Rubin and M. T. Huber (1984), The Knowledge Industry in the United States, Princeton: Princeton University Press.
and hardware. Over time, indicators on information technologies began replacing indicators on information activities. This was also true at the OECD. Its concern with STI dates back to 1949, when the OEEC established a working party on STI. The group was concerned mainly with exchange of scientific information between countries, including the USSR, and conducted, coordinated by the British Central Office of Information, an international inquiry on use of scientific and technical information by more than 2,000 small and medium-sized firms.28 In 1962, the newly created OECD established an ad hoc group of experts on STI, chaired by an NSF representative. The group performed pilot surveys on STI facilities in 1963–1964,29 recommending that data on resources devoted to STI be collected in subsequent R&D surveys.30 An Information Policy Group (IPG) was created in 1964, reporting again on the need for data.31 The OECD Directorate of Scientific Affairs (DSA) heard these demands. In 1968, the DSA recommended that governments prioritize measurement of STI, and proposed a specific survey “to supply governments with a solid statistical foundation on which to build their national policy.”32 So Germany’s Heidelberg Studiengruppe fur Systemsforschung was contracted to develop a methodological document on STI statistics.33 STI activities were extensively defined, in line with the NSF definition discussed above—and not yet, as in later OECD surveys, exclusively concerned with technologies. The manual was tested in Norway and vehemently criticized at a meeting in Oslo in 1971,34 particularly by countries where surveys were conducted. The manual was considered too complicated and clumsy, not providing governments with enough basic statistical data to formulate STI policy.35 In 1973, the STI policy group concluded, “before fixing on such a methodology, it is necessary to identify the essential data and to define the indicators that are needed.”36
28 EPA (1958), Technical Information and the Smaller Firm: Facts and Figures on Practices in European and American Industry, Paris; EPA (1959), Technical Information and Small and Medium Sized Firms: Methods Available in Europe and the United States, Numbers and Facts. The survey was followed by another in 1960 concerned with information suppliers: EPA (1960), Technical Services to the Smaller Firm by Basic Suppliers: Case Studies of European and American Industry, Paris. 29 OECD (1964), Sectoral Reviews of Scientific and Technical Information Facilities—Policy Note, SR (64) 38. 30 OECD (1963), Committee for Scientific Research: Minutes of the 7th Session, SR/M(63)2; OECD, SR(65)15. 31 OECD (1965), Sectoral Reviews of Scientific and Technical Information Facilities: A Critical Evaluation, SR (65) 51. 32 OECD (1968), Survey of STI Activities, DAS/SPR/68.35, p. 2. 33 OECD (1969), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of STI Activities, DAS/STINDO/69.9, Paris. 34 OECD (1972), Notes on the Meeting of Countries Collecting Statistics on Resources Devoted to STI, DAS/STINFO/72.22. 35 OECD (1973), Collection of Statistical Data on STI, DAS/SPR/73.94 (A); OECD (1973), Economics of Information, DAS/STINFO/73.18. 36 OECD (1973), Economics of Information, DAS/STINFO/73.18, p. 3. The same year, a study on information needs and resources, using bibliometric data, was published by the IPG: G. Anderla (1973), Information in 1985: A Forecasting Study of Information Needs and Resources, Paris: OECD.
Therefore the IPG established a steering committee on STI indicators in 1974. Adopting again the NSF definition then in vogue for measuring information and communication, the committee proposed five classes of indicators, some already collected, “to assist countries to manage their information policy” (p. 3): (1) financial resources allocated to STI, (2) manpower, (3) information produced and used (publications, services, libraries, conferences), (4) computers and communication, and (5) potential users.37 The two OECD instruments—the methodological manual and list of indicators— were never used to develop statistics measuring S&T in general or RSA in particular. First, the IPG gained autonomy, distancing itself from R&D policies. An OECD report criticized the tendency back in 1971: “The scope of the IPG has been too narrow, for it has focused primarily on mechanisms and interfaces between components of the international network and has not given sufficient attention to broader issues (. . .).”38 The IPG was concerned mainly with information policy per se—government support to STI. In line with this, beginning in the early 1970s, the OECD started publishing Reviews of National Scientific and Technical Information Policy (Ireland, Canada, Spain, and Germany). The reviews were qualitative rather than quantitative. The second reason for abandoning STI statistics was that the OECD was increasingly concerned with a very broad definition of information, one covering the entire “information society,” not just S&T. In the first statistical survey of information and communication activities, conducted in nine member countries, the definition of information included teachers, managers, journalists, arts, etc.39 (These were specifically excluded from the OECD manual on STI in 1969.) It showed that one-third of the active working population worked in these professions. Third, the study of information as an activity rapidly became the study of information as a commodity, generating a new fad for measuring technologies associated with information and communication.40 The interest in these matters dates back to 1969, when the DSA established an expert group to conduct a computer utilization survey.41 Some years later, a work program was established in Information, Computers and Telecommunications,42 and a committee on Information, Computer and Communication Policy (ICCP) was created in 1977. Subsequently, the OECD performed several quantitative studies on information and telecommunication technologies and, starting in 1979, published a series of documents concerned notably with statistical aspects of information and communication technologies (see Appendix 11). 37 OECD (1974), Summary Record of the First Meeting of the Steering Group on Indicators for Scientific and Technical Information, DAS/STINFO/74.28. 38 OECD (1971), Information for a Changing Society, Paris. 39 OECD (1981), Information Activities, Electronics and Telecommunications Technologies, Vol. 1, Paris, Table 1. 40 This tendency was criticized as early as 1968, by the Canadian delegate for example. See: OECD (1968), Establishment of a High-Level Group on Scientific and Technical Information, C(68)147, annex II. 41 OECD (1969), Expert Group on Computer Utilization: Outline of the Study, DAS/SPR/69.1. 42 OECD (1975), Information, Computers and Telecommunications: A New Approach to Future Work, DSTI/STINFO/75.22.
Among OECD countries, only Ireland and Canada pursued measurement of RSA. Canada had, since 1960, measured RSA in its surveys of government scientific activities. Ireland still collects RSA information, even developing specific STI statistics in the early 1970s43 that became a de facto standard for UNESCO when it contracted development of its STI guide to an Irish consultant.
The international politics of numbers
Despite the OECD’s retreat from the field of RSA, statistical innovations continued to appear as UNESCO took over predominance in RSA measurement. Not only did UNESCO resurrect the concept of scientific and technical activities in 1978, at least internationally, it also published thoughts on RSA and imitated the OECD practice of dedicated manuals. Why was UNESCO involved in measurement of RSA? The official argument in document after document is their contribution to S&T: The priority given to R&D in data collection is only a matter of expediency, and does not mean that the importance of an integrated approach to R&D seen within a full context of educational and other services is underestimated. One may even argue that it is only in close conjunction with these services that R&D can be meaningfully measured—because they are indispensable for research efficiency (. . .) and should precede rather than follow the emergence of R&D in a country. (Z. Gostkowski (1986), Integrated Approach to Indicators for Science and Technology, CSR-S-21, Paris: UNESCO, p. 2) According to UNESCO, surveying national scientific and technological potential (STP) “should not be limited to R&D but should cover related scientific and technological activities (. . .). Such activities play an essential part in the scientific and technological development of a nation. Their omission from the survey would correspond to a too-restricted view of the STP, and would constitute an obstacle to the pursuance of a systematic policy of applying science and technology to development” (p. 21).44 The obstacle was perceived to be bigger in developing countries with their reliance on foreign knowledge transfers: What would be the use of transfer of technology or knowledge derived from R&D if the countries to which they were passed lacked the infrastructure necessary to make them operational? (J.-C. Bochet (1977), The Quantitative Measurement of Scientific and Technological Activities Related to R&D Development: Feasibility Study, CSR-S-4, Paris: UNESCO, p. 5)
43 National Science Council (1972), Scientific and Technical Information in Ireland: A Review, Dublin; National Science Council (1978), Scientific and Technical Information in Ireland: Financial Resources Devoted to STID in Ireland, 1975, Dublin. 44 UNESCO (1970), Manual for Surveying National Scientific and Technological Potential, NS/SPS/15, Paris.
Programmes of R&D in the developing countries are not sufficient to guarantee a rise in the scientific and technological activities of a country. In addition to those important activities it has been found necessary to create an infrastructure of scientific and technological services which, on the one hand, support and aid R&D proper, and on the other hand, serve to bring the results of R&D into the service of the economy and the society as a whole. ( J.-C. Bochet (1974), The Quantitative Measurement of Scientific and Technological Activities Related to R&D Development, CSR-S-2, Paris: UNESCO, p. I) It was therefore deemed essential to assess activities involved in collecting and disseminating scientific information. Indeed, World Plan for Action for the Application of Science and Technology to Development stressed the need for such a survey in 1971.45 But there were other reasons UNESCO became interested in RSAs. First, the OECD surprised UNESCO when, in 1963, it published a standard methodology for conducting R&D surveys, a manual that, according to the OECD, “attracted considerable interest in other international organizations and in member countries (. . .), [and was] one of the most important [items] in the Committee’s program.”46 Back in 1960, UNESCO was trying to assess resources devoted to S&T in developing countries.47 It was also aware of the difficulties of comparing national data. Was it not UNESCO’s role to handle international standards? By 1958, UNESCO had produced standards for education, and was developing others for periodicals (1964) and libraries (1970). Given the OECD Frascati manual, if UNESCO wanted to get into S&T measurement, it had to distinguish itself. It did so by taking the broader concept of STA more seriously than the OECD. This led not only to the 1978 recommendation, but also to manuals on STP,48 the social sciences,49 and education and training.50 All were measurements not yet covered by the OECD. These manuals were supported by a mid-1970s study series. UNESCO produced two studies on RSA.51 In a perceptive comment, their author noted, “there 45 UNESCO (1971), World Plan for Action for the Application of Science and Technology to Development, New York. 46 OECD (1964), Committee for Scientific Research: Minutes of the 11th Session, SR/M (64) 3, p. 11. 47 For details, see UNESCO (1968), General Surveys Conducted by UNESCO in the Field of Science and Technology, NS/ROU/132, Paris; W. Brand (1960), Requirements and Resources of Scientific and Technical Personnel in Ten Asian Countries, ST/S/6A, Paris: UNESCO. See also: UNESCO (1968), Provisional Guide to the Collection of Science Statistics, COM/MD/3, Paris, chapter 1. 48 UNESCO (1970), Manual for Surveying National Scientific and Technological Potential, NS/SPS/15, Paris. This term “S&T potential” came directly from the OECD’s work on technological gaps; see: OECD (1966), Differences Between the Scientific and Technical Potentials of the Industrially Advanced OECD Member Countries, DAS/SPR/66.13. 49 UNESCO (1971), The Measurement of Scientific Activities in the Social Sciences and the Humanities, CSR-S-1, Paris. 50 UNESCO (1982), Proposal for a Methodology of Data Collection on Scientific and Technological Education and Training at the Third Level, CSR-S-15, Paris. 51 J.-C. Bochet (1974), The Quantitative Measurement of Scientific and Technological Activities Related to R&D Development, op. cit. and J.-C. 
Bochet (1977), The Quantitative Measurement of Scientific and Technological Activities Related to R&D Development: Feasibility Study, op. cit.
does not seem to be any positive criterion by which activities related to R&D (are) defined.”52 The OECD definition currently in use was based on a negative criterion: RSA consisted of scientific and technological activities that were not innovative in nature. J.-C. Bochet suggested three other, more positive definitions. He defined RSA as:
1 Activities which, while not being actually innovative in character, form the infrastructure necessary for the effectiveness of R&D;
2 Activities which, within the framework of S&T, maintain the continuity of the routine competence necessary for R&D activity, although not playing a direct part in it;
3 Activities which, whilst not being innovative in character, have, in varying degrees, connections with R&D activities, created according to circumstances, either internally or externally to R&D.
From these reflections came a guide on scientific and technical information and documentation (STID) drafted in 1982, tested in seven countries, and published provisionally in 1984.53 The guide was based on a 1979 UNESCO study by D. Murphy of the Irish National Science Council.54 It defined STID as “the collection, processing, storage and analysis of quantitative data concerning information activities (. . .)” (p. 5). The principal items to be measured were institutions and individuals performing these activities, the amount of financial resources and physical facilities available, and the quantity of users. The first stage was to collect information only on the second category of institution:
● specialized libraries and centers;
● national libraries and libraries of higher education, referral centers;
● editing, publishing, printing, consulting and advisory services and enterprises.
Over this short period, then, UNESCO clearly recommended and worked on the measurement of RSA. But it also restricted the range of RSA to a single item: STID. One reason was the availability of data on these activities: other divisions of the UNESCO Division of Statistics already collected information on communication.55 There was certainly a third reason UNESCO got involved in RSA methodology, however. Its interest in RSA was the consequence of its basic goal of extending standardization beyond industrialized (i.e. OECD) countries. The first step in
52 J.-C. Bochet (1974), The Quantitative Measurement of Scientific and Technological Activities Related to R&D Development, op. cit., p. 2. 53 UNESCO (1984), Guide to Statistics on Scientific and Technological Information and Documentation (STID), ST-84/WS/18, Paris. 54 D. Murphy (1979), Statistics on Scientific and Technical Information and Documentation, PGI-79/WS/5, Paris: UNESCO. 55 See: Z. Gostkowski (1986), Integrated Approach to Indicators for Science and Technology op. cit., pp. 11–17.
that program, in 1967, was Eastern Europe. In 1969, UNESCO published The Measurement of Scientific and Technical Activities by C. Freeman.56 The document concerned the data standardization between Western and Eastern Europe (p. 7) and the necessity of measuring RSA (p. 10): R&D is “only part of the spectrum of scientific and technological activities (. . .). It is considered essential at the outset to visualize the whole and to begin to build the necessary framework for establishing a viable data collection system covering the whole field” (p. i). The document led to a guide57 and a manual on S&T statistics.58 The UNESCO manual was in fact a duplicate of the Frascati manual. Indeed, the following statement appeared in the manual’s provisional edition (1980): “differences arising from this wider scope have very little effect on the key fundamental concepts (. . .)” (p. 13). Peculiar to eastern countries at the time was that R&D was not designated as such. The USSR put all statistics on S&T under the heading “science.”59 Government science included training, design and museums. UNESCO thus had to choose: either follow the OECD and emphasize R&D, or measure, as in Eastern Europe, both R&D and RSA. The latter option prevailed. In attempting to statistically accommodate Eastern Europe, UNESCO’s efforts were guided by its desire to generate a broader range of standardization than the OECD, as well as by an interest in RSA per se. But the program including Eastern Europe failed, and UNESCO never collected data on RSA. Why? First, UNESCO concentrated on R&D. R&D was supposedly easier to locate and measure, and had the virtue of an “exceptional” contribution to S&T. Hence, while UNESCO pushed for RSA, it argued for the centrality of R&D. Here is one example, among many, of the rhetoric used: Because of the unique (“exceptionnel ” in the French version) contributions that R&D activities make to knowledge, technology, and economic development, the human and financial resources devoted to R&D, which might be called the core of science and technology, are usually studied in greater detail (p. 6). (UNESCO (1986), Provisional Guide to the Collection of Science Statistics, COM/MD/3, Paris, p. 6) Consequently, in 1978, measurement of RSA was postponed to a later second stage in the measurement of S&T: Due to considerable costs and organizational difficulties, the establishment of a system of data collection covering at once the full scope of STS and STET in a country has been considered not practical. Some priorities have,
56 C. Freeman (1969), The Measurement of Scientific and Technical Activities, ST/S/15, Paris: UNESCO.
57 UNESCO (1984), Guide to Statistics on Science and Technology (third edition), ST.84/WS/19, Paris.
58 UNESCO (1984), Manual for Statistics on Scientific and Technological Activities, ST-84/WS/12, Paris, p. 6.
59 C. Freeman and A. Young (1965), The R&D Effort in Western Europe, North America and the Soviet Union, Paris: OECD, pp. 27–30, 99–152; C. Freeman (1969), The Measurement of Scientific and Technical Activities, op. cit., pp. 7, 11–12.
thus, to be adopted for a selective and piecemeal extension of coverage of certain types of STS and STET. (Z. Gostkowski (1986), Integrated Approach to Indicators for Science and Technology, op. cit., p. 1)
First stage: during this stage, i.e. during the years immediately following the adoption of this recommendation [1978], international statistics should cover only R&D activities in all sectors of performance, together with the stock of SET [scientists, engineers and technicians] and/or the economically active SET (. . .).
Second stage: during that stage, the international statistics should be extended to cover STS and STET. Subsequently, the international statistics relating to STS and STET should be progressively extended to the integrated units in the productive sector. (UNESCO (1978), Recommendation Concerning the International Standardization of Statistics on Science and Technology, Paris, pp. 10–13)
The second reason UNESCO never collected RSA data was the fact that, ultimately, few countries were interested.60 A meeting of experts on data collection methodology for STID activities was held in 1985 to assess lessons learned from the pilot surveys.61 It reported that STID activities were not deemed particularly urgent, that the purpose for measuring them was unclear, and that there were difficulties in interpreting the definition (pp. 26–29). But the main reason UNESCO failed in its efforts to measure RSA was the United States’ departure from the organization in 1984. This considerably weakened UNESCO’s Division of Statistics in both financial and human resources, leading to the decline, and almost the disappearance, of UNESCO from the measurement of S&T.
The “autonomization” of RSA
In 1992, the OECD draft manual on human resources in S&T (later the Canberra manual)62 suggested that the Frascati manual and UNESCO definitions of S&T activities were too limited. From discussions during the draft manual workshop, it was decided that industrial production activities should be included in statistics on (human resources in) S&T.63 This was only the first amendment to the historical exclusion of non-scientific activities from S&T statistics.
60 The first OECD ad hoc review group argued the opposite in 1973: a majority of countries were said to be interested in RSA. See: OECD (1973), Report of the Ad Hoc Review Group on R&D Statistics, STP(73) 14, Paris, pp. 22–23. 61 UNESCO (1985), Meeting of Experts on the Methodology of Data Collection on STID Activities, 1–3 October 1985, Background Paper, ST-85/CONF.603/COL.1, Paris. 62 See: Chapter 13. 63 OECD (1993), Summary Record of the Workshop on the Measurement of S&T Human Resources, DSTI/EAS/M (93) 4, p. 5.
Since its first edition, the Frascati manual dealt with another type of activity for which boundaries had to be drawn from R&D: non-scientific activities. According to the 1963 edition, these comprised legal and administrative work for patents, routine testing and analysis, and other technical services, activities to be excluded from R&D, like RSA. However, these kinds of work could be considered industry’s equivalent of government RSA, sometimes included in government R&D surveys (patent work, for example, was transferred from non-scientific activities to RSA in the 1970 edition of the Frascati manual). These are “related activities which are required during the realization of an innovation” (p. 16).64 This was what the OECD formalized in 1981 in introducing the concept of innovation in the introductory chapter of the Frascati manual. Innovation was defined as Transformation of an idea into a new or improved salable product or operational process (p. 15). It involved all those activities, technical, commercial, and financial steps, other than R&D, necessary for the successful development and marketing of a manufactured product and the commercial use of the processes and equipment. (OECD (1981), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, Paris, p. 28) More specifically, innovation was defined as an activity composed of six subactivities (pp. 15–16):
1 new product marketing
2 patent work
3 financial and organization changes
4 final product or design engineering
5 tooling and industrial engineering
6 manufacturing start-up.
Of all non-R&D activities, these are the only ones in the history of OECD statistics on S&T given a certain autonomy, and status equivalent to R&D.65 In 1992, the OECD drafted a manual devoted specifically to measurement of innovation— the Oslo manual—officially published in collaboration with the European Union (Eurostat) in 1997.66 Based on innovation surveys conducted using this manual, it is now estimated that non-R&D activities account for about one-quarter of product innovation activities within firms.67 64 OECD (1981), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, Paris. 65 Other manuals exist, but are not official publications. 66 OECD/Eurostat (1997), Proposed Guidelines for Collecting and Interpreting Technological Innovation Data, Paris. 67 E. Brouwer, and A. Kleinknecht (1997), Measuring the Unmeasurable: A Country’s Non-R&D Expenditure on Product and Service Innovation, Research Policy, 25, p. 1239.
The reasons for the interest of the OECD and member countries in measurement of innovation are obvious. In the mid-1970s, scientific and technological policies were increasingly focusing on innovation rather than support for science per se. Consequently, the current agenda of OECD member countries is mainly economic, with technological innovation believed to be a key factor, if not the main one, contributing to economic growth.
Conclusion
Back in 1963, the OECD defined R&D by excluding routine activities. This was why RSA were dealt with at length in the manual (other reasons were methodological difficulties of separating R&D and RSA and discrepancies between national data). There was no interest in RSA per se. It took fifteen years before a conceptual definition of RSA appeared in the manual. Before the UNESCO recommendation, RSA were defined only as a list of activities, and many examples still exist for instructing manual users on how not to include or measure RSA. UNESCO was the last organization to invest in defining and measuring RSA systematically. But it was, in the end, only slightly more interested in these activities than the OECD. UNESCO had to find a niche to become a credible actor in S&T statistical methodology. It followed Eastern Europe’s experience, that being the easiest way to standardize statistics outside of OECD countries. The very few countries that measure RSA today do so for various reasons. Probably the main one is to exhibit higher levels of S&T activity, like Latin American countries, where RSAs represent over 77 percent of the region’s scientific and technological activity.68 But the same applies for developed countries, like Canada: RSA allows certain government agencies and departments with little or no R&D to be included among S&T, or helps increase a region’s statistical performance. For example, Canadian RSA statistics appear in the main publication listing federal government activities on S&T.69 Certain of these numbers also appear in the annual issue of the monthly bulletin on science statistics, which deals with the distribution of federal expenditures by province.70 The bulletin provides data on both R&D expenditures and on total S&T activities. RSA figures can be obtained only by subtracting R&D from total S&T expenditures. One table, however, explicitly provides RSA statistics: that on the National Capital Region (NCR). The aim is to display higher federal activity in Quebec.71 On the Quebec side of the NCR, federal R&D expenditures are $17 million (versus $673 million for Ontario), yet the federal government is shown to spend $198 million more on RSA in Quebec than in Ontario overall.
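As a purely illustrative sketch of the subtraction just described (the dollar totals below are hypothetical placeholders, not Statistics Canada figures), the RSA amount implicit in such a bulletin would be derived as follows:

# Hypothetical example only: deriving RSA spending by subtraction,
# as the bulletin's layout requires. Not actual Statistics Canada data.
total_st_expenditures = 950.0  # total federal S&T expenditures, $ millions (assumed)
rd_expenditures = 700.0        # federal R&D expenditures, $ millions (assumed)

rsa_expenditures = total_st_expenditures - rd_expenditures
print(f"Implied RSA expenditures: ${rsa_expenditures:.0f} million")  # $250 million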
68 RICYT (1998), Science and Technology Indicators: 1997, Buenos Aires.
69 Statistics Canada, Federal Scientific Activities, 88-204.
70 Statistics Canada (1999), Service Bulletin: Science Statistics, 88-001, December.
71 For an analysis of the politicization of S&T statistics in Canada, see: B. Godin (2000), The Measure of Science and the Construction of a Statistical Territory: The Case of the National Capital Region (NCR), Canadian Journal of Political Science, 33 (2), pp. 333–358.
So in the past fifty years, RSA were rarely examined as part of S&T activity measurement, or even discussed as activities themselves. Initially, they were discussed in OECD manuals only to better exclude them from R&D measurement. Second, above all, they served political ends: promoting a field (statistics) judged by an organization (UNESCO) to be strategically important, or displaying higher scientific and technological activity (Canada). One useful further investigation would be to examine how measurement of social sciences and humanities suffered from the decision not to measure RSA. In the mid-1970s, when government surveys began to cover the social sciences and humanities,72 conventions designed for natural sciences in the previous decade were strictly applied to these new disciplines. Therefore, activities such as data collection and STI—among them statistics production—the raw material of the social sciences and humanities, and an integral part of research in these disciplines, were excluded as being RSA; similarly, economic studies and market research were never considered research activities in industrial surveys.73 The exclusion of RSA from R&D surveys is one more example of the principle of hierarchy behind the measurement of science. Indeed, this exclusion reaffirms the long-held belief that only experimentation counts as research. For the social sciences and humanities, it meant they were never measured until the 1970s, when such measurement finally did begin. An echo of this historical decision is the non-measurement today of much of the research in the social sciences and humanities.
72 Today, nine OECD countries still do not include the social sciences and humanities in their surveys. 73 P. Lefer (1971), The Measurement of Scientific Activities in the Social Sciences and Humanities, Paris: UNESCO, CSR-S-1; OECD (1970), The Measurement of Scientific Activities: Notes on a Proposed Standard Practice for Surveys of Research in the Social Sciences and Humanities, DAS/SPR/70.40, Paris.
5
What’s so difficult about international statistics?
UNESCO and the measurement of scientific and technological activities
As the last chapter began to document, international organizations compete among themselves in their specific areas as much as national governments do in war and politics. Each wants its own decisions and interpretations to take precedence over those of its rivals. Some are more successful at it than others, however, like the OECD in the case of S&T statistics. In 1963, the OECD published the first edition of the Frascati manual, which subsequently became the world standard. Although UNESCO now regularly insists upon greater collaboration with the OECD,1 there was nevertheless a time when UNESCO felt it could dominate the field. This chapter is concerned with UNESCO’s efforts to standardize S&T statistics at the international level via the concept of scientific and technical activity (STA). Up to now, we have discussed R&D (Chapter 3), then the aborted effort to extend R&D statistics to related scientific activities (RSA) (Chapter 4). UNESCO was very active with regard to RSA, but also tried to go further, and to measure other STA. This was its way of taking a place in the field of S&T statistics dominated by the OECD. As early as 1966, UNESCO assigned itself the task of developing international standards for measuring S&T.2 Several other organizations would soon get involved in standardization at the “regional” level: the OECD, the ECE,3 the CMEA,4 the OAS,5 and the Scandinavian group Nordfork. But UNESCO wanted to serve as the “focal point”, as it claimed,6 and it partly succeeded. Having already developed international standards in statistics on education (1958) and culture (1964),7
1 UNESCO (1994), Meeting of Experts on the Improvement of the Coverage, Reliability, Concepts, Definitions and Classifications in the Field of Science and Technology Statistics, ST.94/CONF.603/12, p. 5; OECD (1999), Summary Record of the NESTI Meeting, DSTI/EAS/STP/NESTI/M (99) 1, p. 14. 2 UNESCO (1966), Problems Encountered in the Development of a Standard International Methodology of Science Statistics, UNESCO/CS/0666.SS-80/5. 3 Economic Commission for Europe. 4 Council for Mutual Economic Assistance. 5 Organization of American States. 6 UNESCO (1969), Report of the Session Held in Geneva 2–6 June 1969, UNESCO/COM/CONF.22/7, p. 14. 7 The United Nations produced important international classifications that are still in use: International Standard Industrial Classification (ISIC), Standard International Trade Classification (SITC), International Standard Classification of Occupations (ISCO), International Standard Classification of Education (ISCED).
UNESCO subsequently drafted an international recommendation for a standard methodology on S&T statistics, which was approved by member countries in 1978. Pragmatic difficulties would soon appear concerning the application of its recommendations, however. This chapter describes the history of these difficulties. The first difficulty that UNESCO encountered concerned its proposal to collaborate with the OECD. The latter had recently insisted on the leading role it played in the development of methodologies and indicators.8 I show how, in the early 1970s, this dominance drove the OECD’s response to the UNESCO proposal for increased collaboration. A second difficulty had to do with the “varying level of competence” of UNESCO member countries.9 UNESCO dealt with a larger and more diversified number of countries than the OECD, including countries that had not yet developed the necessary expertise to properly measure S&T. A related problem was the fact that current R&D statistics did not meet the specific needs of developing countries. A third difficulty concerned financial resources. In 1984, the United States withdrew from UNESCO, accusing the organization of political patronage and ideological biases. This decision affected the whole statistical program, particularly the division of science statistics. Ten years later, the division was still sending strong appeals to the Director-General “to foresee in the future budget the appropriate financial assistance.”10
The road toward international statistics The UNESCO Division of Statistics was set up in 1961. As early as 1965, a section on Science Statistics was created, with three main tasks: (1) collection, analysis, and publication of data; (2) methodological work to support the collection of statistical data; and (3) technical assistance to member countries through expert missions and fellowships. As we saw previously, countries usually based their first measurements on directories—or lists. In the 1950s, UNESCO similarly set up multiple directories on national councils, bilateral links of cooperation, and scientific institutions—both national and international.11 It first attempted to systematically measure S&T resources in 1960, collecting data from wherever it could get it.12 In 1966, however, it began circulating a questionnaire to gather data, among other things, for
8 OECD (1997), Some Basic Considerations on the Future Co-operation Between the OECD Secretariat and Eurostat with UNESCO in the Field of Science and Technology Statistics, DSTI/EAS/STP/NESTI (97) 12, p. 2. 9 UNESCO (1981), Progress Made in the Development of International Statistics on Science and Technology, UNESCO/ECE/CONF.81/ST.000/6, p. 3. 10 UNESCO (1994), Meeting of Experts on the Improvement of the Coverage, Reliability, Concepts, Definitions and Classifications in the Field of Science and Technology Statistics, ST.94/CONF.603/12, p. 5. 11 UNESCO (1968), General Surveys Conducted by UNESCO in the Field of Science and Technology, NS/ROU/132, Paris. 12 W. Brand (1960), Requirements and Resources of Scientific and Technical Personnel in Ten Asian Countries, ST/S/6A, Paris: UNESCO. See also: UNESCO (1968), Provisional Guide to the Collection of Science Statistics, COM/MD/3, Paris, chapter 1.
forthcoming conferences (Latin America, Africa).13 The questionnaire was limited to two tables because of “paucity (sic) of statistical organizations and the disparity between the magnitude of the effort required to gather detailed data and the meager resources being measured.”14 It nonetheless served as the basis for the preparation of a second questionnaire—international in scope—for publication in the UNESCO Statistical Yearbook.15 This was the beginning of regular statistical series on S&T at UNESCO. The organization would conduct two kinds of survey between 1969 and 1975—afterward limited to only one: one based on a concise questionnaire for all countries with a limited number of questions (annual), and a larger and more comprehensive optional questionnaire (biennial). UNESCO’s program of work on S&T statistics was developed with the support of two groups (see Appendix 12). The first group was an Advisory Panel on Science Statistics,16 which held five meetings between 1966 and 1971. The first meeting was attended by four experts: H. Bishop (United Kingdom), A. A. Farag (United Arab Republic), J. Nekola (Czechoslovakia), and J.-P. Spindler (France). Except for the very first meetings, few archives remain to properly document the period. The second group of experts was a joint UNESCO/ECE Working Group on Statistics of Science and Technology. It assisted UNESCO over the period 1969–1981. The group was established jointly by the Conference of European Statisticians and UNESCO with a view to linking national statistical offices and science policy bodies to UNESCO’s work. The real work on standardization began in association with the latter group. In 1968, UNESCO circulated a questionnaire in preparation for a 1970 conference of European (east and west) Ministers responsible for science policy. The conference served as a basis for the implementation of UNESCO’s long-term science statistics program. Although the survey’s results were far from satisfactory in terms of coverage, definitions, and classifications,17 it was the first time the same instrument had been used to collect R&D statistics in both eastern and western European countries.18 This survey would thereafter serve as the model for the UNESCO biennial survey.
The view from nowhere To extend the OECD’s work to all countries, UNESCO faced two challenges, which corresponded to two groups of countries: “The methodology so developed 13 UNESCO (1970), World Summaries of Statistics on Science and Technology, ST/S/17. 14 UNESCO (1966), Statistical Data on Science and Technology for Publication in the UNESCO Statistical Yearbook: Development of the UNESCO Questionnaire, UNESCO/CS/0666.SS-80/4, p. 2. 15 UNESCO (1969), Report of the Session Held in Geneva 2–6 June 1969, UNESCO/COM/CONF.22/ 7, p. 8. 16 Later called Group of Experts on Methodology of Science Statistics. 17 UNESCO (1969), An Evaluation of the Preliminary Results of a UNESCO Survey on R&D Effort in European member countries in 1967, COM/CONF.22/3. 18 UNESCO (1970), Statistiques sur les activités de R&D, 1967, UNESCO/MINESPOL 5; UNESCO (1972), Recent Statistical Data on European R&D, SC.72/CONF.3/6.
[OECD] must be adapted for use by Member States at widely varying levels of development and with diverse forms of socio-economic organizations,” UNESCO explained.19 The first group (developing countries) had almost no experience in the field of S&T statistics, whereas the second (Eastern European countries) had an economic system that required important adaptations to fit OECD standards: A statistical methodology developed in a country with 40,000 scientists and 200,000 engineers in all fields of S&T may be of little use in a country with only 50 scientists and 200 engineers; a questionnaire suitable for use in a country with a highly developed statistical organization may be impractical in a country where few professional statisticians are struggling to gather the most basic demographic and economic data essential to planning. (UNESCO (1966), Problems Encountered in the Development of a Standard International Methodology of Science Statistics, UNESCO/CS/0666.SS-80/5, p. 3) The task was enormous: “The Secretariat does not underestimate the formidable problems which are involved in such an undertaking, but is confident that, with the help of Member States having experience in this field of statistics, much progress can be made toward this goal.”20 There had certainly been previous attempts at standardization, but according to UNESCO, they were local in scope: Partial solutions already attempted on a regional basis [OECD, CMEA, EEC, OAS], while being welcomed by all those concerned, represent in fact intermediate steps in the search for agreement between all countries of the international community, irrespective of their geographical location or their socio-economic organization. (. . .) The most pressing methodological problems that exist in this field of statistics relate to the comparability of data among all nations, comparisons of the scientific efforts of the countries of Eastern Europe with those of Western Europe and North America, and the formulation of a methodology for science statistics adapted to serve Member States at varying levels of socio-economic development. (UNESCO (1972), Considerations on the International Standardization of Science Statistics, COM-72/CONF.15/4, p. 6) For UNESCO, the 1970s were ripe for standardization: “At first sight it would seem that the task of standardizing science statistics is premature because the field they cover is still far from being well delimited. However, the presence of three factors which characterize our scientific world leads one to conclude that the time 19 UNESCO (1966), Science Statistics in UNESCO, UNESCO/CS/0666.SS-80/3, p. 3. 20 Ibid., p. 4.
has now come to make a first attempt at international standardization:21
1 The considerable (sic) experience that has already been gained by countries and organizations in a rather short period of time.
2 The recent institutionalization and professionalization of scientific and technological activities, particularly R&D.
3 The need for information by policy-makers planning the development or application of S&T.
Standards—based on the Frascati manual—were consequently suggested as early as 1969,22 and a provisional manual was published in 1980.23 Along with a guide published in 1968 to assist countries in data collection,24 these were the first methodological documents that UNESCO produced. The manual on standards was written by C. Freeman, and dealt with the standardization of data between western and eastern Europe, and with the necessity to measure RSA. It dealt at length with the concept of “STA”, rather than solely with R&D, because: Broadening of the scope of science statistics is particularly appropriated to the conditions of most of the developing countries which are normally engaged in more general scientific and technological activities, rather than R&D solely. (UNESCO (1969), Science Statistics in Relation to General Economic Statistics: Current Status and Future Directions, UNESCO/COM/CONF.22/2, p. 9) In developing countries proportionally more resources are devoted to scientific activities related to the transfer of technology and the utilization of known techniques than to R&D per se. (UNESCO (1972), Considerations on the International Standardization of Science Statistics, COM-72/CONF.15/4, p. 14) As we previously mentioned, the concept of STA would become the basis of UNESCO’s philosophy of S&T measurement.25 In retrospect, it could be said that UNESCO had decided to correct three “limitations” of current statistics by extending:26
● Measurement to activities other than R&D, like information and documentation, and education.
● Statistical surveys to the social sciences and humanities.
● Statistics to outputs.

21 Ibid., pp. 10–11.
22 C. Freeman (1969), The Measurement of Scientific and Technical Activities, ST/S/15, Paris: UNESCO.
23 UNESCO (1980), Manual for Statistics on Scientific and Technological Activities, ST-80/WS/38, Paris.
24 UNESCO (1968), Provisional Guide to the Collection of Science Statistics, COM/MD/3, Paris.
25 J.-C. Bochet (1974), The Quantitative Measurement of Scientific and Technological Activities Related to R&D Development, CSR-S-2, Paris: UNESCO; J.-C. Bochet (1977), The Quantitative Measurement of Scientific and Technological Activities Related to R&D Development: Feasibility Study, CSR-S-4, Paris: UNESCO.
26 UNESCO (1969), Provisional Agenda, UNESCO/COM/WS/108; UNESCO (1972), Purposes of Statistics on Science and Technology, COM-72/CONF.15/2.
Over the next two decades, UNESCO worked to varying degrees on each of these three tasks (see Appendix 13). First it produced a study on output, written by C. Freeman,27 but it did not really go further than this because of the difficulties we discuss in the following two chapters.28 Second, it produced a methodological document on the social sciences,29 and included questions pertinent to these sciences in its questionnaire as early as 1971. Third, it undertook a study of Scientific and Technical Information and Documentation (STID),30 tested its methodology in four countries, and published a provisional manual.31 These advances were made despite strong skepticism within UNESCO itself throughout the whole period: It was felt by a number of participants that the broadening of the content of statistics of science and technology in the way being considered would throw a heavy and in some cases an impossible load upon those government departments or agencies which are more directly concerned with the collection of these data. (. . .) [But] UNESCO should provide guidelines for the development of a comprehensive system of statistics of science and technology. (UNESCO (1969), Report of the Session Held in Geneva 2–6 June 1969, UNESCO/COM/CONF.22/7, p. 12) And it did provide such guidelines: based on a study by K. Messman from the Austrian Central Statistical Office,32 UNESCO drafted a recommendation on international standardization that was adopted by member countries in November 1978.33 According to the recommendation, STA were composed of three broad types of activities: R&D, scientific and technical education and training (STET), and scientific and technological services (STS) (see Figure 5.1).
27 C. Freeman (1970), Measurement of Output of R&D, Paris: UNESCO, ST-S-16. 28 There was, however, an international comparative study launched in the mid-1970s that collected information on the effectiveness of more than 1,000 research units in six European countries. 29 UNESCO (1971), The Measurement of Scientific Activities in the Social Sciences and the Humanities, CSR-S-1, Paris; UNESCO (1972), Further Considerations in the Measurement of Scientific Activities in the Social Sciences and Humanities, CON-72/CONF.15/6; UNESCO (1974), Guidelines for the Pilot Survey on Scientific Activities in the Social Sciences and the Humanities, COM.74/WS/S. 30 D. Murphy (1979), Statistics on Scientific and Technical Information and Documentation, PGI-79/WS/5, Paris: UNESCO. 31 UNESCO (1984), Guide to Statistics on Scientific and Technological Information and Documentation (STID), ST-84/WS/18, Paris. 32 K. Messman (1975), A Study of Key Concepts and Norms for the International Collection and Presentation of Science Statistics, COM-75/WS/26, UNESCO. 33 UNESCO (1978), Recommendation Concerning the International Standardization of Statistics on Science and Technology, Paris.
[Figure 5.1 S&T activities (UNESCO): the figure presents Scientific and Technological Activities (STA) as comprising Research and Experimental Development (R&D); S&T Education and Training at broadly the third level (STET); and Scientific and Technological Services (STS), the latter covering Scientific and Technological Information and Documentation (STID); S&T services provided by libraries, archives, information centers, reference departments, data banks, etc.; S&T services provided by museums of science and technology, zoological and botanical gardens, etc.; systematic work on translation and editing of S&T books and periodicals; gathering of information on human, social, economic, and cultural phenomena, collection of statistics, etc.; testing, standardization, metrology and quality control; topographical, geological and hydrological surveying, and other surveys, observation and monitoring of soil, water, etc.; counseling of clients and users, advising on the access and use of S&T and management information; prospecting and other activities designed to locate and identify oil and mineral resources; and patents and licenses, i.e. systematic work of a scientific, legal and administrative nature on patents and licenses.]
Besides the internationalization of statistics, there was a second leitmotif driving UNESCO’s activities, namely the need to link statistics to policies: Except for the socialist countries, few efforts were made to link science policy effectively with national economic policies and to assess the implications for the social and natural environment of an ever-accelerating application of science and technology. (. . .) Statistics are mainly used (. . .) to claim additional funds if the position is unfavorable or to justify and maintain the present level of effort if this position is favorable. (UNESCO (1972), Purposes of Statistics on Science and Technology, COM-72/CONF.15/2, p. 3)
UNESCO’s activities in this area took several forms. First, it showed an early interest in the quantification of S&T related to development34 and in regular surveys on national scientific and technological potential (STP).35 To UNESCO, the essential elements of STP comprised data not only on human and financial resources, but also on physical, informational and organizational resources. STP included quantitative as well as descriptive information designed to create national data systems on S&T. Second, UNESCO measured problems specific to developing countries. These followed requests in the 1970s by the Economic and Social Council of the United Nations (ECOSOC) and the United Nations Advisory Committee on the Application of Science and Technology (UNACAST). It consequently designed an R&D questionnaire for developed countries concerning the problems of developing countries.36 It also produced one of the first-ever studies of R&D in international organizations, applying a new methodology to conduct a survey of eight organizations.37 It conducted deliberations on technology transfer,38 along with two workshops—but no measurements. And finally, it added some questions on non-national personnel in its R&D questionnaire in order to measure the “brain drain” problem.39
Facing the OECD monopoly As I noted earlier, UNESCO’s advances were made in a context of skepticism: “UNESCO is aware of the difficulties involved in such work but, in view of the benefits to be derived from international standardization, will endeavour to overcome the obstacles and find solutions to the problems in science statistics as it
34 By way of a classification of R&D by socio-economic objective: UNESCO (1972), Classification of R&D Expenditures by Major Aims or Objectives, COM-72/CONF.15/8; UNESCO (1976), Draft Detailing of the Classification of The Purposes of Government, UNESCO/ECE/COM.76/CONF.711/5; UNESCO (1977), Draft Classification of R&D Activities by Objectives, ST-77/WS/15. 35 UNESCO (1970), Manual for Surveying National Scientific and Technological Potential, NS/SPS/15, Paris. 36 UNESCO (1973), The Quantification of R&D Expenditures Relevant to Specific Problems of Developing Countries, ESA/S&T/AC.2/4. The OECD also developed a questionnaire and collected data in the early 1970s. See: OECD (1972), Inventaire international des programmes et des ressources consacrées à la R&D en faveur des pays en voie de développement par les pays membres de l’OCDE en 1969 et 1971, DAS/SPR/72.17. 37 UNESCO (1976), R&D Activities in International Organizations, UNESCO-ECE/COM76/CONF.711/3. The results of a worldwide survey of bilateral institutional links in science and technology had already been published in 1969, but was never revised due to a lack of resources: UNESCO (1969), Bilateral Institutional Links in Science and Technology, Science Policy Studies and Documents No. 13, Paris: UNESCO. 38 UNESCO (1972), Statistics on the International Transfer of Technology, UNESCO/COM/ 72/CONF.15/7; UNESCO (1975), The Statistical Needs of Technology Transfer Policy-Makers, SC.TECH/R.27/Rev.1; UNESCO (1981), Development of Science and Technology Statistics to Measure the International Flow of Technology, UNESCO/ECE/CONF.81/ST.001/2. 39 UNESCO (1976), Progress Made on the Development of Statistics on Science and Technology, January 1973–June 1975, UNESCO/ECE/COM-76/CONF.711/2, p. 5.
has in other fields such as education and cultural statistics.”40 This statement minimized two difficulties, the first of which came from the OECD. The first OECD ad hoc review group on S&T statistics was “not satisfied that hitherto there has been adequate consultation” between the two organizations.41 A few years later, the second ad hoc review group similarly concluded that there remained substantial problems: “Many of these problems relate to fundamental structural difficulties posed by the different membership patterns of the two organizations.”42 And it continued: “Our expectations of achieving comparability must remain realistic and, hence, modest.”43 UNESCO held four meetings with the OECD in 1973 and 1974 to harmonize questionnaires. The objectives were to facilitate the respondents’ task, avoid possible duplication, and obtain better comparability. The main difference between UNESCO and the OECD had to do with sectors of R&D performance.44 UNESCO did not consider private non-profit institutions as a sector in their own right. These institutions were instead included in the following three sectors: business, government, and university. The OECD, on the other hand, classified institutions according to their ownership (or control), which meant assigning private non-profit units to a separate sector. This was in line with the System of National Accounts (SNA), and was supposed to allow the comparison of S&T activities with other economic activities.45 For UNESCO, however, the formulation of policies did not require a close relationship between science statistics and the SNA: The definitions and contents of the SNA sectors were made for purposes other than those of science statistics (. . .). A system of science statistics should provide an independent framework (. . .) and whenever possible, it should be linked with the SNA but not forced into SNA categories. (UNESCO (1969), Report of the Session Held in Geneva 2–6 June 1969, UNESCO/COM/CONF.22/7, p. 7) UNESCO maintained that “financing and control cannot be considered as predominant criteria for a classification of R&D bodies into sectors of performance.”46 It was rather the “service rendered” that UNESCO considered important—as the OECD had indeed already admitted in the case of Higher Education, which was added as a sector in its own right. According to UNESCO, 40 UNESCO (1972), Considerations on the International Standardization of Science Statistics, COM-72/CONF.15/4, p. 9. 41 OECD (1973), Report of the Ad Hoc Review Group on R&D Statistics, STP (73), p. 15. 42 OECD (1978), Report of the Second Ad Hoc Review Group on R&D Statistics, STP (78) 6, p. 34. 43 Ibid., p. 35. 44 UNESCO (1973), The Development of Coordinated UNESCO and OECD Questionnaires for Statistics on Science and Technology: UNESCO Proposals, COM/WS/343; OECD (1973), The Development of Coordinated UNESCO and OECD Questionnaires on Science and Technology, DAS/SPR/73.84. 45 See Chapter 10. 46 OECD (1973), The Development of Coordinated UNESCO and OECD Questionnaires on Science and Technology, op. cit., p. 13.
units should be classified according to one of three functions—production of goods and services (business), collective needs (government), or knowledge (universities)—regardless of their legal ownership. UNESCO’s proposition regarding sectors of performance was rejected. Yet, the OECD Secretariat offered to devise a methodology for converting data into a form suitable for UNESCO’s uses. It would have redefined the private non-profit sector and asked member countries for more disaggregated data that could have served both organizations. But member countries, via NESTI, rejected the idea. The two organizations instead agreed to include explanations and examples of concordance or conversion tables in their respective questionnaires. Why did OECD member countries refuse to depart from their practices? As reported by the OECD Secretariat in its responses to the second ad hoc review group, “les pays de l’OCDE perdraient le contrôle complet qu’ils détiennent actuellement sur leurs normes et méthodes” (OECD countries would lose the complete control they currently hold over their norms and methods).47 Moreover: The time is not ripe for “world-wide” science standards and (. . .) the official adoption of the current draft of the UNESCO Manual in a fit of empty internationalism would be unlikely to bring any practical benefits. (. . .) The current draft is, in our view, rather too ambitious and insufficiently based on practical experience to play this role. (OECD (1977), Response by the Secretariat to the Questions of the Ad Hoc Review Group, DSTI/SPR/77.52, p. 18)
The end of a dream This was only the beginning of UNESCO’s difficulties. A few years later, UNESCO took stock of problems it was having with another partner, the ECE: “When various recommendations of earlier meetings in this field—starting from the 1976 Prague seminar—are reviewed, it is impossible not to conclude that there has been inadequate follow-up within ECE.”48 Too many projects were recommended for the available resources, or UNESCO and the ECE directed too few resources toward the recommended projects. But the main difficulty lay within the member countries themselves. In 1981, UNESCO concluded that, although there had been an increase in the number of countries responding to the UNESCO questionnaire (80 countries), this progress was undermined by the “scarcity and inconsistency of the data received (. . .) in spite of the closer cooperation with the national statistical services in the developing countries through staff missions in the field or consultancy services.”49 Yet, only a few years earlier, UNESCO had enthusiastically reported that: “the 47 The page where the citation appears is missing in the English version. OECD (1977), Response by the Secretariat to the Questions of the Ad Hoc Review Group, DSTI/SPR/77.52, p. 16. 48 UNESCO (1981), Report of the Fourth Joint Meeting on the Development of Science and Technology Statistics Held in Geneva, 4–7 May 1981, UNESCO/ECE/CONF.81/ST.001/9, p. 7. 49 UNESCO (1981), Progress Made in the Development of International Statistics on Science and Technology, UNESCO/ECE/CONF.81/ST.000/6, p. 3.
results obtained so far are encouraging in that a definite improvement has been noted in replies to questionnaires not only with regard to the response rate but also with regard to the application of the proposed standards.”50 Now, however, almost every project was recognized as having failed. The first instance of failure was the attempt to measure the social sciences and humanities. A pilot survey in 30 countries was carried out in 1974 and 1975 to test feasibility, followed by field testing in two countries. This led to a special inquiry that was annexed to the annual survey in 1977–1978. But very few countries responded, and those that did returned incomplete questionnaires. The main conclusion reached “was that at this stage such a survey is neither practicable nor realistic.”51 The survey was discontinued. A similar fate awaited the project on STID. The methodology was developed within the General Information Program Division of UNESCO, and tested in four countries using a provisional manual. The latter was used in regional training seminars that led to the revision of the proposed questionnaire. But the results of the first surveys (1987 and 1990) were qualified as unsatisfactory: “the responses were discouraging, they were incomplete and the institutional coverage was partial. This prompted us, therefore, to temporarily discontinue our activities in this area.”52 Statistics on third-level education (university) did not fare any better. Two meetings of experts in 1982 and 1989 led to two methodologies, one on the measurement of S&T personnel,53 the other on lifelong training.54 However, “due to the drastic reduction of personnel in the Division of Statistics, priorities had to be established and unfortunately, this area was not considered a high priority. No follow-up activities have, therefore, been undertaken since that meeting.”55 Finally, the responses to the R&D survey in developed countries concerning the problems of developing countries were also deemed unsatisfactory: only 18 of the 41 countries that received the questionnaire replied to it, and half of these replies contained only very scattered and incomplete data.56 All in all, the director of the UNESCO Division of statistics on S&T concluded that “the establishment of a system of data collection covering at once the full scope of STS and STET in a country has been considered not practicable. Some priorities have, thus, to be adopted for a selective and piecemeal extension of coverage of certain types of STS and STET.”57 Thus, in 1994, UNESCO called
50 UNESCO (1976), Report Prepared by UNESCO in Response to ECOSOC Resolution 1901 on the Quantification of Science and Technological Activities Related to Development, UNESCO/NS/ROU/379, p. 14. 51 UNESCO (1981), Progress Made in the Development of International Statistics on Science and Technology, UNESCO/ECE/CONF.81/ST.000/6, p. 1. 52 UNESCO (1994), General Background to the Meeting and Points for Discussion, ST.94/CONF.603/5, p. 4. 53 UNESCO (1982), Proposals for a Methodology of Data Collection on Scientific and Technological Education and Training at the Third Level, CSR-S-15. 54 UNESCO (1989), Secretariat Background Paper to the Meeting of Experts on the Methodology of Data Collection on Lifelong Training of Scientists, Engineers and Technicians, ST.89/CONF.602/3. 55 UNESCO (1994), General Background to the Meeting and Points for Discussion, op. cit., p. 3. 56 UNESCO (1976), Progress Made on the Development of Statistics on Science and Technology, January 1973–June 1975, UNESCO/ECE/COM-76/CONF.711/2, p. 2. 57 Z. Gostkowski (1986), Integrated Approach to Indicators for Science and Technology, Paris: UNESCO, p. i.
a meeting of experts to reassess the needs of member countries regarding concepts, definitions, and classifications of S&T statistics.58 Thirteen countries attended the meeting. The experts took note of the fact that, since 1978, “there appears to be no improvement in the quantity and quality of the S&T data collected, particularly in the developing countries.”59 Experts “were of the opinion that the dramatic drop in the quantity of internationally comparable data on R&D transmitted to UNESCO from the developing countries reflects the lack of R&D activities and/or the acute shortage of financial resources necessary for the proper functioning of S&T statistical services at the national level.”60 The meeting nevertheless concluded that UNESCO should continue to collect internationally comparable data on R&D and to strengthen its assistance to member countries, but that it should limit its program to the most basic statistics and indicators.61 The meeting also recommended paying proper attention to statistics on human resources in every activity.62 The recommendations were never implemented, however. The actual measurement of S&T at UNESCO remains minimal—the occasional R&D survey at irregular intervals.
Conclusion UNESCO took on an enormous task. For the organization, standardizing S&T statistics meant two things. First, extending OECD standards to all countries; second, extending standards to activities beyond just R&D, that is to all STA. In this venture, UNESCO received some help from European countries, at least during the decade it held regular meetings of experts, but it received far less support from the OECD. The latter was indeed the major player in the field and had neither time for, nor interest in, assisting a competitor. But the real difficulty limiting UNESCO’s ambitions was the absence of a community of views between member countries. Unlike the OECD, where member countries were all industrialized countries, UNESCO’s membership was composed of countries at varying levels of development or with differing economic structures. The situation has changed somewhat since then. UNESCO has in fact abandoned its quest for an original classification of R&D by sector of performance. The reasons are many—non-responses, implementation difficulties63—but there was one major determining factor: the shift of centrally-planned economies toward
58 An additional evaluation exercise, although mainly concerned with indicators specific to western countries, was conducted in 1996 by R. Barré from the French OST: UNESCO’s Activities in the Field of Scientific and Technological Statistics, BPE-97/WS/2. 59 UNESCO (1994), Summary of the Case-Studies on the Needs, Availability, Concepts, Definitions and Classifications in the Field of Science and Technology Statistics, ST.94/CONF 603/4, p. 1. 60 UNESCO (1994), Meeting of Experts on the Improvement of the Coverage, Reliability, Concepts, Definitions and Classifications in the Field of Science and Technology Statistics, ST.94/CONF.603/12, p. 4. 61 It also recommended, contradictorily, that UNESCO envisage extending data collection to outputs. 62 UNESCO (1994), Meeting of Experts on the Improvement of the Coverage, Reliability, Concepts, Definitions and Classifications in the Field of Science and Technology Statistics, op. cit., pp. 2–3. 63 UNESCO (1994), Meeting of Experts on the Improvement of the Coverage, Reliability, Concepts, Definitions and Classifications in the Field of Science and Technology Statistics, op. cit., pp. 3–4.
free-market economies. This move came too late to bring UNESCO member countries closer together, however. Eastern European countries have since chosen the OECD methodology for measuring their scientific and technological activities, and the OECD held four meetings to that end in the 1990s.64 Nevertheless, and despite its relative failure, UNESCO clearly contributed to advancing the field of S&T statistics. It doubtless drew upon previous work, including work by the OECD, but it also preceded or paralleled the OECD on several topics: the measurement of the social sciences and humanities; the measurement of related scientific activities, education and training;65 and the construction of classifications (fields of science; socioeconomic objectives).66 UNESCO also suggested new solutions to several methodological problems regarding R&D surveys. I have already dealt with the classification of institutions by sector of the SNA. A second and important survey problem had to do with selecting the most appropriate unit for obtaining precise information on research activities: “most of the countries agree that only a survey at the project level would allow an exact classification” of (higher education) R&D.67 Yet another problem concerned the classification of R&D expenditures: “as the situation now stands, expenditures in the government sector are usually broken down by ministry or department, in the productive sector by branch of industry or product group, and in the higher education sector by field of science or technology.”68 UNESCO held that a common classification by socioeconomic objective would correct the situation. But theoretical intentions were not enough: the proposed methodologies had to measure up to the pragmatic constraints of the real world. Building on the momentum created by the recent international Conference on Science,69 the newly created UNESCO Institute of Statistics (1999) is about to launch a new program in the field of S&T. If there is a lesson to be drawn from the preceding pages, it is that unless new resources are devoted to the task and new training programs for member countries are put into place, these new measurement efforts will most likely fail.
64 Training Seminar on Science and Technology Indicators for Non-member countries (1991); Conference on S&T Indicators in Central and Eastern Europe (1993); Workshop on the Implementation of OECD Methodologies in Countries in Transition (1995); Conference on the Implementation of OECD Methodologies for R&D/S&T Statistics in Central and Eastern European Countries (1997). 65 More than ten years before the publication of the OECD/Eurostat Canberra manual. 66 For fields of science, see: UNESCO (1969), List of Scientific Fields and Disciplines Which Will Facilitate the Identification of Scientific and Technical Activities, UNESCO/COM/CONF.22/10; The classification was developed in collaboration with Germany. First version in 1972, revised in 1973: UNESCO (1973), Proposed International Standard Nomenclature for Fields of Science and Technology, UNESCO/NS/ROU/257. 67 UNESCO (1975), Problems of Data Collection and Analysis at the National Level in the Field of Statistics on Science and Technology, UNESCO-ECE/COM-76/CONF.711/4, p. 6. 68 UNESCO (1972), Classification of R&D Expenditures by Major Aims or Objectives, COM-72/ CONF.15/8, p. 3. 69 UNESCO (1999), Science for the Twenty-First Century: A New Commitment, Budapest.
Section III Imagining new measurements
6
The emergence of science and technology indicators
Why did governments supplement statistics with indicators?
UNESCO’s efforts to extend the measurement of S&T to other dimensions than R&D inputs were only the first such efforts in the history of S&T statistics. In 1973, the National Science Foundation (NSF) published Science Indicators (hereafter SI ), the “first effort to develop [a whole range of] indicators of the state of the science enterprise in the United States:” The ultimate goal of this report is a set of indices which would reveal the strengths and weaknesses of US science and technology, in terms of the capacity and performance of the enterprise in contributing to national objectives. (National Science Board (1973), Science Indicators 1972, Washington, p. iii) The publication had a wide impact. Indeed, SI was, according to a recent National Science Board (NSB) publication, the organization’s bestseller.1 It was widely acclaimed, discussed worldwide, and served as a model for several countries and organizations: in 1984, the OECD started a series titled Science and Technology Indicators, which in 1988 was replaced by Main Science and Technology Indicators (MSTI). The European Commission followed in 1994 with its European Report on Science and Technology Indicators. France also started its own series Science et Technologie: Indicateurs in 1992, and Latin American countries followed suit in 1996 (Principales Indicatores de Ciencia y Tecnologia). It is generally forgotten, however, that S&T indicators did not originate in the United States, but were first conceived of at the OECD. Certainly, the NSF considerably influenced the methodology of data collection on R&D in OECD countries in the early 1960s, but it was the OECD itself that inspired SI. Indeed, the debate of the 1960s on technological gaps between the United States and Europe gave the OECD the opportunity to develop the first worldwide indicators on S&T. Similar indicators were later included in SI, and gave the NSF the idea of developing biennial indicators to assess the state of S&T in the United States.
1 National Science Board (2001), The National Science Board: A History in Highlights, 1950–2000, Washington, p. 20.
This chapter documents the origins of S&T indicators. Where does the idea of indicators come from? How has it evolved over time? What did it mean for governments? The first part defines and clarifies the concepts of statistics and indicators in order to distinguish the two. The second traces the main factors that led to SI, and the third part discusses the impact SI had, particularly on the OECD. The fourth part relates SI to previous OECD reflections in order to show that the data produced during the debate on “Technological Gaps” in 1968 served as a model for the NSF.
Indicators as policy tools Statistics are often equated with mathematical tools for the treatment of numerical data.2 However, prior to Quetelet’s work (1796–1874), statistics was a brute compilation of numerical information.3 It was “untouched by the application of mathematical tools (aside from simple rules for computing rates and averages).”4 After Quetelet, more sophisticated statistics (based on measures of variation) began to replace averages, at least in the works of mathematicians and statisticians. Most government statistics are of the first kind. They are totals calculated for a number of dimensions and published as such. These statistics are simple numbers produced by additions, not by complex mathematical operations (such as regressions and correlations). They refer, not to the methodology for the treatment of data, but to the data themselves.5 Official S&T statistics follow the same pattern. This is not to say that government statistics have not evolved since Quetelet’s time. Indeed, it was governments together with social scientists who invented the notion of indicators. Indicators began to appear in economics in the 1930s: growth, productivity, employment, and inflation.6 The first social indicators were developed during the same period,7 but the term itself (social indicator) became widespread only in the 1960s.8 The “movement” for social indicators considerably influenced the development of similar statistics in S&T. Indeed, early editions of SI benefited from regular exchanges with the US Social Science Research Council’s (SSRC) Committee on Social Indicators. Among other things, the SSRC organized two conferences, one
2 MacKenzie, D. (1981), Statistics in Britain, 1865–1930, Edinburgh: Edinburgh University Press, p. 2. 3 Porter, T. M. (1986), The Rise of Statistical Thinking: 1820–1900, Princeton: Princeton University Press, p. 41. 4 C. Camic and Y. Xie (1992), The Statistical Turn in American Social Science, American Sociological Review, 59, p. 779. 5 See: S. Woolf (1989), Statistics and the Modern State, Comparative Studies in Society and History, 31, pp. 588–603. 6 Cahiers français (1998), Les indicateurs économiques en question, 286, May–June, Paris: La Documentation française. 7 President’s Research Committee on Social Trends (1933), Recent Social Trends in the United States, New York: McGraw Hill. 8 R. Bauer (1966), Social Indicators, Cambridge, MA: MIT Press; E. B. Sheldon and W. E. Moore (1968), Indicators of Social Change: Concepts and Measurement, New York: Russell Sage Foundation.
in 19749 and another in 1976,10 sponsored by the NSF and devoted to improving the quality of science indicators and better defining output indicators. How did people distinguish an indicator from a regular statistic? A glance at definitions should help answer this question. In 1969, US President L. Johnson asked the Department of Health, Education and Welfare to develop the necessary social statistics and indicators to chart social progress. The report, issued in 1970, defined an indicator as a statistics of direct normative interest which facilitates concise, comprehensive and balanced judgments about the condition of major aspects of a society. It is in all cases a direct measure of welfare and is subject to the interpretation that, if it changes in the “right” direction, while other things remain equal, things have gotten better, or people better off. (Department of Health, Education and Welfare (1970), Towards a Social Report, Ann Arbor: University of Michigan Press, p. 97) Similarly, R. Parke, director of the Center for Coordination of Research on Social Indicators of the SSRC, defined indicators as “statistical time series that measure changes in significant aspects of society.”11 Elsewhere, he specified To comprehend what the main features of the society are, how they interrelate, and how these features and their relationships change is, in our view, the chief purpose of work on social indicators. (E. B. Sheldon and R. Parke (1975), Social Indicators, Science, 188, p. 696) An important element of these definitions is that of warning about changes. An indicator measures dimensions of a phenomenon in order to follow the state of society. A second feature of indicators is that they are statistics that must be recurrent, otherwise they would not meet the above requirement—measuring change. Third, indicators usually appear as a collection of statistics: a lone statistic can rarely be a reliable indicator. Finally an indicator is based on a model: “an indicator is properly reserved for a measure that explicitly tests some assumption, hypothesis, or theory; for mere data, these underlying assumptions, hypotheses, or theories usually remain implicit.”12 D. D. S. Price formulated the same requirement the
9 Y. Elkana et al. (1978), Towards a Metric of Science: The Advent of Science Indicators, New York: John Wiley. 10 Scientometrics (1980), Vol. 2 (5–6), Special Issue on the 1976 Symposium organized by the SSRC’s Center for Coordination of Research on Social Indicators. 11 R. Parke’s speech before the Committee on Science and Technology, House of Representatives. See: USGPO (1976), Measuring and Evaluating the Results of Federally Supported R&D: Science Output Indicators, Hearings Before the Committee of Congress on Science and Technology, Washington, p. 48. 12 G. Holton (1978), Can Science Be Measured?, in Y. Elkana et al. (eds) (1978), Towards a Metric of Science: The Advent of Science Indicators, op. cit., p. 53.
following way: “To be meaningful, a statistic must be somehow anticipatable from its internal structure or its relation to other data (. . .). It means the establishment of a set of relatively simple and fundamental laws.”13 All these features were present in the OECD’s 1976 definition, according to which an indicator is a series of data which measures and reflects the science and technology endeavour of a country, demonstrates its strengths and weaknesses and follows its changing character notably with the aim of providing early warning of events and trends which might impair its capability to meet the country’s needs. (OECD (1976), Science and Technology Indicators, DSTI/SPR/76.43, p. 6) Similarly, the NSB of the NSF suggested that indicators are intended to measure and to reflect US science, to demonstrate its strengths and weaknesses and to follow its changing character. Indicators such as these, updated regularly, can provide early warnings of events and trends which might impair the capability of science—and its related technology—to meet the needs of the Nation. (National Science Board (1975), Science Indicators 1974, Washington, p. vii)
Indicators under pressure The US Congress passed the law creating the NSF in 1950. As we saw in Chapter 1, under this law, the NSF was charged with funding basic research, but was also given, under the influence of the Bureau of Budget, a role in policy advice and in the evaluation of research. The NSF was asked to “maintain a current register of scientific and technical personnel, and in other ways provide a central clearinghouse for the collection, interpretation, and analysis of data on scientific and technical resources in the United States.”14 More demands would soon follow. In 1968, Congress mandated the NSF “to evaluate the status and needs of the various sciences,” to “initiate and maintain a program for the determination of the total amount of money for scientific research,” and “to report on the status and health of science and technology.”15 The latter “shall include an assessment of such matters as national scientific resources and trained manpower, progress in selected areas of basic scientific research, and an indication of those aspects of such progress which might be applied to the needs of American society.” During the first years of its existence, the NSF mainly understood its role in evaluation as one of collecting and disseminating statistical information (and issuing 13 D. D. S. Price (1978), Toward a Model for Science Indicators, in Y. Elkana et al. (eds) (1978), Towards a Metric of Science: The Advent of Science Indicators, op. cit., p. 72. 14 Public Law 507 (1950) and Executive Order 10521 (1954). 15 Public Law 90–407 (1968).
statements concerning conditions that are desirable for the advancement of science).16 As early as 1953, with its first survey on R&D, the NSF stated No attempt has been made in this report to present any conclusions as to general policies (. . .). However, factual information of the kind developed by the study does provide an initial basis for policy (. . .). (NSF (1953), Federal Funds for Science: Federal Funds for Scientific Research and Development at Nonprofit Institutions 1950–1951 and 1951–1952, Washington, p. vi) Rapidly, people outside the NSF became uncomfortable with such an understanding of its mandate. Too few policy analyses and assessments were said to accompany the numbers. A. T. Waterman, the first director of the NSF, always defended the organization against these criticisms. His main argument was that “it (was) unrealistic to expect one federal agency to render judgment on the over-all performance of another agency or department.”17 Nevertheless, SI was the result of these criticisms, as well as being the response to new and explicit instructions from the government. In September 1970, President R. Nixon asked the Office of Science and Technology (OST) and the President’s Science Advisory Committee (PSAC) to “submit each May a report on the status and health of science and technology.”18 The request was, in fact, a reminder to the NSB of the NSF that it had not fully met its obligations (indeed, Congress made a similar request again in 197619). The NSB met with the two organizations and reconsidered the nature of its annual report. It studied two options.20 One was to issue an occasional white paper on policy, which was to be independent of the annual reports. The other was to produce an annual report that “would provide baseline data for each year with a series of chapters providing an assessment of the health of science.”21 The latter option prevailed. In February 1971, the NSB began discussions on the possibility of an SI report22 and approved the “systematic development of data and information on the health of science indicators and the preparation of an annual report based thereon.”23 To that end, an ad hoc committee on science indicators was formed, chaired by Roger W. Heyns, member of the NSB, chancellor of the University of
16 D. Wolfe (1957), National Science Foundation: The First Six Years, Science, 126, p. 340. 17 A. T. Waterman (1960), National Science Foundation: A Ten-Year Résumé, Science, 131, p. 1343. 18 USGPO (1983), The National Science Board: Science Policy and Management for the National Science Foundation, Hearings before the Committee on Science and Technology, Washington, p. 183. 19 GAO (1979), Science Indicators: Improvements Needed in Design, Construction, and Interpretation, Washington, pp. 48–49. 20 National Science Board, Minutes of the 133rd Session, November 19–20, 1970. 21 USGPO (1983), Science Indicators: Improvements Needed in Design, Construction, and Interpretation, op. cit., p. 183. 22 National Science Board (1971), Minutes of the 136th Session, February 18–19. 23 National Science Board (1971), Minutes of the 140th Session, July 15–16.
California at Berkeley and president of the American Council on Education.24 The committee first conceived a list of 57 possible measures divided into seven categories, and rated the indicators on a scale of importance and feasibility (see Appendix 14).25 By January 1972, the work was so advanced that the NSB decided that its fifth annual report would be based on science indicators.26 The NSB reviewed the proposed indicators in March,27 and a first draft of the report was circulated for comments in September.28 The final report was approved in November and, as required by law, transmitted for review to the OST, the Office of Management and Budget (OMB) and other agencies.29 In September 1973, SI was officially sent to Congress.30 One month later, the NSB estimated that approximately 11,000 copies had been distributed so far, and was pleased with the favorable press coverage.31 The recognition of the reputed quality of SI would be confirmed again in 1982, when Congress amended the law regarding NSF and asked, among other things, for a biennial report on science indicators.32 According to C. Falk, the main person behind SI, the document was a success because of five characteristics.33 First, it collected dispersed statistics together in a single book. Second, it discussed science mainly by way of charts rather than numbers. Tables appeared primarily in the appendix. Third, it included brief highlights for policy-makers. Fourth, there was a small amount (and not controversial, I might add) of analysis. As we have seen, this was the NSF’s philosophy.34 Indeed, NSF personnel confessed to the General Accounting Office (GAO) that “the reports were meant to emphasize quantitative data and not venture at all into evaluations or assessment.”35 Finally, each edition always contained something new in terms of information and indicators. SI was planned and considered by the NSB to respond directly to the mandate Congress gave it from the start, that is, to provide a regular assessment of science
24 National Science Board (1971), Minutes of the 141st Session, September 9–10. 25 R. W. Heyns (1971), Memorandum to Members of the National Science Board, May 5, NSB-71-158. 26 National Science Board (1972), Minutes of the 144th Session, January 20–21. In fact, since May 1971, the fifth annual report was planned to be on undergraduate science education. In October however, the Board took notice of the lack of unanimity on the scope of the report and reconsidered the subject. Fortunately, a draft report on science indicators was ready to take its place. 27 National Science Board (1972), Minutes of the 145th Session, March 16. 28 National Science Board (1972), Minutes of the 149th Session, September 7–8. 29 National Science Board (1972), Minutes of the 151st Session, November 16–17. 30 National Science Board (1973), Minutes of the 158th Session, September 20–21. 31 National Science Board (1973), Minutes of the 159th Session, October 18–19. 32 Public Law 97–375 (1982). 33 Charles Falk, personal conversation (May 24, 2000). 34 And still is today: at the end of the 1990s, the president of the NSB admitted that the board should discuss policy matters more directly. See: Science (1997), US NSB Seeks a Wider Role in Policy Making, 386, p. 428; Science (1997), Science Board Wants Bigger Policy Role, 275, p. 1407. 35 GAO (1979), Science Indicators: Improvements Needed in Design, Construction, and Interpretation, op. cit., p. 55. See also: S. Cozzens, Science Indicators: Description or Prescription?, Office of Technology Assessment, Washington.
in the country.36 In 1976 for example, R. W. Heyns highlighted the six purposes and functions SI was intended to serve:37
● To detect and monitor significant developments and trends in scientific enterprise, including international comparisons.
● To evaluate their implications for the present and future health of science.
● To provide a continuing and comprehensive appraisal of US science.
● To establish a new mechanism for guiding the Nation’s science policy.
● To encourage quantification of the common dimensions of science policy, leading to improvements in R&D policy-setting within federal agencies and other organizations.
● To stimulate social scientists’ interest in the methodology of science indicators as well as their interest in this important area of public policy.
The “operationalism” of SI (as GAO called it), that is the tendency to use data because it’s there, rather than develop an explicit model of S&T that would underlie the measurement. During the 1976 hearings on SI in Congress, R. Ayres, Vice President, International Research and Technology, summarized this view in the following terms: “(. . .) the number of Nobel prizes is easy to count and that is why you are collecting them, not because it means anything.”39 Indeed, Heyns himself admitted, during the hearings before Congress, that “the priority emphasis on input indicators was predicated on the general availability of a number of accepted conventional measures.”40 This was one of the central criticisms of GAO: “At the time these measures were selected, most of the data already existed in hand for NSB (. . .) particularly in NSF’s Division of Science Resources Studies.”41
36 USGPO (1976), Measuring and Evaluating the Results of Federally Supported R&D: Science Output Indicators, op. cit., p. 7. 37 Ibid., p. 10. 38 See: S. Cozzens (1991), Science Indicators: Description or Prescription?, op. cit.; Scientometrics (1980), Special Issue on the 1976 Symposium organized by the SSRC’s Center for Coordination of Research on Social Indicators, op. cit.; GAO (1979), Science Indicators: Improvements Needed in Design, Construction, and Interpretation, op. cit.; R. McGinnis (1979), Science Indicators/1976: A Critique, Social Indicators Research, 6, pp. 163–180; Y. Elkana et al. (1978), Towards a Metric of Science: The Advent of Science Indicators, op. cit.; J. MacAulay (1978), The Ghost in the Big Machine: Science Indicators/1976, 4S Bulletin, Vol. 3, No. 4, pp. 30–35; J. D. Holmfeld (1978), Science Indicators and Other Indicators: Some User Observations, 4S Bulletin, Vol. 3, No. 4, pp. 36–43; USGPO (1976), Measuring and Evaluating the Results of Federally Supported R&D: Science Output Indicators, op. cit. 39 USGPO (1976), Measuring and Evaluating the Results of Federally Supported R&D: Science Output Indicators, op. cit., p. 72. 40 Ibid., p. 10. 41 GAO (1979), Science Indicators: Improvements Needed in Design, Construction, and Interpretation, op. cit., p 19.
"It was natural that the initial SI reports would be based largely on an operational approach, deriving indicators from the readily available data on the basis of suspected importance. This approach, however, incorporated a limited view of science and technology, and led to the construction of a number of indicators whose underlying assumptions are tenuous or invalid."42
S. Cozzens attributed this tendency to the pressures of having to add new indicators in each edition.43
● The input/output model, where links between inputs and outputs are badly demonstrated: SI "lacks any overall unifying model that makes sense of the connections between science, technology, economy and society."44 It is "too constricted by an input-output model framework. In this approach, science and technology are seen as resources which go into, and tangible results which come out of, a black box."45
● The emphasis on inputs (expenditures and personnel)46 to the detriment of outputs and impacts (or outcomes), as a consequence of an implicit model of science as autonomous.47 "The more inputs, the healthier the system."48
● The implicit assumptions and objectives inspired by V. Bush's rationale to justify the federal funding of science.49
● The relative absence of analysis of long-term trends, and the politically neutral discourse: "It is the Board policy that the data should speak for themselves."50 "While SEI is an excellent statistical reference tool, its politically neutral text keeps it out of the business of assessment, and its encyclopedic size and organization transmit segmented information rather than a synthetic overview."51
● The NSF view of the world: "Matters of interest to NSF get high priority for inclusion, and matters of interest to other agencies get lower priorities."52 This is manifested by "extensive treatment of academic research, a bit of information on industrial basic research, and a smattering of input data on government research."53
● The highly aggregated level of data: there was a "tendency throughout most of SI-72 and SI-74 to opt for bulk measures (. . .), even when more detailed spectroscopy of data was available in the literature."54
42 Ibid., pp. 50–51. 43 S. Cozzens (1991), Science Indicators: Description or Prescription?, op. cit., p. 5. 44 Ibid., p. 10. 45 GAO (1979), Science Indicators: Improvements Needed in Design, Construction, and Interpretation, op. cit., p. 19. 46 Ibid., p. 9. 47 Y. Elkana et al. (1978), Towards a Metric of Science: The Advent of Science Indicators, op. cit., pp. 5–6. 48 S. Cozzens (1991), Science Indicators: Description or Prescription?, op. cit., p. 11. 49 J. D. Holmfeld (1978), Science Indicators and Other Indicators: Some User Observations, op. cit., pp. 40–41. 50 S. Cozzens (1991), Science Indicators: Description or Prescription?, op. cit., p. 6. 51 Ibid., p. iv. 52 Ibid., p. 10. 53 Ibid., p. 11. 54 G. Holton (1978), Can Science Be Measured?, in Y. Elkana et al. (1978), Towards a Metric of Science: The Advent of Science Indicators, op. cit., p. 46.
● The absence of details on methodology: "A widespread problem in the analysis of data is lack of attention to how the data were generated, to their limitations, and in general to the error structure of sampling, selection, measurement, and subsequent handling."55
Over the years, SI (known as Science and Engineering Indicators, or SEI, since 1987) has grown considerably in content. While SI contained 93 pages and 112 tables in the 1972 edition, these numbers had increased to 177 and 258 respectively by 1989. With the 2000 edition, SEI appeared in two volumes for the first time. Over the same period, the indicators also grew in number and covered more and more dimensions of S&T: resources, the workforce, economic performance, impacts and assessments, enrollment and graduation in science, scientific literacy, publications, citations, technology and international collaboration, innovation, information technologies, etc.
Following SI through the OECD
SI had a huge impact on the OECD. In December 1976, the OECD Committee for Scientific and Technological Policy (CSTP) organized a meeting of national experts on R&D statistics in order to prepare the work of the second ad hoc review group on R&D statistics. The OECD Secretariat submitted the question of indicators to the group:
Science indicators are a relatively new concept following in the wake of the long-established economic indicators and the more recent social indicators. So far, the main work on this topic has been done in the United States where the National Science Board has published two reports: Science Indicators 1972 (issued 1973) and Science Indicators 1974 (issued 1975). (OECD (1976), Science and Technology Indicators, Paris, p. 3)
The background document to the meeting analyzed the indicators appearing in SI in depth, and compared them to the statistics available and to those that could be collected, and at what cost.56 The group was asked "to draw some lessons for future work in member countries and possibly at OECD."57 The final report of the review group suggested a three-stage program for the development of new indicators:58
● Short-term: input indicators (like industrial R&D by product group);
● Medium-term: manpower indicators (like occupations of scientists and engineers);
● Long-term: output (productivity, technological balance of payments, patents) and innovation indicators, as well as indicators on government support to industrial R&D.
55 Ibid., p. 17. 56 See particularly the annex of OECD (1976), Science and Technology Indicators, op. cit. 57 It is important to remember that, at the time, OECD was collecting information on R&D only (monetary investments and personnel). 58 OECD (1978), Report of the Second Ad Hoc Review Group on R&D Statistics, op. cit., pp. 17–21.
A few months later, in November 1978, the OECD Directorate for Science, Technology and Industry (DSTI) responded to the review group report and made proposals to member countries.59 It suggested limiting indicators to those most frequently requested by users of statistics, that is, input indicators. The decision was dictated by the need to accelerate the dissemination of data—a limitation already identified by the first ad hoc review group. It was nevertheless suggested that a database be created, from which a report based on indicators would be published every two years. The report would replace the fifth volume of the International Statistical Year (ISY) survey on R&D and "be modeled to some extent on the NSF Science Indicators reports."60 The Canadian delegate, H. Stead, judged these proposals too timid. He suggested that the Frascati manual be revised in order to turn it into an indicator manual.61 The first part would carry more or less the current content of the manual, while the second would deal with other indicators, namely scientific and technical personnel, related scientific activities, outputs and high-technology trade. His suggestions were rejected as premature,62 but the introduction to the Frascati manual was rewritten for the 1981 edition in order to put R&D statistics in the larger context of indicators, and an annex on new indicators was added in the 1993 edition.63 In the following years, the OECD extended its coverage of indicators beyond input indicators. The first issue of the current series MSTI (1988) included data on R&D, patents, technological balance of payments, and high-technology trade. Overall, following several workshops,64 the OECD produced the following publications:
● A series titled Science and Technology: Indicators Report. The series was short-lived, however, because it was considered too time-consuming. Only three editions appeared: 1984, 1986, and 1989.
59 OECD (1978), General Background Document for the 1978 Meeting of the Group of National Experts on R&D Statistics, DSTI/SPR/78.39 and annex. 60 OECD (1978), Background Document, op. cit., p. 8. 61 Ibid., pp. 16–17. 62 OECD (1979), Summary of the Meeting of NESTI, STP (79) 2, p. 4. 63 The question would be discussed again in 1988: “The delegates discussed whether one or more OECD manuals should be developed for measuring scientific and technological activities. They concluded that the revised Frascati manual should continue to deal essentially with R&D activities and that separate manuals in the Measurement of Scientific and Technical Activities series should be developed for S&T output and impact indicators which are derived from entirely different sources from R&D statistics”: OECD (1988), Summary of the Meeting of NESTI, STP (88) 2. 64 Workshops were held on: outputs (1978 and 1979, followed by a conference in 1980), technological balance of payments (1981, 1987), innovation (1982, 1986, 1994), high technology trade (1983) and higher education (1985).
● A database from which a series of data were, from 1988, published biannually, but without any analytical text:
(a) Main Science and Technology Indicators (1988ss)
(b) Basic Science and Technology Statistics (1991, 1997, 1999, 2001)
(c) R&D Expenditures in Industry (1995, 1996, 1997, 1999, 2001)
(d) Science, Technology, and Industry Scoreboard (1995, 1997, 1999, 2001).
● A series of new methodological manuals on:
(a) Technological Balance of Payments (1990)
(b) Innovation (1992)
(c) Patents (1994)
(d) Human Resources (1995).
Despite the number of documents produced, the OECD never went as far as the NSF. Only a relatively small number of indicators appeared in its reports and data series. Despite important deliberations and debates on output indicators, for example, the only ones present in OECD series before the end of the 1990s concerned patents, the technological balance of payments, and international trade in high technology.65 Be that as it may, it was SI that convinced the OECD to transform international survey data on R&D into S&T indicators. Although the NSF's influence on the OECD is evident here, the exchanges between the two organizations were not one-way, but bi-directional. I now turn to the way the OECD itself influenced SI.
Over NSF's shoulders
It took only one year (from September 1971 to 1972) for the NSB committee to complete a first draft of SI. In fact, the NSB had the chance to benefit from previous OECD experiences with indicators. As early as 1965, Christopher Freeman and Alison Young compared R&D data and methodology in OECD countries.66 They analyzed statistics on investments, manpower, technological balance of payments, patents, and migration for seven countries (Belgium, France, Germany, the Netherlands, the United Kingdom, the United States, and the USSR). This was the first document in the industrialized countries to collect several indicators at once, years before SI did the same. The report identified a gap between American and European efforts in R&D. Indeed, "gaps" was a buzzword of the time.
65 See: Chapter 7. 66 C. Freeman and A. Young (1965), The Research and Development Effort in Western Europe, North America and the Soviet Union: An Experimental International Comparison of Research Expenditures and Manpower in 1962, op. cit.
There was the productivity gap,67 the missile gap,68 the dollar gap,69 and then the technological gap.70 In fact, it was a kind of political manifesto published in 1964 by P. Cognard of the French Délégation Générale de la Recherche, de la Science et de la Technologie (DGRST) that brought the then-current political debate on American domination into the S&T field. With very preliminary data on R&D, patents, and international trade, the article claimed:
On ne voit pas très bien comment une Nation pourrait maintenir son indépendance politique si (elle) était subordonnée à des décisions techniques et économiques de firmes étrangères. [It is hard to see how a nation could maintain its political independence if it were subordinated to the technical and economic decisions of foreign firms.] (Le Progrès scientifique (1964), Recherche scientifique et indépendance, 76, p. 14)
According to early OECD studies,71 Europe was lagging behind the United States in terms of both investment and performance. The data on which the conclusion was based, however, were considered insufficient to provide a firm basis for comparison. Indeed, OECD member countries had only recently approved a standardized methodology for collecting R&D statistics. As a consequence, the second ministerial meeting for S&T (1966) suggested that "a committee of senior officials responsible for science policy (. . .) be set up, with instructions to carry out the preparatory work for future discussions."72 Their task included a study on national differences in scientific and technical potential—that is, on what has generally come to be described as technological gaps. Nine studies were conducted, plus an analytical report. The material was submitted to the third ministerial meeting on science, held in 1968, under the title Gaps in Technology. The report was the first OECD policy-oriented analysis of data on S&T. The study "confirmed" the gap in R&D efforts between America and Europe, but when military and civil R&D expenditures were separated, the picture changed sharply.
67 See: Chapter 12. 68 Soon after the USSR launched Sputnik in 1957, the American scientific community used the satellite as a symbol to blame the Eisenhower administration for restrictive policies on basic research. See: A. J. Levine (1994), The Missile and Space Race, Westport: Praeger, pp. 73–95. 69 M. J. Hogan (1987), The Marshall Plan: America, Britain, and the Reconstruction of Western Europe, 1947–1952, Cambridge: Cambridge University Press, pp. 238–292; D. W. Ellwood (1992), Rebuilding Europe: Western Europe, America and Postwar Reconstruction, London: Longman, pp. 154–174. 70 The question was still on the agenda in the 1980s. See, for example: P. Patel and K. Pavitt (1987), Is Western Europe Losing the Technological Race?, Research Policy, 16, pp. 59–85; J. Fagerberg (1987), A Technological Approach to Why Growth Rates Differ, Research Policy, 16, pp. 87–99; L. Soete (1987), The Impact of Technological Innovation on International Trade Patterns: The Evidence Reconsidered, Research Policy, 16, pp. 101–130. 71 OECD (1963), Science, Economic Growth and Government Policy, op. cit.; C. Freeman and A. Young (1965), The Research and Development Effort in Western Europe, North America and the Soviet Union: An Experimental International Comparison of Research Expenditures and Manpower in 1962, op. cit. 72 OECD (1968), Gaps in Technology: General Report, Paris, p. 5.
The report even went on to suggest that there was little correlation between a country's R&D effort and its economic growth or trade performance: "the above analysis shows that the United States lead has not had any adverse effects on other countries' growth and trade performance."73 Scientific and technological capability was a prerequisite, but not a sufficient basis, for success. In addition to the size of the US market, other factors were identified as far more important: the role of government support, the educational system, and the management culture. To the OECD, the technological gap was in fact a management gap.74 In order to arrive at this conclusion, Gaps in Technology looked at several indicators: R&D, innovation, trade, productivity, technological balance of payments, and foreign investments. Some of these indicators were calculated for the first time, and would become highly popular in the future (like innovation). Gaps was the first systematic attempt to measure S&T on several dimensions using indicators.75 A similar exercise followed Gaps a few years later. It was, in fact, the third OECD contribution on indicators: The Conditions for Success in Technological Innovation, written by K. Pavitt and S. Wald.76 The document followed the third ministerial meeting on science (1968), which asked for a follow-up to Gaps in Technology. Conditions for Success retained six indicators to measure ten countries' performance in technological innovation: (1) significant innovations; (2) receipts for patents, licenses and know-how; (3) origin of technology; (4) patents granted; (5) imports; and (6) exports in research-intensive industries. Gaps in Technology had a huge political impact, but the analysis was far more nuanced than it appeared in the media or in some intellectuals' prose.77 J.-J. Servan-Schreiber, for example, made a bestseller of his book Le Défi américain.78 Without being anti-American, Servan-Schreiber "sounded an alarm that America was well on the way to complete domination of the technological industries of Europe and, for that matter, of the world." As A. King reminded us, Servan-Schreiber based his analysis on OECD data, but without acknowledging it.79 The United States had, so said Servan-Schreiber, successfully integrated science with industry, whereas this giant step had completely escaped most European firms. For Servan-Schreiber, Europe needed continent-wide firms similar to the American ones, and fewer political institutions (such as the European Economic Commission).
73 Ibid., p. 30. 74 See: Chapter 12. 75 The OECD thought, for some time, of producing a gap exercise for third world countries, but never did. See: OECD (1968), Summary of the Discussions of the Steering Group of the CSP, C (68) 91, p. 7. However, it documented gaps in fundamental research: J. Ben-David, Fundamental Research and the Universities: Some Comments on International Differences, OECD, Paris, 1968. 76 OECD (1971), The Conditions for Success in Technological Innovation, Paris. 77 J.-J. Sorel (1967), Le retard technologique de l’Europe, Esprit, November (pp. 755–775) and December (pp. 902–919). For R. R. Nelson, the technological gap was nothing new, having existed for upwards of 100 years: R. R. Nelson (1967), The Technology Gap: Analysis and Appraisal, RAND Corporation, Santa Monica, California, P-3694-1. 78 J.-J. Servan-Schreiber (1967), Le Défi américain, Paris: Denoel. 79 A. King, Let the Cat Turn Around: One Man’s Traverse of the 20th Century, Chapter 27 (Innovation Galore), to be published.
Nobody could have missed the ideological discourses on Gaps because they were widely published in the media.80 As Vice President of the United States H. H. Humphrey noted: “If there is a technological gap, there is no gap in the information about it.”81 Gaps had echoes within the United States as well, and, of interest to us, within two organizations in particular. First, the Department of Commerce (DoC)—as well as the NSF—began developing their own classification of technologically intensive industries in order to measure international trade.82 The definition was based on multiple indicators, such as scientific and technical personnel, R&D expenditure as a percentage of sales, and manpower competencies. This was the first tentative proposal to measure high-technology trade in the United States. The second effect of Gaps was that the NSF produced SI, the first comprehensive repertory of S&T indicators in the world.
Conclusion
S&T indicators appeared in the mid-1960s, at a time when the term indicator became widely used, particularly in the measurement of social trends. They began to be developed at the OECD, particularly in the influential study Gaps in Technology (1968). The exercise was preceded, however, by one published in 1965, that of Freeman and Young, and was followed by one more, this time by Pavitt and Wald (1971). I know of only one occurrence of the term "indicator" in the OECD literature on S&T before the NSF. It appears in a chapter title of the results of the first International Statistical Year (ISY) on R&D, published in 1967.83 While the OECD launched the idea of indicators, it is to the NSF that we owe the development of the field. Before the 1990s, the OECD never really went further than producing a few of the indicators first suggested by Freeman and Young—R&D, patents, technological balance of payments, and trade in high-tech industries.84 The two authors were, in fact, far in advance of everybody. They thought of more indicators than the OECD would produce for some time. In contrast, the NSF constructed over 100 indicators in the first editions of SI, and the publication was imitated by several other organizations worldwide. Two factors, one internal to the NSF and the other external, played a role in the decision of the NSF to get involved in S&T indicators. First, the 1950 law specifically mandated the NSF to evaluate and assess the state of S&T in the country. This mandate was far from accomplished, according to the bureaucrats.
80 J.-J. Salomon (2000), L'OCDE et les politiques scientifiques, Revue pour l'histoire du CNRS, 3, p. 48. 81 H. H. Humphrey (1967), Technology and Human Betterment, in Department of Commerce, Technology and the World Trade, Proceedings of a symposium held on 16–17 November 1966, Maryland, p. 67. 82 See: Chapter 7. 83 OECD (1967), The Overall Level and Structure of R&D Efforts in OECD member countries, Paris, p. 12. 84 The selection of these four indicators was the result of the first workshop on output indicators (1978). See: OECD (1979), The Development of Indicators to Measure the Output of R&D: Some Preliminary Results and Plan for Future Work, STP (79) 26.
It is thus probably safer to say that it was the increasing pressures put on the organization, rather than the law itself, that led the NSF to the decision to do more than simply collect and publish statistics. Second, the OECD study on Gaps offered the NSF a model of what could be done and what was to be expected in terms of results when an organization develops indicators. Because of the quality of SI (and/or because of the volume of its indicators), the OECD proclaimed that "the main work on this topic has been done in the United States". It is ironic how an organization can forget its own contribution to a field,85 unless someone else (the United States themselves) prepared the ground:
Interest in science and technology indicators seems to have grown considerably since an indicator publication was first developed in the United States in the early seventies.86 (C. Falk, Factors to Be Considered in Starting Science and Technology Indicators Activities, Paper presented at the OECD Science and Technology Conference, September 15–19, 1980, STIC/80.14, Paris, p. 1)
It was rather the OECD that initiated work on indicators and produced the first analyses of S&T based on them. But I do not want to make too strong an argument for OECD dominance vis-à-vis the United States. The OECD produced one model—a few indicators to answer policy questions—and the NSF another—a large number of data with no real assessment.87 In fact, the NSF and the OECD were, from the start, in a relative symbiosis, each being a forerunner at a different stage in the history of measurement. A dialectic always existed between the two organizations, and it is probably impossible, as usual in social studies, to definitely identify a unique cause for the emergence of indicators. But certainly, these two organizations were at the center of the discussions and ideas.
85 This is not an isolated citation. For example, in another document, the OECD wrote: “Prior to this conference, the OECD has not played a very positive role in the development of science and technology indicators” (OECD (1980), Preliminary Report of the Results of the Conference on Science and Technology Indicators, STP (80) 24, p. 39). 86 See also: C. Falk (1984), Guidelines for Science and Technology Indicators Projects, op. cit., p. 37. A similar assertion was made in 1969 by D. Bell concerning social indicators: “No society in history has as yet made a coherent and unified effort to assess those factors” (social needs), in D. Bell, The Idea of a Social Report, The Public Interest, 15, 1969, p. 7. The author forgot that it was UNESCO that launched the model in 1952 with its periodic Report of the World Social Situation. 87 Indeed, in the same paper where he presented the NSF as the model, C. Falk (1984) admitted a wide spectrum of alternatives: “At one extreme is the presentation of solely numerical indicators. (. . .) One can go one step further and draw conclusions (. . .). Or one can go even further and draw the type of conclusions that involve subjective judgments (. . .). Finally, one can supplement this approach with recommendations for specific actions” (p. 39).
7 Measuring output
When economics drives science and technology measurement
S&T indicators emerged at the same time as governments developed an interest in assessing the value of science. This had to do with the development of science policy. Science policy developed over two periods in the second half of the twentieth century. The first was concerned with funding activities that would help build research infrastructures and develop scientific communities: “science policies concerned themselves with little more than the development of research potential.”1 This period lasted until the 1970s. The second period was concerned mainly with allocating scarce resources and, therefore, with choosing from among fields of science on the basis of selected socioeconomic objectives: “During the period of contracting budgets, expanding needs, and demands for more oriented research, the necessity for priorities has become abundantly clear.”2 “The increased output of scientists and engineers—in so far as it can be stimulated—is not alone enough of a rationale.”3 Each of the science policy periods had a corresponding type of indicator. Until the mid-1970s, most indicators dealt with input, that is, monetary investments and human resources involved in S&T. The Frascati manual was typical of this early period, in that it was exclusively concerned with the measurement of inputs, although it did always include—from the first to the most recent edition—some discussion of output indicators. Nevertheless, it was not until the 1970s that the United States began to systematically develop output indicators, followed by most OECD member countries in the 1980s. The OECD was a modest producer of work on output indicators. There are essentially three types of output indicators in current OECD publications: indicators on patents, indicators on the technological balance of payments (TBP), and indicators on high technology trade. As this short list makes plain, OECD indicators express a particular view of scientific activity. I mentioned in the previous chapter that the NSF based its indicators on the university system. The OECD, in contrast, based its indicators on economics.4
1 OECD (1974), The Research System, 3, Paris, p. 168. 2 Ibid., p. 190. 3 Ibid., p. 194. 4 It is worth noting that the NSF serves academics, whereas the OECD serves governments.
This chapter deals with the history of output indicators at the OECD and its member countries. It is concerned with the following questions. Why were certain kinds of output indicators chosen or developed over others? What were the limitations of the selected indicators? What ideas (or ideologies) lay behind the choices? And finally, why was the emphasis placed on economics-based indicators, as opposed to indicators of university output or of the social impacts (or outcomes) of S&T? This chapter is divided into two parts. The first discusses the development of output indicators at the OECD in the 1980s: patents, TBP, and high technology trade. All of these indicators are characterized by a focus on the economic dimension of S&T. The second part discusses the complete absence of university output indicators. It shows that an important asymmetry exists in the OECD treatment and evaluation of university indicators and other output indicators. It suggests that this asymmetry arose from the desire of national statistical offices to exercise complete control over the measurement of S&T.
Economically speaking
As early as 1963, Yvan Fabian, an ardent promoter of output indicators and a former director of the OECD Statistical Resource Unit (SRU) of the Directorate of Scientific Affairs (DSA), discussed the relevance of output indicators at the meeting that launched work on the Frascati manual.5 He concentrated on patents, and proposed four indicators that are still published today in the OECD biennial publication Main Science and Technology Indicators (MSTI). He also showed how patent royalties could be used to measure international transfers of technology. He was ahead of his time. Although as early as 1963 the Committee for Scientific Research (CSR) had proposed reviewing existing work on the matter,6 and included questions on technological transfers and exports of high technology products in the first International Statistical Year (ISY), output indicators would not become systematically available before the 1980s. In fact, the first edition of the Frascati manual stated that: "Measures of output have not yet reached the stage of development at which it is possible to advance any proposals for standardization. (. . .) All these methods of measurement are open to objections."7 The manual nevertheless presented and discussed the potential of two output indicators: patents and the technological balance of payments. By 1981, the manual included an appendix specifically devoted to output, and discussed a larger number of indicators, namely innovations, patents, TBP, high technology trade, and productivity. The tone of the manual had also changed.
5 Y. Fabian (1963), Note on the Measurement of the Output of R&D Activities, DAS/PD/63.48. 6 OECD (1963), Economics of Science and Technology: A Progress and Policy Report, SR (63) 33, p. 6; OECD (1965), Committee for Scientific Research: Minutes of the 13th Session, SR/M (65) 2, p. 18. 7 OECD (1962), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Development, Paris, p. 37.
While recognizing that there still remained problems of measurement, it stated that: "Problems posed by the use of such data should not lead to their rejection as they are, for the moment, the only data which are available to measure output."8 As we saw in the preceding chapter, the OECD started using output indicators in the 1960s in three studies, with several indicators for assessing the various dimensions of S&T performance.9 The success of these studies spurred the DSA to seriously consider using output indicators on a systematic basis: "For an appraisal of research efforts the use of certain indicators such as scientific publications, lists of inventions and innovations or patent statistics can be envisaged."10 The real catalyst for OECD work on output, however, was the report of the second ad hoc review group in 1976, which was itself influenced by the NSF publication Science Indicators. The review group suggested that the OECD should get involved in output indicators. Three indicators were proposed—productivity, technological balance of payments and patents—in addition to innovation indicators. The OECD's response to the ad hoc group was positive. The organization agreed that: "it would be better to push on and begin measuring output than to continue to try and improve input data."11 However,
the Secretariat is not proposing to launch any special surveys of R&D output but intends to try and work from the existing series of data available, hoping that if the various series are taken in combination their results will be mutually reinforcing. Thus, though no one series is viable taken alone, if the results of several series were to agree this would confer some degree of plausibility on them. (OECD (1977), Responses by the Secretariat to the Questions of the Ad Hoc Group, DSTI/SPR/77.52, p. 10)
From then on, the OECD developed a whole program of work,12 which proceeded in four steps. First, two workshops (1978 and 1979)13 and a conference (1980)14 were held to assess the current state of output indicators.
8 OECD (1981), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, Paris, p. 131. 9 C. Freeman and A. Young (1965), The Research and Development Effort in Western Europe, North America and the Soviet Union: An Experimental International Comparison of Research Expenditures and Manpower in 1962, Paris: OECD; OECD (1968), Gaps in Technology: General Report, Paris; OECD (1971), The Conditions for Success in Technological Innovation, Paris. 10 OECD (1967), Future Work on R&D Statistics, SP (67) 16, p. 5. 11 OECD (1977), Responses by the Secretariat to the Questions of the Ad Hoc Group, DSTI/SPR/77.52, p. 10. 12 OECD (1983), State of Work on R&D Output Indicators, STP (83) 12; OECD (1984), Secretariat Work on Output Indicators, STP (84) 8. 13 OECD (1979), The Development of Indicators to Measure the Output of R&D: Some Preliminary Results and Plan for Future Work, STP (79) 26. 14 OECD (1980), Preliminary Report of the Results of the Conference on Science and Technology Indicators, STP (80) 24. 15 OECD (1983), Experimental Studies on the Analysis of Output: Patents and the Science and Technology System, DSTI/SRP/83.13 PART 1.
Second, experimental studies were conducted on three indicators: patents,15 technological balance of payments,16 and high-technology trade.17 Third, dedicated methodological manuals were published. Finally, databases were constructed and data were published on a regular basis—three editions of Science and Technology Indicators, plus the biennial publication MSTI. The following sections deal with each of the output indicators that were originally proposed by the ad hoc review group, and that are currently available in OECD publications: patents, TBP, plus indicators on high-technology trade. Innovation is dealt with in the next chapter.
Patents
Contrary to what D. Archibugi and G. Sirilli have recently argued,18 the first indicator to appear in the history of S&T measurement was patents, and not R&D statistics. We owe much to the pioneering work of the economist Jacob Schmookler,19 but other economists,20 and even sociologists,21 used patent statistics for studying S&T in the 1930s and 1940s. As a matter of fact, S. Kuznets wrote in 1962: "much more data, quantitative and qualitative, are available on output than on input."22 When the OECD began working on the methodology of patent statistics in the 1970s, it qualified the indicator as "not particularly favoured by users."23 But the indicator had one significant advantage over the others: patent data were fairly easy to standardize:
16 OECD (1983), Experimental Studies on the Analysis of Output: The Technological Balance of Payments, DSTI/SRP/83.13 PART 3. 17 OECD (1983), Experimental Studies on the Analysis of Output: International Trade in High Technology Products—An Empirical Approach, DSTI/SRP/83.13 PART 2. 18 D. Archibugi and G. Sirilli (2001), The Direct Measurement of Technological Innovation in Business, Rome: National Research Council, p. 6. 19 J. Schmookler (1950), The Interpretation of Patent Statistics, Journal of the Patent Office Society, 32 (2), pp. 123–146; J. Schmookler (1953), The Utility of Patent Statistics, Journal of the Patent Office Society, 34 (6), pp. 407–412; J. Schmookler (1953), Patent Application Statistics as an Index of Inventive Activity, Journal of the Patent Office Society, 35 (7), pp. 539–550; J. Schmookler (1954), The Level of Inventive Activity, Review of Economics and Statistics, pp. 183–190. 20 E. Graue (1940), Inventions and Production, The Review of Economic Statistics, 25 (4), pp. 221–223. 21 A. B. Stafford (1952), Is the Rate of Invention Declining?, The American Journal of Sociology, 42 (6), pp. 539–545; R. K. Merton (1935), Fluctuations in the Rate of Industrial Invention, Quarterly Journal of Economics, 49 (3), pp. 454–474. 22 S. Kutznets (1962), Inventive Activity: Problems of Definition, in NBER, The Rate and Direction of Inventive Activity: Economic and Social Factors, Princeton: Princeton University Press, p. 35. 23 OECD (1977), Responses by the Secretariat to the Questions of the Ad Hoc Group, op. cit., p. 12.
Patents received immediate attention as an output indicator: long time series are immediately available, the data is objective in the sense that it passively reflects an economic decision, and the data is undoubtedly related to an important form of knowledge creation: the invention of economically important new products and processes. (OECD (1982), Patents, Invention and Innovation, DSTI/SPR/82.74, p. 21)
A workshop was held in 1982 to assess the indicator's usefulness.24 Its main limitations were identified as follows:25
● Not all inventions are patented—or patentable;
● Firms and industries vary in their propensities to file patents—and cannot be compared;
● Legal systems and policies vary according to country;
● Patents vary in importance (value).
But these were manageable limitations, according to the OECD:
There has been continuing controversy over the use of patent statistics. (. . .) But, as J. Schmookler wrote, we have a choice of using patent statistics continuously and learning what we can from them, and not using them and learning nothing. (. . .) All progress in this field will come ultimately from the reasoned use of this indicator which, while always taking into account the difficulties it presents, works to reduce them. (OECD (1983), State of Work on R&D Output Indicators, op. cit., p. 11)
The OECD consequently started publishing patent indicators, beginning with the first edition of MSTI in 1988.26 In 1994, the organization produced a methodological manual on collecting and interpreting patent statistics, written by F. Laville, from the French Observatoire des sciences et des techniques (OST), with collaborators from the Centre de sociologie de l'innovation (CSI) of École des mines in Paris.27 But patents were only one indicator for measuring invention. While patents were being introduced into OECD statistics, indicators of innovation output were also beginning to gain in popularity in several countries, and were said to be more appropriate for measuring innovative activities.
24 Ibid. 25 OECD (1979), The Development of Indicators to Measure the Output of R&D: Some Preliminary Results and Plan for Future Work, op. cit., pp. 22–23; OECD (1980), Preliminary Report of the Results of the Conference on Science and Technology Indicators, op. cit., pp. 13–15. 26 The data came from the World Intellectual Property Organization (WIPO) and Computer Horizons Inc. (CHI). 27 OECD (1994), The Measurement of Scientific and Technical Activities: Data on Patents and Their Utilization as Science and Technology Indicators, Paris. 28 OECD (1977), Responses by the Secretariat to the Questions of the Ad Hoc Group, op. cit., p. 12.
However, because they required proper surveys,28 the OECD waited until the 1990s before developing such indicators.29 Since then, patent statistics have become increasingly fashionable, at least at the OECD,30 but they remain of limited use in the organization's studies because of the overly aggregated level of the data (lack of detail on technological groups and industrial sectors).31
Patent indicators appearing in MSTI
● National patent applications
● Resident patent applications
● Non-resident patent applications
● External patent applications
● Dependency ratio
● Autosufficiency ratio
● Inventiveness coefficient
● Rate of diffusion.
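To make these measures concrete, the following minimal sketch computes such ratios from national filing counts. The text lists the indicator names without giving formulas, so the definitions used here (dependency as non-resident over resident applications, autosufficiency as resident over total national applications, the inventiveness coefficient as resident applications per 10,000 inhabitants, diffusion as external over resident applications) are the ones commonly associated with MSTI and should be read as assumptions rather than as the OECD's official specification; the figures are invented.

```python
# Illustrative sketch only: formulas are assumed, not quoted from the text.

def patent_indicators(resident_apps, non_resident_apps, external_apps, population):
    """Compute a few patent-based ratios from national filing counts."""
    national_apps = resident_apps + non_resident_apps          # filings received by the national office
    return {
        "national_applications": national_apps,
        "dependency_ratio": non_resident_apps / resident_apps,  # reliance on foreign inventors
        "autosufficiency_ratio": resident_apps / national_apps, # share of domestic filings
        "inventiveness_coefficient": resident_apps / (population / 10_000),  # filings per 10,000 inhabitants
        "rate_of_diffusion": external_apps / resident_apps,     # propensity to patent the same inventions abroad
    }

# Example with made-up numbers
print(patent_indicators(resident_apps=20_000, non_resident_apps=60_000,
                        external_apps=45_000, population=30_000_000))
```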
Technological balance of payments
The next two indicators were spin-offs of the OECD study on technological gaps of the late 1960s, and concerned commercial exchanges between countries. This concern probably has its origins in the European balance-of-payments deficits, which were at the heart of the launching of the US Marshall Plan in 1948,32 and in the trade deficits of the early 1970s in the United States.33 The first indicator to appear was the TBP, a term coined by the French in the 1960s to measure the technological flows between countries: payments for patents, licenses and know-how. If patent statistics were developed without much hesitation, the TBP was the first output indicator to undergo detailed scrutiny at the OECD.
29 See: Chapter 8. 30 OECD (1994), Workshop on Innovation, Patents and Technological Strategies: Summaries of Contributions, DSTI/EAS/STP/NESTI (94) 14; OECD (1997), Patents and Innovation in the International Context, OECD/GD (97) 201; OECD (1999), The Internationalization of Technology Analyzed with Patent Data, DSTI/EAS/STP/NESTI (99) 3; OECD (1999), Patents Counts as Indicators of Technology Output, DSTI/EAS/STP/NESTI (99) 5; OECD (1999), Patent Applications and Grants, DSTI/ EAS/STP/NESTI (99) 6; OECD (2000), Counting Patent Families: Preliminary Findings, DSTI/ EAS/STP/NESTI/RD (2000) 11; OECD (2001), Patent Families: Methodology, DSTI/EAS/ STP/NESTI (2001) 11. 31 For an exception, see OECD (1983), Experimental Studies on the Analysis of Output: Patents and the Science and Technology System, op. cit. 32 See: I. Wexler (1983), The Marshall Plan Revisited: The European Recovery Program in Economic Perspective, Westport: Greenwood Press. 33 M. Boretsky (1971), Concerns About the Present American Position in International Trade, Washington: National Academy of Engineering.
In fact, in 1981, the first in a series of OECD workshops on output indicators was dedicated to the TBP.34 From the beginning, the workshop identified three types of problems that persist to this day:35
1 Content
● The TBP excludes international technology flows that do not give rise to specific financial flows. By definition, the TBP cannot record invisible technology transfers (transfers of funds between multinational-enterprise subsidiaries and the parent company).
● Among the financial flows the TBP records, there may be some which do not represent any real transfer of technology. Taxation policies, national regulations and monetary considerations all influence financial transfers and produce overestimates.
● The TBP usually includes payments for patents, licenses and know-how, but some countries also include trademarks, technical assistance and management fees or payments for taking out and renewing patents.
● Methods for collecting the data vary depending on the country: surveys, declarations by banking institutions.
2 Classification
● Type of transaction: it is not always possible to break down receipts and expenditures between different categories (licenses, trademarks, know-how).
● Type of firm: it is difficult to identify the portion of receipts and payments transacted between affiliated companies.
3 International Comparability
In light of these limitations, the workshop’s experts suggested proceeding with caution: a negative TBP is not necessarily a sign of technological weakness. A deficit may instead be a sign, as in Japan’s case, of an active policy for increasing the country’s economic competitiveness.36 All in all, the TBP should be used only in conjunction with other indicators.37 Thus, the workshop recommended attempting international harmonization and preparing a compendium of methods. A methodological manual would indeed be drafted in 1983 by B. Madeuf,38 revised and distributed in 1990.39 That same year, the OECD launched its first
34 OECD (1982), Report of the Workshop on the Technological Balance of Payments, DSTI/SPR/82.9. 35 Already discussed in the 1970s: OECD (1970), Gaps in Technology, Paris, pp. 200–204; OECD (1977), Data Concerning the Balance of Technological Payments in Certain OECD member countries: Statistical Data and Methodological Analysis, DSTI/SPR/77.2. See also: OECD (1983), Experimental Studies on the Analysis of Output: The Technological Balance of Payments, op. cit., pp. 17ss. 36 OECD (1982), Report of the Workshop on the Technological Balance of Payments, op. cit., p. 9. 37 Ibid., p. 7. 38 B. Madeuf (1984), International Technology Transfers and International Technology Payments: Definitions, Measurement and Firms’ Behaviour, Research Policy, 13, pp. 125–140. France was indeed one of the first and most active countries to promote TBP statistics at the OECD. See for example: OECD (1964), Governments and Innovation: Progress Report, CMS-CI/GD/64/7, pp. 9–10. 39 OECD (1990), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for the Collection and Interpretation of Data on the Technological Balance of Payments, Paris.
international TBP survey and began publishing indicators in MSTI. The results of the surveys were quite unsatisfactory, however. Three-quarters of the countries did not adequately detail and document their statistics, and there were gaps ranging from 60 to 120 percent between the amounts declared by the money recipients and by funders.40 A revision of the manual was therefore envisaged in 1994, but never conducted. For the OECD, the state of the indicator is as follows:
Given that few countries have supplied [enough] detailed statistics, and that the data lack uniformity (across countries) and stability (over time), it has not been possible to test the data properly and make wider use of them in research and publications dealing with S&T indicators. This has led to a vicious circle where users, feeling dissatisfied, see no reason to provide more resources to producers, who are consequently held back. (OECD (1994), Possible Revision of the TBP Manual, DSTI/EAS/STP/NESTI (94) 10, p. 3)
TBP indicators appearing in MSTI
● Receipts
● Payments
● Balance
● Coverage ratio
● Total transactions.
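As an illustration of how these aggregates relate to one another, the sketch below derives them from a country's technology receipts and payments. The balance (receipts minus payments) and the coverage ratio (receipts over payments) follow the usual conventions; the text does not define them formally, and the numbers are invented.

```python
# A minimal sketch of the TBP aggregates listed above; figures are hypothetical.

def tbp_indicators(receipts, payments):
    """Summarize technology receipts and payments for one country-year (same currency)."""
    return {
        "receipts": receipts,
        "payments": payments,
        "balance": receipts - payments,          # negative = net technology importer
        "coverage_ratio": receipts / payments,   # > 1 means receipts exceed payments
        "total_transactions": receipts + payments,
    }

# Example: a country paying more for foreign technology than it receives
print(tbp_indicators(receipts=850.0, payments=1_200.0))
```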
Trade in high technology
If the TBP was, and remains, a controversial indicator, indicators of high technology were subject to even greater controversy, particularly in countries like Canada that do not fare very well on such measures.41 For the OECD, the problem stemmed from the fact that each country had its own idea of what constituted high technology, and used its own vocabulary: advanced technologies, strategic technologies, critical technologies, core technologies, basic technologies, new technologies:42 "The concept of high technology became part of our everyday vocabulary before economists and scientists had even managed to produce a precise and generally accepted definition of the term."43 The work on high technology at the OECD was influenced by two factors, one analytical, the other political. The influence of the first factor was related to the
40 OECD (1992), Les statistiques d’échanges techniques internationaux et de brevets d’invention: état des données de l’OCDE et perspectives de développement, DSTI/STII/STP/NESTI/RD (92) 6. 41 J. R. Baldwin and G. Gellatly (1998), Are There High-Tech Industries or Only High-Tech Firms? Evidence From New Technology-Based Firms, Research Paper series, No. 120, Statistics Canada; K. S. Palda (1986), Technological Intensity: Concept and Measurement, Research Policy, 15, pp. 187–198. 42 OECD (1993), Summary of Replies to the Questionnaire on Methodology, DSTI/EAS/IND/STP (93) 4. 43 OECD (1988), La mesure de la haute technologie: méthodes existantes et améliorations possibles, DSTI/IP/ 88.43, p. 3.
OECD’s attempts to analyze R&D trends,44 and to develop statistics for classifying countries45 and industries46 according to R&D effort or intensity. High technology was in fact the extension to industry of the GERD/GNP indicator for countries:47 the OECD developed ratios of (intramural) R&D divided by value added, and classified industries—and countries’ performances—according to three groups—high, medium, and low—depending on whether they were above or below the average level of R&D investment.48 Thus, an industry that invested above the going average in R&D was considered to be a high-technology industry. The second factor that influenced the development of high-technology indicators was a request in 1982 by the OECD Council of Ministers asking the Secretariat to examine competitiveness and the problems that could arise in the trade of high-technology products. High-technology trade had in fact gained strategic importance in the economic and political context of the 1970s, particularly in the United States (for security and economic reasons),49 but also in other OECD member countries: high-tech industries were expanding more rapidly than other industries in international trade, and were believed to be an important policy option for economic progress. The Industry Committee and the Committee for Scientific and Technological Policy (CSTP) of the Directorate for Science, Technology and Industry (DSTI) thus studied approaches to international trade theory,50 and conducted two series of analyses: six case studies of specific industrial technologies, plus some reflections on defining high technology in terms of five characteristics (which went beyond mere R&D investment ratios).51 It reported back to the Council in 1985.52 For its part, the statistical unit organized a workshop on methodologies linking technology and trade.53 The first statistics were published in 1986 in the second issue of Science and Technology Indicators.
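The grouping rule described above can be illustrated with a small, entirely hypothetical calculation: each industry's intramural R&D is divided by its value added and compared with the average intensity across industries. The data below are invented, and the cut-off between medium and low technology is not given in the text, so the sketch only separates above-average from below-average industries.

```python
# Stylized illustration of classifying industries by R&D intensity (R&D / value added).
# Industry names and figures are hypothetical; the OECD's actual lists used more criteria.

industries = {                       # (R&D expenditure, value added), same currency units
    "aerospace":       (9.0, 40.0),
    "pharmaceuticals": (8.0, 55.0),
    "food products":   (0.6, 90.0),
    "basic metals":    (1.1, 70.0),
}

intensity = {name: rd / va for name, (rd, va) in industries.items()}
average = sum(intensity.values()) / len(intensity)

for name, ratio in sorted(intensity.items(), key=lambda kv: -kv[1]):
    group = "high" if ratio > average else "medium/low"   # medium-low cut-off not specified in the text
    print(f"{name:15s} intensity={ratio:.3f}  ({group}, average={average:.3f})")
```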
44 45 46 47
48 49 50 51 52
53
OECD (1979), Trends in Industrial R&D in Selected OECD member countries, 1967–1975, Paris. OECD (1984), Science and Technology Indicators, Paris, pp. 24–25. OECD (1978), Problems of Establishing the R&D Intensities of Industries, DSTI/SPR/78.44. The first OECD statistical exercises on “research-intensive industries” are to be found in OECD (1963), Science, Economic Growth and Government Policy, Paris, pp. 28–35; OECD (1970), Gaps in Technology, Paris, pp. 206–212 and 253–260. OECD (1978), Problems of Establishing the R&D Intensities of Industries, op. cit., p. 16. Such discourses were held by institutions such as the Council on Competitiveness, the Department of Commerce and the National Critical Technologies Panel. OECD (1981), Analysis of the Contribution of the Work on Science and Technology Indicators to Work on Technology and Competitiveness, DSTI/SPR/81.21. OECD (1984), Background Report on the Method of Work and Findings of the Studies Carried Out by the Industry Committee and the Committee for Scientific and Technological Policy, DSTI/SPR/84.1. OECD (1985), An Initial Contribution to the Statistical Analysis of Trade Patterns in High Technology Products, DSTI/SPR/84.66. Analytical work continued in the following decade under the label “technology and competitiveness”. See OECD (1991), TEP: International Conference Cycle, Paris, pp. 61–68; OECD (1992), Technology and the Economy, Paris, Chapter 11; OECD (1996), Technology and Industrial Performance, Paris, Chapter 5. OECD (1984), Summary Record of the Workshop on Technology Indicators and the Measurement of Performance in International Trade, DSTI/SPR/84.3.
High-technology indicators appearing in MSTI
● Export/import ratio: Aerospace industry
● Export/import ratio: Electronic industry
● Export/import ratio: Office machinery and computer industry
● Export/import ratio: Drug industry
● Export/import ratio: Other manufacturing industries
● Export/import ratio: Total manufacturing.
The early OECD analytical work on high technology was based on a US classification scheme.54 In fact, the first influential analyses on the subject were conducted in the United States55—but were, again, inspired by the OECD concerns with technological gaps in the 1960s.56 The US Department of Commerce developed a list of ten high-technology industries based on ratios of R&D expenditures to sales. The first OECD list of high-technology industries extrapolated the structure of American industry onto the entire area covered by the OECD, and was criticized for this reason.57 The OECD consequently organized a workshop in 198358 in which the literature on international trade theory and its main concepts59 was studied to learn how to develop high-technology trade indicators. The workshop concluded on the need for such indicators based on the following “fact”: “direct investment or the sale of technology are as effective as exports in gaining control of market.”60
54 In fact, before the OECD Secretariat worked on the topic, no country had developed much work apart from the United States. See: OECD (1993), Summary of Replies to the Questionnaire on Methodology, DSTI/EAS/IND/STP (93) 4. Canada tried once to apply the US classification; see: MOSST (1978), Canadian Trade in Technology-Intensive Manufactures, 1964–76, Ottawa. 55 For the Department of Commerce (DOC), see: M. Boretsky (1971), Concerns About the Present American Position in International Trade, in National Academy of Engineering, Technology and International Trade, Washington; R. K. Kelly (1976), Alternative Measurements of Technology-Intensive Trade, Office of International Economic Research, Department of Commerce; R. Kelly (1977), The Impact of Technology Innovation on International Trade Patterns, US Department of Commerce, Washington; US Department of Commerce (1983), An Assessment of US Competitiveness in High Technology Industries, International Trade Administration; L. Davis (1982), Technology Intensity of US Output and Trade, US Department of Commerce, International Trade Administration, Washington; V. L. Hatter (1985), US High Technology Trade and Competitiveness, US Department of Commerce, International Trade Administration, Washington; L. A. Davis (1988), Technology Intensity of US, Canadian and Japanese Manufacturers Output and Exports, Office of Trade and Investment Analysis, Department of Commerce. For the NSF, see the 1974 edition of Science Indicators and those following. For other countries, see: OECD (1988), La mesure de la haute technologie: méthodes existantes et améliorations possibles, op. cit., pp. 10–14. 56 See: Chapter 12. 57 OECD (1980), International Trade in High R&D Intensive Products, STIC/80.48; OECD (1983), Experimental Studies on the Analysis of Output: International Trade in High Technology Products—An Empirical Approach, op. cit. 58 OECD (1984), Summary Record of the Workshop on Technology Indicators and the Measurement of Performance in International Trade, DSTI/SPR/84.3. 59 Export/import, specialization (advantages), competitiveness (market share). 60 OECD (1984), Summary Record of the Workshop on Technology Indicators and the Measurement of Performance in International Trade, op. cit., p. 4.
In collaboration with the Fraunhofer Institute for Systems and Innovation Research (Germany), the OECD thus developed a new classification based on a broader sample of eleven countries.61 But there were still problems regarding the lack of sufficiently disaggregated sectoral data: the list was based on industries rather than products.62 All products from high-technology industries were qualified as high-tech even if they were not, simply because the industries that produced them were classified as high-tech. And conversely, all high-tech products from low-technology industries were qualified as low-tech. Another difficulty was that the indicator did not take technology dissemination into account, only R&D. An industry was thus reputed to be high-technology intensive if it had high levels of R&D, even if it did not actually produce or use much in the way of high-technology products and processes. Finally, the data upon which the list was based dated from 1970–1980,63 whereas high-technology products were known to be continuously evolving. The list was therefore revised in the mid-1990s in collaboration with Eurostat64 and following a workshop held in 1993.65 It used much more recent data, and included a new dimension to take technology dissemination, as embodied technology (technology incorporated in physical capital), into account. Two lists were in fact developed. The first concerned high-technology industries, and considered both direct (R&D)66 and indirect67 intensities.68 Four groups of industries were identified, with medium technology being divided into high and low. But limitations persisted: high-technology intensities were calculated on the basis of the principal activity of the firms that made up the industry, and there was a lack of disaggregated details. In addition, the OECD recognized that: "the classification of the sectors in three or four groups in terms of their R&D intensity is partly a normative choice."69 This led to the development of the second list, which was based on products rather than industries, and which was solely concerned with the high-technology
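The distinction between direct and indirect (embodied) intensity can be illustrated schematically. In the stylized sketch below, an industry's indirect intensity is approximated by weighting its suppliers' direct R&D intensities with hypothetical input-output purchase coefficients; the OECD's actual computation (22 sectors, ten countries, purchasing-power parities) was considerably more elaborate, so this is an illustration of the principle only.

```python
# Stylized "direct" vs "indirect" (embodied) R&D intensity; all figures are invented.

direct = {"electronics": 0.080, "chemicals": 0.035, "textiles": 0.004}  # R&D / gross output

# purchases[i][j]: intermediate inputs bought by industry i from industry j,
# expressed as a share of i's gross output (hypothetical input-output coefficients)
purchases = {
    "electronics": {"chemicals": 0.10, "textiles": 0.01},
    "chemicals":   {"electronics": 0.05, "textiles": 0.02},
    "textiles":    {"chemicals": 0.08, "electronics": 0.01},
}

for industry, suppliers in purchases.items():
    indirect = sum(share * direct[supplier] for supplier, share in suppliers.items())
    total = direct[industry] + indirect
    print(f"{industry:12s} direct={direct[industry]:.3f} "
          f"indirect={indirect:.4f} total={total:.4f}")
```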
61 OECD (1984), Specialization and Competitiveness in High, Medium and Low R&D-Intensity Manufacturing Industries: General Trends, DSTI/SPR/84.49. 62 OECD (1978), Problems of Establishing the R&D Intensities of Industries, op. cit. 63 OECD (1988), La mesure de la haute technologie: méthodes existantes et améliorations possibles, op. cit.; OECD (1991) High Technology Products: Background Document, DSTI/STII (91) 35. 64 OECD (1994), Classification of High-Technology Products and Industries, DSTI/EAS/IND/WP9 (94) 11; OECD (1995), Classification of High-Technology Products and Industries, DSTI/EAS/IND/STP (95) 1; OECD (1997), Revision of the High Technology Sector and Product Classification, DSTI/IND/ STP/SWP/NESTI (97) 1. 65 OECD (1994), Seminar on High Technology Industry and Products Indicators: Summary Record, DSTI/EAS/IND/STP/M (94) 1. 66 R&D expenditure-to-output ratios were calculated in 22 sectors of the 10 countries that accounted for more than 95 percent of the OECD industrial R&D, then, using purchasing power parities, each sector was weighted according to its share of the total output. 67 Input–Output coefficients. 68 For details on calculations, see: OECD (1995), Technology Diffusion: Tracing the Flows of Embodied R&D in Eight OECD Countries, DSTI/EAS (93) 5/REV1; G. Papaconstantinou et al. (1996), Embodied Technology Diffusion: An Empirical Analysis for 10 OECD Countries, OECD/GD (96) 26. 69 OECD (1995), Classification of High-Technology Products and Industries, DSTI/EAS/IND/STP (95) 1, p. 8.
Table 7.1 OECD list of technology groups (1997)

High
  Aircraft and Spacecraft (ISIC 353)
  Pharmaceuticals (ISIC 2423)
  Office, accounting, and computing machinery (ISIC 30)
  Radio, TV, and communications equipment (ISIC 32)
  Medical, precision, and optical instruments (ISIC 33)

Medium-high
  Electrical machinery and apparatus (ISIC 31)
  Motor vehicles, trailers, and semi-trailers (ISIC 34)
  Chemicals excluding pharmaceuticals (ISIC 24 less 2423)
  Railroad equipment and transport equipment (ISIC 352 + 359)
  Machinery and equipment (ISIC 29)

Medium-low
  Coke, refined petroleum products, and nuclear fuel (ISIC 23)
  Rubber and plastic products (ISIC 25)
  Other non-metallic mineral products (ISIC 26)
  Building and repairing of ships and boats (ISIC 351)
  Basic metals (ISIC 27)
  Fabricated metal products, except machinery and equipment (ISIC 28)

Low
  Manufacturing; Recycling (ISIC 36–37)
  Wood and products of wood and cork (ISIC 20)
  Pulp, paper, paper products, printing, and publishing (ISIC 21–22)
  Food products, beverages, and tobacco (ISIC 15–16)
  Textiles, textile products, leather, and footwear (ISIC 17–19)
category (see Table 7.1).70 All products with R&D intensities above the industry average, that is, about 3.5 percent of total sales, were considered high-tech. This list excluded products that were not high-tech, even if they were manufactured by high-tech industries. Moreover, the same products were classified similarly for all countries. But there were and still remain two limitations. First, the indicator was not totally quantitative: it was partly based on expert opinion. Second, the data were not comparable with other industrial data. The OECD work on high technology never led to a methodological manual. Several times, among them during the fourth revision of the Frascati manual, a manual devoted to high technology was envisioned,71 but it was never written. Nevertheless, indicators were published regularly in MSTI from 1988, and high-technology intensities across countries were discussed at length in the series Science and Technology Indicators. 70 The SPRU conducted one of the first analyses of this kind for the OECD in the 1960s. See: OECD (1970), Gaps in Technology, op. cit., pp. 211 and 231–232. 71 OECD (1991), Future Work on High Technology, DSTI/STII/IND/WP9 (91) 7; OECD (1991), High Technology Products, DSTI/STII (91) 35; OECD (1992), High Technology Industry and Products Indicators: Preparation of a Manual, DSTI/STII/IND/WP9 (92) 6; OECD (1993), Seminar on High Technology Industry and Products Indicators: Preparation of a Manual, DSTI/EAS/IND/STP (93) 2.
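As a rough illustration of how a classification by R&D intensity works, the sketch below computes a direct intensity (R&D expenditure over output) for a few sectors and assigns one of the four groups of Table 7.1. The cut-off values and the figures are invented for the example; the OECD's own groupings were derived from pooled data for ten countries, weighted by output at purchasing power parities, and supplemented with indirect (embodied) intensities.

# Illustrative classification of industries by direct R&D intensity.
# Thresholds and figures are hypothetical, not the OECD's official cut-offs.
def rd_intensity(rd_expenditure, output):
    """Direct R&D intensity: R&D expenditure as a share of output."""
    return rd_expenditure / output

def technology_group(intensity):
    if intensity >= 0.05:
        return "high"
    if intensity >= 0.03:
        return "medium-high"
    if intensity >= 0.01:
        return "medium-low"
    return "low"

industries = {                       # (R&D expenditure, output), invented figures
    "Pharmaceuticals": (1_050, 10_000),
    "Machinery and equipment": (350, 10_000),
    "Basic metals": (240, 20_000),
    "Food products, beverages, and tobacco": (60, 20_000),
}
for name, (rd, output) in industries.items():
    share = rd_intensity(rd, output)
    print(f"{name}: {share:.1%} -> {technology_group(share)}")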
Controlling the instrument Besides discussing indicators of economic output of S&T, the OECD and member countries regularly discussed university output indicators in the 1980s and 1990s.72 Yet, no bibliometric indicators (publication and citation counts) were ever developed. A manual was planned73 and drafted,74 but transformed into a working paper because its structure and coverage did not bear any relationship to a manual.75 “The OECD has never been a primary actor in this field and is unlikely to do so in the future.”76 The participants in the 1980 conference on output indicators were nevertheless enthusiastic about bibliometrics. They agreed that bibliometric indicators could yield reliable information, but that care was needed in interpreting them.77 The initial enthusiasm soon faded, however. In a document synthesizing its work on output indicators and distributed to member countries in 1983, the OECD Secretariat emphasized two main areas of uncertainty or disagreement about bibliometrics: conceptual problems (indicators of what?) and relevance problems (indicators for whom?).78 These uncertainties would dominate the workshop on higher education held in 1985, in which the OECD announced that it would not set up a bibliometric database because of the indicators’ costs and limitations.79 The position of most countries, including the OECD in general, was that offered by J. Moravcsik: Bibliometrics is, by no means, an uncontroversial method of assessing the output and impact of research. The methods have been criticized, and sometimes rightly so, for being based on over-simple assumptions and for failing to take into account international biases in the data. (OECD (1985), Summary Record of the OECD Workshop on Science and Technology Indicators in the Higher Education Sector, DSTI/SPR/85.60, p. 27) 72 OECD (1980), Preliminary Report of the Results of the Conference on Science and Technology Indicators, op. cit., pp. 25–30; OECD (1985), Summary Record of the OECD Workshop on Science and Technology Indicators in the Higher Education Sector, DSTI/SPR/85.60; OECD (1987), Review of the Committee’s Work Since 1980, DSTI/SPR/87.42; S. Herskovic (1998), Preliminary Proposal for the Development of Standardized International Databases on Scientific Publications and Patent Applications, DSTI/EAS/STP/NESTI/RD (98) 11. 73 OECD (1991), Record of the NESTI Meeting, DSTI/STII/STP/NESTI/M (91) 1; OECD (1997), Record of the NESTI Meeting, DSTI/EAS/STP/NESTI (97) 1. 74 OECD (1995), Understanding Bibliometrics: Draft Manual on the Use of Bibliometrics as Science and Technology Indicators, DSTI/STP/NESTI/SUR (95) 4. 75 Y. Okubo, Bibliometric Indicators and Analysis of Research Systems: Methods and Examples, OECD/GD (97) 41. 76 OECD (1994), Statistics and Indicators for Innovation and Technology, DSTI/STP/TIP (94) 2, p. 11. 77 OECD (1980), Preliminary Report of the Results of the Conference on Science and Technology Indicators, op. cit., p. 29. 78 OECD (1983), State of Work on R&D Output Indicators, op. cit., pp. 22–23. 79 OECD (1985), Summary Record of the OECD Workshop on Science and Technology Indicators in the Higher Education Sector, op. cit., p. 35.
What were those limitations? The 1989 supplement to the Frascati manual, which was concerned with problems of measurement in the higher education sector, listed at length the (supposed) limitations that prevented member countries from getting involved in bibliometrics:80
● orally communicated ideas between scientists are not included;
● analyses are based on scientific journals to the exclusion of books;
● documents can be cited for reasons other than their positive influence on research;
● most far-reaching ideas soon cease to be cited formally;
● some scientists and researchers cite their own papers excessively;
● non-English language publications are cited less frequently than those published in English;
● there is a time lag between the publication of results and the citation of the article;
● scientists and researchers with the same name can often be confused;
● there is a bias in favor of the first author of multi-authored publications.
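To make concrete what publication and citation counting involves, the sketch below tallies counts from a small set of invented records and, optionally, skips self-citations (one of the limitations listed above). The record structure, field names, and figures are all hypothetical; real bibliometric work rests on curated databases and far more elaborate cleaning and author-name disambiguation.

# Toy illustration of publication and citation counts; records and field
# names are invented, not any official OECD or ISI data structure.
papers = [
    {"id": "P1", "authors": ["Smith"], "country": "UK", "cites": ["P2"]},
    {"id": "P2", "authors": ["Dupont"], "country": "FR", "cites": []},
    {"id": "P3", "authors": ["Smith"], "country": "UK", "cites": ["P1", "P2"]},
]

def publication_counts(records):
    """Number of papers per country (a basic publication-count indicator)."""
    counts = {}
    for paper in records:
        counts[paper["country"]] = counts.get(paper["country"], 0) + 1
    return counts

def citation_counts(records, exclude_self=True):
    """Times each paper is cited, optionally ignoring self-citations."""
    by_id = {paper["id"]: paper for paper in records}
    counts = {paper["id"]: 0 for paper in records}
    for citing in records:
        for cited_id in citing["cites"]:
            cited = by_id.get(cited_id)
            if cited is None:
                continue  # citation to a paper outside the toy database
            if exclude_self and set(citing["authors"]) & set(cited["authors"]):
                continue  # shared author: treat as a self-citation and skip
            counts[cited_id] += 1
    return counts

print(publication_counts(papers))   # {'UK': 2, 'FR': 1}
print(citation_counts(papers))      # {'P1': 0, 'P2': 2, 'P3': 0}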
Were these limitations really the heart of the matter? For example, the OECD constantly reminded its audience that no indicators were without limitations, but that they should nevertheless be used. In the case of patents, the OECD wrote: “There are obvious limitations (. . .). Patent data, however, are no different from other science and technology indicators in this regard” and “patent data should be part of [the] mix.”81 Similarly, concerning the highly contested high-technology indicator, the OECD once stated: Obviously, one has to be very careful in making policy conclusions on the basis of statistically observed relationships between technology-intensity measures and international competitiveness. Yet, as emphasized by one participant, to deny that policy conclusions can be made is to ignore some of the most challenging phenomena of the last decade. (OECD (1980), Preliminary Report of the Results of the Conference on Science and Technology Indicators, op. cit., p. 18) How, then, could the limitations of bibliometrics be perceived as any worse than the limitations of other indicators? Certainly, “[b]ecause of its basic nature, the results or [university] outputs are difficult to quantify, and are largely in the form of publications and reports,” states the OECD.82 Since researchers essentially produce knowledge, “[o]utputs of R&D are not immediately identifiable in terms 80 OECD (1989), The Measurement of Scientific and Technical Activities: R&D Statistics and Output Measurement in the Higher Education Sector, Paris, p. 50–51. 81 OECD (1980), Preliminary Report of the Results of the Conference on Science and Technology Indicators, op. cit., pp. 14–15. 82 OECD (1989), The Measurement of Scientific and Technical Activities: R&D Statistics and Output Measurement in the Higher Education Sector, op. cit., p. 12.
of new products or systems but are more vague and difficult to define, measure and evaluate."83 But the OECD nevertheless recommends that "[o]utputs of research should be measured wherever possible, bearing in mind the limitations of the methods being used"84 and by "drawing upon, whenever possible, not one isolated indicator, but several."85 The Frascati manual even states that "we are more interested in R&D because of the new knowledge and inventions that result from it than in the activity itself" (p. 18). Moreover, Christopher Freeman once suggested that "if we cannot measure all of [the information generated by R&D activities] because of a variety of practical difficulties, this does not mean that it may not be useful to measure part of it."86 There are two explanations for the absence of output data on university research at the OECD.87 The first is that several factors combined to focus output indicators on the economic dimension of S&T. One such factor was the organization's mission, namely economic co-operation and development. It was therefore natural that most of the OECD work dealt with indicators of an economic nature. Second, economists have been the main producers and users of S&T statistics and indicators, and have constituted the bulk of national and OECD consultants because they were, until recently, among the only analysts who worked systematically with statistics: one would have thought that political science, not economics, would have been the home discipline of policy analysis. The reason it was not was that the normative structure of political science tended to be squishy, while economics possessed a sharply articulated structure for thinking about what policy ought to be. (R. R. Nelson (1977), The Moon and the Ghetto, New York: Norton, p. 30) Third, the economic dimensions of reality are the easiest to measure. S&T, particularly university research, involve invisibles and intangibles that still challenge statisticians. Last of all, at the time the OECD started working on output indicators, the state of the art of bibliometrics was not what it is now. It was then a new and emerging specialty that was widely criticized—by scientists among others. But the main reason the OECD did not get involved in bibliometrics had to do with control of the field of measurement, and this explains why member countries
83 Ibid., p. 13.
84 Ibid., p. 15.
85 Ibid., p. 47.
86 C. Freeman (1970), Measurement of Output of Research and Experimental Development, Paris: UNESCO, p. 11.
87 National statistical organizations are not alone in debating the value of bibliometrics. The field has yet to win the acceptance of the university community itself (Science (1991), No Citation Analyses Please, We're British, 252, 3 May, p. 639; Science (1993), Measure for Measure in Science, 14 May, pp. 884–886; Nature (2002), The Counting House, 415, 14 February, pp. 726–729), and university researchers interested in the sociology of science (Edge, D. (1979), "Quantitative Measures of Communication in Science: A Critical Review," History of Science, 27, pp. 102–134; Woolgar, S. (1991), "Beyond the Citation Debate: Towards a Sociology of Measurement Technologies and their Use in Science Policy," Science and Public Policy, 18 (5), pp. 319–326).
prevented the OECD from developing bibliometric indicators. History has shown how "the census became authoritative in part through efforts by state officials to defeat or limit the scope of other ways of determining population."88 National statistical offices made similar efforts to reject outside data on S&T, including bibliometric data. First, official documents not uncommonly contained negative remarks about outsider data, like the following: "their methodology may not come up to the standards."89 It is worth bearing in mind, however, that these standards were solely determined by the statistical agencies themselves. Second, output measurements were often criticized for being "adapted from existing data sources, [for being] themselves proxies and [for giving] only partial measures of R&D output."90 As the Frascati manual reported on these indicators: "one of the main problems is due to the type of data used. Data used to measure output have not, generally, been collected for that purpose and, thus, it is necessary to adjust them considerably (. . .)."91 In fact, the data were collected by different organizations—central banks, patent offices, trade departments, private firms, and academics.92 In general, national statistical offices explicitly rejected kinds of data, sources, and activities other than the survey. For example, although directories of institutions are essential for conducting surveys, the task of compiling them is no longer (but was for some time) considered part of a statistical agency's mandate, because other departments now collect this information.93 Similarly, benchmarking is said to be "an exercise neither for a statistical office nor a statistical division."94 Finally, databases on new products and processes were rejected in the 1980s as a basis for measuring innovation; the survey of innovative activities was preferred.95 In sum, the lesson is clear: the main tool of official statisticians is not simply the survey per se; it is the official survey that they control. As one participant in an NSF/OECD workshop noted in 1997: "The [R&D] questionnaire implicitly is built upon the basic assumption that the source data under science and technology indicators come fundamentally from surveys linked to the Statistical Office."96
Conclusion Input indicators went hand in hand with early science policies that were devoted to funding research for its own sake: “during the period when it was believed that
88 B. Curtis (2000), The Politics of Population, University of Toronto Press, p. 32. 89 OECD (1994), Statistics and Indicators for Innovation and Technology: Annex I, DSTI/STP/TIP (94) 2/ANN 1, p. 6. 90 Ibid., p. 10. 91 OECD (1981), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of R&D, op. cit., p. 131. 92 Ibid., p. 132. 93 H. Stead, The Development of S&T Statistics in Canada: An Informal Account, Montreal: CSIIC, http://www.inrs-ucs.uquebec.ca/inc/CV/godin/Stead.pdf. 94 OECD (2001), Summary Record of the NESTI Meeting, DSTI/EAS/STP/NESTI/M (2001) 1, p. 6. 95 See: Chapter 8. 96 OECD (1998), The Use of S&T Indicators in Policy: Analysing the OECD Questionnaire, DSTI/EAS/STP/NESTI/RD (98) 6.
fundamental research was the pivot of economic and industrial development and, therefore, merited preferential treatment, the only conceivable research policy was to expand."97 Therefore, input indicators were developed, so it was said, to enlighten government decisions concerning funding. The second period of S&T measurement was marked by the development of output indicators, with the aim of evaluating S&T as well as the results of government investments in these activities: "today, it is clear that it is neither absolute expenditures nor percentages which matter but the purposes for which human and financial resources are used."98 "National science policies are going to be more and more concerned with oriented scientific research."99 The OECD developed three indicators to this end: patents, technological balance of payments, and high-technology trade. Innovation indicators were added in the early 1990s100 and, more recently, studies have begun to appear on the links between science, technology, and productivity.101 All in all, however, input indicators still remain the favorites. A recent OECD study revealed that member countries used output indicators less frequently than any other type of indicator.102 Whereas indicators based on input (GERD) got over 80 percent favorable responses, indicators based on patents, technological balance of payments, and high-technology trade balance got less than 50 percent. Keeping to the OECD vocabulary, I have used the term "output" throughout this chapter. But "output" has often been understood to mean "impact" (or outcomes) at the OECD. There was always confusion between the outputs and impacts of S&T.103 While outputs are the direct results or products of research—production or mere volume of output as economists call them—impacts are the effects of research on society and the economy. But both were, more often than not, amalgamated under the term output. Patents are real output indicators, but other OECD indicators are in fact measures of the economic impacts (or outcomes) of S&T: TBP, high-technology trade and productivity. Be that as it may, the OECD never got involved in bibliometrics, although one can find some bibliometric statistics now and again in several reports.104 And
97 OECD (1974), The Research System, 3, op. cit., p. 172.
98 Ibid., p. 175.
99 Ibid., p. 187.
100 See: Chapter 8.
101 B. Godin (2004), The New Economy: What the Concept Owes to the OECD, Research Policy, 33, pp. 679–690.
102 OECD (1998), How to Improve the Main Science and Technology Indicators: First Suggestions from Users, DSTI/EAS/STP/NESTI/RD (98) 9.
103 For exceptions, but without any consequences, see OECD (1980), Science and Technology Indicators: A Background for the Discussion, DSTI/SPR/80.29, Paris; C. Falk (1984), Guidelines for Science and Technology Indicators Projects, Science and Public Policy, February, pp. 37–39.
104 See: OECD (1965), Chapters From the Greek Report, DAS/SPR/65.7; OECD (1968), Fundamental Research and the Universities: Some Comments on International Differences, Paris; OECD (1988), Science and Technology Policy Outlook, Paris, pp. 45ss; OECD (1999), Science, Technology and Industry Scoreboard, Paris, p. 89; OECD (2001), Science, Technology and Industry Scoreboard, Paris, pp. 63, 113. The European Union's European Report on S&T Indicators also contains bibliometric indicators.
although it did conduct some discussions on the evaluation of university research,105 the OECD never developed bibliometric indicators. Nor did it produce indicators on the social impacts of research, although it conducted several qualitative exercises on the social assessment of technology and its methodologies. It therefore seems clear that economic and industrial preoccupations drove the measurement of S&T. This was a constant trend throughout the 1961–2000 period. Most member countries never conducted regular surveys of either university or government R&D, but they did conduct industrial R&D surveys on a systematic basis. Now as then, university data and government activities are, for the most part, based on estimates and budget documents, respectively.106 Contrary to what had originally been planned at the OECD, there are still no databases for university research,107 nor are there any methodological manuals for bibliometrics. As a result, data on university research are the poorest of all DSTI data. Besides the absence of R&D surveys and output measurements, there are two historical limitations affecting the available data. First, the difficulty of measuring basic research led more and more countries to abandon the concept as a means of classifying university R&D.108 Second, the functional classification of university research by scientific discipline is unanimously regarded as outdated.109 As the OECD once admitted regarding university R&D, we are in "a vicious circle whereby the lower the quality of the data, the less it can be used in policy and analytical studies and, the less it appears in such studies, the lower the pressure to improve the data."110 Hence the prioritization of economics and industry in statistics and policies.
105 See, for example: OECD (1987), Evaluation of Research, Paris; OECD (1997), The Evaluation of Scientific Research: Selected Experiences, OECD/GD (97) 194. 106 See: Chapter 9. 107 The idea of an Academic Structural Database was considered in the early 1990s, but promptly abandoned. See: OECD (1991), Record of the NESTI Meeting, DSTI/STII/STP/NESTI/M (91) 1; OECD (1992), Combined Program of Work, DSTI/STII/IND/STP/ICCP (92) 1. 108 See: Chapter 14. 109 See: Chapter 10. 110 OECD (1995), NESTI and its Work Relating to the Science System, DSTI/STP/NESTI/SUR (95) 2, p. 9.
8
The rise of innovation surveys Measuring a fuzzy concept
In 1993, twelve European countries conducted the first-ever coordinated survey of innovation activities. This was the second standardized survey of its kind in the history of S&T measurement—the first being the international R&D survey conducted since 1963. The innovation survey was based on the Oslo manual, which OECD member countries had adopted in 1992.1 There have since been three more rounds of innovation surveys. Governments’ interest in innovation dates back to the 1960s, but the OECD countries only began to systematically carry out innovation surveys in the 1980s. There had been some sporadic data collection by government departments (US Department of Commerce), statistical agencies (Statistics Canada), and academic units (Science Policy Research Unit—UK) before then, but rarely in any standardized way. When measuring innovation, governments generally relied on already-available data like patents or industrial R&D expenditures. Eurostat’s and the OECD’s methodological work in the early 1990s marked the beginning of standardization in the field of innovation measurement. The main objective was to develop output indicators, which, as statisticians and policy analysts firmly declared, would measure innovation by measuring the products, processes, and services that arise from innovation activities. But, as this chapter shows, subsequent developments strayed significantly from this initial goal. While official measurements of innovation were, from the start, clearly intended to measure outputs, with time both the national and the international surveys focused instead on the activities. The summary of a recent NSF workshop on innovation in fact reported: “participants generally used the term in a way that focused on the processes and mechanisms for producing commercial applications of new knowledge rather than on the products or outputs from these processes.”2 This chapter examines several reasons for this methodological departure. The first part of the chapter describes early official measurements of innovation by
1 OECD (1991), OECD Proposed Guidelines for Collecting and Interpreting Innovation Data, DSTI/STII/IND/STP (91) 3. Published under catalog number OECD/GD (92) 26. 2 E. V. Larson and I. T. Brahmakulam (2001), Building a New Foundation for Innovation: Results of a Workshop for the NSF, Santa Monica: RAND, p. xii.
way of proxies: patents and industrial R&D. The second part discusses two competing approaches for measuring innovation proper: innovation as an output and innovation as an activity. The last part examines how the latter approach won out and became standardized at the international level.
R&D as a legitimate proxy As early as 1934, J. Schumpeter defined innovation as consisting of any one of the following five phenomena:3 (1) introduction of a new good; (2) introduction of a new method of production; (3) opening of a new market; (4) conquest of a new source of supply of raw materials or half-manufactured goods; and (5) implementation of a new form of organization. Of all the S&T statistics that were carried out before the 1970s, however, very few concentrated on innovation as defined by Schumpeter. Before the 1970s, innovation was usually measured via proxies, the most important of which were patents and industrial expenditures on R&D. The extensive use of patents as an indicator of innovation was pioneered by Jacob Schmookler in the 1950s. People soon began to realize, however, that patents actually measure invention, not innovation.4 Fortunately, a second source of data became widely available at that time. In the mid-1960s, R&D surveys began to be conducted in a systematic way, and industrial R&D was concomitantly used as a proxy for measuring innovation. One can find precursors to this practice of using R&D to measure innovation going back to the 1930s. In 1933, M. Holland and W. Spraragen from the US National Research Council (NRC) produced the first innovation statistics: “The inquiry was designed to bring out principally the comparative amounts spent for research in 1929 and 1931; also the relation of these expenditures to changes in volumes of sales, and the relative effectiveness of industrial laboratories in leading commercial development.”5 Holland and Spraragen’s study of industrial R&D laboratories in the United States showed an increase in research devoted to the development or improvement of new products as opposed to the reduction of production costs.6 Over 90 percent of firms reported having produced new products that had been commercialized during the previous two years. The study also compiled a list of new products that these laboratories were investigating.7 The next wave of innovation statistics would occur some thirty years later in industrial surveys, such as those conducted by McGraw-Hill in the United States,
3 J. A. Schumpeter (1934), The Theory of Economic Development, London: Oxford, 1980, p. 66. 4 National Bureau of Economic Research (1962), The Rate and Direction of Inventive Activity: Economic and Social Factors, New York: Arno Press. We owe to Schumpeter, op. cit. the distinction between invention, (initial) innovation, and (innovation by) imitation. 5 M. Holland and W. Spraragen (1933), Research in Hard Times, op. cit., p. 1. 6 Ibid., p. 5. 7 Ibid., table 11, no page number.
which asked questions on the purpose of R&D (products or processes),8 or the Federation of British Industries (FBI), which conducted a survey of industrial R&D with questions on innovations and their commercial use.9 British firms were asked to estimate the expenditures and personnel (in man-hours) allocated to innovation activities for the purpose of minor improvements, major improvements, new products or technical services. The FBI reported that 37 percent of industrial R&D was directed toward new products and 24 percent toward major improvements. Despite their titles, early official statistical analyses and policy documents on innovation, like Technological Innovation in Britain (1968) by the Advisory Council for Science and Technology, mostly measured R&D rather than innovation. Similarly, the first OECD documents on innovation relied chiefly on industrial R&D data.10 In 1976, then, K. Pavitt, acting as consultant to the OECD, suggested that the organization thereafter measure innovation activities proper: Statistics on R&D have inherent limitations (. . .). They do not measure all the expenditures on innovative activities (. . .). In particular, they do not measure the expenditures on tooling, engineering, manufacturing and marketing startup that are often necessary to turn R&D into economically significant technical innovations. Nor do they measure the informal and part-time innovative activities that are undertaken outside formal R&D laboratories (. . .). They do not indicate the objectives of R&D activities, for example, products or processes (. . .). They do not measure outputs, either in terms of knowledge, or in terms of new or better products and production processes.11 (OECD (1976), The Measurement of Innovation-Related Activities in the Business Enterprise Sector, DSTI/SPR/76.44, pp. 2–3) Soon, everyone admitted the deficiency of the indicator. For example, an OECD ad hoc group on science, technology, and competitiveness stated: “innovation cannot be reduced to nor does it solely arise from R&D,” and admitted that “it is probably quite as erroneous and misleading for appropriate and adequate policy making for technology and competitiveness to equate R&D with innovative capacity.”12
8 McGraw-Hill (1971), Business’ Plans for Research and Development Expenditures, New York. 9 Federation of British Industries (1961), Industrial Research in Manufacturing Industry: 1959–1960, London, pp. 83ss. 10 OECD (1966), Government and Technical Innovation, Paris. 11 In 1965, the OECD had already distanced itself from Schumpeter’s three-part definition of innovation (invention, innovation, imitation): “innovation should be interpreted more broadly to include all related activity resulting in improvements in processes and products (. . .)”. See: OECD (1965), The Factors Affecting Technical Innovation: Some Empirical Evidence, op. cit., p. 5. 12 OECD (1984), Science, Technology and Competitiveness: Analytical Report of the Ad Hoc Group, STP (84) 26, p. 40.
Measuring innovation proper
Drawing upon a review of the literature and the results of recent surveys, Pavitt suggested including questions in national R&D surveys on patents, technology transfer, and innovation activities. With regard to innovation activities specifically, he suggested asking for the percentage of the company's activities devoted to innovation, the expenditures on industrial innovation, and a list of significant new products and processes that the company had introduced.13 Pavitt was in fact suggesting the measurement of innovation as both an activity (the percentage of the company's activities devoted to innovation) and an output (the list of significant new products and processes). Innovation is, indeed, a concept with multiple meanings. For some, it refers to products and processes coming out of R&D and related activities, and early measurements of innovation proper were clearly intended to measure output coming from these activities. For others, the concept refers to the activities themselves. With time, national and international surveys focused on innovation as an activity.

Innovation as output
At the official level, it was the US National Science Foundation (NSF) that started measuring innovation by using the output approach: identifying and counting commercialized technological innovations (and the characteristics of the firms that produced them). This orientation was probably a spin-off from earlier studies contracted to A. D. Little14 and to E. Mansfield, who was associate professor of Economics at Carnegie Institute of Technology,15 and the well-known TRACES study on the relationships between S&T.16 The first large NSF innovation study was conducted by the US National Planning Association between 1963 and 1967 under the direction of S. Myers from the Institute of Public Administration in Washington DC.17 The NSF published the results in 1969.18 The study examined
13 OECD (1976), The Measurement of Innovation-Related Activities in the Business Enterprise Sector, op. cit. 14 Arthur D. Little Inc. (1963), Patterns and Problems of Technical Innovation in American Industry, report submitted to the NSF, C-65344, Washington. 15 For summaries, see: NSF (1961), Diffusion of Technological Change, Reviews of Data on R&D, 31, October, NSF 61-52; NSF (1962), Innovation in Individual Firms, Reviews of Data on R&D, 34, June, NSF 62-16; NSF (1963), Enquiries into Industrial R&D and Innovation, Reviews of Data on R&D, 38, March, NSF 63-12; E. Mansfield (1970), Industrial Research and Technological Innovation: An Econometric Analysis, New York: Norton; E. Mansfield et al. (1971), Research and Innovation in the Modern Corporation, New York: Norton. 16 IIT Research Institute (1968), Technology in Retrospect and Critical Events in Science (TRACES), Washington: NSF; Batelle Columbus Labs (1973), Interactions of Science and Technology in the Innovative Process: Some Case Studies, Washington: NSF. 17 S. Myers, E. B. Olds, and J. F. Quinn (1967), Technology Transfer and Industrial Innovation, Washington: National Planning Association. 18 S. Myers and D. G. Marquis (1969), Successful Industrial Innovation: A Study of Factors Underlying Innovation in Selected Firms, NSF 69-17, Washington.
567 technical innovations, most of them minor, that were identified by 121 firms in five manufacturing industries. Interviews were conducted with individuals who had been directly involved in the innovation. The report discussed the characteristics of the firms and examined, among other things, the sources of the innovations (original or adopted), their nature (products or processes), their costs, and their impacts on the firms’ production processes. In 1974, the NSF commissioned Gellman Research Associates to conduct a second innovation survey based on the same approach. The study examined 500 major product innovations that were introduced during the 1953–73 period. It considered the time between invention and innovation, the rate of return on the investment, the “radicalness” of the innovations and the size and R&D intensity of the companies that produced them.19 The NSF included the results of the study in the 1975 and 1977 editions of Science Indicators (SI ).20 It was at about the same time that interest in measuring innovation at the OECD really began. With its 1968 report Gaps in Technology, innovation performance became the key for explaining differences between the United States and Western Europe: “The performance of a country in technological innovation has been defined as the rate at which new and better products and production processes have been introduced and diffused throughout its economy.”21 Two aspects of innovation were measured: (1) performance in terms of being first to commercialize new products and processes (performance in originating innovations) and (2) performance in terms of the level and rate of increase in the use of new products and processes (performance in diffusing innovations). The data relied on 140 significant innovations since 1945 in the basic metals, electrical and chemical industries. The report indicated that American firms were the most innovative: approximately 60 percent of the 140 innovations came from the United States. It concluded that: “United States firms have turned into commercially successful products the results of fundamental research and inventions originating in Europe. Few cases have been found of the reverse process.”22 The report was based on data collected from nine OECD sector studies,23 but some of the data had also been obtained from national governments, published sources, experts, and industrialists. The Technological Gaps report was followed by a second study a few years later by K. Pavitt and S. Wald titled: The Conditions for Success in Technological Innovation.24 The study noted the country of origin of 110 of
19 Gellman Research Associates (1976), Indicators of International Trends in Technological Innovation, Washington: NSF. 20 NSF (1975), Science Indicators 1974, Washington, pp. 99–110; NSF (1977), Science Indicators 1976, Washington, pp. 115–127. 21 OECD (1968), Gaps in Technology: General Report, Paris, p. 14. 22 Ibid., p. 17. 23 Six sector studies were undertaken by the Committee for Science Policy, and three more by the Committee for Industry: scientific instruments, electronic components, electronic computers, machine tools, plastics, fibers, pharmaceuticals, iron and steel, and non-ferrous metals. 24 OECD (1971), The Conditions for Success in Technological Innovation, Paris.
the most significant innovations identified in the Gaps study. The United States led with 74 innovations, followed by the United Kingdom (18) and Germany (14).25

Innovation as activity
Both the NSF and the OECD measured innovation as an output rather than as an activity. Innovation was measured, among other things, on the basis of technological products and processes that had originated from innovative activities. Both organizations soon stopped using the output approach, however. The NSF instead turned to supporting consultants26 and academics27 in developing innovation indicators, based on an activities approach that consisted of asking firms about their overall innovation activities, not their specific innovations.28 The NSF carried out two surveys on innovation activities: one in 1985 of 620 manufacturing companies,29 and another in 1993.30 The Census Bureau conducted the latter as a pilot study of 1,000 manufacturing firms (and one service-sector firm). The survey revealed that a third of the firms introduced a product or process during the 1990–92 period. The 1993 NSF survey was part of the OECD/European Community Innovation Survey (CIS) efforts (see below). Before the 1990s, however, the NSF had been completely oblivious to the existence of similar surveys in Europe.31 Germany was in fact already active in this field, having initiated such innovation surveys as early as 1979.32 Italy and other European countries followed suit in the
25 One more author using the same data was J. Ben-David (1968), Fundamental Research and the Universities: Some Comments on International Differences, Paris: OECD, pp. 20–21. 26 W. M. Hildred and L. A. Bengston (1974), Surveying Investment in Innovation, Washington: NSF. 27 C. T. Hill, J. A. Hansen, and J. H. Maxwell (1982), Assessing the Feasibility of New Science and Technology Indicators, Center for Policy Alternatives, MIT; C. T. Hill, J. A. Hansen, and J. I. Stein (1983), New Indicators of Industrial Innovation, Center for Policy Alternatives, MIT; J. A. Hansen, J. I. Stein, and T. S. Moore (1984), Industrial Innovation in the United States: A Survey of Six Hundred Companies, Center for Technology and Policy, Boston University; J. A. Hansen (1987), International Comparisons of Innovation Indicator Development, Washington: NSF. 28 On the accounting problems of the time related to this way of measuring innovation, see: S. Fabricant et al. (1975), Accounting by Business Firms for Investment in R&D, Study conducted for the NSF, New York University, section III. 29 A Survey of Industrial Innovation in the United States: Final Report, Princeton: Audits and Surveys— Government Research Division, 1987; NSF (1987), Science and Engineering Indicators 1987, Washington, pp. 116–119. 30 NSF (1996), Science and Engineering Indicators 1996, Washington, pp. 6-29–6-30. 31 J. A. Hansen (2001), Technology Innovation Indicator Surveys, in J. E. Jankowski, A. N. Link, and N. S. Vonortas (eds), Strategic Research Partnerships, Proceeding of an NSF Workshop, NSF 01-336, Washington, p. 224. 32 L. Scholz (1980), First Results of an Innovation Test for the Federal Republic of Germany, STIC/80.40; L. Scholz (1986), Innovation Measurement in the Federal Republic of Germany, Paper presented at the OECD Workshop on Innovation Statistics; L. Scholz (1988), The Innovation Activities of German Manufacturing Industry in the 1980s, DSTI/IP/88.35; L. Scholz (1992), Innovation Surveys and the Changing Structure of Investment in Different Industries in Germany, STI Review, 11, December, pp. 97–116.
mid-1980s.33 What characterized the European surveys was their use of the activities approach: they asked firms about their overall innovation activities rather than about their specific innovative output.34 The only other official efforts to measure innovation within an output approach were the US Small Business Administration's study of 8,074 innovations that were commercially introduced in the United States in 1982,35 and the (irregular) surveys on the diffusion of advanced technologies carried out in the United States, Canada, and Australia.36 The reorientation of innovation statistics towards activities owes its origin to the publication of the Charpie report by the US Department of Commerce in 1967.37 In fact, the Charpie report solved one of the main methodological problems confronting S&T statisticians—how to measure output: "There exist no coherent and universally accepted body of economic theory, or of statistics, which enables a simple and uncontroversial measurement of performance in technological innovation (. . .)," stated the OECD report on technological gaps. "Ideally, these comparisons should be based on an identification of the most significant innovations" (output approach).38 But the report identified three limitations to such a methodology: a limited and biased sample, no assessment of the relative importance of innovations, and the difficulty of clearly identifying the country of
33 D. Archibugi, S. Cesaratto, and G. Sirilli (1987), Innovative Activity, R&D and Patenting: the Evidence of the Survey on Innovation Diffusion in Italy, STI Review, 2, pp. 135–150; D. Archibugi, S. Cesaratto, and G. Sirilli (1991), Sources of Innovation Activities and Industrial Organization in Italy, Research Policy, 20, pp. 299–314. For other countries, see the special issue of STI Review (1992), Focus on Innovation, 11, December. 34 For good summaries, see: J. A. Hansen (1999), Technology Innovation Indicators: A Survey of Historical Development and Current Practice, SRI; J. A. Hansen (1992), New Indicators of Industrial Innovation in Six Countries: A Comparative Analysis, DSTI/STII/STP/NESTI/RD (92) 2; J. A. Hansen (1987), International Comparisons of Innovation Indicator Development, Washington: NSF; J. A. Hansen (1986), Innovation Indicators: Summary of an International Survey, OECD Workshop on Innovation Statistics, Paris, December 8–9; OECD (1982), Patents, Invention and Innovation, DSTI/SPR/82.74, pp. 34–38; J. M. Utterback (1974), Innovation in Industry and the Diffusion of Technology, Science, 183, pp. 620–626. 35 The Futures Group (1984), Characterization of Innovations Introduced on the US Market in 1982, Report prepared for the US Small Business Administration, Department of Commerce, Washington: NTIS. 36 For the United States, see: NSF (1991), Science and Engineering Indicators 1991, Washington, pp. 154–157; NSF (1996), Science and Engineering Indicators 1996, Washington, pp. 6-24–6-27; NSF (1998), Science and Engineering Indicators 1998, Washington, chapter 8; NSF (2000), Science and Engineering Indicators 2000, Washington, Chapter 9; For Canada: Statistics Canada (1989), Survey of Manufacturing Technology: The Leading Technologies, Science Statistics, 88-001, 13 (9), October; Y. Fortier and L. M. Ducharme (1993), Comparaison de l’utilisation des technologies de fabrication avancées au Canada et aux États-Unis, STI Review, 12, pp. 87–107; For Australia: B. Pattinson (1992), Survey of Manufacturing Technology—Australia, DSTI/STII/STP/NESTI (92) 8. 37 US Department of Commerce (1967), Technological Innovation: Its Environment and Management, USGPO, Washington. 38 OECD (1970), Gaps in Technology: Comparisons Between Member Countries in Education, R&D, Technological Innovation, International Economic Exchanges, Paris, pp. 183–184.
origin.39 Conclusions were therefore rapidly drawn: "We are not convinced by the various attempts to measure trends in the output of innovations and, in particular, the output of major 'epoch-making' innovations."40 The Charpie report solved the problem when it suggested continuing with the current philosophy of measuring inputs devoted to activities—as the Frascati manual had—rather than counting products and processes coming out of these activities. The report defined and measured innovation in terms of five categories of activities: R&D, design engineering, tooling and engineering, manufacturing, and marketing. The report found that only 5–10 percent of innovation costs could be attributed to R&D, which meant that R&D was not a legitimate proxy for measuring innovation. The statistics were soon challenged,41 but the report influenced subsequent innovation surveys worldwide. Canada was the first country to produce this type of survey. Statistics Canada conducted three innovation surveys and tested two approaches to measuring innovation in the early 1970s.42 The first approach (1971) expanded the regular industrial R&D survey to cover innovation: 97 firms were asked how much they spent on innovation activities in general, as broadly defined by the categories of the Charpie report. The other approach (1973) was project-based, and collected data on 202 specific projects. But since most firms kept few records of their innovation activities and projects, neither approach was able to produce conclusive results.
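To see why this finding undermines R&D as a proxy, the sketch below decomposes one firm's innovation costs across the five activity categories named above. The figures are invented for illustration; only the rough order of magnitude of the R&D share follows the report's 5–10 percent finding.

# Illustrative breakdown of one firm's innovation costs across the five
# activity categories of the 1967 report (all figures are invented).
costs = {
    "R&D": 80,
    "Design engineering": 150,
    "Tooling and engineering": 400,
    "Manufacturing": 250,
    "Marketing": 120,
}

total = sum(costs.values())
for activity, amount in costs.items():
    print(f"{activity}: {amount / total:.0%} of total innovation costs")

# R&D comes out at roughly 8 percent of the total here, so a survey that
# captured only R&D spending would miss most of the cost of innovating.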
Internationalizing the official approach It took some time before the OECD turned to the activity approach. As early as the first OECD ministerial meeting on science in 1963, ministers had asked the organization to intensify its work on the contribution of science to the economy. The demand led, among other things, to sector reviews43 and policy discussions on innovation,44 and to a specific definition of innovation as follows: “Technical innovation is the introduction into a firm, for civilian purposes, of worthwhile new or improved production processes, products or services which have been made possible by the use of scientific or technical knowledge.”45 But when the
39 OECD (1970), Gaps in Technology, op. cit., p. 191. 40 OECD (1980), Technical Change and Economic Policy, Paris, p. 47. 41 E. Mansfield et al. (1971), Research and Innovation in the Modern Corporation, New York: Norton and Co; H. Stead (1976), The Costs of Technological Innovation, Research Policy, 5: 2–9. 42 Statistics Canada (1975), Selected Statistics on Technological Innovation in Industry, 13-555. 43 Sector reviews were studies of selected sectors of science with implications for economic growth and development (e.g. mineral prospecting, chemical engineering, metal physics, automatic control, biological sciences, operational research). See: OECD (1963), Sector Reviews in Science: Scope and Purpose, SR (63) 32. The first sector review was conducted by C. Freeman, in collaboration with J. Fuller and A. Young (1963), The Plastics Industry: A Comparative Study of Research and Innovation, National Institute for Economic and Social Research, London. 44 OECD (1965), The Factors Affecting Technical Innovation: Some Empirical Evidence, DAS/SPR/65.12; OECD (1966), Government and Technical Innovation, op. cit. 45 OECD (1966), Government and Technical Innovation, op. cit., p. 9.
OECD finally included the concept of innovation in the Frascati manual for the first time (1981), it excluded innovation activities from the measurement of R&D because they were defined as related scientific activities (RSA): Scientific and technological innovation may be considered as the transformation of an idea into a new or improved salable product or operational process in industry and commerce or into a new approach to a social service. It thus consists of all those scientific, technical, commercial and financial steps necessary for the successful development and marketing of new or improved manufactured products, the commercial use of new or improved processes and equipment or the introduction of a new approach to a social service. (OECD (1981), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, Paris, p. 15) As national innovation surveys multiplied, however, interest in measuring innovation increased at the OECD. The conference on output indicators held in 1980 discussed recent national innovation surveys and indicators (patents),46 and workshops specifically devoted to innovation were organized in 1982,47 1986,48 and 1994.49 By then, and for a while, patents were categorically recognized as a “poor indicator of a country’s technological position.”50 OECD projects on innovation 1972–1986 1973–1976 1978–1981 1981–1989 1986–1989
Innovation policies Innovation in services Innovation in small and medium sized enterprises (SME) Evaluation and impact of government measures Reviews of innovation policies (France, Ireland, Spain, Yugoslavia, and Western Canada) 1994–1998 Best Practices in Technology Policies 1994–2001 National Systems of Innovation (NSI) Technology Diffusion The real impetus to the OECD’s involvement in innovation surveys was the first international (or regional) collection of data in Scandinavia under the aegis of the Nordic Fund for Industrial Development.51 The Nordic Fund wished to conduct
46 Science and Technology Indicators Conference, September 1980, particularly: C. DeBresson, The Direct Measurement of Innovation, STIC/80.3, OECD. 47 OECD (1982), Patents, Invention and Innovation, DSTI/SPR/82.74. 48 OECD (1986), Workshop on Innovation Statistics. 49 OECD (1996), Innovation, Patents and Technological Strategies, Paris. 50 OECD (1982), Patents, Invention and Innovation, op. cit. p. 28. 51 Nordic Industrial Fund (1991), Innovation Activities in the Nordic Countries, Oslo.
a coordinated set of surveys on innovation activities in four countries (Finland, Norway, Denmark and Sweden) and organized a workshop to that end in 1988.52 The OECD and member countries were invited to attend. The basic paper of the workshop, written by K. Smith from the Innovation Studies and Technology Policy Group (Science Policy Council, Norway), set forth a conceptual framework for developing innovation indicators.53 The framework was revised during a second workshop in Oslo in 198954 and presented to the OECD Group of National Experts on Science and Technology Indicators (NESTI) the same year. To stay ahead of the game, the OECD decided to adopt the Nordic "manual" as its own. NESTI recommended that the Nordic Fund for Industrial Development prepare a draft manual for OECD member countries. K. Smith and M. Akerblom (Central Statistical Office, Finland) drafted the document.55 The draft was discussed and amended by the OECD member countries in 1990 and 1991,56 adopted in 1992,57 revised for the first time in 1996,58 and published in collaboration with Eurostat in 1997. Another revision is planned before the next (the fourth) round of surveys.59 The purpose of the Oslo manual was to harmonize national methodologies60 and collect standardized information on the innovation activities of firms: the type of innovations carried out, the sources of technological knowledge, the expenditures on related activities, the firms' objectives, the obstacles to innovation and the impacts of innovation activities. It concentrated on technological product and process (TPP) innovations: "TPP innovation activities are all those scientific, technological, organizational, financial and commercial steps, including investment in new knowledge, which actually, or are intended to, lead to the implementation of technologically new or improved products or processes."61 A firm was considered innovative if it produced one or more technologically new or significantly improved products or processes in a three-year period.62
52 OECD (1988), Nordic Efforts to Develop New Innovation Indicators, DSTI/IP/88.25.
53 K. Smith (1989), New Innovation Indicators: Basic and Practical Problems, DSTI/IP/89.25.
54 The main revisions dealt with the problems of identifying and measuring novelty.
55 OECD (1990), Preliminary Version of an OECD Proposed Standard Practice for Collecting and Interpreting Innovation Data, DSTI/IP/90.14.
56 OECD (1991), Compte rendu succinct de la réunion d'experts nationaux pour l'examen du projet de Manuel Innovation, DSTI/STII/IND/STPM (91) 1.
57 OECD (1991), OECD Proposed Guidelines for Collecting and Interpreting Innovation Data (Oslo Manual), op. cit.
58 OECD (1996), Summary Record of the Meeting Held on 6–8 March 1996, DSTI/EAS/STP/NESTI/M (96) 1.
59 B. Pattinson (2001), The Need to Revise the Oslo Manual, DSTI/EAS/STP/NESTI (2001) 9; OECD (2001), Summary Record of the NESTI Meeting, DSTI/EAS/STP/NESTI/M (2001) 1.
60 For differences between countries, see: P. Kaminski (1993), Comparison of Innovation Survey Findings, DSTI/EAS/STP/NESTI (93) 2.
61 OECD (1997), The Measurement of Scientific and Technological Activities: Proposed Guidelines for Collecting and Interpreting Technological Innovation Data, Paris, p. 39.
62 Ibid., p. 53.
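The headline indicator derived from a survey built on this criterion is simply the share of responding firms that introduced at least one such product or process during the reference period. A minimal sketch follows, with invented microdata and field names rather than actual CIS questionnaire variables.

# Share of innovative firms: at least one technologically new or significantly
# improved product or process introduced over the three-year reference period.
# Firm records and field names below are invented for illustration.
responses = [
    {"firm": "A", "new_products": 2, "new_processes": 0},
    {"firm": "B", "new_products": 0, "new_processes": 0},
    {"firm": "C", "new_products": 0, "new_processes": 1},
    {"firm": "D", "new_products": 0, "new_processes": 0},
]

def is_innovative(record):
    return record["new_products"] + record["new_processes"] >= 1

share = sum(is_innovative(r) for r in responses) / len(responses)
print(f"Share of innovative firms: {share:.0%}")  # 50% in this toy sample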
In 1992, the OECD organized, in collaboration with Eurostat, a meeting to draft a standard questionnaire and a core list of questions that would permit international comparisons of innovation surveys in Europe.63 Three rounds of coordinated surveys were subsequently carried out in 1993, 1997, and 2001. Workshops were also held in 199364 and 199965 to review the results of the CIS.66 The discussions centered on a number of important issues. The first issue consisted of choosing the approach. Should the survey consider innovation as an output or as an activity? The Oslo manual called the first option the “object approach” (with the innovation serving as the unit of analysis) and the second option the “subject approach” (with the firm and the totality of its innovative activities serving as the unit of analysis). According to the manual, the object approach “results in a direct measure of innovation.”67 It “has the important advantage of asking questions at the project level, while in standard R&D and innovation surveys they tend to be asked at the firm level, forcing large firms to give some average answer across a number of projects.”68 The approach works as follows: “develop a list of significant innovations through literature searches or panels of experts, identify the firms that introduced the innovations, and then send questionnaires to those firms about the specific innovations.”69 The OECD opted for the subject approach, however, relegating the discussion of the object approach to an appendix in the Oslo manual. There it mentioned that the two approaches could be combined, adding that in such cases the survey should be limited to the main innovations only, since most firms were ill equipped to provide this kind of detailed information. This methodological consideration only played a secondary role in the decision, however. In fact, the OECD claimed that it preferred the subject approach because it is “firms that shape economic outcomes and are of policy significance.”70 The choice was in line with the way statistical offices have “controlled” the measurement of S&T since the 1960s: the object approach is primarily an expertise developed (and owned) by academics
63 OECD (1992), Summary Record of the Meeting of the Expert Working Group on Harmonized Innovation Surveys, DSTI/STII/STP/NESTI/M (92) 2. 64 OECD (1993), Summary Record of the Joint EC/OECD Seminar on Innovation Surveys, DSTI/EAS/ STP/NESTI/M (93) 2. 65 OECD (1999), Summary Record of the Joint Eurostat/OECD Meeting on Innovation Surveys: Outcomes from the Workshop, DSTI/EAS/STP/NESTI/M (99) 2. 66 For non-member country surveys, see OECD (1999), Description of National Innovation Surveys Carried Out, or Foreseen, in 1997–1999 in OECD non CIS-2 participants and NESTI Observer Countries, DSTI/DOC (99) 1. 67 OECD (1997), The Measurement of Scientific and Technological Activities: Proposed Guidelines for Collecting and Interpreting Technological Innovation Data, op. cit., p. 85. 68 Ibid., pp. 83–84. 69 J. A. Hansen (2001), Technology Innovation Indicator Surveys, in J. E. Jankowski, A. N. Link, and N. S. Vonortos (eds), Strategic Research Partnerships, Proceedings from an NSF Workshop, Washington, p. 222. 70 OECD (1997), The Measurement of Scientific and Technological Activities: Proposed Guidelines for Collecting and Interpreting Technological Innovation Data, op. cit., p. 29.
like economists in the United States, SPRU researchers in the United Kingdom,72 and A. Kleinknecht et al. in the Netherlands;73 whereas the firm-based survey (and its subject approach) has always been the characteristic instrument of statistical offices.74 The second issue discussed at the workshops concerned the survey’s focus and coverage. Schumpeter suggested five types of innovation, including organizational and managerial innovation. The Oslo manual, however, concentrated solely on technological innovation. Although the second edition of the manual included (marketed) services,75 it maintained a restricted and techno-centric view of innovation.76 As H. Stead once stated, technological innovation “obviously excludes social innovation.”77 Non-technological innovation such as organizational change, marketing-related changes and financial innovations were discussed in the manual, but again, only as an afterthought in the appendices. This choice was by no means new. The measurement of science and technology had been biased by a hierarchical approach ever since the first edition of the 71 J. Jewkes, D. Sawers, and R. Stillerman (1958), The Sources of Invention, New York: St Martin’s Press; National Bureau of Economic Research (1962), The Rate and Direction of Inventive Activity, Princeton: Princeton University Press; E. Mansfield (1968), Research and Technical Change, Industrial Research and Technological Innovation: An Economic Analysis, New York, Norton; E. Mansfield et al. (1977), Social and Private Rates of Return From Industrial Innovations, Quarterly Journal of Economics, May, pp. 221–240. 72 As part of the Bolton Committee of Enquiry on Small Firms, the Science Policy Research Unit (SPRU) initiated a huge project in 1967 compiling all significant innovations in Britain: C. Freeman (1971), The Role of Small Firms in Innovation in the United Kingdom, Report to the Bolton Committee of Enquiry on Small Firms, HMSO; SAPPHO Project (1972), Success and Failure in Industrial Innovation: A Summary of Project SAPPHO, London: Centre for the Study of Industrial Innovation; R. Rothwell et al. (1974), and SAPPHO updated: Project SAPPHO Phase II, Research Policy, pp. 258–291; F. Henwood, G. Thomas, and J. Townsend (1980), Science and Technology Indicators for the UK—1945–1979: Methodology, Problems and Preliminary Results, STIC/80.39; J. Townsend et al. (1981), Science Innovations in Britain Since 1945, SPRU Occasional Paper series, no. 16, Brighton: SPRU; C. Debresson (1980), The Direct Measurement of Innovation, op. cit.; K. Pavitt (1983), Characteristics of Innovative Activities in British Industry, Omega, 11, pp. 113–130. 73 A. Kleinknecht (1993), Towards Literature-Based Innovation Output Indicators, Structural Change and Economic Dynamics, 4 (1), pp. 199–207; A. Kleinknecht and D. Bain (1993), New Concepts in Innovation Output Measurement, London: Macmillan; E. Brouwer and A. Kleinknecht (1996), Determinants of Innovation: A Microeconometric Analysis of Three Alternative Innovation Output Indicators, in A. Kleinknecht (ed.), Determinants of Innovation: the Message from New Indicators, Houndmills: Macmillan, pp. 99–124. See also: E. Santarelli and R. Piergiovanni (1996), Analyzing Literature-Based Innovation Output Indicators: The Italian Experience, Research Policy, 25, pp. 689–711; R. Coombs, P. Narandren, and A. Richards (1996), A Literature-Based Innovation Output Indicator, Research Policy, 25, pp. 403–413. 
74 Australia and Canada tried to incorporate into the Oslo manual questions on the diffusion of advanced technology, but without success, because the subject approach took precedence in the end. See: OECD (1991) Compte rendu succinct, op. cit., p. 6; B. Pattinson (1992), Proposed Contents of an Addendum Dealing with Surveys of Manufacturing Technology, DSTI/STII/STP/NESTI (92) 9. 75 Excluding health care, however. 76 F. Djellal and F. Gallouj (1999), Services and the Search for Relevant Innovation Indicators: A Review of National and International Surveys, Science and Public Policy, 26 (4), p. 231. 77 H. Stead (1976), The Measurement of Technological Innovation, DSTI/SPR/76.44/04, p. 1.
Frascati manual. The manufacturing industries took precedence over the service industries in surveys, for example, and national R&D surveys initially concentrated on the natural sciences and only later included the social sciences. Finally, related scientific activities have always been systematically excluded from surveys. All in all, current statistics “were built on the bricks and mortar model.”78 A third issue of the survey’s methodology was the concept of novelty. Some recent national innovation surveys had recorded a disproportionately high number of innovative firms. In a recent Canadian study, for example, over 80 percent of the firms surveyed declared themselves to be innovators!79 The source of such overestimations would seem to lie in the Oslo manual’s decision to define novelty as something that a firm perceives as new rather than as what the market established as new.80 Why define novelty in this way? Because “firms generally know when a product or production process is new to their firms. Often they do not know whether it is also new to their industry, new to their country or region, or new to the world.”81 Nonetheless, it is by using such qualitative answers to the questionnaire that statisticians calculate the main innovation indicator. The simplest Oslo manual indicator is the innovation rate, or the percentage of firms that innovate. As the Canadian statistic showed, the majority of firms describe themselves as innovators. It is a marvelous statistic for policy rhetoric and managerial morale, but, as the Oslo manual itself warned, the “proportion of [innovative firms] threatens to become a magic number comparable to the percentage of GDP devoted to R&D.”82 Apart from the above three issues, there were two other problems that troubled people, because they could weaken the legitimacy of innovation statistics. First, there were two major countries that did not conduct regular innovation surveys nor participate in the OECD/Eurostat exercise. These were the United States and Japan. This absence was compounded by the fact that only about 50 percent of firms actually respond to the surveys in the participating countries. In fact, it is mainly European countries that conduct innovation surveys today. This goes back to the technological gap debate and disparities with the United States, and the slowness with which European countries transformed research results into commercial innovations, which were at the center of policy discussions immediately after World War II. The OECD was deeply involved in these debates, and the science and technology statistics it published between 1970 and 1990 always showed and discussed Europe lagging far behind the United States. The same discourse on innovation gaps continues to this day at the European Union.83 The relative absence of innovation surveys in the 78 D. Guellec (2001), New Science and Technology Indicators for the Knowledge-Based Economy: Opportunities and Challenges, STI Review, 27, p. 9. 79 Statistics Canada (2001), Innovation Analysis Bulletin, 88-003, 3 (2), p. 5. 80 J. A. Holbrook and L. P. Hughes (2001), Comments on the Use of the OECD Oslo Manual in Non-Manufacturing Based Economies, Science and Public Policy, 28 (2), pp. 139–144. 81 J. A. Hansen (2001), Technology Innovation Indicator Surveys, op. cit., p. 229. 82 OECD (1997), The Measurement of Scientific and Technological Activities: Proposed Guidelines for Collecting and Interpreting Technological Innovation Data, op. cit., p. 11. 
83 European Union (2001), Competitiveness Report 2001: Competitiveness, Innovation and Enterprise Performance, Brussels.
United States and Japan, on the other hand, is probably a consequence of their uncontested superiority in innovation. With their comfortable lead, the United States and Japan had, until recently, little need to measure their technological performance, or at least not as regularly as the European countries do. However, their participation is crucial for carrying out international comparisons.
Second, innovation surveys posed measurement problems of their own. Experts, for example, considered expenditure data84 to be of questionable value: "The biggest problem stems from attempts to separate the part of each category of expenditure that is related to new and improved products and processes from the part that relates to routine activities."85 It is a problem commonly encountered in measuring R&D activities. The European questionnaire attempted to address the matter by asking firms whether the data provided were exact or only rough estimates of the actual numbers. The result was that more firms simply supplied rough estimates. Another related and important measurement problem was the recurring discrepancy between innovation and R&D survey data.86 Innovation surveys recorded significantly less R&D activity than did standard R&D surveys because of methodological differences between the two types of surveys (see Table 8.1). Nine sources of differences were recently identified, including:
● Different population frames: R&D surveys are often drawn from a special list of known (or potential) R&D performers, whereas innovation surveys are generally based on a population of businesses drawn from a statistical register.
● Different sampling methods: R&D surveys are censuses of businesses which undertake R&D, while innovation surveys are generally based on stratified random samples of businesses.
● Occasional R&D is often omitted from R&D surveys because it is too difficult, or too expensive, to obtain a list of occasional R&D performers.
● Industrial classification: large enterprise groups set up separate enterprises to perform their R&D, and do not have the appropriate accounting systems for monitoring expenditures.
● Non-response: in about half the countries, response rates of less than 50 percent were obtained in the innovation survey.
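Before turning to Table 8.1, a minimal sketch, using invented firm records, of how the innovation rate discussed above is computed from yes/no questionnaire answers. The field names are illustrative, not those of the CIS questionnaire; the point is only that the indicator is a simple proportion of qualitative answers, and that counting anything "new to the firm" yields a far higher rate than counting only what is new to the market.

    # Hypothetical micro-records from an innovation survey (invented data).
    firms = [
        {"id": 1, "product": True,  "process": False, "new_to_market": False},
        {"id": 2, "product": False, "process": True,  "new_to_market": False},
        {"id": 3, "product": True,  "process": True,  "new_to_market": True},
        {"id": 4, "product": False, "process": False, "new_to_market": False},
        {"id": 5, "product": True,  "process": False, "new_to_market": False},
    ]

    def innovation_rate(records, market_novelty_only=False):
        """Share of firms reporting at least one product or process innovation."""
        def is_innovator(r):
            innovated = r["product"] or r["process"]
            return innovated and (r["new_to_market"] or not market_novelty_only)
        return sum(1 for r in records if is_innovator(r)) / len(records)

    print(f"new to the firm:   {innovation_rate(firms):.0%}")        # 80%
    print(f"new to the market: {innovation_rate(firms, True):.0%}")  # 20%

Everything hinges on which answers are allowed to count as an innovation.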
Table 8.1 R&D expenditure measured in R&D surveys and innovation surveys, France, 1997a

Industry                                             R&D survey (US$ millions)   Innovation survey (US$ millions)
Food, beverages, tobacco                                     N/A                          N/A
Textiles, clothing, footwear, and leather                    120                          126
Wood and paper products                                       51                           49
Printing, publishing, and recorded media                       4                           14
Petroleum, coal, chemical, and associated products         3,832                        1,894
Non-metallic mineral products                                212                          128
Metal products                                               497                          455
Machinery and equipment                                    1,230                          879
Electric and electronic machinery                          2,551                        2,724
Precision instruments                                      1,616                        1,171
Automobiles                                                2,027                        1,122
Other transport (mainly aeronautics and space)             2,439                        1,039
Energy                                                       524                          575
Other manufacturing                                          111                           78
Total manufacturing                                       15,214                       10,254

Note
a For similar data on Italy and Germany, see G. Sirilli (1999), Old and New Paradigms in the Measurement of R&D, DSTI/EAS/STP/NESTI (99) 13; C. Grenzmann (2000), Differences in the Results of the R&D Survey and Innovation Survey: Remark on the State of the Inquiry, DSTI/EAS/STP/NESTI/RD (2000) 24.

84 On R&D, know how, tooling up, design, start-up, marketing. 85 J. A. Hansen (2001), Technology Innovation Indicator Surveys, op. cit., p. 232. 86 OECD (2001), Assess Whether There Are Changes Needed as a Result of the Comparison of R&D Data Collected in R&D and in Innovations Surveys, DSTI/EAS/STP/NESTI (2001) 14/PART3; D. Francoz (2000), Measuring R&D in R&D and Innovation Surveys: Analysis of Causes of Divergence in Nine OECD Countries, DSTI/EAS/STP/NESTI (2000) 26; D. Francoz (2000), Achieving Reliable Results From Innovation Surveys: Methodological Lessons Learned From Experience in OECD Member Countries, Communication presented to the Conference on Innovation and Enterprise Creation: Statistics and Indicators, Sophia Antipolis, November 23–24.

So which of the two instruments is better for measuring innovation? The answer is neither, if we take the following statistician's statement at face value: "We should not seek at any price to secure the same measurement of R&D in both surveys, but
rather understand and measure the divergences.”87 For others, however, the right number was the one taken from the R&D survey, not the innovation survey: “Several delegates did not see it as a problem to have different figures if it recognized that the official figure for R&D should be taken from the R&D survey.”88 Efforts are nevertheless underway to obtain a single measure of innovation. There are two options on the table:89 either the two surveys could be combined, as envisaged by Eurostat—the main user of the innovation survey—or they could, at the very least, be conducted by the same agency, as the OECD seems to prefer.
87 D. Francoz (2000), Measuring R&D in R&D and Innovation Surveys: Analysis of Causes of Divergence in Nine OECD Countries, DSTI/EAS/STP/NESTI (2000) 26, p. 5. 88 Eurostat (2002), Summary Record of Eurostat/OECD Task Force Meeting 20 March 2002 to Discuss the Co-ordination of R&D Surveys and Innovation Surveys, Luxembourg, p. 3. 89 OECD (2001), Assess Whether There Are Changes Needed as a Result of the Comparison of R&D Data Collected in R&D and in Innovations Surveys, op. cit., p. 3; OECD (2000), Record of the NESTI Meeting, DSTI/EAS/STP/NEST/M (2000) 1, p. 8; Eurostat (2001), Working Party Meeting on R&D and Innovation Statistics: Main Conclusions, April 19–20.
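As a rough illustration of what "understanding and measuring the divergences" could mean in practice (an illustrative computation, not an OECD or Eurostat procedure), the following sketch takes the French figures from Table 8.1 and computes, industry by industry, the ratio of R&D reported in the innovation survey to R&D reported in the R&D survey.

    # R&D expenditure by industry, France, 1997 (US$ millions), from Table 8.1.
    # Tuples are (R&D survey, innovation survey); food is omitted (N/A in both).
    table_8_1 = {
        "Textiles, clothing, footwear, and leather": (120, 126),
        "Wood and paper products": (51, 49),
        "Printing, publishing, and recorded media": (4, 14),
        "Petroleum, coal, chemical, and associated products": (3832, 1894),
        "Non-metallic mineral products": (212, 128),
        "Metal products": (497, 455),
        "Machinery and equipment": (1230, 879),
        "Electric and electronic machinery": (2551, 2724),
        "Precision instruments": (1616, 1171),
        "Automobiles": (2027, 1122),
        "Other transport (mainly aeronautics and space)": (2439, 1039),
        "Energy": (524, 575),
        "Other manufacturing": (111, 78),
    }

    # Industries where the ratio is far from 1 are where the two instruments
    # diverge most; sorting makes the outliers easy to spot.
    for industry, (rd, innov) in sorted(table_8_1.items(), key=lambda kv: kv[1][1] / kv[1][0]):
        print(f"{industry:52s} {innov / rd:5.2f}")

    total_rd = sum(rd for rd, _ in table_8_1.values())
    total_innov = sum(innov for _, innov in table_8_1.values())
    print(f"{'Total manufacturing':52s} {total_innov / total_rd:5.2f}")  # about 0.67

The ratios scatter on both sides of one, but the largest R&D performers fall well below it, which is why, for France in 1997, the innovation survey captured only about two thirds of the R&D recorded by the dedicated R&D survey.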
Conclusion The recent internationalization of innovation surveys was characterized by a conceptual shift from outputs in the 1970s to activities in the 1990s. Without really noticing that they had departed from their original goal, national governments and the OECD ended up measuring innovation the way they measured R&D, that is in terms of inputs and activities. Certainly, there were contextual factors leading statisticians to measure innovation as an output early on. Since its very beginning, the NSF has always tried to convince the government of the relevance of research to society and the economy. Measuring the products and processes coming out of research was one way to demonstrate this relevance. The stated aim of the first NSF innovation statistics was to “provide empirical knowledge about the factors which stimulate or advance the application in the civilian economy of scientific and technological findings.”90 Similarly, the OECD needed ways to convince governments about the superiority of the United States over Western Europe in terms of technology invention and adoption. Counting innovations was thus part of the rhetoric for convincing European governments to set up science policies and increase R&D investments.91 The current practice of measuring innovation as an activity rather than as an output can be explained by at least three factors. One is the influence of the linear model that has guided policy-makers since 1945. According to this model, innovation (as a product) is what comes out ( as output) of basic research. Whenever statisticians measured innovation, then, they called it output. However, having focused on activities, innovation surveys fell far short of measuring innovative outputs (products and processes) and their characteristics and impacts (or outcomes). Although there are some questions in the innovation survey on the impact of innovation on sales, for example, which were recognized as key questions as early as 1991,92 most of these are qualitative questions with yes/no answers.93 Therefore, “it is impossible to quantify these impacts.”94 A second factor was probably very influential in determining the method of measuring innovation. I would argue that control by governments of the instrument was again a key factor in the way innovation is now measured by official statisticians. Statistical offices have long chosen the survey of activities (via the expenditures devoted to these activities) as the preferred instrument for measuring their concepts. They systematically refuse to rely on data and databases developed elsewhere, such as in administrative departments (patents) or in academic circles. 90 S. Myers and D. G. Marquis (1969), Successful Industrial Innovation: A Study of Factors Underlying Innovation in Selected Firms, op. cit., p. iii. 91 See: Chapter 12. 92 Sales are not really an impact of an innovating firm, for example. An economic impact would rather be profits coming out from innovations, effects of a new process on the innovative firm’s performance, for example, or of a new product on other firms’ performance (productivity, costs) or on the economy as a whole. 93 OECD (1991), Compte rendu succinct, op. cit., p. 5. 94 D. Guellec and B. Pattinson (2001), Innovation Surveys: Lessons from OECD Countries’ Experience, STI Review, 27, p. 92.
Certainly, methodological considerations were important factors for choosing the activity approach. It is easier to measure activities than products and processes. But ultimately, only governments have the resources to produce statistics regularly, so it is this monopoly that defines the de facto standards and dictates the availability of statistical series. The third factor explaining the way innovation is actually measured deals with the concept of innovation itself. Innovation is a fuzzy concept, and is, depending on the author cited, defined, and measured either as a product or as an activity. This is only one side of the fuzziness of the concept, however. Another relates to whether an innovation is new at the world level, domestically, or from a firm’s point of view. Still another refers to the production or use of technologies. A firm, for example, is usually said to be innovative if it invents new products or processes, but some argue that it could also be so qualified if it uses new technologies (to improve its operations). Neglecting the latter was one of the criticisms raised against the indicator on high technology in the 1980s: a firm may not be considered innovative only because it conducts R&D, but also if it purchases and uses advanced technologies in its activities and employs highly trained workers.95 The concept of innovation and its measurement have yet to stabilize. First of all, the OECD/Eurostat definition of innovation has been changed twice in the last decade. The definition initially centered on manufacturing activities, but then service activities were added for the second edition of the Oslo manual. This meant non-comparability between the two surveys. Second, the European questionnaire moved toward a weaker distinction between technological and nontechnological activities in the last round of surveys. Finally, and above all, respondents do not yet have a consistent understanding of the concept of innovation, which varies from one industrial sector to another.96 On the basis of these shifts and limitations, and from the conclusions of a recent workshop organized by the European consultative committee on statistical information, one has to conclude that the OECD/Eurostat manual on innovation was a bit premature.97
95 B. Godin (2004), The Obsession with Competitiveness and its Impact on Statistics: The Construction of High Technology Indicators, Research Policy, forthcoming. 96 D. Guellec and B. Pattinson (2001), Innovation Surveys: Lessons From OECD Countries’ Experience, op. cit., pp. 77–101. 97 Comité consultatif européen de l’information statistique dans les domaines économique et social (2003), Les statistiques de l’innovation: davantage que des indicateurs de la R&D, 21e séminaire du CEIES, Athens, April 10–11. While some participants qualified the surveys as being experimental still (p. 26), the chair of the sub-committee on innovation statistics stated that there remains a long way to go before one could have a definite and comprehensible questionnaire (p. 49).
Section IV Dealing with methodological problems
9
Metadata
How footnotes make for doubtful numbers
The preceding chapters should have convinced anyone of the uniqueness and the novelty of the measurements developed by governments over the last fifty years or more. But it should also be shown how the official surveys have limitations of their own. The next two chapters are devoted to consideration of these limitations. The present chapter considers the way limitations were treated by official statisticians themselves. Examining footnotes to the statistical tables provides us with the material to this end: the footnotes make the limitations visible. But this is only one function of footnotes. Let us recall the fact that objectivity is a central characteristic of science. Whether we consider science from the perspective of the scientist or of the philosopher, objectivity is central to its very nature. It took hundreds of years to develop the means by which the evidence that defines objectivity came to be accepted.1 This began in the seventeenth century with the testimony of witnesses on experiments performed in “public” spaces, and coalesced into the “virtual witnessing” of the scientific paper with its detailed instructions for replicating experiments.2 Today, facts and data are acceptable forms of evidence in the natural sciences. Similarly, the social and human sciences are also founded on “facts”: the survey is a major source of data in the social sciences, while archives and documents are the principal sources of data in the humanities.3 In all of these scientific ventures, whether natural or social, footnotes served two particular functions.4 First, footnotes indicated sources of evidence: they attested to facts. They offered empirical support for stories told and arguments presented. Second, they sought to persuade the reader that the scientist had done an acceptable amount of work. They conferred authority on a writer by establishing his or her seriousness, expertise, and credibility.5
1 P. Dear (1995), Discipline and Experience: The Mathematical Way in the Scientific Revolution, Chicago: University of Chicago. 2 S. Shapin and S. Schaeffer (1985), Leviathan and the Air Pump: Hobbes, Boyle and the Experimental Life, Princeton: Princeton University Press. 3 B. S. Shapiro (2000), A Culture of Facts, England 1550–1720, Ithaca: Cornell University Press; M. Poovey (1998), A History of the Modern Fact, Chicago: University of Chicago Press. 4 A. Grafton (1997), The Footnote: A Curious History, Cambridge, MA: Harvard University Press. 5 N. G. Gilbert (1977), Referencing as Persuasion, Social Studies of Science, 7, pp. 113–122.
Besides being a source of evidence and a tool of persuasion however, footnotes serve a third function: they often disclose the data’s weaknesses in the process of qualifying its strengths. Footnotes “make clear the limitations of their own theses even as they try to back them up. (. . .) Footnotes prove that [the author’s argument or data] is a historically contingent product, dependent on the forms of research, opportunities, and states of particular questions.”6 As we shall see here, footnotes often lead one to indulge in antirealist rhetoric: authors acknowledge the data’s shortcomings in footnotes, while ignoring them in the body of the text. This chapter examines the treatment of limitations in S&T statistics. In 1963, member countries of the OECD unanimously approved a methodological manual for standardizing S&T statistics. The aim of the manual, as we have mentioned several times, was to harmonize national practices in conducting R&D surveys. Several editions later, however, there are still many methodological differences that continue to hamper international comparisons. As L. A. Seymour noted forty years ago: “as one proceeds from comparisons of two countries to several countries, each with an independent system of data collection, the complications multiply so rapidly that any thought of further extension seems presently unfeasible.”7 As a result of international non-comparability, the OECD regularly corrected the national data or simply added footnotes to inform the user of discrepancies in the series. It is a well-known fact that measurement abstracts only a few properties to compare objects. Eliminating details permits the construction of tables that would not otherwise exist: statistical tables present standardized data that exclude national specifics—by relegating them to footnotes. In this chapter, I endeavour to show how, for those who bother to read the footnotes, the data often appear as mere tendencies only, if not veritable fictions. The first part of the chapter presents the problems and difficulties that national statisticians faced in analyzing R&D data before the OECD’s involvement in S&T statistics. There were at least three kinds of problems: the definition of concepts, the demarcation between scientific and non-scientific activities, and the sample of the population surveyed. The second part analyzes the problems of comparability that led the OECD to standardize practices between countries. Despite these efforts, differences persisted and gave rise to the development of tools, like the footnote, for minimizing national specifics.
Living with differences
In 1959, J. Perlman, head of the Office of Special Studies at the NSF, praised his staff for having introduced two innovations into the field of R&D surveys: the regularity and comprehensiveness of surveys through periodic coverage of all
6 A. Grafton (1997), The Footnote: A Curious History, op. cit., p. 23. 7 L. A. Seymour (1963), Problems of International Comparisons of R&D Statistics, OECD, DAS/PD/63.3, p. 4.
Metadata: how footnotes make for doubtful numbers 159 economic sectors (government, industry, university), and the analysis of intersectoral flows of funds.8 But he failed to mention a third innovation: the NSF had put an end to variations in the methodology of R&D surveys, at least in the United States.9 Prior to the 1960s, there were three problems that prevented statisticians from comparing surveys, drawing historical series or even believing in the numbers generated. The first problem concerned definitions of research. Two situations prevailed. First, more often than not, there was no definition of research at all, as was the case in the US National Research Council (NRC) directory of industrial R&D. The first edition reported using a “liberal interpretation” that let each firm decide which activities counted as research: “all laboratories have been included which have supplied information and which by a liberal interpretation do any research work.”10 Consequently, any studies that used NRC numbers, like those by Holland and Spraragen11 and by the US Work Progress Administration ( WPA)12 were of questionable quality: “the use of this information [NRC data] for statistical analysis has therefore presented several difficult problems and has necessarily placed some limitations on the accuracy of the tabulated material.”13 The US National Resources Planning Board used a similar practice in its survey of industrial R&D in 1941: the task of defining the scope of activities to be included under research was left to the respondent.14 In Canada as well, the first study by the Dominion Bureau of Statistics contained no definition of research.15 The second situation regarding definitions was the use of categories of research in lieu of a precise definition. Both the Bush16 and the President’s Scientific Research Board reports,17 as well as the British DSIR survey18 suggested categories that resembled each other (basic, applied, and development, for example)— but that were never in fact the same. As a rule, these categories served to help
8 NSF (1959), Methodological Aspects of Statistics on Research and Development: Costs and Manpower, NSF 59-36, Washington, pp. 2–3. 9 For the UK, see: E. Rudd (1959), Methods Used in a Survey of R&D Expenditures in British Industry, in NSF (1959), Methodological Aspects of Statistics on R&D Costs and Manpower, op. cit., pp. 39–46. 10 National Research Council (1920), Research Laboratories in Industrial Establishments of the United States of America, Bulletin of the NRC, Vol. 1, Part 2, March, p. 45. 11 M. Holland and W. Spraragen (1933), Research in Hard Times, Division of Engineering and Industrial Research, National Research Council, Washington. 12 G. Perazich and P. M. Field (1940), Industrial Research and Changing Technology, Work Projects Administration, National Research Project, Report No. M-4, Pennsylvania: Philadelphia. 13 Ibid., p. 52. 14 National Resources Planning Board (1941), Research: A National Resource (II): Industrial Research, Washington: USGPO, p. 173. 15 Dominion Bureau of Statistics (1941), Survey of Scientific and Industrial Laboratories in Canada, Ottawa. 16 V. Bush (1945), Science: The Endless Frontier, op. cit., pp. 81–83. 17 President’s Scientific Research Board (1947), Science and Public Policy, op. cit., pp. 300–301. 18 DSIR (1958), Estimates of Resources Devoted to Scientific and Engineering R&D in British Manufacturing Industry, 1955, London, p. 8.
160 Metadata: how footnotes make for doubtful numbers respondents decide what to include in their responses to the questionnaire, but disaggregated data were not available for calculating statistical breakdowns. Others, such as the National Resources Committee, simply refused to use such categories because of the intrinsic connections between basic and applied research, which seemed to prevent any clear distinctions from being made.19 The situation improved thanks wholly to the NSF and the OECD. Some precise definitions were indeed proposed as early as 1938 by the National Resources Committee in its survey of government R&D,20 but these were indeed limited to government R&D, and provoked controversy for including the social sciences. The Canadian government also suggested (influential) definitions, like that of “scientific activities,” as early as 1947,21 but a standard definition would have to wait until the creation of the NSF and the OECD. The idea of “systematicness” would thereafter define research: “systematic, intensive study directed toward fuller knowledge of the subject studied and the systematic use of that knowledge for the production of useful materials, systems, methods, or processes.”22 The second problem of pre-1960s R&D surveys, closely related to the problem of definition, concerned the demarcations of research and non-research activities. The main purpose of both the Harvard Business School study and the US Bureau of Labor Statistics survey, two influential studies of the early 1950s, was to propose a definition of R&D and to measure R&D. Two problems were identified: there were too many variations on what constituted R&D, so they claimed, and too many differences among firms on which expenses to include in R&D.23 Although routine work was almost always excluded, there were wide discrepancies at the frontier between development and production, and between scientific and non-scientific activities: testing, pilot plants, design and market studies were sometimes included in research and at other times not (see Appendix 15). Indeed, companies had accounting practices that did not allow these activities to be easily separated.24 K. Arnow, of the NSF, summarized the problem as follows: Even if all the organizations responding to the NSF’s statistical inquiries shared, by some miracle, a common core of concepts and definitions, they
19 National Resources Committee (1938), Research: A National Resource (I): Relation of the Federal Government to Research, Washington: USGPO, p. 6. 20 Ibid., p. 62. 21 Department of Reconstruction and Supply (1947), Research and Scientific Activity: Canadian Federal Expenditures 1938–1946, Government of Canada: Ottawa, pp. 11–13. 22 National Science Foundation (1953), Federal Funds for Science, Washington, p. 3. 23 D. C. Dearborn, R. W. Kneznek, and R. N. Anthony (1953), Spending for Industrial Research, 1951–1952, Division of Research, Graduate School of Business Administration, Harvard University, p. 91. 24 On accounting difficulties, see: O. S. Gellein and M. S. Newman (1973), Accounting for R&D Expenditures, American Institute of Certified Accountants, New York; S. Fabricant, M. Schiff, J. G. San Miguel, and S. L. Ansari (1975), Accounting by Business Firms for Investments in R&D, Report submitted to the NSF, New York University.
Metadata: how footnotes make for doubtful numbers 161 might still not be able to furnish comparable data, since they draw on a diversity of budget documents, project reports, production records, and the like for estimating R&D expenditures. (K. Arnow (1959), National Accounts on R&D: The NSF Experience, in NSF, Methodological Aspects of Statistics on Research and Development: Costs and Manpower, NSF 59-36, Washington, p. 58) According to R. N. Anthony, author of the influential Harvard Business School survey, accounting practices could result in variations of up to 20 percent for numbers on industrial R&D.25 Both the US Bureau of Census26 and the NSF also believed that only better accounting practices could correct such errors. Nevertheless, the Harvard Business School27 and the NSF28 both insisted on developing a whole series of specifications for defining and delimiting measurable activities. The first NSF industrial R&D survey included pilot plants, design, laboratory scale models, and prototypes in its definition of research; and it excluded market and economic research, legal work and technical services (minor adaptations, licenses, advertising, patents, and exploration). In the following decades, the OECD improved and standardized these demarcations through the Frascati manual, and it concentrated on measuring research activities as such. As we saw, related scientific activities (RSA) have rarely been systematically measured by any organization.29 A third and final problem of early R&D surveys concerned the sample of the population under study. We have noted how the NRC repertory was open to all firms who agreed to complete the questionnaire: “the NRC surveys were designed for the purpose of compiling a series of directories of research laboratories in the United States. The schedules were therefore sent out without instructions which would have been necessary had it been intended to use the data for purposes of statistical analysis.”30 When the statisticians finally began addressing the problem, however, their methodologies differed: some limited the survey to distinct laboratories,31 others completed and sent in the questionnaire on a consolidated
25 R. N. Anthony (1951), Selected Operating Data: Industrial Research Laboratories, Harvard Business School, Division of Research, Boston, p. 3. 26 H. Wood (1959), Some Landmarks in Future Goals of Statistics on R&D, in NSF, Methodological Aspects of Statistics on Research and Development: Costs and Manpower, op. cit., p. 52; NSF (1960), Research and Development in Industry, 1957, op. cit., p. 99. 27 D. C. Dearborn, R. W. Kneznek, and R. N. Anthony (1953), Spending for Industrial Research, 1951–1952, op. cit., pp. 43–44, 92. 28 National Science Foundation (1953), Federal Funds for Science: Federal Funds for Scientific R&D at Nonprofit Institutions 1950–1951 and 1951–1952, Washington, p. 16. 29 In the case of industrial R&D, the exception was: D. C. Dearborn, R. W. Kneznek, and R. N. Anthony (1953), Spending for Industrial Research, 1951–1952, op. cit. 30 G. Perazich and P. M. Field (1940), Industrial Research and Changing Technology, op. cit., p. 52. 31 R. N. Anthony (1951), Selected Operating Data: Industrial Research Laboratories, op. cit., p. 42.
company basis,32 and still others concentrated on big firms to “speed up results.”33 There were no real standards. All in all, the absence of norms made survey comparisons impossible before the 1960s, which resulted in statistics that were often of limited value. The President’s Scientific Research Board wrote that it was “not possible to arrive at precisely accurate research expenditures” because of three limitations: (1) variations in definition, (2) accounting practices, and (3) the absence of a clear division between science and other research activities.34 Similarly, the NSF admitted that the industrial R&D surveys it conducted before 1957 were not comparable to those it conducted after that date.35 Despite the limitations and biases, people commonly defended the data with what I call the “argument from minimizing limitations”:36 These difficulties have resulted in wide variations in the analyses of research expenditures which have been published during the last decade, but they do not affect orders of magnitude or the general trends on which policy decisions rest. (President’s Scientific Research Board (1947), Science and Public Policy, op. cit., p. 73) It is sometimes usually difficult for budget officers to draw the line between research and administration (. . .). Nevertheless (. . .) the results are believed significant, at least for compilation into totals for Government as a whole, if not for detailed inter-agency comparisons. (National Resources Committee (1938), Research: A National Resource (I): Relation of the Federal Government to Research, op. cit., p. 63) The data collected in this way are, of course, not complete. Many organizations doing research have not been reached, nor are the returns received always comparable. However, it is believed that the coverage is quite adequate to yield
32 D. C. Dearborn, R. W. Kneznek, and R. N. Anthony (1953), Spending for Industrial Research, 1951–1952, op. cit., p. 43. 33 Dominion Bureau of Statistics (1956), Industrial Research-Development Expenditures in Canada, 1955, Ottawa, p. 22. 34 President’s Scientific Research Board (1947), Science and Public Policy, op. cit., pp. 73, 301. 35 NSF (1960), Funds for R&D: Industry 1957, NSF 60-49, Washington, pp. 97–100. 36 The term was suggested to me by D. Walton in a personal conversation on June 2, 2002. For similar rhetorical strategies on the part of academic researchers, see for example: E. Mansfield (1972), Contribution of R&D to Economic Growth in the United States, Science, 175, February 4, pp. 480 and 482; E. Mansfield (1991), Academic Research and Industrial Innovation, Research Policy, 20, p. 11; Z. Griliches (1995), R&D and Productivity: Econometric Results and Measurement Issues, in P. Stoneman, Handbook of the Economics of Innovation and Technological Change, Oxford: Blackwell, p. 82.
Metadata: how footnotes make for doubtful numbers 163 a representative and qualitatively correct picture of present day industrial research. (National Resources Planning Board (1941), Research: A National Resource (II): Industrial Research, op. cit., p. 173) Similarly, the Canadian Dominion Bureau of Statistics once asserted: “although the records of some respondents did not follow the definitions (. . .) it is felt that any variations in interpretation of the type of data to be included in the questionnaire were not significant enough to make any appreciable difference in the published data.”37 The US Bureau of Labor Statistics also stated: “despite these limitations, the findings of this survey are believed to give a satisfactory general picture. (. . .) The reader should, however, bear in mind the approximate nature of the figures.”38 The early NSF surveys presented the same rhetoric:39 “While the amounts reported do not correspond exactly to what was actually received by nonprofit institutions in either of the two years covered, they do represent orders of magnitude from which generalizations may be drawn.”40 This was in fact the view, as we shall see later, that would be adopted by the DSTI statistical division at the OECD: because they were imperfect, R&D statistics should be interpreted as tendencies or trends only, as suggested by the titles of most of its analytical publications on R&D data in the 1970s, and in the following citation from a study on indicators of high-technology: The findings must be viewed as indicating principally broad trends. The data only gives a partial representation of the role played by high-technology processes and products in international trade. This is due to the intrinsic limitations of the method followed in preparing the figures. (OECD (1984), An Initial Contribution to the Statistical Analysis of Trade Patterns in High Technology Products, DSTI/SPR/84.66, p. 1) All of these positions had nothing to do with statistics per se, as the second OECD ad hoc review group on R&D statistics would soon report:41 “Most doubts about national data have nothing to do with standard deviations and sampling theory but concern whether the data in a response reflect the reality in the firm, or institute or university laboratory involved. There are a very large number of
37 Dominion Bureau of Statistics (1956), Industrial Research-Development Expenditures in Canada, 1955, op. cit., p. 22. 38 US Department of Labor, Bureau of Labor Statistics, Department of Defense (1953), Scientific R&D in American Industry: A Study of Manpower and Costs, Bulletin No. 1148, Washington, p. 46. 39 National Science Foundation (1953), Federal Funds for Science: Federal Funds for Scientific R&D at Nonprofit Institutions 1950–1951 and 1951–1952, op. cit., p. 5. 40 This was a problem that would occupy state statisticians for years: the difference between the amount of R&D performed versus financed. 41 OECD (1978), Report of the Second Ad Hoc Review Group of R&D Statistics, STP (78) 6: 17–18.
164 Metadata: how footnotes make for doubtful numbers reasons why they may not, including the ability to distinguish R&D conceptually from other activities, unsuitable accounting systems, loosely drafted national questionnaire and fiscal or juridical incentives to answer inaccurately.”42 While most studies minimized the data’s limitations, some were also cautious— usually in technical notes—about the statistics. The NSF, for instance, regularly calculated the margin of error of its surveys, estimating in 1960 that the difficulties associated with the concept of research were responsible for discrepancies of up to 8 percent.43 It was also very careful in interpreting research categories (on basic research): as early as 1956, the NSF stated that its estimates were maximums at best because of differing interpretations and accounting practices.44 In 1960, after its third survey of industrial R&D, the NSF noted that “the accounting systems of some companies are not set up to yield accurate data on R&D. (. . .) Companies find it difficult to define boundaries differentiating basic from applied research, applied research from development, or development from production, testing or customer-service work.”45 The NSF would consequently modify and soften the definition of basic research for industry,46 and develop methods for estimating corporate basic research because of non-responses, which reached 40 percent in the 1980s.47
Comparing the incomparable
The NSF put an end to variations in the methodology of R&D surveys because it monopolized and imposed its own standards on the field of S&T statistics in the United States. The OECD attempted to replicate the experience on an international scale in the 1960s, and achieved a certain measure of success. In 1965, C. Freeman and A. Young produced a study that compared R&D in OECD member countries.48 It was the first attempt to critically analyze R&D data and methodologies internationally,49 a few years before the results of the first international survey based on the Frascati manual appeared. The study included
42 As an example, the actual rate of non-responses in surveys has to do with the difficulty of the concepts and questions. Non-responses are not randomly distributed but biased with respect to certain characteristics of the population and the questionnaire. See: M. Akerblom (2001), Develop Proposed Standard Methodology for R&D Surveys, DSTI/EAS/STP/NESTI (2001) 14/PART2, p. 10. 43 NSF (1960), Funds for R&D: Industry 1957, op. cit., p. 97. 44 NSF (1956), Science and Engineering in American Industry: Final Report on a 1953–54 Survey, NSF 56-16, Washington, pp. 18, 48. 45 NSF (1960), Funds for R&D: Industry 1957, op. cit., pp. XII–XIII. 46 See Chapter 14. 47 NSF (1990), Estimating Basic and Applied R&D in Industry: A Preliminary Review of Survey Procedures, NSF 90-322, Washington. 48 C. Freeman and A. Young (1965), The Research and Development Effort in Western Europe, North America and the Soviet Union: An Experimental International Comparison of Research Expenditures and Manpower in 1962, Paris: OECD. 49 See also: C. Freeman (1962), Research and Development: A Comparison Between British and American Industry, National Institute Economics Review, 20, May, pp. 21–39.
an appendix called Sources and Methods that dealt with national differences in statistics and limitations of data, and which would serve as the model for the OECD "Sources and Methods" series (to be discussed later). The study revealed "big differences in definition and coverage" among countries.50 These concerned, to name but a few, the treatment of: the social sciences, capital expenditures, funds to R&D abroad, related scientific activities, government funding of industrial R&D, and scientific personnel (see Appendix 16 for details). These were precisely the differences that the Frascati manual, adopted in 1963 by OECD member countries, was designed to eliminate. The manual's proposed standards were mainly concerned with four topics. First, norms were proposed for defining research as "systematic" search and as composed of three major categories of research (basic/applied/development). Second, activities were demarcated for statistical inclusion or exclusion: research/related scientific activities, development/production, research/teaching. Third, sectors (university, government, industry, non-profit) were precisely delineated. Finally, standards were suggested for surveying the units of research. As a result of the manual, the first international survey of R&D was conducted in seventeen countries in 1963–1964. The results were published in 1967 in a small booklet that discussed limitations very briefly, among other things, before any number was presented, but at the same time repeatedly mentioned that these did not affect the results. The following year, a complementary, and huge, document was published, entirely "designed to clarify national particularities and certain problems of a general nature which impede international comparability."51 The fact that the methodological notes were published separately—and later—indicated that they were rarely read. The document, however, included three sets of limitations and notes. First, the introduction presented an overall evaluation of the data (pp. 17–26), and was followed by a discussion of the problem of exchange rates (pp. 27–32). Second, a series of notes accompanied each table: footnotes (e.g. p. 37) and endnotes (e.g. p. 56), which later would become what is now known as the "standard footnotes." Finally, the statistics for each sector were preceded by general notes (e.g. p. 73), and by notes for each of the surveyed countries (e.g. p. 77). The same model was subsequently used in each biennial publication of the survey. The technical document identified two general sources of errors in making international comparisons:
(1) Reliability of data
● Omission of important R&D performers;
● Non-responses of important R&D performers;
● Inaccurate grossing up, extrapolations, estimations (see the sketch after these lists).
50 C. Freeman and A. Young (1965), The R&D Effort in Western Europe, North America and the Soviet Union, op. cit., p. 19. 51 OECD (1968), A Study of Resources Devoted to R&D member countries in 1963/64: Statistical Tables and Notes, Paris, p. 3.
(2) Variations in concepts and definitions
● Standards (Frascati manual) not followed;
● Standards interpreted differently;
● No standards in Frascati manual.
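A minimal sketch, with invented stratum sizes and responses, of the "grossing up" mentioned in the first list: the R&D reported by responding firms is expanded, stratum by stratum, to an estimate for all firms in the register, so any error in the register or in the pattern of response is carried straight into the published total.

    # Hypothetical stratified R&D survey. For each stratum (industry x size class):
    # N = number of firms in the statistical register, responses = reported R&D.
    strata = {
        "chemicals, large":  {"N": 40,  "responses": [55.0, 120.5, 80.0, 210.0]},
        "chemicals, small":  {"N": 300, "responses": [1.2, 0.0, 3.4, 0.8, 2.1]},
        "machinery, large":  {"N": 25,  "responses": [30.0, 75.5, 12.0]},
        "machinery, small":  {"N": 500, "responses": [0.5, 0.0, 1.1, 0.0]},
    }

    def grossed_up_total(strata):
        """Expand the mean reported R&D of each stratum to all N firms and sum."""
        total = 0.0
        for stratum in strata.values():
            mean_rd = sum(stratum["responses"]) / len(stratum["responses"])
            total += stratum["N"] * mean_rd
        return total

    print(f"Estimated business R&D: {grossed_up_total(strata):,.1f}")

The expansion assumes that non-respondents resemble respondents; as the footnote above notes, non-responses are not randomly distributed, so the assumption, and with it the estimate, can fail quietly.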
This was just the beginning of a long series of incompatible national practices that took on more and more importance: breaks in series caused by changes in definitions of concepts; differing periods of the national surveys (civil, academic, budgetary) and variations in their frequency (1, 2, 3 years; alternating sectors); non-comparable samples, unreliable estimates (of non-responding or nonsurveyed units); and differing coverage of sectors—all of which led the OECD to make estimations of national data with increasing frequency.52 In cases where differences from the prescribed standard occurred, the Secretariat worked closely with national authorities to rectify discrepancies. Inevitably, however, certain problems of comparability persisted. The tables for each sector are therefore preceded by an introductory note and by detailed country-by-country notes covering all known differences between national definitions and survey methods. (OECD (1968), A Study of Resources Devoted to R&D Member Countries in 1963/64: Statistical Tables and Notes, Paris, p. 15) The practice of including notes took on particular importance in the 1980s, because of two events. First, the second OECD ad hoc review group on R&D statistics dealt with the unacceptably large time lags in the publication of data,53 and also with the methodological notes: The single most prevalent concern expressed by users about the adequacy of OECD R&D data revolved around the extent to which data from different countries are comparable, both with respect to accuracy and definition of scope. On one level, it can be said that current publications provide an answer to this concern inasmuch as all data series are accompanied by often extensive country notes. However, it is evident that many users do not give detailed attention to such notes and so a better method of dealing with this problem is needed. (OECD (1978), Report of the Second Ad Hoc Review Group of R&D Statistics, op. cit., pp. 16–17) At the end of the 1990s, there was the same issue of “protecting users from themselves since many users, especially those close to policy-makers, do not
52 Ibid., p. 15. 53 On the sources of the lags, see OECD (1976), Methods of Accelerating the Collection and Circulation of R&D Data, DSTI/SPR/76.52.
Metadata: how footnotes make for doubtful numbers 167 understand, care about or choose to read about the statistical problems which are described in footnotes or in the sources and methods of official database publications.”54 The ad hoc review group therefore suggested that: “the Secretariat attempt to summarize the country notes with a view to highlighting those series of data for which problems of comparability are most acute.”55 The recommendations led to the development of “standard footnotes” for each statistical publication (see Appendices 17 and 18). The second event that contributed to the development of notes was a series of in-depth analytical R&D studies by the OECD Science and Technology Indicators Unit (STIU) in the 1970s. Three groups of problems were revealed by the analysis of sectoral R&D (university, industry, and government), problems to which I now turn. University R&D The data on university R&D have always been of very poor quality, so much so that an OECD draft report on trends in university R&D, intended for official publication, would never be published:56 the data were qualified as “rather unsatisfactory” because of “serious conceptual and practical problems.”57 As John Irvine et al. noted in a subsequent study on academic R&D for the UK government, the statistics on university R&D “are of increasingly limited utility for policy purposes.”58 The problem had been a primary concern for the third OECD review group,59 and the member countries’ experts themselves felt that: Some of the data used in the report (especially those for basic research) were not accurate enough to permit detailed conclusions to be drawn. (. . .) The major stumbling block was that a significant number of member countries do not actually survey R&D in the Higher Education sector but rather make estimates by applying coefficients. (. . .) [Moreover] the preliminary draft was not sufficiently oriented toward problems of policy interest mainly because standard OECD surveys do not provide R&D data specifically relevant to these questions. (OECD (1980), Report of the Meeting of NESTI, DSTI/SPR/80.7, pp. 2–3)
54 OECD (1999), Updating the STAN Industrial Database Using Short Term Indicators, DSTI/EAS/IND/WP9 (94) 13, p. 3. 55 OECD (1978), Report of the Second Ad Hoc Review Group of R&D Statistics, op. cit., p. 20 56 OECD (1979), Trends in R&D in the Higher Education Sector in OECD member countries Since 1965 and Their Impact on National Basic Research Efforts, STP (79) 20. 57 Ibid., p. 1. 58 J. Irvine, B. Martin, and P. Isard, Investing in the Future: An International Comparison of Government Funding of Academic and Related Research, Worchester: Billing and Sons Ltd, p. 5. 59 OECD (1985), Report of the Third Ad Hoc Review Group on Science and Technology Indicators, STP (85) 3.
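A minimal sketch, with invented figures, of the coefficient method the experts refer to: instead of surveying universities, an assumed research share of academic activity is applied to general university funds, field by field, to derive an estimate of higher education R&D (HERD). Both the budget figures and the coefficients below are hypothetical.

    # General university funds by field (invented, in millions) and assumed
    # coefficients: the share of funded academic activity counted as R&D.
    general_university_funds = {
        "natural sciences": 800.0,
        "engineering": 600.0,
        "social sciences": 400.0,
        "humanities": 300.0,
    }
    research_coefficients = {
        "natural sciences": 0.45,
        "engineering": 0.40,
        "social sciences": 0.30,
        "humanities": 0.25,
    }

    # HERD estimate: apply each field's coefficient to its funds and sum.
    herd = sum(general_university_funds[field] * research_coefficients[field]
               for field in general_university_funds)
    print(f"Estimated HERD: {herd:.0f} million")

Everything rests on the coefficients: if they come from an old or unrepresentative time-budget study, the HERD series inherits the error, and no survey response exists against which to check it.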
The OECD thus accompanied the draft report on university R&D with a document identifying the main problems.60 How, for example, could a country spend twice as much as another on university research and yet report similar numbers of university personnel assigned to R&D? Why did expenditures on basic research differ by a ratio of 1 : 2 between otherwise similar countries? The answer was: Establishing R&D statistics for the higher education sector is particularly difficult compared with the situation for the other traditional sectors of R&D performance, as the accounting systems of the universities do not in general give any information about the amount of R&D financed by general university funds. These funds are given jointly to the activities performed in the universities: teaching, research and other activities. The producer of statistics has therefore to develop methods of identifying and measuring the R&D part separately from total activities in the universities. (OECD (1985), Methods Used in OECD Member Countries to Measure the Amount of Resources Allocated to Research in Universities with Special Reference to the Use of Time-Budget Series, DSTI/SPR/85.21, p. 1) The sources of the discrepancies were many, but for present purposes it will suffice to mention the major ones:61 coverage of the university sector differed according to country (some institutions, like university hospitals and national research councils, were treated differently); estimates were used in place of surveys because they were cheaper, and coefficients derived from the estimates were little more than informed guesswork and were frequently out of date; general university funds were attributed either to the funder or to the performer; the level of aggregation (fields of science classification) was generally not detailed enough to warrant analysis; finally, there was a great deal of subjectivity involved in classifying research activities according to a basic/applied scheme that was “no longer used in certain countries, although policymakers still persist in requesting such data in spite of its many shortcomings”62—the OECD itself no longer used the basic/applied classifications in its analyses.63 These difficulties led to a small study of national methods for measuring resources devoted to university research in 1981,64 which was updated in 1983,65
60 OECD (1979), National Methods of Measuring Higher Education R&D: Problems of International Comparison, STP (79) 21. 61 Some of these were already well-identified in 1969. See OECD (1969), The Financing and Performance of Fundamental Research in the OECD member countries, DAS/SPR/69.19, p. 4. 62 OECD (1985), Summary Record of the OECD Workshop on Science and Technology Indicators in the Higher Education Sector, DSTI/SPR/85.60, p. 24. 63 In its 1986 edition of Recent Trends in Total R&D Resources, the OECD did not include any data on basic research because of problems with the available data: OECD (1986), Recent Trends in Total R&D Resources in OECD member countries, DSTI/IP/86.05. 64 OECD (1981), Comparison of National Methods of Measuring Resources Devoted to University Research, DSTI/SPR/81.44. 65 OECD (1984), Comparison of National Methods of Measuring Resources Devoted to University Research, DSTI/SPR/83.14.
Metadata: how footnotes make for doubtful numbers 169 plus a workshop on the measurement of R&D in higher education in 198566 and, as a follow-up, a supplement to the Frascati manual in 1989,67 which was later incorporated into the manual as Annex 3. The supplement recommended norms for the coverage of the university sector, the activities and types of costs to be included in research, and measurement of R&D personnel. In general, the OECD attributed the difficulty of measuring university R&D to technical constraints. The first of these was related to university accounting systems: “Accounting systems in Higher Education institutions do not, in general, give information broken down according to [R&D incomes and expenditures]. This is mainly because such information, apart from being quite difficult to compile, is of limited interest to Higher Education institutions’ accountants.”68 The nature of university work also raised serious difficulties for the measurement of university research. First, since research is intimately connected with teaching, “it is difficult to define where the education and training activities of Higher Education staff and their students end and R&D activities begin, and vice versa.”69 Next, professors have very flexible work schedules: “more R&D is carried out in the university vacation periods than during the teaching terms. In addition, R&D does not necessarily take place within the constraints of recognized working hours. It may be carried out in the researchers’ homes, at week-ends or in the evenings. This means that they have more flexibility and freedom in terms of working hours than their counterparts in other sectors.”70 Governments do conduct fairly straightforward studies of industrial and governmental R&D, but the difficulties cited earlier, have led many to develop rather indirect—and much-criticized—means of measuring the investment in university research. The OECD manual maintains that governments can nonetheless overcome these difficulties, to the extent that they are willing to carry out surveys of university research.71 But this is precisely the problem: such surveys are not carried out. “[C]ountries have, over time, approached the problem of identifying and measuring R&D in different ways—influenced by, among other things, the time and financial resources available to carry out the data collection exercise, and also by the importance with which the national authorities rate Higher Education R&D, compared to R&D in other sectors of the economy.”72 This statement goes well beyond the methodological difficulties: in terms of measurement, governments are more concerned politically with firms and
66 OECD (1985), Summary Record of the OECD Workshop on Science and Technology Indicators in the Higher Education Sector, op. cit. 67 OECD (1989), The Measurement of Scientific and Technical Activities: R&D Statistics and Output Measurement in the Higher Education Sector, Paris. 68 Ibid., p. 23. 69 Ibid., p. 24. 70 Ibid., p. 12. 71 Ibid., pp. 34–35. 72 Ibid., p. 13.
170 Metadata: how footnotes make for doubtful numbers innovation than with university research. Despite the 1989 supplement to the Frascati manual, then, higher education data continued to be “the least satisfactory in terms of quantity, quality, and details (. . .). Countries appear to be less motivated to spend additional resources on improving their submissions of Higher Education R&D expenditures (HERD) and manpower (HEMP) data than for their industrial R&D statistics.”73 In fact, industrial R&D accounted for the bulk (60 percent) of total R&D in most of the OECD countries, and was prioritized by government policies long ago.74 The OECD concluded: There is relatively little that the OECD Secretariat itself can do to improve the comparability of the data that it issues, unless member countries themselves make the necessary efforts (. . .). (OECD (1985), Summary Record of the OECD Workshop on Science and Technology Indicators in the Higher Education Sector, op. cit., p. 24)
Business R&D
Improving the quality of industrial R&D data was the second challenge the OECD’s STIU faced in its efforts to increase international comparability. Unlike the unpublished paper on university R&D, a study of trends in industrial R&D was issued in 1979.75 A few years earlier, the statistical unit had already identified the main problems with industrial R&D in an internal document,76 listing three broad classes of problems in comparability: comparisons across countries, across time, and across other economic variables. The major difficulties were the following: differences, which were indeed very large, between governmental and corporate estimates of public funds to industry, particularly in defense and aerospace;77 bad coverage of certain industries (agriculture, mining, and services) on the assumption that they did not undertake much R&D, or because they were included in another sector (government); uneven treatment of the ways in which governments indirectly support industrial R&D (in the form of loans or tax exemptions) which caused variations of over
73 OECD (1995), The Measurement of University R&D: Principal Problems of International Comparability, CCET/DSTI (95) 110. 74 The same bias holds for the private non-profit sector, where the relatively small size of R&D— around 5 percent—influenced the low priority given to it in statistics. See: K. Wille-Maus (1991), Private Non-Profit Sector and Borderline Institutions, DSTI/STII (91) 22, p. 5; P. Jones (1996), The Measurement of Private Non-Profit R&D: Practices in OECD member countries, DSTI/EAS/STP/NESTI (96) 7. 75 OECD (1979), Trends in Industrial R&D in Selected OECD Countries, 1967–1975, Paris. 76 OECD (1975), Statistical Problems Posed by an Analysis of Industrial R&D in OECD member countries, DSTI/SPR/75.77. 77 400 million DM in Germany, 362 million francs in France, and 17 million pounds in the United Kingdom.
Metadata: how footnotes make for doubtful numbers 171 9 percent in statistics; different classifications of R&D activities (principal industry or product field); changes in surveys over time, such as when over half of the increase in the number of firms performing R&D in France was due to the broadening of the survey, or such as the cases in which statistics evolved en dents de scie from year to year ( Japan); and the difficulty, if not the impossibility, of comparing R&D with economic, demographic, and trade statistics.78 Despite these limitations, the OECD believed the data inspired reasonable confidence:79 “Anyone who has worked with financial data on production, measures of labour force or on scientific activities is bound to acknowledge that R&D data are just as reliable as other existing economic and scientific statistics.”80 This is a variant of the argument for minimizing limitations. The OECD nevertheless added: “The Secretariat finds itself in a situation of apparent conflict where it is hard to reconcile the daily demand for highly detailed and internationally comparable information with the actual production of data available which are often aggregated to a much greater degree and difficult to compare at a detailed level” ( p. 16). In the decade that followed, the OECD invested considerable efforts in data harmonization in order to link R&D statistics with economic ones, and to enlarge the STAN (Structural Analysis) industrial database.81 In fact, “the establishment of the STAN database has revealed not only theoretical problems of comparability but the fact that the current system no longer produces a set of data suitable for purely R&D analysis (. . .). With current data, there are very few detailed indicators for which reliable international comparisons can be made across a reasonable number of countries.”82 The work involved three activities: (1) developing compatible classifications (applying the firm instead of an entire enterprise as the unit of analysis, and applying the product group rather than the principal activity as the unit of classification); (2) collecting data according to the standard industrial classification (SIC) while adding sub-classes for high-R&D intensive industries; and (3) distinguishing product from process R&D to better measure flows between industries. As a consequence, the OECD created a database called ANBERD (Analytical Business Enterprise R&D), which included many OECD estimates that differed from national data.83 Despite these efforts, huge methodological differences persisted between countries:84 (1) to avoid overly-expensive surveys, for example, countries usually
78 Other differences concerned the measurement of R&D personnel: the number of persons working on R&D varied from country to country according to whether it was measured for a given date or during a given period, and whether by occupation or qualification. 79 OECD (1978), Comparability and Reliability of R&D Data in the Business Sector, DSTI/SPR/78.4, p. 4. 80 Ibid., p. 3. 81 OECD (1990), Propositions concernant le traitement des données détaillées sur la R&D industrielle, DSTI/IP (90) 20. 82 J. F. Minder (1991), Treatment of Industrial R&D Data, DSTI/STII (91) 17, p. 3. 83 OECD (2001), ANBERD Database, NESTI Meeting, Rome May 14–15. 84 OECD (2000), Examples of National Practices in R&D Surveys, DSTI/EAS/STP/NESTI (2000) 19.
considered only specific industries, measured only significant R&D activities, and excluded small companies. And, depending on the country, these “small” companies ranged in size from five to one hundred employees, since the Frascati manual never proposed a threshold;85 (2) some countries performed estimates for non-respondents, but these varied considerably and were of irregular quality;86 (3) questionnaires (and wording of questions) varied from country to country;87 (4) above all, “no country actually is using a pure statistical approach in surveying R&D activities,” for example, surveying all units or a statistical sample.88 R&D surveys in several countries are censuses of enterprises known or supposed to perform R&D, or a combination of a statistical census/sample approach and a survey of known or supposed R&D performers identified from lists of enterprises receiving government support or tax reductions for R&D. Relying on information about previous or probable R&D therefore excludes many enterprises, among them SMEs.
Government R&D
The third and final problem to which the OECD devoted itself was the difficulty involved in comparing government R&D.89 As with universities, few countries even bothered to survey government R&D.90 Rather, they usually relied on the functional allocation of the R&D appropriations reflected in the national budgets, which amounted to identifying budget items involving R&D. The OECD soon recognized a problem with this practice: appropriations only measured plans and intentions, whereas expenditures measured the money that was actually spent. As a result, the amount of government R&D funding reported by performers and funders was never the same. A recent Norwegian study compared the results obtained from surveys with those estimated from budgets.91 At the macro level, the two data sets gave roughly the same total government R&D expenditures.
85 For a summary of differences in coverage, population, and survey methods, see: M. Akerblom (2001), Develop Proposed Standard Methodology for R&D Surveys, DSTI/EAS/STP/NESTI (2001) 14/PART2. 86 The imputation methods used to estimate missing values varied. Some used information taken from the same survey, others from previous surveys or some related source, still others preferred statistical techniques. 87 C. Grenzmann (2001), Develop Standard Questionnaire Particularly for BE Sector, DSTI/EAS/STP/ NESTI (2001) 14/PART1. 88 M. Akerblom (2001), Develop Proposed Standard Methodology for R&D Surveys, op. cit., p. 7. 89 OECD (1972), The Problems of Comparing National Priorities for Government Funded R&D, DAS/SPR/72.59. This document became Chapter 2 of OECD (1975), Changing Priorities for Government R&D, Paris. 90 Exceptions are Canada and the United Kingdom. For methodologies used in European countries, see Eurostat (1995), Government R&D Appropriations: General University Funds, DSTI/STP/NESTI/SUR (95) 3, pp. 2–3. 91 O. Wiig (2000), Problems in the Measurement of Government Budget Appropriations or Outlays for R&D (GBAORD), DSTI/EAS/STP/NESTI (2000) 25.
Table 9.1 Government budget appropriations or outlays for R&D (GBAORD) in 1998 by ministry, survey results compared with national GBAORD figures (NIFU) and deviation between the two data sets (in millions of NOK)

Ministry | GBAORD survey | (NIFU) | Deviation
Ministry of foreign affairs | 249 | 335 | -86
Ministry of education, research, and church affairs | 4,343 | 4,090 | 253
Ministry of justice and the police | 8 | 7 | 1
Ministry of local government and regional development | 116 | 154 | -38
Ministry of health and social affairs | 256 | 483 | -227
Ministry of children and family affairs | 35 | 28 | 7
Ministry of trade and industry | 1,236 | 1,188 | 48
Ministry of fisheries | 419 | 361 | 58
Ministry of agriculture | 470 | 348 | 122
Ministry of transport and communications | 168 | 124 | 44
Ministry of the environment | 265 | 426 | -161
Ministry of labor and government administration | 25 | 21 | 4
Ministry of finance and customs | 45 | 57 | -12
Ministry of defense | 1,053 | 472 | 581
Ministry of petroleum and energy | 201 | 263 | -62
All ministries | 8,889 | 8,357 | 532
Other | 0 | 533 | -533
Total GBAORD | 8,889 | 8,890 | -1
The deviation was only 1 million out of a total of 8.9 billion NOK (the Norwegian monetary unit). But large deviations were observed at the more detailed level in the case of defense, education, and health (see Table 9.1). The main sources of discrepancies were the difficulty in interpreting the concept of development, and the different treatment of RSA such as policy studies and evaluations. In the United States, another study found an approximately 30 percent difference between the government-funded R&D reported in the performer-based business survey and the R&D reported by the funder in the government R&D survey (see Table 9.2).92 For 1995, for example, the NSF countered that:
Federal agencies reported $30.5 billion in total R&D obligations provided to industrial performers, compared with an estimated $21.7 billion in federal funding reported by industrial performers (. . .). Overall, government wide estimates equate to a “loss” of 31 per cent of federally reported R&D support.
(NSF (1998), Science and Engineering Indicators, Washington, pp. 4–44)
92 J. E. Jankowski (1999), Study on Federally Funded Industrial R&D: Summary of Findings from Company Interviews and Analyses of Collateral Data, DSTI/EAS/STP/NESTI (99) 2; J. E. Jankowski (2001), Relationship Between Data from R&D Funders and Performers, DSTI/EAS/STP/NESTI (2001) 14/PART7; OECD (2001), Reconciling Performer and Funder R&D, DSTI/EAS/STP/NESTI (2001) 13.
Table 9.2 Comparison of reported federal R&D activities with performer-reported expenditures for federal R&D ($ millions)

Year | Budget authority | Total obligations | Total outlays | Total performer-reported federal R&D expenditures | Difference between R&D expenditures and budget authority | Difference between R&D expenditures and outlays
1970 | 14,911 | 15,336 | 15,734 | 14,970 | 59 | (764)
1975 | 19,039 | 19,039 | 19,551 | 18,437 | (602) | (1,114)
1980 | 29,739 | 29,830 | 29,154 | 29,455 | (284) | 301
1985 | 49,887 | 48,360 | 44,171 | 52,128 | 2,241 | 7,957
1990 | 63,781 | 63,559 | 62,135 | 61,342 | (2,439) | (793)
1991 | 65,898 | 61,295 | 61,130 | 60,564 | (5,334) | (566)
1992 | 68,398 | 65,593 | 62,934 | 60,694 | (7,704) | (2,240)
1993 | 69,884 | 67,314 | 65,241 | 60,323 | (9,561) | (4,918)
1994 | 68,331 | 67,235 | 66,151 | 60,700 | (7,631) | (5,451)
1995 | 68,791 | 68,187 | 66,371 | 63,102 | (5,689) | (3,269)
1996 | 69,049 | 67,655 | 65,910 | 63,215 | (5,834) | (2,695)
1997 | 71,653 | 69,830 | 68,897 | 64,865 | (6,788) | (4,032)
1998 (est.) | 73,639 | 72,114 | 69,849 | 66,636 | (7,003) | (3,213)

Note: figures in parentheses are negative.
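For readers who want to check the table against the percentages quoted in the text, here is a minimal sketch (illustrative only; the numbers are simply copied from Table 9.2 and from the NSF quotation above) that recomputes the two difference columns for 1995 and the relative funder/performer gap:

```python
# Illustrative only: figures are the 1995 values from Table 9.2 ($ millions).
budget_authority = 68_791     # federal R&D budget authority
outlays = 66_371              # federal R&D outlays
performer_reported = 63_102   # federal R&D reported by performers

diff_vs_authority = performer_reported - budget_authority  # -5,689, shown as (5,689)
diff_vs_outlays = performer_reported - outlays             # -3,269, shown as (3,269)

# The "approximately 30 percent" gap cited in the text for federally funded
# industrial R&D: $30.5 billion reported by funders vs $21.7 billion by performers.
# (The NSF's 31 percent figure refers to government-wide estimates.)
funder, performer = 30.5, 21.7  # $ billions
gap_share = (funder - performer) / funder
print(diff_vs_authority, diff_vs_outlays, f"{gap_share:.0%}")  # -5689 -3269 29%
```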
The gap was so large that the US Senate Committee on Commerce, Science, and Transportation asked the General Accounting Office (GAO) to review the procedure because of concerns over whether members of Congress could truly rely on the NSF’s data.93 The most likely causes were identified as:
● A definite problem associated with a shift in the concept of R&D procurement over the past decade: the defense budget may include expenditures which are not considered as R&D in the Frascati manual.
● Financing of R&D is sometimes provided by an intermediary, making it difficult for the performer to know the original source of funds.
● Contracts for R&D often extend beyond one year.
The congressional document, however, went beyond the methodological problems and indicated that, for top R&D funding agency officials, NSF’s data-collection efforts were not a high priority, and they therefore devoted few resources to collecting the data. This fact called into question not only the NSF’s decades of efforts, but also the quality of its data.
93 GAO (2001), R&D Funding: Reported Gap Between Data From Federal Agencies and Their R&D Performers Results From Noncomparable Data, GAO-01–512R, Washington; M. E. Davey and R. E. Rowberg (2000), Challenges in Collecting and Reporting Federal R&D Data, Washington: Congressional Research Service. This is only one of the inconsistencies in the data. Another concerns the discrepancy between the NSF survey and data from the Securities and Exchange Commission.
Metadata: how footnotes make for doubtful numbers 175 To better harmonize national practices, a draft supplement to the Frascati manual specifically devoted to the measurement of government-funded R&D was completed in 1978.94 It dealt with the information available in budgetary documents, the coverage of the sector, the type of funds to include, and the classification of funds by socioeconomic objectives. The draft manual was never published as a separate publication. In fact, these data “play only a modest role in the general battery of S&T indicators and do not merit a separate manual” stated the OECD.95 Instead of a separate manual, the specifications were relegated in an abridged form to a chapter in the fourth edition of the Frascati manual.96 At the suggestion of the United States,97 the OECD recently decided to include a paragraph in the 2002 edition of the Frascati manual recognizing “the likelihood of differences in R&D expenditure totals between those estimated from the funders and those estimated from the performers of R&D.”98 In the 1990s, the OECD documented two more discrepancies in the data on government R&D. First, and following the Subsidies and Structural Adjustment project started in 1987,99 the Industry Committee reported how statistical analyses of government-funded R&D were biased by the failure to appropriately measure indirect governmental support of industrial R&D, such as tax incentives.100 The work led to a database on 1,000 public support programs (of which nearly 300 cover R&D and technological innovation), a manual on industrial subsidies published in 1995,101 and, with regard to S&T, a study on national technology programs.102 A few years later, during the OECD’s Technology, Productivity, and Job Creation horizontal project, the DSTI’s statistical division measured the impact of past methodological practices on the data, since the sums (of tax incentives) were not, as recommended by the Frascati manual (1993), imputed to
94 OECD (1978), Draft Guidelines for Reporting Government R&D Funding by Socioeconomic Objectives: Proposed Supplement to the Frascati Manual, DSTI/SPR/78.40. 95 OECD (1991), Classification by Socioeconomic Objectives, DSTI/STII (91) 19, p. 9. 96 For problems specifically related to the classification of government-funded R&D, see: Chapter 10. 97 J. E. Jankowski (2001), Relationship Between Data from R&D Funders and Performers, DSTI/EAS/STP/NESTI (2001) 14/PART7. 98 OECD (2001), Summary of the Main Conclusions of the Meeting on the Revision of the Frascati Manual held May 9–11, Annex to OECD (2001), Summary Record, DSTI/EAS/ STP/NESTI/M (2001) 1, p. 15. 99 OECD (1992), Subsidies and Structural Adjustment: Draft Report to the Council at Ministerial Level, DSTI/IND (92) 8. 100 OECD (1990), The Industry Committee on “Subsidies and Structural Adjustment” Project and Its Probable Implications for the Frascati Manual, DSTI/IP (90) 24; T. Hatzichronoglou (1991), The Measurement of Government R&D Funding in the Business Enterprise Sector, DSTI/STII (91) 30. 101 OECD (1995), Industrial Subsidies: A Reporting Manual, Paris; OECD (1997), OECD Sources of Data on Government Support for Industrial Technology: Coverage, Availability and Problems of Compilation and Comparison, DSTI/IND/STP/SUB/NESTI (97) 1; OECD (1997), Thematic Analysis of Public Support to Industrial R&D Efforts, DSTI/IND/SUB (97) 13/REV 1. 102 OECD (1993), The Impacts of National Technology Programs, DSTI/STP (93) 3. Officially published in 1995.
government R&D.103 The biases were particularly significant in the case of the United States, Canada, and Australia. To include these types of government R&D support, however, would entail extending the R&D survey to post-R&D activities and related scientific activities, and would “probably mean drafting a separate manual.”104 The second discrepancy concerned the increasing internationalization of R&D activities, which resulted in an incomplete picture of public R&D funding. European countries, for example, included as government R&D neither the estimated R&D content of their contribution to the European Community budget nor their receipts from abroad. Although the latter was of little statistical consequence in countries with large R&D efforts, its effects were much more strongly felt, in fact twice as much in Greece and Ireland, by small R&D-intensive countries.105 These two problems—government support to industrial R&D and internationalization—would lead to new methodological specifications. As was the case with other sectors then, problems remained despite normalization. To cite one more example, only central government appropriations are included in statistics from the United States, Switzerland, and Sweden because the amount of R&D supported by local authorities is thought to be negligible.
Metadata
The persistence of differences in national practices led the OECD to start a regular series titled Sources and Methods. Each sector (government, business, higher education) was the object of detailed technical notes, which were either included in the corresponding statistical publication or published separately in greater detail—and other topics have recently been added (see Table 9.3).106 Sources and Methods are technical documents containing methodological notes from member countries and the OECD: “Countries are requested to send a set of methodological notes with their responses to OECD R&D surveys describing the situation in the survey year, any changes since the previous survey and differences
103 OECD (1998), Measuring Government Support for Industrial Technology, DSTI/EAS/STP/NESTI (98) 11. 104 A. Young (1999), Some Lessons of the Study on Government Support for Industrial Technology for Future Editions of the Frascati and Oslo Manuals, DSTI/EAS/STP/NESTI (99) 4, p. 7. 105 OECD (1997), Treatment of European Commission Funds in R&D Surveys: Summary of National Practices, DSTI/EAS/STP/NESTI/RD (97) 3; OECD (1998), Measuring the Internationalization of Government Funding of R&D, DSTI/EAS/STP/NESTI (98) 3. 106 A Database is currently in progress to integrate Sources and Methods for each sector. See: OECD (2001), OECD R&D Sources and Methods Database, DSTI/EAS/STP/NESTI (2001) 13; OECD (2000), OECD Access Database on R&D Sources and Methods, DSTI/EAS/STP/NESTI (2000) 32.
Table 9.3 Sources and methods
Government R&D
  Socioeconomic objectives (1982, 1987, 1990)107
  International funding (1998)108
  Industrial support (1998)109
Business R&D (1992, 1997)110
Higher Education R&D (1997)111
R&D—General (2000)112
Technology balance of payments (2000)113
from the specifications in the Frascati manual.”114 The information so communicated to the OECD constitutes the bulk of Sources and Methods, and is called metadata. The OECD defines metadata as “the information associated with a statistical point or series which contains the information needed by a person who has no first hand experience of the survey underlying the data but who wished to interpret and analyze it and compare it with points or series from other sources, for example, from other countries or taken at other points in time.”115 The structure of Sources and Methods is quite simple: for each country, arranged alphabetically, the major points of departure from the OECD standards are presented and explained. These notes are far more detailed than the standard footnotes that appeared in published statistical repertories like Main Science and Technology Indicators (MSTI), where only the essential differences between countries are described (see Appendix 17). They deal with the coverage of the sector, the source of the data, the classification used, and the period and years available. A brief scan of the metadata, however, is enough to shake one’s confidence in the reliability of international R&D statistics. In fact, standard footnotes do not give the whole story. One has to look at Sources and Methods to really appreciate the
107 OECD (1982), Sources and Methods (Volume A): The Objectives of Government R&D Funding, DSTI/SPR/82.06; OECD (1987), Sources and Methods (Volume A): The Objectives of Government Budget Appropriations of Outlays for R&D, DSTI/IP/87.13. 108 OECD (1998), Measuring the Internationalization of Government Funding: Sources and Methods, DSTI/EAS/STP/NESTI/RD (98) 2. 109 OECD (1998), Measuring Government Funding of Industrial Technology: Sources and Methods, DSTI/EAS/STP/NESTI/RD (98) 4. 110 OECD (1992), Dépenses de R-D du secteur des entreprises dans les pays de l’OCDE: données au niveau détaillé des branches industrielles de 1973 à 1990, OECD/GD (92) 173; OECD (1998), R&D in Industry: Expenditures and Researchers, Scientists and Engineers—Sources and Methods, Paris. 111 OECD (1997), Measuring R&D in the Higher Education Sector: Methods Used in the OECD/EU member countries, DSTI/EAS/STP/NESTI (97) 2. 112 OECD (2000), R&D Statistics: Sources and Methods, Paris. 113 OECD (2000), Technology Balance of Payments: Sources and Methods, Paris. 114 OECD (1994), Metadata, Sources and Methods, DSTI/EAS/STP/NESTI (94) 12, p. 3. 115 Ibid., p. 2.
OECD statistics. To take but a few examples, the following standard footnotes are associated with the following national statistics: France Germany Japan Sweden United Kingdom United States
Provisional Estimate or projection Break in series Excludes GUF (General University Funds) Break in series Underestimate None (No data) Excludes capital expenditures Provisional
However, countries’ methodological notes indicate the following more serious limitations.116
France: the Centre National de la Recherche Scientifique (CNRS) is included in the higher education sector, whereas in other countries, such as Italy, this type of organization is classified as being in the government sector; as of 1997, the method used to evaluate R&D personnel has changed; as of 1992, data for enterprise and government sectors are not comparable to their 1991 counterparts.
Germany: current data cannot be compared with pre-1991 data because of a statistical break caused by reunification; furthermore, the recent inclusion in German statistics of graduate students and grant recipients as active researchers constitutes another statistical break; as of 1993, R&D expenditures in the government sector include R&D performed abroad.
Japan: there is an important overestimation of R&D personnel in Japan, by about 25 percent (and of GERD by about 15 percent), because the data are not expressed in full-time equivalents; the data on R&D by socioeconomic objectives are underestimated because military contracts are excluded.
United Kingdom: business R&D includes funds that accrue from other sectors; due to privatizations, several organizations have changed sectors causing changes in statistics broken down by sector; hospitals are included in the government sector.
United States: there was a break in its data series in 1991 because of a change in the method of allocating institutional funds to universities and the exclusion of capital expenditures in the university sector—these modifications reduced the US government’s apparent contribution by around 20–25 percent; the United States only includes the central government in its calculation of government R&D, and the social sciences and humanities are generally excluded from total R&D.
Taken one by one, these limitations had negligible effects on the overall statistical results. When considered together, however, they summed up to quite
116 Taken from: OECD (1999), Basic Science and Technology Statistics, Paris.
Metadata: how footnotes make for doubtful numbers 179 large variations. Overall, one must accept the use of numbers “allowing a margin of error of a few billions [dollars].”117 As the first OECD experimental study linking R&D data to economic data explained:118 “in the present situation it is not possible to assess the accumulated global impact of all these separate distortions (. . .). If taken individually, the conceptual methodological or actual distortions between the R&D and non-R&D data in most cases have probably no very serious effect, but it is impossible to venture even a rough approximation of the cumulative impact of all of them.” Since that time, however, some authors have produced such estimates for some S&T indicators.119 While the basic technical notes which are indispensable for interpreting the statistics were usually systematically included in the OECD statistical series, very few of them were included in the S&T policy reports—a questionable omission considering, as we have seen, that the OECD had on occasion decided not to publish whole analyses (like that on university R&D) or specific numbers (on basic research, for example) because of the poor quality of the data. Whether one examines the Science and Technology Indicators series of the 1980s or current editions of Science, Technology and Industry Outlook, one would be hard pressed to find interpretative details or even rudimentary qualifications. The construction of statistics remains a black box, and the consequences this has for policy remain equally obscure. Since there is no way of evaluating the optimal level of resources a country should invest in S&T, governments must compare their performance to that of other countries in planning their policies, which are liable to move along the wrong track when important caveats are overlooked. Today, the sheer quantity of notes has grown so large, that national experts (NESTI) are beginning to downplay their importance and to suggest retaining “only those that are essential for the understanding of the data (mainly notes showing a break in series or a methodological change, an overestimation or an underestimation of the data).”120 A curious proposition, since it was the same group of experts who had first requested the notes over twenty years before.121
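To make the idea of metadata more concrete, here is a minimal sketch (purely illustrative; the country codes, sectors, and flags are hypothetical examples, not OECD records) of how such methodological notes might be represented and screened for the notes the experts proposed to retain (breaks in series, methodological changes, over- or underestimation):

```python
# Purely illustrative: the records below are hypothetical, not actual OECD metadata.
# Each note describes one departure from the Frascati manual for a country/sector.
notes = [
    {"country": "A", "sector": "higher education", "flag": "break in series", "year": 1991},
    {"country": "A", "sector": "business", "flag": "provisional", "year": 1998},
    {"country": "B", "sector": "government", "flag": "underestimation", "year": 1997},
    {"country": "C", "sector": "business", "flag": "excludes capital expenditure", "year": 1998},
]

# The NESTI suggestion quoted above: keep only notes "essential for the
# understanding of the data" (breaks in series, methodological changes,
# over- or underestimations). Other limitations would simply disappear.
ESSENTIAL = {"break in series", "methodological change", "overestimation", "underestimation"}
essential_notes = [n for n in notes if n["flag"] in ESSENTIAL]
print(essential_notes)  # keeps only the break-in-series and underestimation notes
```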
Conclusion Constructing statistics is no easy task, and using them is an art. Before the 1960s, there were wide differences in national statistics depending on the agency conducting the survey: definitions, coverage, demarcations, samples, and estimates 117 D. S. Greenberg (2001), Science, Money, and Politics: Political Triumph and Ethical Erosion, Chicago: University of Chicago Press, p. 79. 118 OECD (1976), Comparing R&D Data with Economic and Manpower Series, DSTI/SPR/76.45, pp. 6–7. 119 J. Anderson et al. (1988), On-Line Approaches to Measuring National Scientific Output: A Cautionary Tale, Science and Public Policy, June, pp. 153–161. 120 OECD (1998), How to Improve the Main Science and Technology Indicators: First Suggestions From Users, DSTI/EAS/STP/NESTI/RD (98) 9. 121 The absence of methodological notes and analysis of limitations was one of the criticisms of NSF’s Science Indicators in the 1970s. See: Chapter 6.
180 Metadata: how footnotes make for doubtful numbers were all different. As soon as one became interested in comparing countries, new problems of comparability began to appear: national measures were of unequal quality, each country had its own peculiarities (like the organization of the university sector), and the national practices of statistical agencies varied (year of survey, for example). As if these problems were not enough, a strange paradox was soon to appear: statistics from national R&D surveys did not correspond to those produced by innovation surveys, as seen in the previous chapter. Over the last forty years, the OECD has advanced several solutions for harmonizing R&D statistics. It first developed international standards in 1963, set up a system of footnotes in the 1980s, then calculated estimates for missing national data and implemented continual clarifications in its international norms (the Frascati manual is now in its sixth edition). These moves were accompanied by, or resulted in, two consequences. First, the construction of statistics eliminated national peculiarities. Normalization suppressed national differences in order to better compare countries. Second, an antirealist rhetoric was offered that minimized the limitations. While R&D statistics were increasingly used to support science policy analyses,122 to study the impact of government funding of R&D,123 and to understand productivity issues,124 the data’s limitations were minimized or rarely discussed in published analytical documents. As cited previously, C. Silver, chairman of the first OECD users group on R&D statistics, wrote, in the introductory remarks to the group’s report: I started my task as a skeptic and completed it converted—converted that is, to the view that policy makers use and even depend on R&D statistics and particularly on those giving comparisons of national efforts in particular fields. What I beg leave to question now is whether perhaps too much reliance is placed on these all-too-fallible statistics. (OECD (1973), Report of the Ad Hoc Review Group on R&D Statistics, STP (73) 14, p. 6) He was right. R&D statistics, like many social or economic statistics, are fallible numbers. Contrary to what one would expect, however, it is precisely the exercise of improving the statistics themselves that makes their reliability at times questionable: surveys (and their scope) are constantly improved because of changes in the evolution of concepts, statistical frameworks, and systems of national accounting; because of institutional restructuring, scientific and technological advances, and the emergence of new policy priorities; and above all,
122 See the 1980s series Science and Technology Indicators and the 1990s series Science, Technology and Industry Outlook. 123 OECD (2000), The Impact of Public R&D Expenditures on Business R&D, in Science, Technology and Industry Outlook, pp. 185–200. 124 OECD (2000), R&D and Productivity Growth: A Panel Data Analysis of 16 OECD Countries, DSTI/EAS/STP/NESTI (2000) 40.
Metadata: how footnotes make for doubtful numbers 181 because of the knowledge gained from experiences and revision work.125 It is the consideration of these “improvements” that frequently renders statistics poorly comparable over time. Comparisons in space, that is, between countries, also have their limitations. Again, contrary to what one would expect, the adoption of international standards did not eliminate differences.126 Although the Frascati manual “has probably been one of the most influential documents issued by this Directorate,”127 and although the STIU was “the only comprehensive source of reliable and internationally comparable S&T statistics,”128 international comparisons remain extremely difficult to do, if not altogether impossible, given the differences in national idiosyncrasies and practices. Yet avoiding international (or temporal) comparisons is virtually unthinkable, since statistics are meaningless without them.129
125 OECD (1994), Report on the Conference on Science and Technology Indicators in Central and Eastern European Countries, CCET/DSTI/EAS (94) 12, p. 8. 126 The first edition of the Frascati manual suggested that national “variations may be gradually reduced” with standardization, p. 6. 127 OECD (1979), Notes by the Secretariat on the Revision of the Frascati Manual, DSTI/SPR/79.37, p. iii. 128 OECD (1985), Report of the Third Ad Hoc Review, op. cit., p. 7. 129 S. Patriarca (1996), Numbers and Nationhood: Writing Statistics in 19th Century Italy, Cambridge: Cambridge University Press, p. 165.
10 Tradition and innovation
The historical contingency of science and technology statistical classifications
In the last few years, the US National Research Council (NRC) has repeatedly criticized the National Science Foundation (NSF) for not measuring appropriate dimensions of S&T.1 The research enterprise has changed considerably since the 1950s when the first R&D surveys were conceived, but the scope of the surveys and the concepts measured, according to the NRC, have not really kept pace with this change: “Current science and technology indicators and data fall woefully short of illuminating a major part of the story of American industry in the 1990s (. . .).”2 This was only one of the many recent criticisms that have been voiced by OECD member countries. In fact, not so long ago, users of R&D statistics frequently complained that the data collected and published by the OECD were too aggregated and insufficiently detailed. In the last ten years, Australia, as I document shortly, has been one of the main countries to argue for a better breakdown of R&D data by field of science and by socioeconomic objective. The limitations discussed in this chapter have to do with the classifications used for measuring R&D and the way countries have applied them. Classifications are ways of imposing order onto the world, natural or social.3 They separate the world into different entities that are subsequently aggregated to give indicators. In so doing, they simplify reality into specific dimensions, often without consideration for the heterogeneity of the world. Of interest for us, the study of classifications reveals as much about the classifier as about the classified. Classifications
1 National Research Council (2000), Measuring the Science and Engineering Enterprise: Priorities for the Division of Science Resources Studies, Washington; NRC (1999), Securing America’s Industrial Strength, Washington; NRC (1997), Industrial Research and Innovation Indicators, Washington; NRC (2002), Using Human Resource Data to Track Innovation, Washington. 2 NRC (1997), op. cit., p. 43. 3 M. Foucault (1966), Les mots et les choses, Paris: Gallimard; C. Perelman (1970), Réflexions philosophiques sur la classification, in Le champ de l’argumentation, Brussels: Presses universitaires de Bruxelles, pp. 353–358; J. Goody (1977), The Domestication of the Savage Mind, Cambridge: Cambridge University Press; N. Goodman (1978), Ways of Worldmaking, Indianapolis: Hackett Publishing; M. Douglas and D. Hull (1992), How Classification Works: Nelson Goodman Among the Social Sciences, Edinburgh: Edinburgh University Press.
Tradition and innovation 183 carry people’s representations of the (measured) world and disclose their understanding of the very world they are intended to help measure. A peculiar characteristic of the R&D classifications is that they were borrowed from statistical series that were not specifically designed for measuring R&D. Official documents generally attributed this state of affairs to the need for comparability with other statistics, namely national accounts, government expenditures, economic indicators, and education statistics. The third NSF industrial R&D survey, for example, was moved from the Department of Labor to the Bureau of Census in 1957 to improve comparability with industrial statistics.4 Similarly, the first edition of the OECD Frascati manual stated that the classification of R&D data by economic sector “corresponds in most respects to the definitions and classifications employed in other statistics of national income and expenditure, thus facilitating comparison with existing statistical series, such as gross national product, net output, investment in fixed assets, and so forth.”5 This chapter proposes a completely different hypothesis, namely that current R&D classifications, at least those proposed in the OECD Frascati manual, were borrowed from other classifications simply by virtue of their availability. With regard to the NSF’s Science Indicators series, the General Accounting Office (GAO) of the United States referred to this procedure as “operationalism”: the tendency to use existing measurements rather than developing new ones supported by explicit models of S&T.6 The OECD and its member countries were consequently presented with an important problem: how do you link R&D data if, although comparable to other statistics, they remained poorly comparable among themselves? What should be done, for example, if the objectives of government funding of R&D cannot be linked to university performed R&D because the activities of the two sectors—government and university—are classified differently? This chapter complements the previous one, which dealt with the limitations of R&D statistics. The chapter is divided into two parts. The first part shows how current classifications break down R&D data into the main economic sectors of national accounts: government, business, and university. It shows that these classifications came mainly from the United Nations, with slight modifications. Drawing upon the last two revisions of the Frascati manual, which coincided with a decade of increasing policy demands for more detailed R&D statistics, the second part discusses the characteristics and problems of current classifications.
The system of national accounts
The first edition of the Frascati manual suggested classifying R&D by dimension. One of the central dimensions proposed was the basic versus applied character of
4 NSF (1960), Funds for R&D in Industry: 1957, Washington: NSF 60-49, pp. 97–100. 5 OECD (1963), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of R&D, Paris, p. 21. 6 See: Chapter 6.
research activities. Chapter 14 will deal at length with this classification. Another important dimension was concerned with economic sectors. In line with the System of National Accounts (SNA), and following the practice of the NSF7—the first organization to survey all economic sectors systematically—the manual recommended classifying R&D according to the following main economic sectors: business, government, and private non-profit.8 To these three sectors, however, the OECD, following the NSF again, added a fourth one: higher education. The following rationale was offered for the innovation: The definitions of the first three sectors are basically the same as in national accounts, but higher education is included as a separate main sector here because of the concentration of a large part of fundamental research activity in the universities and the crucial importance of these institutions in the formulation of an adequate national policy for R&D. (OECD (1963), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Development, op. cit., p. 22) The SNA, now in its fourth edition, was developed in the early 1950s and conventionalized at the world level by the United Nations.9 At the time, R&D was not recognized as a category of expenditures that deserved a specific mention in the national accounts.10 The same holds true today: during the last revision of the SNA ten years ago, the United Nations rejected the inclusion of R&D “because it was felt that it opened the door to the whole area of intangible investment.”11 The United Nations decided instead to develop a functional classification of expenditures that would make such items as R&D visible in the system of national accounts by way of what was called “satellite accounts.”12 Despite its alignment with the system of national accounts, the Frascati manual nevertheless used a different system of classification in a number of cases,13 including, for example, the coverage of each sector (government, university, and
7 K. Arnow (1959), National Accounts on R&D: The National Science Foundation Experience, in NSF, Methodological Aspects of Statistics on R&D: Costs and Manpower, Washington: NSF, pp. 57–61; H. E. Stirner (1959), A National Accounting System for Measuring the Intersectoral Flows of R&D Funds in the United States, in NSF, Methodological Aspects of Statistics on R&D: Costs and Manpower, Washington: NSF, pp. 31–38. 8 Households, that is, the sector of that name in the SNA, was not considered by the manual. 9 OECD (1952), Standardized System of National Accounts, Paris; United Nations (1953), A System of National Accounts and Supporting Tables, Department of Economic Affairs, Statistical Office, New York. 10 Only institutions primarily engaged in research were singled out as a separate category. 11 J. F. Minder (1991), R&D in National Accounts, DSTI/STII (91) 11, p. 3. 12 See annex 11 of the 1993 edition of the Frascati manual. 13 S. Peleg (2000), Better Alignment of R&D Expenditures as in Frascati Manual with Existing Accounting Standards, OECD/EAS/STP/NESTI (2000) 20; OECD (2001), Better Alignment of the Frascati Manual with the System of National Accounts, DSTI/EAS/STP/NESTI (2001) 14/PART8.
business). Still, the manual’s specifications allowed one to follow the flows of funds between sectors (by way of a matrix), specifically between funder and performer of R&D, and to construct the main R&D indicator: the GERD (Gross Expenditures on R&D), defined as the sum of R&D expenditures for the four previously-identified sectors. The OECD system of R&D classification was peculiar in that each sector had its own classification. Whereas in most official surveys the units are analyzed according to a common system of classification (every individual in a population, for example, is classified according to the same age structure), here three different kinds of units were surveyed and classified separately. The business sector was classified according to industry group, the university (and private non-profit) sector according to field of science (FOS) or scientific discipline, and the government sector according to function or socioeconomic objective (SEO) of R&D funding. The principal recommendations regarding these classifications were made in the first edition of the Frascati manual.
Business classification
The 1963 edition of the Frascati manual proposed classifying business R&D according to the International Standard Industrial Classification (ISIC), which had been in existence for more than a decade (since 1948). Four main divisions were originally suggested:14
1 agriculture, forestry, hunting, and fishing
2 mining and quarrying
3 manufacturing industry
4 construction, utilities, commerce, transport, and services.
The manual further specified that it might sometimes be useful to subdivide the last two categories into sub-categories, such as the separation of aircraft from transportation equipment because each was in itself particularly research-intensive and of special interest from the standpoint of R&D. Today, the recommended classification, in line with the third revision of ISIC (1993), lists twelve main divisions, the more detailed being for the manufacturing industries (see Appendix 19). Grouping by industry was only one of the ways business R&D could be classified. Following the NSF, the OECD also suggested a classification by product field as early as 1970. If applied, it would allow for more detailed information and permit the R&D of large multi-product enterprises to be properly classified. In fact, with classification by industry, each enterprise was classified according to its principal activity. This caused significant differences between allocation of R&D expenditures by industry and product field.15 It resulted in overestimations for certain industries, like 14 OECD (1963), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Development, op. cit., p. 22. 15 J. Morgan (2001), Assess Ways to Improving Product Fields Data, DSTI/EAS/STP/NESTI (2001) 14/PART10.
manufacturing enterprises, or in underestimations for others, like service enterprises. According to a recent evaluation, the range of discrepancy between the two classifications varied from 5.1 percent (Sweden) to 8.3 percent (Australia).16 In general, countries considered the product field classification more appropriate than the industrial one, and several collected such data,17 but only 55 percent used it.18 The main argument for not collecting the information was that adding such questions in surveys would impose a burden on businesses. An important question regarding the classification of the business sector was “whether the R&D should be classified with the content of the R&D activity itself or with the end use. For example, should R&D on an electrical motor for agricultural machinery be classified under electrical or agricultural machinery?”19 In general, the philosophy of R&D statistics had been to classify data according to the purpose of the research activities. This was the case for the character of research (basic or applied), and was also the case, as we shall see below, for classifying government R&D. In the case of business R&D, however, the OECD recommended, although without insisting, content over purpose. Thus, “R&D for rubber tires for aircraft should be classified under rubber tires, and the subsequent flow of embodied R&D to the aircraft industry be captured by I/O [input/output] techniques.”20
University classification
The Frascati manual’s first edition (1963) proposed classifying university (and private non-profit) R&D by field of science or scientific discipline. Six broad fields (including approximately thirty sub-fields) were and still are recommended, as follows (see Appendix 20 for details):21
1 natural sciences
2 engineering
3 medical sciences
4 agriculture
5 social sciences
6 humanities and fine arts.
The classification came directly from previous OECD work on scientific and technical personnel.22 In the 1950s, the OECD conducted three international
16 Ibid., p. 6. 17 OECD (1994), Report on the Mini-Survey on the Availability of Product Field Data, DSTI/EAS/STP/NESTI (94) 11. 18 OECD (2000), Review of the Frascati Manual: Use of Product Field Classification, DSTI/EAS/ STP/NESTI/RD (2000) 5. 19 J. F. Minder (1991), Treatment of Industrial R&D Data, DSTI/STII (91) 17, p. 7. 20 Ibid. 21 Social sciences and humanities are covered only as of 1973. 22 OEEC (1968), A Study of Resources Devoted to R&D in OECD member countries in 1963/64: Statistical Tables and Notes, Paris, p. 25.
Tradition and innovation 187 surveys intended to assess the supply of and demand for scientific and technical personnel.23 The classification used was itself based on a UNESCO recommendation dating from 1958.24 The UNESCO classification was never intended to measure R&D activity, however, but rather the scientific and technical qualifications of personnel. In fact, the Frascati manual admitted: It may be desirable at a later stage to work out a new classification specially suited to the requirements of R&D statistics and taking into account the growing importance of inter-disciplinary fields of research. The UNESCO classification is mainly suited to the needs of manpower measurement. (OECD (1963), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Development, Paris, op. cit., p. 25) Nevertheless, the Frascati manual conventionalized the classification by field of science for university sector R&D for lack of a better alternative.25 It would thus never really classify research projects and their content, but rather the department or qualification of the researcher under whom the research was conducted. As UNESCO noted in 1966: “the classification recommended for higher education is not completely satisfactory when used for categories of personnel [or research expenditures] actually engaged in scientific and technical activities. The discipline of study or faculty granting the degree does not always adequately describe the fields of specialization in the actual employment situation outside the academic field.”26 As a result, less than half of OECD member countries report data by field of science to the OECD today.27 As is evident in the following quotation from 1991,28 delegates at the Frascati manual revision meeting in 2000 again expressed dissatisfaction over the absence of details in most classification systems: “Most of the OECD member countries give information/data with regard to fields of science and technology at a high level of aggregation,” (. . .).
23 OECD (1955), Shortages and Surpluses of Highly Qualified Scientists and Engineers in Western Europe, Paris; OECD (1957), The Problem of Scientific and Technical Manpower in Western Europe, Canada and the United States, Paris; OECD (1963), Resources of Scientific and Technical Personnel in the OECD Area, Paris. 24 The recommendation was concerning the International Standardization of Educational Statistics, now known as ISCED. 25 Better classifications are in fact available in Australia, the Netherlands, and the United States, as well as in J. Irvine, B. Martin, and P. Isard, Investing in the Future: An International Comparison of Government Funding of Academic and Related Research, Worchester: Billing and Sons Ltd. 26 UNESCO (1966), Statistical Data on Science and Technology for Publication in the UNESCO Statistical Yearbook, UNESCO/CS/0666.SS-80/4, pp. 3–4. 27 K. W. Maus (2000), R&D Classification, DSTI/EAS/STP/NESTI/RD (2000) 14. 28 E. L. Rost (1991), Fields of Science and Technology, DSTI/STII (91) 8, p. 6; OECD (1991), Exemples de classification par domains scientifiques et techniques, DSTI/STII (91) 9.
“Only a few classification systems include research fields like biotechnology, materials research and information technology.” (OECD (2000), Review of the Frascati Manual: Use of Field of Science Classification DSTI/EAS/STP/NESTI/RD (2000) 7; OECD (2000), Outcomes of the Frascati Manual Revision Meeting Held on 13–14 March 2000, DSTI/EAS/STP/NESTI (2000) 30, p. 3) All in all, as the Australian delegate stated, classification by field of science was “based on conceptions of discipline boundaries that were more relevant to the state of science in the first half of the 20th Century.”29 Classifications by field of science, like most classifications, are retrospective, and therefore “lag behind the progress of science.”30 Their usefulness was therefore continually called into question: A large part of the 1960s push to obtain detailed R&D [classifications] stemmed from the belief that it would come to fill a major role in priority setting at the national level (. . .). Instead, the data proved to be highly useful for a wide variety of strategic and analytical purposes at a slightly lower level. (D. Byars and K. Bryant (2001), Review of FOS Classification, DSTI/EAS/STP/NESTI (2001) 14/PART11, p. 3)
Government classification
Of the three main economic sectors, it was in the government sector that the OECD departed most from available standards. Included in the system of national accounts was an international Classification of the Functions of Government (COFOG) that was rejected by the OECD group of experts (NESTI) for reasons explained below. This was at odds with the principle of comparability used for choosing the two previous classifications, and with a draft recommendation made in 1978.31 Instead, the OECD opted for the European Nomenclature for the Analysis and Comparison of Science Programmes and Budgets (NABS). The OECD began collecting data on socioeconomic objectives of government-funded R&D in the early 1970s, and introduced corresponding standards in the third edition of the Frascati manual (1975).32 The method was supplied by the European Commission. A task group of European statisticians was set up as early as 1968 by the Working Group on Scientific and Technical Research Policy (CREST) in order to study central government funding of R&D. The purpose
29 K. Bryant (2000), An Outline Proposal to Extend the Detail of the SEO and FOS Classifications in the Frascati Manual, DSTI/EAS/STP/NESTI (2000) 27, p. 3. 30 UNESCO (1969), List of Scientific Fields and Disciplines Which Will Facilitate the Identification of Scientific and Technical Activities, UNESCO/COM/CONF.22/10, p. 4. 31 OECD (1978), Draft Guidelines for Reporting Government R&D Funding by Socio-Economic Objectives: Proposed Supplement to the Frascati Manual, DSTI/SPR/78.40, p. 9. 32 The first two editions of the Frascati manual included preliminary and experimental classifications only.
was to “indicate the main political goals of government when committing funds to R&D.”33 The implicit goal was to contribute to the “construction” of a European science policy and budget. To this end, the commission adopted the NABS produced by the group of experts in 1969,34 and published a statistical analysis based on the classification.35 In line with the spirit of the Brooks report, which had argued for changes in the objectives of government-funded R&D,36 the OECD Directorate for Science, Technology, and Industry (DSTI) adopted the European Commission’s approach for obtaining appropriate statistics.37 The recommended classification had the following eleven categories or motives for government-funded R&D (see Appendix 21 for details),38 which allowed, for example, the development of specific statistics on R&D on energy in the 1970s:
1 agriculture
2 industrial development
3 energy
4 infrastructures
5 environment
6 health
7 social development and services
8 earth and atmosphere
9 advancement of knowledge
10 space
11 defense.
The classification of government-funded R&D was determined by the following fundamental fact: few governments actually conducted surveys of government R&D. Most preferred to work with budget documents, because, although less detailed and accurate than survey data, the information was easier and cheaper to obtain.39 Among the methodology’s advantages was speed, since the data were
33 Eurostat (1991), Background Information on the Revision of the NABS, Room document to the Expert Conference to prepare the revision of the Frascati manual for R&D statistics, OECD. 34 The first NABS was issued in 1969, revised in 1975 (and included in the 1980 edition of the Frascati manual) and again in 1983 (to include biotechnology and information technology, not as categories, but broken down across the whole range of objectives). In 1993, improvements were made in the environment, energy, and industrial production categories. 35 EEC (1970), Research and Development: Public Financing of R&D in the Community Countries, 1967–1970, BUR 4532, Brussels. 36 OECD (1971), Science, Growth, and Society: A New Perspective, Paris. 37 The first OECD (experimental) analysis of data by socioeconomic objectives was published in 1975; OECD (1975), Changing Priorities for Government R&D: An Experimental Study of Trends in the Objectives of Government R&D Funding in 12 OECD member countries, 1962–1972, Paris. 38 Since 1993, the expression “government-financed R&D” has been replaced by GBAORD (Government Budget Appropriations and Outlays for R&D). 39 Eurostat (2000), Recommendations for Concepts and Methods of the Collection of Data on Government R&D Appropriations, DSTI/EAS/STP/NESTI (97) 10, p. 3.
190 Tradition and innovation extracted directly from budget documents without having to wait for a survey. But it also had several limitations, among them the fact that national data relied on different methodologies and concepts, and on different administrative systems.40 With regard to the classification, it reflected intention to spend, rather than real expenditures. Moreover, data were difficult to extract from budgets because they lacked the required level of detail: “the more detailed the questions are, the less accurate the data become” because it was not always possible to define the specific NABS sub-level in the budget—budget items can be quite broad.41 At first, OECD statisticians were also confronted with a wide range of budgetary and national classification systems in member countries, systems over which they had relatively little control: The unit classified varied considerably between countries (. . .) because national budget classification and procedures differ considerably. In some countries, such as Germany, the budget data are available in fine detail and can be attributed accurately between objectives. In others, such as the United Kingdom and Canada, the budgetary data are obtained from a survey of government funding agencies which is already based on an international classification. However, in others again such as France, the original series are mainly votes by ministry or agency. (OECD (1990), Improving OECD Data on Environment-Related R&D, DSTI/IP (90) 25, p. 9) To better harmonize national practices, a draft supplement to the Frascati manual, specifically devoted to the measurement of socioeconomic objectives of government R&D, was completed in 1978,42 but was never released as a separate publication. As I cited earlier, these data “play only a modest role in the general battery of S&T indicators and do not merit a separate manual” stated the OECD.43 Instead of being incorporated into a manual, the specifications were abridged and relegated to a chapter in the fourth edition of the Frascati manual, and a Sources and Methods document was produced in 1982.44 In the meantime, criticisms persisted. Seventy-five percent of countries actually used the classification, but only 30 percent were satisfied with it:45 “It is not a classification in that it does not have a truly logical structure and it contains 40 Eurostat (2000), The Frascati Manual and Identification of Some Problems in the Measurement of GBAORD, DSTI/EAS/STP/NESTI (2000) 31. 41 OECD (2000), The Adequacy of GBAORD Data, DSTI/EAS/STP/NESTI (2000) 18, p. 3. 42 OECD (1978), Draft Guidelines for Reporting Government R&D Funding by Socio-Economic Objectives: Proposed Supplement to the Frascati Manual, DSTI/SPR/78.40. 43 OECD (1991), Classification by Socio-Economic Objectives, DSTI/STII (91) 19, p. 9. 44 In 1991, Australia again proposed that there should be a supplement to the manual dealing with detailed classification by socioeconomic objectives and fields of science. See: OECD (1992), NESTI Meeting, DSTI/STII/STP/NESTI/M (92) 1. 45 OECD (2000), Review of the Frascati Manual: Use of SEO Classification DSTI/EAS/STP/ NESTI/RD (2000) 8.
overlapping categories and gaps in coverage," claimed the Australians.46 In fact, the OECD list was a compromise between the NABS, Scandinavian countries' classification (grouped under Nordforsk), and national lists, notably the NSF's. The NABS and the COFOG (SNA) had similar categories,47 but different methodologies for collecting data. In fact, the OECD chose to classify and measure the purposes (or intentions) of government R&D expenditures rather than the content, whether by area of relevance or end result.48 The COFOG differed from the NABS in three respects:49 (1) the COFOG classified expenditures, while the NABS measured appropriations; (2) until recently, the COFOG did not distinguish different types of research when they were an integral part of a government program; and (3) nor did the COFOG explicitly separate basic from applied research, putting the former under general public services and the latter under the function concerned.50 These were the three main reasons why the OECD member countries decided to use the NABS instead of the COFOG for classifying government-funded R&D. The NABS was really no better than the COFOG, however, when it came to the classification of general university funds (GUF): governments do not orient this money when they fix their budgets, and consequently these funds could never be broken down by socioeconomic objectives.
Classification problems Classifications are rarely static. They evolve over time as new things, issues or policy priorities emerge: user demands for R&D data is always changing, and the environment is constantly evolving. All three R&D classifications have in fact been regularly adjusted over the last forty years, as reflected, for example, in each revised edition of the Frascati manual. Such changes often caused important breaks in the statistical series, however. This was probably one of the most difficult problems to resolve, and much time was devoted to this task. In fact, when one uses classifications that are not specifically developed for the purpose of measuring R&D, the challenges multiply rapidly. Three major problems have limited the usefulness of the classifications for R&D purposes. The first concerned the degree of details for policy analysis. With regard to the business sector, for example, two industries have recently received considerable attention in policies: the service industry and the information and communication sector. However, until recently, statistical coverage of the service 46 K. Bryant and D. Byars (2001), Review of SEO Classification, DSTI/EAS/STP/NESTI (2001) 14/PART12. 47 While the main categories of COFOG did not correspond perfectly to the NABS and the OECD classification, subcategories allowed one to develop links with COFOG. For comparisons between classifications, see: OECD (1978), Draft Guidelines for Reporting Government R&D Funding by Socio-Economic Objectives: Proposed Supplement to the Frascati Manual, DSTI/SPR/78.40, pp. 40–71. 48 OECD (1972), The Problems of Comparing National Priorities for Government Funded R&D, DAS/SPR/72.59. This document became Chapter 2 of OECD (1975), Changing Priorities for Government R&D, Paris. 49 OECD (1996), Revision of COFOG, DSTI/EAS/STP/NESTI (96) 16. 50 OECD (1991), Classification by Socio-Economic Objectives, DSTI/STII (91) 19, p. 5.
192 Tradition and innovation industry was still very poor because of the manufacturing bias of most economic classifications: “the service industries have been thought of mainly as importing technology (. . .). This was unrealistic, in that the services include a number of industries for which the main economic activity is technology supply (commercial R&D firms, software houses, etc.) and in that, even for industries where change is initially brought about by embodied technology, this subsequently leads firms to carry out independent R&D.”51 In fact, in seven OECD countries, one-quarter or more of all R&D expenditures in the business sector are in the service industries. But few countries actually included all of the service industries in their national surveys. Only communication and computers were covered, because they were presumably the most R&D-intensive sectors in services as a whole. Consequently, “the high discrepancy in the share of service R&D [in data surveys] results [not only from the presumed lesser importance of the service sector but] also from different methods of surveying services’ R&D data.”52 Reliable information was also missing for the “information and communication sector,” a “strategic” sector that every government targeted for support. After many years of complaints, a new classification has now been proposed by two OECD working parties.53 It classified enterprises according to their principal activity, and included seven classes for manufacturing and four classes for services. There is still work to be done, however, before the sector is properly defined through a classification based on product rather than industry. Besides the lack of detail in the classifications, a second problem with the classifications was related to the treatment of horizontal projects or products. Some authors have argued that most classifications are generally ill suited for intermediate or transitional states.54 Similarly, generic technologies, multidisciplinary projects, and problem-oriented research have always been poorly classified as R&D categories, if at all. This was the case for the environmental sciences, which conduct research in a number of areas that are not confined to the standard category of classification by socioeconomic objective or to the environment category of ISIC.55 This was also the case for biotechnology, for which a definition and a list of included technologies for statistical purposes have only recently been proposed.56 And finally, this was also the 51 A. Young (1996), Measuring R&D in the Services, OECD/GD (96) 132, p. 7. 52 C. Grenzmann (2000), Measurement of R&D in Services, DSTI/EAS/STP/NESTI (2000) 11, p. 4; OECD (1996), Replies to the Mini-Survey on R&D in the Services, DSTI/EAS/STP/NESTI (96) 6; OECD (1997), Progress Report on Services R&D, DSTI/EAS/STP/NESTI (97) 11; A. Young (1996), Measuring R&D in the Services, op. cit., p. 19. 53 OECD (1998), Measuring the ICT Sector, Paris. 54 M. Douglas (1966), Purity and Danger, London: Routledge (2001), p. 97; see also: H. Ritvo (1997), The Platypus and the Mermaid and Other Figments of the Classifying Imagination, Cambridge: Harvard University Press. 55 OECD (1998), Trends in Environmental R&D Expenditures, DSTI/STP/TIP (98) 10. 
56 OECD (2000), Reviewing and Refining the Definitions of Biotechnology, DSTI/EAS/STP/NESTI (2000) 7; OECD (2000), Ad Hoc Meeting on Biotechnology Statistics: Discussions and Recommendations, DSTI/EAS/STP/NESTI (2000) 9; OECD (2001), Discussion and Recommendations for Future Work, DSTI/EAS/STP/NESTI (2001) 7; OECD (2001), Biotechnology Statistics in OECD member countries: Compendium of Existing National Statistics, DSTI/EAS/STP/NESTI (2001) 2.
Tradition and innovation 193 case in the health sector, which cut across numerous classifications and categories, while actual measures were confined to the health category of socioeconomic objectives in classification by government R&D, to the medical sciences in the classification by field of science, or to the pharmaceutical industry in the ISIC classification.57 A third problem with R&D classifications was the difficulty of establishing links between data.58 Since each of the three main sectors had its own system of classification, level of aggregation and collection method, it was always difficult, for example, to relate socioeconomic objectives of government funding to industrial or university R&D activities (see Table 10.1). As a result, some countries preferred to apply the (university) classification by field of science to the government sector59 instead of applying a classification by socioeconomic objective “in order to be able to make comparisons with R&D in the higher education sector.”60 In fact, although there was always a desire for consistency between classifications,61 no consensus was ever developed for classifying government R&D according to either one of the two classifications (field of science or socioeconomic objective), and both are currently recommended by the Frascati manual. Similarly, problems of comparability affected the business sector. When, at the beginning of the 1990s, the DSTI launched a project to create the STAN database that would link R&D data to economic indicators, it had to face the fact that every statistical series—economic as well as R&D—had its own classification: “As most of these databases were created in different environments and at different times, the categories were picked as a function of the needs which each specific database was to serve without particular concern about comparable coverage of categories among different databases.”62 Linking data is the only way to answer policy demands for measurements of the performance and impacts (or outcomes) of S&T. Since at least the mid-1980s, 57 A. Young (2000), Proposals for Improving the Availability and International Comparability of the HealthRelated R&D Data Overseen by NESTI, DSTI/EAS/STP/NESTI (2000) 28; A. Young (2000), Assessment of International Practices for the Compilation of Data on R&D Related to Health and Preparation of Guidelines for Improved Data Collection, DSTI/EAS/STP/NESTI (2000) 29 and Annex; A. Young (2001), Develop Methodologies for Better Derivation of R&D Data on Hospitals and Clinical Trials, DSTI/EAS/STP/NESTI (2001) 14/PART4. 58 Linkage of various existing data had recently been identified by the OECD as a way to bring forth new indicators, partly because of budget constraints—linking existing data would be far less expensive than developing totally new indicators. See: OECD (1996), Conference on New S&T Indicators for a Knowledge-Based Economy: Summary Record of the Conference Held on 19–21 June 1996, DSTI/STP/NESTI/GSS/TIP (96) 5; OECD (1996), New Indicators for the Knowledge-Based Economy: Proposals for Future Work, DSTI/STP/NESTI/GSS/TIP (96) 6. 59 Or to the business sector in the case of UNESCO surveys. See: UNESCO (1969), Report of the Session Held in Geneva, 2–6 June 1969, UNESCI/COM/CONF.22/7, p. 9. 60 OECD (1992), NESTI Meeting, DSTI/STII/STP/NESTI/M (92) 1. 61 OECD (2000), Review of the Frascati Manual: Use of SEO Classification DSTI/EAS/STP/NESTI/RD (2000) 8; K. W. Maus (2000), R&D Classification, DSTI/EAS/STP/NESTI/RD (2000) 14; K. W. 
Maus (1991), Sector Sub-Classification in the Government Sector, DSTI/STII (91) 21. 62 OECD (1991), Choosing Economic Activity Categories From Revised Classifications for STIID Databases, Room document, expert conference to prepare the revision of the Frascati manual for R&D statistics, p. 19.
Table 10.1 The 1993 Frascati manual R&D classifications—first level

Business (Industries)      Government (SEO)                     University (FOS)
Agriculture                Agriculture                          Natural sciences
Mining                     Industrial development               Engineering
Manufacturing              Energy                               Medical sciences
Electricity                Infrastructures                      Agriculture
Construction               Environment                          Social sciences
Services                   Health                               Humanities and fine arts
                           Social development and services
                           Earth and atmosphere
                           Advancement of knowledge
                           Space
                           Defense
governments have all asked for indicators that would relate inputs to outputs. Statisticians are only now beginning to direct their efforts toward this task.
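What "linking" means in practice can be made concrete with a small sketch. The Python fragment below, which is purely illustrative and corresponds to no OECD procedure, uses a hand-made concordance to pull together, for a cross-cutting area such as health, the category that captures it in each of the three classifications (the health socioeconomic objective, the medical sciences field, the pharmaceutical industry, as discussed above); the concordance and all of the figures are invented, and a real exercise would face exactly the coverage problems described in this chapter.

```python
# Invented sectoral R&D tables (millions), each organized by its own classification.
government_by_seo = {"health": 400, "defense": 900, "energy": 300}
university_by_fos = {"medical sciences": 350, "engineering": 500}
business_by_isic = {"pharmaceuticals": 800, "aerospace": 1200}

# Hand-made concordance: where a cross-cutting theme shows up in each scheme.
concordance = {
    "health": {
        "seo": "health",            # socioeconomic objective (government sector)
        "fos": "medical sciences",  # field of science (university sector)
        "isic": "pharmaceuticals",  # industrial classification (business sector)
    },
}

def theme_total(theme: str) -> int:
    """Sum the R&D reported under the theme's category in each sector's table."""
    keys = concordance[theme]
    return (government_by_seo.get(keys["seo"], 0)
            + university_by_fos.get(keys["fos"], 0)
            + business_by_isic.get(keys["isic"], 0))

# 1550: at best a lower bound, since health-related R&D performed outside
# these three categories is simply not captured by the classifications.
print(theme_total("health"))
```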
Conclusion Agencies are usually proud of developing their own classifications because they are adapted to their own needs and interests. By aligning themselves with the System of National Accounts, however, state statisticians responsible for R&D statistics effectively reduced the amount of control that they could exercise over the data. Although borrowed classifications did undergo some modifications, they had nonetheless been developed for purposes other than classifying R&D, and therefore carried with them considerable biases, the consequences of which were not fully anticipated. The Frascati manual classified R&D according to three (of the four) sectors of the system of national accounts, plus a fourth sector of its own—higher education. The sole variable common to each sector was money, that is, R&D expenditures. It allowed people to calculate the GERD and to measure financial flows between sectors. But each sector had its own system of classification. The business sector was classified by industry; only the principal activity of the enterprise was classified, however, rather than the whole range of products on which R&D was expended. The university sector was classified according to scientific discipline; this classification was modeled after the demarcations of traditional university departments, and was more useful for classifying scientific and technical personnel than research projects. The government sector was classified either according to scientific discipline or to socioeconomic funding objective; the list of socioeconomic objectives classified intentions rather than real expenditures, was more oriented toward industrial motives rather than social problems, and did not solve the problem of classifying the general funds allotted to universities. Two motives drove the choice of classifications for R&D. The first, as disclosed in official documents, was the need to compare R&D data to other variables, mainly economic ones. The second motive for choosing existing classifications rather than constructing new ones was their availability. The business and
Tradition and innovation 195 university sector classifications were produced by the United Nations, while the government sector classification was developed by the European Union. Existing classifications were not without problems, however. First, the level of aggregation prevented detailed analyses. Second, horizontal issues like interdisciplinary or problem-oriented research could not be clearly measured. Third, links between sectors were difficult to establish. There were two methodological choices concerning surveys which contributed to the limitations of classifications. One was the unit surveyed. The questionnaires were addressed to the organizations responsible for R&D with respect to their research activities as a whole. It was therefore not the research projects themselves that were classified, but the principal R&D activity of the unit. As the OECD indeed recognized as early as 1968, “it would be difficult to get a good (. . .) breakdown unless data were available by projects.”63 The second methodological choice, paradoxically, has nothing to do with the classifications themselves, but rather with the way the OECD collected data. Classifications were, in fact, often more detailed than they might have at first appeared in statistical and analytical reports. But the problem was with the countries that did not collect detailed enough data, or that did so with different classifications. Since the OECD relied on national data to construct its statistics, harmonization could only be conducted at a very aggregated level in international tables. The OECD could not really innovate if the countries themselves relied on national traditions.
63 OECD (1968), A Study of Resources Devoted to R&D in OECD member countries in 1963/64: Statistical Tables and Notes, Paris, p. 24.
Part II
Using science and technology statistics Throughout its history, the OECD’s member countries have developed a relative consensus on how to measure S&T. R&D surveys based on the Frascati manual were the model used to produce the main statistics, followed thirty years later by innovation surveys. Together, these two sources of data led to what official statisticians considered to be the most appropriate measure of S&T. What was peculiar to S&T measurement, at least at the international level, was that no fundamental debates occurred on the construction of indicators per se. The measurement of S&T has always been taken for granted at the official level. While educational indicators, for example, led to major controversies in the 1980s—controversies in which the OECD participated, often as a promoter of such indicators—no similar debates occurred regarding S&T indicators.1 Two facts explain this situation. First, S&T indicators are not used in legislation: S&T policies are not based on the compulsory use of numbers as norms or rules.2 Second, few governments got involved in bibliometrics, the most contested indicator in S&T. There certainly had been criticisms and skepticism concerning certain S&T indicators. Bibliometrics was one such. Because too few countries accepted the indicator, the OECD never produced systematic data on publications and citations, although it did use some occasionally. There were other criticisms that had less impact on the OECD, however, namely criticisms concerning output indicators. Although patents, technological balance of payments and high technology trade were widely-contested indicators at the national level for methodological reasons, this never prevented the organization from regularly publishing numbers and developing, to various degrees, analytical work aimed at policy-makers based on these numbers. What certainly mobilized people politically on several occasions, however, was the use of statistics and indicators to talk about public policy issues. Over the period 1930–2000, there had been several conflicts and public debates using specific indicators that divided the countries and organizations depending on the 1 Contrast, for example, OECD (1987), Evaluation of Research, produced by the DSTI, with the OECD/CERI publications on the International Indicators and Evaluation of Educational Systems (INES) project of the 1980s. 2 For the recent use of performance indicators in university funding in OECD member countries, see: B. Jongbloed and H. Vossensteyn (2001), Keeping up Performance: an International Survey of Performance-Based Funding in Higher Education, Journal of Higher Education Policy and Management, 23 (2), 127–145.
numbers they used or the interpretation they made of them. Part II is devoted to analyzing five such debates. A perennial question in S&T policy was the level of resources to devote to S&T, and how to allocate funding in order to achieve a balance between types of research; that is, what is the appropriate equilibrium between fundamental and applied research? Policy-makers, economists, and statisticians tried to use statistics to answer the question, and defined precise ratios to help allocate financial resources (see Chapter 11). No formula worked as planned by the “rationalists”, however. What was used rather mechanically, however, was the GERD/GNP indicator, and rankings of countries based upon it. Under the influence of the OECD, every member country used the indicator to compare its performance against other countries, among them the United States. The practice led, with time, to the development of benchmarking exercises and scoreboards of indicators that “provide practical means for countries to formulate their own performance targets, for example, to attain a ranking among the top five OECD countries (. . .).”3 The origin of this practice was the debate on technological gaps (see Chapter 12). We already alluded to Gaps, and presented the indicators developed in the OECD study, but without discussing to what extent the exercise was politically motivated. According to G. F. Ray, it is Schumpeter’s time lag between invention and innovation that has been re-baptized as the technological gap.4 For C. Freeman, however, the term was “originally introduced as a technical term in international trade theory” and “has passed into general use as a loose description of the disparities in scientific and technical resources and attainments, or in the levels of technology in use, between Europe and the United States.”5 Be that as it may, the OECD contributed largely to the public debate in the 1960s, with one of the first international studies comparing countries’ R&D performance. Soon, the numbers were taken over politically by the Europeans to sustain their case for more efforts in S&T. The French alerted the public to the potential threat of American domination. The British Prime Minister H. Wilson proposed the creation of a “European technological community,” as part of the second round of bidding for Britain’s entry into the Common Market. The Italian Foreign Minister Amintore Fanfani suggested to the US Secretary of State that the United States set up a tenyear Technological Marshall Plan. The Americans, for their part, denied that there were gaps between Western Europe and the United States. To America, the gaps were rather between the USSR and the free world. The third important policy debate concerned human resources in S&T (see Chapter 13). In 1994, the OECD member countries reached a relative consensus on how to measure these resources when they adopted the Canberra manual.6 But before this date, important debates occurred concerning the shortages of scientists and engineers, and on the brain drain. At the time, no real standards 3 OECD (2001), Benchmarking Business Policies: Project Proposal, DSTI/IND (2001), p. 5. 4 G. F. Ray (1969), The Diffusion of New Technology: A Study of Ten Processes in Nine Industries, National Institute Economic Review, 48, May, pp. 40–100. 5 C. Freeman (1967), Research Comparisons, Science, 158, October 17, p. 463. Contrary to what Freeman suggested, however, early trade theory was rather concerned with leads and (imitation) lags, not gaps. 
6 OECD (1995), Manual on the Measurement of Human Resources Devoted to S&T, OECD/GD (95) 77.
existed, and the data were very limited. This situation did not, however, prevent people from using the available numbers to take positions, even knowing that the numbers did not entirely support their conclusions. Here, the political agenda was driving the use of statistics. The fourth debate concerned basic research. Basic research is a fundamental category of S&T statistics. Chapter 14 argues that it was statistics that helped to crystallize the concept of basic research: it gave a more solid social and political identity to university research. By measuring its share of total R&D funding, academics and their representatives could convince politicians and bureaucrats to increase the level of money devoted to basic research. However, as soon as governments started asking for more oriented results from university research,7 interest in the statistics faded. Several countries abandoned the official production of numbers on basic research, while others invented hybrid concepts such as strategic research. These alternatives have not yet convinced a majority of countries, and strategic research remains as difficult to measure as basic research. All four debates show that statistical agencies and government departments sometimes worked hand in hand. However, there have also been some real tensions historically between statisticians and policy-makers. Statistics are often presented in the literature as a companion to policy, but those who produce them often see things differently. In Part I, I mentioned that the NSF had a fixation with indicators in and of themselves, with few qualitative statements. The President and Congress felt obliged to ask other departments for the assessments they needed on S&T. In fact, the NSF was and still is an autonomous body that understood its mandate in the light of its principal constituents: the researchers. At the opposite end of the spectrum, the OECD statistical unit was from the start entirely dependent on the Directorate of Scientific Affairs (DSA). This hierarchy explained the different philosophies between published documents, for example, between NSF’s Science Indicators (SI) and the OECD study on technological gaps. SI was a document containing many indicators, but few if any assessments of the state of S&T, while Technological Gaps was devoted to answering specific policy questions with a few selected indicators. Similarly, the OECD’s Main Science and Technology Indicators concentrated on only four indicators ( R&D, patents, technological balance of payments, high technology trade), those most requested by users, according to the OECD—ignoring the words and recommendations of its own second ad hoc review group. Tensions between statisticians and policy-makers have thus been a permanent feature of the history of S&T statistics. Forces mitigating for the autonomy of statisticians were always present, or always got resurrected. But other forces, mainly bureaucratic, sometimes managed to limit this freedom. If, however, autonomy did prevail at certain times, this did not imply neutrality. Quite the contrary.
7 At the same time, more or less, as the idea developed that research was less clear-cut than always suggested, often mixing both basic and applied dimensions.
11 The most cherished indicator Gross Domestic Expenditures on R&D (GERD)
The OECD Frascati manual, now in its sixth edition, is the international standard for conducting national surveys on R&D. It essentially develops two measurements of investment (or inputs) into S&T: the financial resources invested in R&D, and the human resources devoted to these activities. To properly conduct surveys of R&D, the manual suggests precise definitions of R&D and specifies which activities fall under this heading, as well as those that should be excluded. Each of the two measures can be analyzed in terms of three dimensions. The first is the type or character of the research, which is basic, applied or concerned with the development of products and processes. This is a fundamental classification scheme in S&T measurement. The second dimension is the sectors that finance or execute the research: government, university, industry, or non-profit organizations. It is these institutions that are the objects of measurement, and not the individuals of which they are composed. Finally, in relation to this latter dimension, monetary and human resources are classified by discipline in the case of universities (and non-profit organizations), by industrial sector or product in the case of firms, and by function or socioeconomic objective in the case of governments. The main indicator to come out of the Frascati manual is Gross Domestic Expenditures on R&D (GERD)—the sum of R&D expenditures in the following four economic sectors: business, university, government, and non-profit.1 According to a recent survey by the OECD Secretariat, GERD is actually the most cherished indicator among OECD member countries,2 despite the frequent suggestion that human resources are a better statistic,3 and despite unanimous demand for output indicators. This chapter explains where the indicator comes from. The first part presents early efforts to measure R&D on a national scale in order to determine a country’s science or research “budget.” The second discusses how the NSF improved
1 The measure includes R&D funded from abroad, but excludes payments made abroad. 2 OECD (1998), How to Improve the MSTI: First Suggestions From Users, DSTI/EAS/STP/NESTI/RD (98) 9. 3 See: Chapter 13.
upon previous experiments, to the point where the OECD conventionalized the agency’s choices and methodologies. The third part discusses the uses of the indicator and the role the OECD played in its diffusion.
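Stated as arithmetic, the indicator is straightforward. The sketch below, in Python and with purely hypothetical figures, shows how GERD and the GERD/GNP ratio are obtained once the sectoral R&D expenditures are in hand; the sector names follow the Frascati convention, but none of the numbers come from any actual survey.

```python
# Hypothetical sectoral R&D expenditures (millions of national currency).
# Real figures come from the national R&D surveys described in this chapter.
rd_expenditures = {
    "business enterprise": 6000,
    "government": 2500,
    "higher education": 1800,
    "private non-profit": 200,
}

gnp = 450_000  # hypothetical Gross National Product, same units

# GERD: the sum of R&D expenditures across the four sectors.
gerd = sum(rd_expenditures.values())

# The ratio the OECD diffused most widely: GERD as a percentage of GNP.
gerd_gnp = 100 * gerd / gnp

print(f"GERD = {gerd} million")        # GERD = 10500 million
print(f"GERD/GNP = {gerd_gnp:.2f} %")  # GERD/GNP = 2.33 %
```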
The first exercises on a national budget Before the 1950s, measurement of R&D was usually conducted on individual sectors. Organizations surveyed either industrial or government R&D, for example, but very rarely aggregated the numbers to compute a “national research budget” (see Appendix 2). The first such efforts arose in the United Kingdom and the United States, and were aimed at assessing the share of expenditures that should be devoted to science (and basic science) compared to other economic activities, and at helping build a case for increased R&D resources. J. D. Bernal was one of the first academics to perform measurement of science in a Western country. He was also one of the first to figure out how much was spent nationally on R&D—the “budget of science,” as he called it. In The Social Function of Science (1939), Bernal estimated the money devoted to science in the United Kingdom using existing sources of data: government budgets, industrial data (from the Association of Scientific Workers) and university grants committee reports.4 He had a hard time compiling the budget, however, because “the sources of money used for science do not correspond closely to the separate categories of administration of scientific research” (p. 57). “The difficulties in assessing the precise sum annually expended on scientific research are practically insurmountable. It could only be done by changing the method of accounting of universities, Government Departments, and industrial firms” (p. 62). The national science budget was nevertheless estimated at about £4 million for 1934, and Bernal added: “The expenditure on science becomes ludicrous when we consider the enormous return in welfare which such a trifling expenditure can produce” (p. 64). Bernal also suggested a type of measurement that became the main indicator of S&T: the research budget as a percentage of the national income. He compared the United Kingdom’s performance with that of the United States and the USSR, and suggested that the United Kingdom should devote between 1.5 percent and 1 percent of its national income to research (p. 65). The number was arrived at by comparing expenditures in other countries, among them the United States which invested 0.6 percent, and the Soviet Union which invested 0.8 percent, while the United Kingdom spent only 0.1 percent. This certainly seems a very low percentage and at least it could be said that any increase up to tenfold of the expenditure on science would not notably interfere with the immediate consumption of the community; as it is it represents only 3 per cent of what is spent on tobacco, 2 per cent of what is spent on drink, and 1 per cent of what is spent on gambling in the country. (p. 64) 4 J. D. Bernal (1939), The Social Function of Science, Cambridge, MA: MIT Press, 1973, pp. 57–65.
"The scale of expenditure on science is probably less than one-tenth of what would be reasonable and desirable in any civilized country" (p. 65).
The next experiment to estimate a national budget was conducted in the United States by V. Bush.5 Using primarily existing data sources, the Bowman committee—one of the four committees involved in the report—estimated the national research budget at $345 million (1940). These were very rough numbers, however: "since statistical information is necessarily fragmentary and dependent upon arbitrary definition, most of the estimates are subject to a very considerable margin of error" (p. 85). The committee showed that industry contributed by far the largest portion of the national expenditure, but calculated that the government's expenditure expanded from $69 million in 1940 to $720 million in 1944. It also documented how applied rather than basic research benefited most from the investments (by a ratio of 6 : 1), and developed a rhetoric arguing that basic research deserved more resources from government. The committee added data on national income in its table on total expenditures, and plotted R&D per capita of national income on a graph. But nowhere did the committee use the data to compute the research budget as a percentage of national income, as Bernal had. It was left to the President's Scientific Research Board to innovate in this respect.
In 1947, the Board published its report Science and Public Policy, which estimated a national R&D budget for the second time in as many years.6 With the help of a questionnaire it sent to 70 industrial laboratories and 50 universities and foundations, the board in fact conducted the first survey of resources devoted to R&D using precise categories, although these did not make it "possible to arrive at precisely accurate research expenditures" because of the different definitions and accounting practices employed by institutions (p. 73). The board estimated the US budget at $600 million (annually) on average for the period 1941–1945. For 1947, the budget was estimated at $1.16 billion. The federal government was responsible for 54 percent of total R&D expenditures, followed by industry (39 percent), and universities (4 percent). Based on the numbers obtained in the survey, the board proposed quantified objectives for science policy. For example, it suggested that resources devoted to R&D be doubled in the next ten years, and that resources devoted to basic research be quadrupled (p. 6). The board also introduced into science policy the main science indicator that is still used by governments today: R&D expenditures as a percentage of national income. Unlike Bernal, however, the board did not explain how it arrived at a 1 percent goal for 1957. Nevertheless, President Truman subsequently incorporated this objective into his address to the American Association for the Advancement of Science (AAAS) in 1948.7
The last exercise in constructing a total R&D figure, before the NSF entered the scene, came from the US Department of Defense in 1953.8 Using many
5 V. Bush (1945), Science: The Endless Frontier, op. cit., pp. 85–89.
6 President's Scientific Research Board (1947), Science and Public Policy, op. cit., p. 9.
7 H. S. Truman (1948), Address to the Centennial Anniversary, Washington: AAAS Annual Meeting.
8 Department of Defense (1953), The Growth of Scientific R&D, Office of the Secretary of Defense (R&D), RDB 114/34, Washington.
different sources, the Office of the Secretary of Defense (R&D) estimated that $3.75 billion, or over 1 percent of the Gross National Product (GNP), was spent on research funds in the United States in 1952. The report presented data regarding both sources of expenditures and performers of work: “The purpose of this report is to present an over-all statistical picture of present and past trends in research, and to indicate the relationships between those who spend the money and those who do the work.” The office’s concepts of sources (of funds) and performers (of research activities) would soon become the main categories of the NSF’s accounting system for R&D. The statistics showed that the federal government was responsible for 60 percent of the total,9 industry 38 percent and non-profit institutions (including universities) 2 percent. With regard to the performers, industry conducted the majority of R&D (68 percent)—and half of this work was done for the federal government— followed by the federal government itself (21 percent) and non-profits and universities (11 percent).
An accounting system for R&D According to its mandate, the NSF started measuring R&D across all sectors of the economy with specific and separate surveys in 1953: government, industry, university, and others. Then, in 1956, it published its “first systematic effort to obtain a systematic across-the-board picture”10—at about the same time as the United Kingdom did.11 It consisted of the sum of the results of the sectoral surveys for estimating national expenditures.12 The NSF calculated that the national budget amounted to $5.4 billion in 1953.13 The NSF analyses made extensive use of GNP. For the NSF, this was its way to relate R&D to economic output: “despite the recognition of the influence of R&D on economic growth, it is difficult to measure this effect quantitatively,” stated the NSF.14 Therefore, this “analysis describes the manner in which R&D expenditures enter the gross national product in order to assist in establishing a basis for valid measures of the relationships of such expenditures to aggregate economic output” (p. 1). The ratio of research funds to GNP was estimated at 1.5 percent for 1953, 2.6 percent for 1959, and 2.8 percent for 1962. The NSF remained careful, however, with regard to interpretation of the indicator: “Too little is presently known about
9 The Department of Defense and the Atomic Energy Commission were themselves responsible for 90 percent of the federal share. 10 NSF (1956), Expenditures for R&D in the United States: 1953, Reviews of Data on R&D, 1, NSF 56-28, Washington. 11 Advisory Council on Scientific Policy (1957), Annual Report 1956–57, Cmnd 278, London: HMSO. 12 The term “national” appeared for the first time only in 1963. See: NSF (1963), National Trends in R&D Funds, 1953–62, Reviews of Data on R&D, 41, NSF 63-40. 13 The data were preliminary and were revised in 1959. See: NSF (1959), Funds for R&D in the United States, 1953–59, Reviews of Data on R&D, 16, NSF 59-65. 14 NSF (1961), R&D and the GNP, Reviews of Data on R&D, 26, NSF 61-9, p. 2.
Table 11.1 Transfers of funds among the four sectors as sources of R&D funds and as R&D performers, 1953 (in millions of dollars)

                                              R&D performers
Sources of R&D funds         Federal       Industry    Colleges and     Other           Total
                             government                universities    institutions
Federal government              970          1,520          280             50          2,820
Industry                         –           2,350           20              –          2,370
Colleges and universities        –               –          130              –            130
Other institutions               –               –           30             20             50
Total                           970          3,870          460             70          5,370
the complex of events to ascribe a specified increase in gross national product directly to a given R&D expenditure” (p. 7). In the same publication, the NSF innovated in another way over previous attempts to estimate the national budget: a matrix of financial flows between the sectors, as both sources and performers of R&D, was constructed (see Table 11.1). Of 16 possible financial relationships (four sectors as original sources, and also as ultimate users), 10 emerged as significant (major transactions). The matrix showed that the federal government sector was primarily a source of funds for research performed by all four sectors, while the industry sector combined the two functions, with a larger volume as performer. Such national transfer tables were thereafter published regularly in the bulletin series Reviews of Data on R&D15 until a specific and more extensive publication appeared in 1967.16 The matrix was the result of deliberations conducted in the mid-1950s at the NSF on the US research system17 and on demands to relate S&T to the economy: “An accounting of R&D flow throughout the economy is of great interest at present (. . .) because of the increasing degree to which we recognize the relationship between R&D, technological innovation, economic growth and the economic sectors (. . .),” suggested H. E. Stirner from the Operations Research Office at Johns Hopkins University.18 But “today, data on R&D funds and personnel are perhaps
15 Reviews of R&D Data, Nos. 1 (1956), 16 (1959), 33 (1962), 41 (1963); Reviews of Data on Science Resources, no. 4 (1965). 16 NSF (1967), National Patterns of R&D Resources, NSF 67-7, Washington. 17 “Our country’s dynamic research effort rests on the interrelationships—financial and nonfinancial—among organizations”. K. Arnow (1959), National Accounts on R&D: The NSF Experience, in NSF, Methodological Aspects of Statistics on Research and Development: Costs and Manpower, NSF 59-36, Washington, p. 57. 18 H. E. Stirner (1959), A National Accounting System for Measuring the Intersectoral Flows of R&D Funds in the United States, in NSF, Methodological Aspects of Statistics on R&D: Costs and Manpower, Washington: NSF, p. 37.
at the stage of growth in which national income data could be found in the 1920s.”19 Links with the System of National Accounts (SNA), a recently developed system then in vogue among economists and governments departments,20 were therefore imagined: The idea of national as well as business accounts is a fully accepted one. National income and product, money flows, and inter-industry accounts are well-known examples of accounting systems which enable us to perform analysis on many different types of problems. With the development and acceptance of the accounting system, data-gathering has progressed at a rapid pace. (H. E. Stirner (1959), A National Accounting System for Measuring the Intersectoral Flows of R&D Funds in the United States, op. cit., p. 32) Soon, an important problem emerged from the matrix: the inconsistency between source and performer data, as reported by K. Arnow: “In the long run, over a period of years, the national totals for R&D expenditures derived from both approaches would show closely related and occasionally coinciding trend lines. For any given year, however, national totals based on the two approaches would probably differ.”21 We saw in Chapter 9 that the numbers still do not coincide. The main reasons identified concerned the following: ●
Sources
– Source organizations do not know the extent to which, and when, recipients of funds may use the money.
– Sources do not always know whether they are making what may be called a final or through transfer to a performer, or whether the recipient of the funds will distribute them further to other organizations which may be performers or re-distributors.

● Performers
– Some of the amounts reported by the performers may represent funds which were not specifically allocated to R&D by the original source.
– Performers do not always know the original source of money when the funds have passed through several hands before reaching them.
19 K. Arnow (1959), National Accounts on R&D: The NSF Experience, op. cit., p. 61. 20 S. S. Kuznets (1941), National Income and its Composition, 1919–1938, New York: NBER. The SNA, now in its fourth edition, was developed in the early 1950s and conventionalized at the world level by the United Nations: United Nations (1953), A System of National Accounts and Supporting Tables, Department of Economic Affairs, Statistical Office, New York; OECD (1958), Standardized System of National Accounts, Paris. 21 K. Arnow (1959), National Accounts on R&D: The NSF Experience, op. cit., p. 59.
Gross Domestic Expenditures on R&D 207 These limitations led to a decision that a system of R&D accounts should be based on performer reporting, since this offers the best available information on where R&D is going on. The NSF decision—as well as the matrix—became international standards with the adoption of the OECD Frascati manual by member countries in 1963. The OECD was the source of the nomenclature used today to talk about the national research budget: GERD.22 In line with the SNA, and following the NSF, the manual recommended classifying (and aggregating) R&D according to the following main economic sectors: business, government, and private non-profit.23 To these three sectors, the OECD, following the NSF again, added a fourth one: higher education. The following rationale was offered for the innovation: The definitions of the first three sectors are basically the same as in national accounts, but higher education is included as a separate main sector here because of the concentration of a large part of fundamental research activity in the universities and the crucial importance of these institutions in the formulation of an adequate national policy for R&D. (OECD (1963), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Development, op. cit., p. 22) The first edition of the OECD Frascati manual justified the classification of R&D data by economic sector as follows: it “corresponds in most respects to the definitions and classifications employed in other statistics of national income and expenditure, thus facilitating comparison with existing statistical series, such as gross national product, net output, investment in fixed assets and so forth.”24 A deliberate attempt, then, to link R&D to the economy.
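Read as an accounting scheme, the matrix of Table 11.1 lends itself to simple computation. The Python sketch below reproduces the 1953 transfers shown in that table (in millions of dollars) and derives from them the totals by source of funds, the totals by performer, and the grand total itself; the code only illustrates the accounting logic and does not correspond to any NSF or OECD procedure.

```python
# Transfers of R&D funds among sectors, United States, 1953 (millions of dollars),
# following Table 11.1: {source of funds: {performer: amount}}.
flows = {
    "federal government": {"federal government": 970, "industry": 1520,
                           "colleges and universities": 280, "other institutions": 50},
    "industry": {"industry": 2350, "colleges and universities": 20},
    "colleges and universities": {"colleges and universities": 130},
    "other institutions": {"colleges and universities": 30, "other institutions": 20},
}

# Totals by source of funds (row totals of the matrix).
by_source = {source: sum(cells.values()) for source, cells in flows.items()}

# Totals by performer (column totals of the matrix).
by_performer = {}
for cells in flows.values():
    for performer, amount in cells.items():
        by_performer[performer] = by_performer.get(performer, 0) + amount

# The national total is the same whichever way the matrix is summed.
assert sum(by_source.values()) == sum(by_performer.values()) == 5370

print(by_source)     # federal government: 2820, industry: 2370, ...
print(by_performer)  # federal government: 970, industry: 3870, ...
```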
The mystique of ranking The OECD is responsible for the worldwide diffusion of the GERD indicator and, above all, the GERD/GNP ratio. According to the OECD, an indicator “that is particularly useful for making international comparisons is to compare R&D inputs with a corresponding economic series, for example, by taking GERD as a percentage of GNP.”25 In fact, the American GERD/GDP ratio of the early 1960s, that is 3 percent, as mentioned in the first paragraphs of the first edition
22 OECD (1963), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Development, Paris, pp. 34–36. 23 Households, that is, the sector of that name in the SNA, was not considered by the manual. 24 Ibid., p. 21. 25 OECD (1994), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of R&D, Paris, p. 28.
of the Frascati manual, became the ideal to which member countries would aim, and which the OECD would implicitly promote.26 The generalized use of the indicator at the OECD started in the early 1960s. The first such exercise was conducted by Freeman et al., and published by the OECD in 1963 for the first ministerial meeting on science.27 The terms of future OECD statistical studies were fixed from that point on. The authors documented a very rapid increase in R&D expenditures in the 1950s, greater than the rise in GNP (p. 22). They also showed a positive relationship between R&D and GNP: advanced industrial countries typically spent more than 1 percent of their GNP on R&D (p. 23). Finally, among the group of industrialized countries, two sub-groups were distinguished: high (over 1 percent) and low GERD/GNP (under 1 percent) (pp. 24–25).
The second exercise occurred as the result of the first international survey on R&D conducted in 1963–64. The analysis was presented at the second OECD ministerial meeting on science in 1966, and published officially in 1967.28 The report was designed to examine the level and structure of R&D efforts in member countries. Three kinds of R&D data analysis were conducted—and these would become the standard used in the ensuing decades: (1) general measures or indicators in absolute terms (GERD) and in relative terms (GERD/GNP); (2) breakdowns of R&D expenditures by economic sector, R&D objective and type of activity; and (3) specific analyses of economic sectors: government, business, higher education, and non-profit.
The OECD analysis of the first International Statistical Year (ISY) results was conducted using groups of countries, classified according to size and economic structure. The United States was chosen as the "arithmetic" standard (index = 1,000), and the graphs of the report pictured accordingly. The United States was put in its own category, followed by "sizable industrialized countries," "smaller industrialized countries," and "developing countries" (p. 8):29
1 United States
2 France, Germany, Italy, Japan, United Kingdom
3 Austria, Belgium, Canada, Netherlands, Norway, Sweden
4 Greece, Ireland, Portugal, Spain, Turkey.
26 OECD (1963), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of R&D, op. cit., p. 5. In fact, at the time of the first edition of the Frascati manual, the US GERD/GDP was 2.8 percent. See: NSF (1962), Trends in Funds and Personnel for R&D, 1953–61, Reviews of Data on R&D, 33, NSF 62-9, Washington; NSF (1963), National Trends in R&D Funds, 1953–62, op. cit. 27 OECD (1963), Science, Economic Growth and Government Policy, Paris. 28 OECD (1967), A Study of Resources Devoted to R&D in OECD Member Countries in 1963/64: The Overall Level and Structure of R&D Efforts in OECD Member Countries, Paris. 29 Other categorizations aiming to group European countries into broader economic entities more similar in size to the United States were also used: Western Europe and Common Market countries. But the same trends were observed: “The United States spends three times as much on R&D as Western Europe and six times as much as the Common Market”, p. 19.
Gross Domestic Expenditures on R&D 209 The report concentrated on the discrepancies between the United States and European countries. It showed that the United States’ GERD was highest in absolute terms as well as per capita (p. 15), and that it had the most scientists and engineers working on R&D (p. 17). There is a great difference between the amount of resources devoted to R&D in the United States and in other individual member countries. None of the latter spend more than one-tenth of the United States’ expenditure on R&D (. . .) nor does any one of them employ more than one-third of the equivalent United States number of qualified scientists and technicians. (as per OECD report, p.19) Finer analyses30 were conducted at three levels. First, the four basic sectors— government, non-profit, higher education, and business enterprise—were analyzed. OECD measurements showed that “in all the sizable industrialized countries except France, about two-thirds of the GERD is spent in the business enterprise sector” (p. 23). “In the developing countries [of Europe] R&D efforts are, conversely, concentrated in the government sector” (p. 25). The OECD also showed that industrial R&D was highly concentrated: “83 per cent of total industrial R&D is carried out by the 130 companies [mainly American] with R&D programmes worth over $10 million each” (p. 43), and “government supports a higher proportion of R&D in selected industries [aircraft, electrical, chemical] in the United States than any other industrialized member country” (p. 51). Second, R&D objectives were examined within three broad areas: (1) atomic, space, and defense; (2) economic (manufacturing, extraction, utilities, agriculture, fishing, forestry); and (3) welfare and miscellaneous (health, hygiene, underdeveloped areas, higher education). The results showed, among other things, that twothirds of the United States’ total R&D resources were devoted to the first category (p. 28). Finally, research activities were broken down by type—basic, applied, and development. It was calculated that the United States (and the United Kingdom) spent more on development than any other category (p. 34). Also noteworthy was the fact that “the higher education sector is less important than might be expected, undertaking less than half of total basic research in the United Kingdom and the Netherlands, and less than two-thirds in all the other industrialized countries except Norway” (p. 34). This kind of study continued with the next biennial surveys. In 1975, the OECD published its third study on international R&D statistics.31 The quality of the data had considerably improved, at least with regard to detail. Although the social sciences and humanities were still excluded from the R&D survey, there were more refined classifications with regard to R&D by industry, scientific field,
30 These looked at both the sources of funding for, and the performers of, R&D. 31 OECD (1975), Patterns of Resources Devoted to R&D in the OECD Area, 1963–1971, Paris.
and socioeconomic objective. Statistics were also a lot more numerous (and sophisticated!)32 than in the 1967 report. The numbers showed that the United States continued to be the largest R&D performer in the OECD area, “spending more than all the other responding countries taken together” (p. 9). But the OECD comparisons were now conducted vis-à-vis five groups of countries, and not only versus the United States. The groupings were constructed on the basis of the performance of countries based on both GERD and GERD/GNP, and allowed the OECD to invent the concept of “R&D intensity” (pp. 14–15): Group I:
Large R&D and Highly R&D Intensive
France, Germany, Japan, United Kingdom, United States
Group II: Medium R&D and Highly R&D Intensive
Netherlands, Sweden, Switzerland
Group III: Medium R&D and R&D Intensive
Australia, Belgium, Canada, Italy
Group IV: Small R&D and R&D Intensive
Austria, Denmark, Finland, Ireland, Norway
Group V: Small R&D and other
Greece, Iceland, Portugal, Spain.
The report documented a "leveling off" of R&D expenditures. The phenomenon was measured in two ways (pp. 19–21). First, annual growth rates of GERD and R&D manpower were stable or declining in seven countries over the period 1963–71. Second, GERD/GNP was stable or declining for nine countries, among them the United States. Three conclusions were drawn from the statistics (p. 23). First, the principal change since the publication of the results of the first ISY has been the absolute and relative decline in the resources devoted to R&D by the United States and the United Kingdom and the re-emergence of Japan and Germany as major R&D powers. Second, differences between member countries narrowed slightly: in 1963, nearly 60 per cent of all OECD R&D scientists and engineers worked in the United States, as against about 20 per cent in the (enlarged) Common Market and 20 per cent elsewhere (of which 15 per cent was in Japan). By 1971, the corresponding shares were: United States, less than 55 per cent; Common Market, virtually no change; other countries, 25 per cent (of which 20 per cent was in Japan).
32 OECD (1973), Analyzing R&D Statistics by the Méthode des Correspondances: A First Experimental Approach, DAS/SPR/73.92.
Third, "there was a 'leveling off' in the amount of resources devoted to R&D in about half the countries in the survey." This was only one of the main issues of the OECD report. The other was its "stress on the role of the business enterprise sector" (p. 25)—because it is the "prime performer of R&D" (p. 47)—and the respective roles of (or balance between) public and private R&D (p. 85). The report noted a slight decrease in the share of government R&D funding, but a substantial increase in the percentage of GERD financed by business funds (p. 27). In most (fifteen) countries, the business enterprise sector was the most important sector for performance of R&D, with about two-thirds of the national effort in Groups I and II, and over half in Group III (p. 47). Only Australia and Canada differed from this pattern, with about one-third of the R&D performed by industry. All in all, "over the period . . . countries seem to have drawn together (. . .): the role of industry increased in nine countries," reported the OECD (p. 49).
The increasing interest in the business sector at the Directorate for Science, Technology and Industry (DSTI) was a direct consequence of the then-current debate on technological gaps.33 One of the conclusions of the OECD study on the issue was that innovation was at the heart of discrepancies between the United States and Europe.34 The obvious solution for national governments was to support industry's efforts, and for the OECD to continue putting emphasis on industrial statistics. A specific analysis of industrial R&D trends published in 1979, and a Science and Technology Indicators series begun in 1984, would specifically contribute to the latter.
Trends in Industrial R&D (1979) continued the previous analyses on the leveling-off of R&D funding, especially in "the new economic context since the energy crisis of 1973."35 The study was originally undertaken by an OECD group of experts examining "science and technology in the new economic context."36 It concluded that "the new economic context does not seem to have had a major impact" (p. 16), since no change was observed in the overall level of industrial R&D, although a slight increase of 8 percent occurred between 1967 and 1975 (p. 14). Privately-funded industrial R&D grew by about 30 percent (p. 16), mainly before the crisis, but was offset by a decline in government support, above all in the United States. The report also noted a significant redistribution (and convergence) of industrial R&D in the OECD area, as efforts in the United States and the United Kingdom have declined, and those in Japan and Germany have increased (p. 17). The core of the report, however, was devoted to analyzing trends in nine groups of manufacturing industries, each industry group being discussed in terms of its share of the three principal areas of performance: United States, EEC
33 See: Chapter 12.
34 OECD (1970), Gaps in Technology, Paris.
35 OECD (1979), Trends in Industrial R&D in Selected OECD Member Countries, 1967–1975, Paris, p. 5.
36 The group's main results were published as: OECD (1980), Technical Change and Economic Policy, Paris.
The study included only the main eleven OECD countries—classified into two groups: major and medium industrial R&D countries—because "they perform 97 percent of all industrial R&D in the OECD area" (p. 11), although a small final chapter (9 pages out of a total of 200 pages) discussed "small" countries.

The series Science and Technology Indicators (STI) followed, with three editions published in 1984, 1986, and 1989. The first edition dealt wholly with R&D, while the other two added some new indicators. These exercises were perfect examples of ranking countries and then assessing their efforts against the best performers. The series and its successor were a further step in the OECD's philosophy of ranking countries using the GERD indicator.

The 1984 edition started with an overall view of R&D in the OECD area, in line with the 1975 report. The main results were threefold: (1) slower growth in R&D expenditures in the 1970s compared to the 1960s, although still higher than GNP growth in the 1970s; (2) the United States remained the main performer of R&D, but its share of total R&D declined by 6 percent in the 1970s, while that of Japan increased by 4 percent and that of the European Community remained relatively unchanged (a slight gain of only 1 percent); (3) the share of government R&D in public budgets diminished in almost all countries, as did the share of the university sector.
Grouping of countries in STI—1984
High: United States, Japan, Germany, France, United Kingdom
Medium: Italy, Canada, Netherlands, Sweden, Switzerland, Australia, Belgium
Low: Austria, Norway, Denmark, Yugoslavia, Finland, New Zealand, Ireland
Others: Spain, Portugal, Turkey, Greece, Iceland.

Following the general overview of the OECD area, four groups of countries were constructed according to their GERD, each group discussed in a separate chapter. This constituted the core of the report (260 pages out of a total of more than 330 pages), and was preceded by a short discussion on grouping exercises. The report refused to use any country as a yardstick or "norm":

The United States is far from being a typical OECD country (. . .). Many authors simply take the resource indicator concerned for the United States and for one or two other major spenders as a "norm", as they are the technological leaders to whose R&D patterns the other countries should be aspiring in relative if not in absolute terms. However, here we shall take a different approach. For each R&D resource indicator we shall try and establish what the typical OECD country spends and then identify the exceptions. This "typical" OECD country is not defined in precise [a priori and unique] statistical terms [arithmetic average, median, etc.] but is based on observations of tables for individual indicators (industrial R&D, defense R&D, energy R&D).
(OECD (1984), OECD Science and Technology Indicators, Paris, p. 24)
Nevertheless, the OECD analyzed countries' performances according to groups labeled with normative names (high, medium, and low GERD). While each group was treated separately, the overview chapter continued to compare countries and rank them, generally against the largest five, because "once we have identified and discussed what happened to R&D in these five countries [the United States, Japan, Germany, the United Kingdom, and France] we have more or less explained what happened to R&D in the OECD area as a whole" (p. 20). Over and over again, the organization conducted its analysis with recurrent comparisons, using expressions like "the largest spenders," and those in "first place" or "at the upper end of the range." The OECD's grouping was founded on the following rationale: "it is only meaningful to make absolute comparisons between countries which devote broadly the same amounts to R&D in that they face the same degree of constraint in allocating resources" (p. 22). For the OECD, however, some groups (high GERD) remained more important than others (low GERD), and within each of them there were winners (generally the bigger countries) and losers (the smaller ones).

With the second edition of STI in 1986, the grouping of countries was reduced to just three categories—large, medium, and small countries—and this grouping was not used in the analysis, but only in graphs (e.g. p. 22) and tables (pp. 86ff.). The dimension used for the grouping was country size, although this was not defined explicitly. With regard to R&D, the main message of the report was similar to the previous one: (1) R&D funding increased by 3.5 percent annually between 1969 and 1981; (2) the United States lost a few percentage points between 1969 and 1983 (from 55 percent of OECD GERD to 46 percent) and Japan gained several percentage points (from 11 percent of OECD GERD to 17 percent), while the European Community's position had not changed; (3) the business enterprise sector had taken over from the public sector as the main funder of R&D, with two-thirds of GERD, while the share of universities continued to decline. This last point (industry's increasing share of GERD) became a target that several countries subsequently adopted in their policy documents.
Grouping of countries in STI—1986
Large: United States, Japan, Germany, France, United Kingdom, Italy, Canada
Medium: Spain, Australia, Netherlands, Sweden, Belgium, Switzerland, Austria, Yugoslavia
Small: Denmark, Norway, Greece, Finland, Portugal, New Zealand, Ireland, Iceland.
In this second edition, a new type of ranking appeared: industries were classified into three groups with regard to their R&D intensity:37 high, medium, or low. The first group corresponded to what the OECD called "high technology industries," that is, industries that spent over 4 percent of turnover on R&D.38 This was one more ranking in which the performance of countries was evaluated in terms of share of high-technology industries, growth, market share, and trade balance.

The third edition of STI (1989) did not change very much, continuing the previous trends. The same message as in the previous two editions, and the same grouping as in the last report, prevailed. One characteristic of the previous reports, however, gained increased emphasis: the analysis and tables were regularly presented according to what the OECD called geographical zones: OECD (in which the United States and Japan were separately identified, as well as the seven largest countries as a category or group), EEC, Nordic countries, and others.

A fourth STI report was envisaged, but never completed.39 In fact, after 1989, the DSTI statistical unit would never again publish official reports wholly devoted to the analysis of its R&D survey. Instead, it published regular statistical series without analysis (like Main Science and Technology Indicators) on the one hand, and, on the other, contributed to the policy analyses conducted at the DSTI. The main contribution was the Science and Technology Policy: Review and Outlook series and its successor, Science, Technology and Industry Outlook. The first two editions of the series contained very few statistics. Policy trends and problems were treated mostly in qualitative language, although the first edition (1985) included a very brief discussion of countries grouped according to GERD/GNP (p. 18), and the second (1988) contained a series of statistical tables, mainly on scientific papers, in an appendix. With the third edition (1992) and those following, however, an overview text reminiscent of the STI series was included as a separate chapter or section. It had the same structure, indicators, and breakdowns as before, but less discussion broken down by country groups and rankings. In fact, what characterized the new series, above all from the 1996 edition on, was greater diversity in the sources of statistics, beyond R&D numbers alone (Table 11.2).
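The R&D-intensity ranking behind the "high technology" label rests on a simple ratio of R&D expenditure to turnover, with 4 percent as the reported cut-off for the high group. The following minimal sketch illustrates such a classification; the industry figures are invented, and the 1 percent boundary between medium and low intensity is an assumption made here for illustration, not a figure taken from the OECD.

```python
# Minimal sketch of an R&D-intensity classification.
# Only the 4% "high technology" threshold comes from the text;
# the 1% boundary between medium and low is an assumed illustrative value.

def rd_intensity(rd_expenditure, turnover):
    """R&D expenditure as a share of turnover."""
    return rd_expenditure / turnover

def classify(intensity, high=0.04, medium=0.01):
    if intensity > high:
        return "high"
    if intensity > medium:
        return "medium"
    return "low"

# Hypothetical industry figures (R&D spending, turnover) in the same currency units
industries = {
    "Aerospace": (550, 8000),
    "Food products": (90, 30000),
    "Machinery": (400, 25000),
}

for name, (rd, sales) in industries.items():
    i = rd_intensity(rd, sales)
    print(f"{name}: {i:.1%} -> {classify(i)}")
```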
37 The very first OECD statistical exercise on “research-intensive industries” is to be found in OECD (1963), Science, Economic Growth and Government Policy, op. cit., pp. 28–35, and OECD (1970), Gaps in Technology, Paris, pp. 206–212 and 253–260. For criticisms of the indicator, see: L. Soete (1980), The Impact of Technological Innovation on International Trade Patterns: The Evidence Reconsidered, Science and Technology Indicators Conference, September 15–19, Paris, OECD, STIC/80.33; K. S. Palda (1986), Technological Intensity: Concept and Measurement, Research Policy, 15, pp. 187–198; D. Felsenstein and R. Bar-El (1989), Measuring the Technological Intensity of the Industrial Sector: A Methodological and Empirical Approach, Research Policy, 18, pp. 239–252; J. R. Baldwin and G. Gellatly (1998), Are There High-Tech Industries or Only High-Tech Firms? Evidence From New Technology-Based Firms, Research Paper Series, No. 120, Statistics Canada. 38 Aerospace, Computers, Electronics, Pharmaceuticals, Instruments, and Electrical Machinery. 39 OECD (1989), Summary Record of the NESTI Meeting, STP (89) 27, p. 10.
Table 11.2 Treatment of the GERD in STI and Science, Technology and Industry Outlook

Science and Technology Indicators
1984: 407 pages in total, of which 407 pages on the GERD
1986: 125 pages in total, of which 63 pages on the GERD
1989: 137 pages in total, of which 130 pages on the GERD

Science, Technology and Industry Outlook
1985: 101 pages in total, none on the GERD
1988: 123 pages in total, none on the GERD
1992: 273 pages in total, of which 20 pages on the GERD
1994: 341 pages in total, of which 63 pages on the GERD
1996: 344 pages in total, of which 12 pages on the GERD
1998: 328 pages in total, of which 24 pages on the GERD
2000: 258 pages in total, of which 12 pages on the GERD
Building on its work for the knowledge-based economy,40 the DSTI is about to develop a new kind of ranking called benchmarking. Benchmarking "enables each country to compare itself with the best performer in a particular area and to situate itself above or below the OECD averages."41 Since 1997, thematic studies have applied the "best practice" approach to S&T.42 Now, OECD countries will be systematically compared according to different indicators, based on a refinement and extension of indicators developed for the 2001 STI Scoreboard on the Knowledge-Based Economy.43 The philosophy of the exercise reads as follows:

This comparison will provide practical means for countries to formulate their own performance targets, e.g.: to attain a ranking among the top five OECD countries in terms of ICT [information and communication technology] intensity or to double venture capital supply as a share of GNP within the next five years.
(OECD (2001), Benchmarking Business Policies: Project Proposal, op. cit., p. 5)

The OECD has rarely been so explicit about the aims of its statistical work.
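Arithmetically, the benchmarking described in the quotation amounts to positioning each country against the best performer and against an OECD-style average on a given indicator. The sketch below illustrates that logic with invented values; the indicator name and the unweighted average are assumptions made for the example, not a reproduction of the Scoreboard's method.

```python
# Illustrative benchmarking sketch with invented values: for one indicator,
# compare each country to the best performer and to an unweighted average.

indicator = {           # e.g. a hypothetical "ICT intensity" score
    "Country A": 7.2,
    "Country B": 5.4,
    "Country C": 3.1,
    "Country D": 4.8,
}

best = max(indicator.values())
average = sum(indicator.values()) / len(indicator)
ranking = sorted(indicator, key=indicator.get, reverse=True)

for country in ranking:
    value = indicator[country]
    gap_to_best = value - best
    position = "above" if value > average else "below"
    print(f"{country}: {value:.1f} "
          f"({gap_to_best:+.1f} vs best performer, {position} the average)")
```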
Conclusion

As early as 1967, the OECD warned countries against uncritical use of the GERD/GNP indicator: "Percentages of GNP devoted to R&D are useful in comparing a country's R&D effort with resources devoted to competing national objectives or to track its growth over time. International comparisons of GNP percentages are, however, not good yardsticks for science planning."44 Certainly, "the percentage of GNP devoted to R&D varies directly with per capita GNP. [But] this appears to be true at the top and bottom of the scale" only.45 Again in 1975, the OECD stated:

Around the time of the publication of the first ISY results, many member countries were expanding their R&D efforts, and the percentage of GNP devoted to R&D was considered an important science policy indicator for which targets were to be set. This enthusiasm for GNP percentages has waned. For most, growth has seldom reached the more optimistic targets (notably the oft-quoted figure of 3 per cent of GNP).
(OECD (1975), Patterns of Resources Devoted to R&D in the OECD Area, 1963–1971, op. cit., p. 23)

In fact, the indicator was not without its dangers. First, as the OECD itself admitted, "international comparisons might lead to a situation where, for prestige reasons, countries spend more on R&D than they need or can afford."46 Second, the indicator said nothing about the relationship between the two variables: is the GNP of a country higher because it performs more R&D, or are R&D expenditures greater because of a higher GNP?47

R&D expenditures and the gross national product show a high degree of correlation. The conclusion, of course, cannot be drawn that one of these is cause and the other effect—in our modern economy they are closely interlinked and that is the most we can say.48
(R. H. Ewell (1955), Role of Research in Economic Growth, Chemical and Engineering News, 33 (29), p. 2981)

Finally, the indicator and the comparisons based upon it did not take into account the diversity of countries, sectors, or industries.49

40 OECD (1999), STI Scoreboard: Benchmarking Knowledge-Based Economies, Paris; OECD (2001), STI Scoreboard: Towards a Knowledge-Based Economy, Paris.
41 OECD (1997), Benchmarking for Industrial Competitiveness, DSTI/IND (97) 26, p. 3.
42 OECD (1997), Policy Evaluation in Innovation and Technology: Toward Best Practices, Paris; OECD (1998), OECD Jobs Strategy: Technology, Productivity and Job Creation, Paris.
43 OECD (1997), Benchmarking for Industrial Competitiveness, op. cit.; OECD (1998), Benchmarking of Business Performance for Policy Analysis, DSTI/IND (98) 15; OECD (2001), Benchmarking Business Policies: Project Proposal, DSTI/IND (2001) 10.
44 OECD (1967), A Study of Resources Devoted to R&D in OECD member countries in 1963/64: The Overall Level and Structure of R&D Efforts in OECD member countries, op. cit., p. 15.
45 Ibid., p. 19.
46 OECD (1966), Government and the Allocation of Resources to Science, Paris, p. 50.
47 B. R. Williams (1964), Research and Economic Growth: What Should We Expect?, Minerva, 3 (1), pp. 57–71; A. Holbrook (1991), The Influence of Scale Effects on International Comparisons of R&D Expenditures, Science and Public Policy, 18 (4), pp. 259–262.
48 For similar warnings, see also: J.-J. Salomon (1967), Le retard technologique de l'Europe, Esprit, December, pp. 912–917.
49 K. Hughes (1988), The Interpretation and Measurement of R&D Intensity, Research Policy, 17, pp. 301–307; K. Smith (2002), Comparing Economic Performance in the Presence of Diversity, Science and Public Policy, 28 (4), pp. 267–276.
Despite these warnings, it was the OECD itself that contributed to the widespread use of the indicator. In every statistical publication, the indicator was calculated, discussed, and countries ranked according to it, because "it is memorable,"50 and is "the most popular one at the science policy and political levels, where simplification can be a virtue."51 The OECD regularly compared countries within each of its policy series Reviews of National Science Policy52 and Science, Technology and Industry Outlook,53 and was emulated by others. For example, the United Nations and UNESCO developed specific GERD/GNP objectives for developing countries,54 as well as objectives for the funding of developing countries by developed countries;55 national governments systematically introduced the GERD/GNP target into their policy objectives to argue for more and more R&D resources, that is, for a percentage equivalent to that of the United States.56 A country not investing the "normal" or average percentage of GERD/GNP always aimed for higher ratios, generally those of the best-performing country: "the criterion most frequently used in assessing total national spending is probably that of international comparison, leading perhaps to a political decision that a higher target for science spending is necessary if the nation is to achieve its proper place in the international league-table."57 Thus, the OECD erred in 1974 when it wrote: "The search for 'Magic Figures' of the 1960s, namely the percentage of GNP spent on R&D, has lost much of its momentum and relevance."58 The indicator still remains the one most cherished by governments today.
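The ratio at the center of this chapter is simply GERD divided by GNP, and its league-table use amounts to ranking countries on that ratio and measuring their distance to a target such as the oft-quoted 3 per cent. A minimal sketch of that computation follows, with invented figures rather than real country data.

```python
# Minimal sketch of a GERD/GNP league table, using invented figures.
# GERD and GNP are in the same currency units; the 3% target echoes the
# "magic figure" discussed in the text.

TARGET = 0.03

countries = {            # name: (GERD, GNP), hypothetical values
    "Country A": (60.0, 2000.0),
    "Country B": (27.0, 1500.0),
    "Country C": (9.0, 900.0),
}

ratios = {name: gerd / gnp for name, (gerd, gnp) in countries.items()}

for rank, (name, ratio) in enumerate(
        sorted(ratios.items(), key=lambda item: item[1], reverse=True), start=1):
    shortfall = TARGET - ratio
    note = "meets the 3% target" if shortfall <= 0 else f"{shortfall:.1%} short of 3%"
    print(f"{rank}. {name}: GERD/GNP = {ratio:.1%} ({note})")
```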
50 OECD (1984), Science and Technology Indicators, Paris, p. 26. 51 OECD (1992), Science and Technology Policy: Review and Outlook 1991, Paris, p. 111. The French translation reads as follows: “le plus prisé parmi les responsables de la politique scientifique et des hommes politiques, pour lesquels la simplification se pare parfois de certaines vertus”, p. 119. 52 The series covered every country starting in 1962. 53 See, for example: OECD (1985), Science and Technology Policy Outlook, Paris: pp. 20–21. 54 See, for example: United Nations (1960), Declaracion de Caracas, New York; United Nations (1971), World Plan of Action for the Application of Science and Technology to Development, New York, pp. 55–61. 55 United Nations (1971), Science and Technology for Development, New York. 56 R. Voyer (1999), Thirty Years of Canadian Science Policy: From 1.5 to 1.5, Science and Public Policy, 26 (4), pp. 277–282; C. Lonmo and F. Anderson (2003), A Comparison of International R&D Performance: An Analysis of Countries that Have Significantly Increased their GERD/GDP Ratios During the Period 1989–1999, Ottawa: Statistics Canada; J. Sheehan and A. Wycoff (2003), Targeting R&D: Economic and Policy Implications of Increasing R&D Spending, STI Working Paper, DSTI/DOC (2003) 8. For recent exercises on R&D targets in Europe, see: CEC (2002), More Research for Europe: Towards 3 percent of GDP, COM(2002) 499; CEC (2003), Investing in Research: An Action Plan for Europe, COM(2003) 489, pp. 7–16. 57 OECD (1966), Government and the Allocation of Resources to Science, op. cit., p. 50. 58 OECD (1974), The Research System, Vol. 3, Paris, p. 174.
12 Technological gaps
Between quantitative evidence and qualitative arguments
One element that characterizes most national statistics today, as well as those of the OECD, as discussed in the previous chapter, is their comparative basis. It is commonplace to argue that the only way for a country to assess its performance in S&T is by comparing its efforts to those of the past, or to those of other countries. Indeed, most national policy documents start by drawing a picture of the world context or of their main competitors, often illustrated with statistics.

The OECD is no exception to this rule. As an international organization, the OECD has always looked at S&T policies within a comparative framework. A given country was distinguished, categorized, and evaluated either against other countries, or according to standards or norms, the latter being those of the "best-performing" country. Today, this philosophy of examining policy manifests itself through studies on best practices, benchmarking exercises, and scoreboards of indicators.

The OECD ranking exercises conducted over the period 1963–2000 were documented in the previous chapter. This chapter documents another such exercise, one that had a strong influence on European S&T policies and statistical work: the measurement of technological gaps between Western Europe and the United States of America.

In the 1960s, French bureaucrats and journalists launched a debate on the American domination of European S&T. Echoing UK Prime Minister Harold Wilson,1 J.-J. Salomon (under the pen name J.-J. Sorel), head of the Science Policy Division at the OECD Directorate for Scientific Affairs (DSA), summarized the debate in the following terms:2

The technological development of the United States will thus be the mark of a new stage of growth (and power) which the European countries, despite their progress, will find themselves threatened with not being able to attain (p. 761).
1 Wilson warned “of an industrial helotry under which we in Europe produce only the conventional apparatus of a modern economy, while becoming increasingly dependent on American business for the sophisticated apparatus which will call the industrial tune in the 70’s and 80’s”: H. Wilson, cited in J.-J. Servan-Schreiber (1968), The American Challenge, translated from the French by R. Steel, New York: Athenaeum House, p. 78. 2 J.-J. Sorel (1967), Le retard technologique de l’Europe, Esprit, November, pp. 755–775.
[The real debate] is on the consequences in the medium and the long term, which could lead to a difference of scale between the scientific and technical business in the United States and in Europe, that is, to the threat of domination that it contains. For industrialized countries, it is perhaps on the field of science and of technology that their future independence will be decided (p. 774).
(J.-J. Sorel (1967), Le retard technologique de l'Europe, Esprit, November)

The OECD—and its consultants—had mostly fed the debate on gaps as early as 1963,3 but the organization also produced new and important quantitative analyses aimed at documenting the issue. Between 1965 and 1970, an experimental international statistical comparison, nine sector studies, one analytical report, and a synthesis were published. This was the first exercise to compare countries on several S&T indicators in order to draw policy conclusions. The OECD synthesis opposed, to a certain degree, the French fears: the gap was not technological—this was only an effect—but institutional and cultural. At about the same time, the government of the United States also undertook its own analysis of the problem and concluded similarly: "The problem of the technological gap is only partly technological. Psychological, political, economic, and social factors are probably more important," stated the US Interdepartmental Committee on the Technological Gap (known as the Hornig Committee).

This chapter looks at the debate and the statistics used to support the case for technological gaps. It is divided into four parts. The first traces the origins of the concept of technological gaps to the debate on productivity gaps in the 1950s. The second part examines the French discussions regarding discrepancies in S&T between Western Europe and the United States. It was these discourses which politicized the debate and which introduced it into the field of S&T. The third part presents the OECD results and conclusions arising from a two-year study on the issue. The last part, using a document specifically declassified for this analysis, looks at the US reaction and response to the debate.
The productivity gap

In 1948, the United States launched the European Recovery Program (ERP) or Marshall Plan, aimed at participating in the reconstruction of Europe. Five billion dollars were devoted to stimulate greater efficiency in European industrial production through the introduction of American production techniques, styles of business organization, and labour-management partnerships. The vehicles for achieving this goal included a variety of technical-assistance projects, engineering schemes,
3 OECD (1963), Science, Economic Growth and Government Policy, Paris; C. Freeman, A. Young, and J. Fuller (1963), The Plastics Industry: A Comparative Study of Research and Innovation, National Institute Economic Review, 26, pp. 22–49; C. Freeman, C. J. E. Harlow, and J. K. Fuller (1965), R&D in Electronic Capital Goods, National Institute Economic Review, 34, pp. 40–91.
and productivity surveys that were launched in Europe with the aid of American experts (. . .).
(M. J. Hogan (1987), The Marshall Plan: America, Britain, and the Reconstruction of Western Europe, 1947–1952, Cambridge: Cambridge University Press, p. 142)
For the Americans, the panacea for European economic recovery was to increase productivity.4 The productivity movement, originally launched by the Marshall Plan, was amplified by the United Kingdom.5 In 1948, L. Rostas, a statistician in the UK Board of Trade (Department of Trade and Industry), published an influential report comparing the productivity of UK and US industry, and showing a considerable disparity or gap in favor of the United States in most of the twenty or more industrial sectors studied.6 At the same time, the newly created UK Advisory Council on Scientific Policy (ACSP) set up a group of industrialists, trades union representatives, scientists and engineers to report on how S&T could best contribute to increasing the nation's industrial productivity. The (Gibbs) report stated that in the short run, research could have little immediate effect on productivity levels.7 The effort should be focused on inculcating a rational, scientific approach in industry and on adopting the operations research methods that had been so successful during the war. These would also be the solutions favored by the UK Committee on Industrial Productivity,8 and by the Anglo-American Council on Productivity, which participated actively in the organization of the US Technical Assistance and Productivity Program of the ERP.

To manage and distribute the American aid, European countries set up the Organization for European Economic Co-Operation (OEEC) in 1948 at the request of the United States. The following year, the Council of the OEEC set up a group of experts (WP3), which led to a regular program on productivity supervised by a Committee for Productivity and Applied Research, established in 1952.

4 C. S. Maier (1977), The Politics of Productivity: Foundations of American International Economic Policy After World War II, International Organization, 31, pp. 607–633; D. Ellwood (1990), The American Challenge and the Origins of the Politics of Growth, in M. L. Smith and M. R. Peter (eds), Making the New Europe: Unity and the Second World War, London: Pinter, pp. 184–199; D. W. Ellwood (1997), The Marshall Plan and the Politics of Growth, in R. T. Griffiths (ed.), Explorations in OEEC History, Paris: OECD, pp. 99–107.
5 A. King (1992), The Productivity Movement in Post-War Europe, 18 pages, unpublished; J. Tomlinson (1994), The Politics of Economic Measurement: The Rise of the Productivity Problem in the 1940s, in A. G. Hopwood and P. Miller (eds), Accounting as Social and Institutional Practice, Cambridge: Cambridge University Press, pp. 168–189; J. Tomlinson (1996), Inventing Decline: The Falling Behind the British Economy in the Postwar Years, Economic History Review, 49 (4), pp. 731–757.
6 L. Rostas (1948), Comparative Productivity in British and American Industry, National Institute of Economic and Social Research, Cambridge: Cambridge University Press.
7 Advisory Council on Scientific Policy (1948), First Annual Report, Cmd. 7465, London.
8 See First Report of the Committee on Industrial Productivity, Cmd. 7665, London: HMSO, 1949; Second Report of the Committee on Industrial Productivity, Cmd. 7991, London: HMSO, 1950.
A year later, the OEEC established the European Productivity Agency (EPA) as a condition for receiving the second aid program (after the Marshall Plan expired) from the United States ($100 million). By 1955, the EPA had an operational staff of 200, representing some 45 percent of the OEEC's total operational staff.9

When the EPA was first set up, European economic recovery was practically completed, but "the original attitude of mind still persisted. The tendency was still to try above all to make up the ground lost in Europe (. . .). The high productivity of American firms was due to their operating conditions as much as to their technical advances (. . .)."10 The EPA therefore continued the kind of projects initiated by the ERP. According to R. Grégoire, director of the EPA, over the period 1953–1958 three phases characterized the agency.11 The first he called the technological phase, and it was driven by the "illusion that the United States had discovered, thanks to the war, so many new processes, so many new production methods, that to bridge the gap it would above all be necessary to strive to make up for this technological advance" (p. 208). In a study on the role which American investments had played in assisting the post-war economic recovery of Western Europe, the OEEC summarized this view as follows: "The United States capital [carried] with it improved technology, efficient production and sale methods, patents, management, skilled personnel and fresh ideas, all elements from which the economies of the most advanced European countries can derive higher productivity."12 The belief in American technology led to missions to the United States, the diffusion of scientific and technical information (conferences, centers, digests, surveys), and activities on cooperation in applied research.

The second phase of the EPA was motivated by the idea that it was managerial and social factors that were responsible for productivity: "the difference between the average productivity of American businesses and that of European businesses can be mostly explained by a better conception of business management and a better social climate" (p. 212). The EPA therefore decided that it should "concentrate mainly on management problems and the improvement of co-operation between management and labour."13 This led to missions and conferences of experts, but also to the setting up of training centers on management and of national productivity centers, conferences on the administration and organization of research, the inculcating of scientific methods in industry (operational research), the development of productivity measurement techniques, and surveys on the attitudes of labor toward technological change.14

The last phase, according to Grégoire, saw a return to technological considerations: "we seem to have discovered (. . .) the extraordinary deficiency of technical personnel in Europe" (p. 216).

9 W. A. Brusse and R. T. Griffiths, Exploring the OEEC's Past: The Potentials and the Sources, in R. T. Griffiths (ed.), Explorations in OEEC History, Paris: OECD, p. 27.
10 OEEC (1959), Report of Working Party no. 26 of the Council, C (59) 215, p. 5.
11 R. Grégoire (1958), L'Agence Européenne de Productivité, in G. Berger et al. (eds), Politique et technique, Paris: Presses universitaires de France, pp. 197–218. For the kind of projects initiated by the Agency, see: OECD (1965), Répertoire des activités de l'EPA, 1953–1961.
12 OEEC (1954), Private United States Investment in Europe and the Overseas Territories, Paris, p. 31.
13 OEEC (1959), Report of Working Party No. 26 of the Council, C (59) 215, p. 5.
14 A. King (1992), The Productivity Movement in Post-War Europe, op. cit.
Indeed, Europe was now afraid "of being outdistanced by the United States and the USSR."15 By 1957–1958, it was recognized that "new technological developments were important elements in determining the long-term rate of growth."16 "The strictly narrow concept of productivity, which was appropriate to the economic situation when the Agency was created, should now give way to a wider concept," claimed the OEEC working party (WP26) on the future of the EPA.17 According to several people and organizations, however, the emphasis should rather continue to be placed on management factors.18 In fact, two groups of countries struggled over this issue at the EPA. One group was concerned with "traditional" activities, those pertaining to increasing productivity (the Nordic group of countries, plus Belgium); the other was concerned with problems relating to S&T, notably the training of scientific and technical personnel (France, Italy, the US, and the OEEC Secretariat).19 But for WP26, it was clear that S&T "should be given relatively more importance in the future Agency programmes than they have had in the programmes of the EPA."20

Indeed, the OEEC had also, for some years, been becoming more active in the field of S&T,21 mainly because "the future development of the European economy demanded increased numbers of highly trained scientists and technologists."22 The rationale of WP26 concentrated on comparing European and American performance:

Between the highly developed, science-based industries of the United States and the explosive development of Russian technology, Europe sits uneasily. (. . .) True, Europe has the great advantage of the tradition and maturity of its scientific institutions, and particularly those for fundamental research. (. . .) But this is not enough. (. . .) Europe has, as a region, been slow to exploit in production the discoveries of its laboratories.
(OEEC (1959), A Programme for European Co-operation in Science and Technology, C/WP26/W/4, p. 2)

15 Boel (1997), The European Productivity Agency, 1953–61, in R. T. Griffiths, Explorations in OEEC History, Paris: OECD, pp. 99–107, p. 117.
16 OEEC (1959), Report of Working Party no. 26 of the Council, op. cit., p. 6.
17 Ibid., p. 8.
18 See R. F. Kuisel (1988), L'American Way of Life et les missions françaises de productivité, Vingtième siècle, 17, January–March, pp. 21–38; R. F. Kuisel (1993), The Marshall Plan in Action, in Le plan Marshall et le relèvement économique de l'Europe, symposium held at Bercy on March 21–23, 1991, Comité pour l'histoire économique et financière de la France, pp. 335–357.
19 B. Boel (1997), The European Productivity Agency: Politics of Productivity and Transatlantic Relations, 1953–61, PhD Dissertation, Department of History, Copenhagen: University of Copenhagen, p. 70.
20 OEEC (1959), Report of Working Party No. 26 of the Council, op. cit., p. 8.
21 Four areas characterized the early activities of the organization: (1) creation of an atmosphere of public understanding (for which it organized conferences on the administration and organization of research, and the improvement of basic education); (2) provision of scientists and engineers (for which a working party on shortages was set up, countries reviewed and international surveys conducted); (3) cooperation in applied research (roads, water, ships, metal, etc.); and (4) dissemination of scientific information (by networking with the national information centers involved—among other things through STI from Eastern Europe, and SME; by conducting surveys on industrial needs).
22 R. Sergent (1958), Coopération scientifique et technique: note sur les activités de l'OECE, Memorandum, January 22.
"It is no longer possible for each of its constituent countries to undertake the amount of research necessary for its security and prosperity."23 But "most of our governments have evolved little in the way of a coherent national science policy, while the concept of scientific research and development as an important and integral feature of company investment is foreign to the thought of most of European industry."24 The working party proposed merging the EPA Committee of Applied Research (CAR) and the OEEC Committee of Scientific and Technical Personnel (STP) under a Committee of Scientific Research (CSR), and the setting up of a seven- to ten-year program based on the Wilgress report.25

Indeed, in 1959, Dana Wilgress was asked by the Secretary-General to visit member countries to discover their approaches to S&T. He reported: "It is in Western Europe that most of the great scientific discoveries have taken place (. . .) but in the race for scientific advance, the countries on the Continent of Europe stood comparatively still for more than two decades while the Soviet Union and North America forged ahead."26 The sources of the problem were many: the educational system was "better fitted for turning out people trained in the liberal arts than in science and technology"; there were prejudices against those who work with their hands, and few applications of the results of science; there was also a lack of resources for science, too great an emphasis on short-run profits and not enough on investment for the future, small-sized firms that were not science-minded, and inadequate university facilities and technical training.

It was in this context that the newly created OECD (1961) turned to the promotion of national science policies. To better inform these policies, the OECD would conduct R&D surveys and economic studies of science, and borrow from the EPA the notion of the productivity gap, which became, mainly under the influence of the French, the technological gap.
French ambitions

The 1960s was a period when the French opposed the Americans on every front: politics, business, and culture.27 During this period, France was also the first European country to denounce a technological gap between Western Europe and America: in 1964, P. Cognard, of the French Délégation générale de la recherche scientifique et technique (DGRST), extended the then-current debate on American domination to S&T with a kind of "manifesto" published in the journal of the directorate, Le Progrès scientifique. He was followed three years later, in a more subtle way, by Jean-Jacques Servan-Schreiber, editor of the weekly L'Express.

23 OEEC (1959), A Programme for European Co-operation in Science and Technology, C/WP26/W/4, pp. 2–3.
24 Ibid., p. 3.
25 See also: OEEC (1959), Report of Working Party No. 26 of the Council, C (59) 215; OEEC (1959), Work in the Scientific Sector, C/WP26/W/22; OEEC (1961), Preliminary Draft of the Programme of the Committee for Scientific Research, EPA/AR/4185.
26 OECD (1959), Co-operation in Scientific and Technical Research, C (59) 165, p. 14. Officially published in 1960.
27 R. Kuisel (1993), Seducing the French: The Dilemma of Americanization, Berkeley: University of California Press.
Both men alerted the public to the danger of political and economic dependence on the United States if something were not done rapidly in S&T on the European continent.

A political manifesto

Cognard started his "manifesto" as follows:28

Numerous are those who think that (. . .) Europe is on the point of making up for its slowness compared to the United States (. . .). [Unfortunately, they are basing themselves] on a somewhat outdated conception of productive wealth, dating back to an age when the classical factors of production were built only on capital, manpower and primary materials. (p. 2)

For Cognard, "a new step in the industrial revolution is underway which will be marked by a systematic use of scientific progress in industry" (p. 9). The American superiority in S&T, he argued, "risks creating a science gap to the benefit of the United States" (p. 2), "a loss of balance from which our economic freedom of action could suffer" (p. 6), and risks creating two categories of firms: the pioneers and the followers (p. 11). "He who has technological superiority is master" (p. 11):

We are permitted to fear several difficulties in the future, of which the first have appeared or will appear in all new or high-tech industries, that is, all industries in which first expansion, and then survival, are intimately conditioned by scientific concentration and a very significant innovative power (. . .). The industry of the latter part of this century will be a refined industry or "grey-matter" industry (. . .). These will be businesses with considerable laboratories and brain-power, working in complete symbiosis with the greatest scientists, with the firm idea of rapidly drawing from research and from the latest advances in basic science all the elements likely to prompt the greatest possible innovation in their productions. (p. 8)

Cognard concluded his essay as follows:

Certainly it would be absurd to systematically oppose oneself to the introduction into a country of a foreign firm which brings in a superior technology and thus contributes to economic progress and to improvement of the standard of living in the welcoming country (. . .). Nevertheless, we do not well see how a Nation could maintain its political independence if such penetration becomes generalized, and if a large part of its means to design and to produce are subordinated to the technical and economic decisions of foreign firms. (p. 14)

28 P. Cognard (1964), Recherche scientifique et indépendance, Le Progrès scientifique, 76, September, pp. 1–15.
The American challenge

Servan-Schreiber's book was a best seller for several weeks.29 As Arthur Schlesinger reported in his foreword to the English edition: "In France no book since the war, fiction, or non-fiction, sold so many copies in its first three months" (p. vii). According to Servan-Schreiber, American firms were seizing power within the European economy with foreign investments that "capture those sectors of the economy most technologically advanced, most capable to change, and with the highest growth rates" (p. 12). "Fifteen years from now it is quite possible that the world's third greatest industrial power, just after the United States and Russia, will not be Europe, but American industry in Europe" (p. 3).

For Servan-Schreiber, electronics was symptomatic of the situation: "Electronics is the base upon which the next stage of industrial development depends (. . .). A country which has to buy most of its electronics abroad will be in a condition of inferiority" (p. 13), and will remain "outside the mainstream of civilization" (p. 14). "It is a historical rule that politically and economically powerful countries make direct investments (and gain control) of less-developed countries" (p. 12). Echoing Cognard, Servan-Schreiber framed the problem as a dilemma:30 "Restricting or prohibiting investments is no answer, since this would only slow down our own development" (p. 17): "We must admit once and for all that American investment brings important, and even irreplaceable, benefits" (p. 24). "Yet if Europe continues to sit passively as US investments flood the Continent, our whole economic system will be controlled by the Americans" (p. 17). Conclusion: "If American investment is really part of the phenomenon of power, the problem for Europe is to become a great power" (p. 27).

For Servan-Schreiber, however, American investments were only part of the problem.31 In fact, the success of Americans was due to a number of factors, like firm size and capital availability, and above all, high R&D investments, federal spending, higher education and new methods of organization and management. "The American challenge is not basically industrial or financial. It is, above all, a challenge to our intellectual creativity and our ability to turn ideas into practice" (p. 101). For Servan-Schreiber, European countries needed to create a real common market ("only on a Europe-wide level, rather than a national one, could we hope to meet the American challenge" (p. 111)) and develop a European technological community by way of a real European science policy (not based on the politics of "fair return" according to each country's financial contribution).

29 J.-J. Servan-Schreiber (1968), The American Challenge, op. cit.
30 The same argument was repeated by Servan-Schreiber on pp. 26, 38–39.
31 For a good analysis of the problem of American investment in France, see: A. W. Johnstone (1965), United States Direct Investment in France: An Investigation of the French Charges, Cambridge: MIT Press.
The OECD study on technological gaps
It was in this context that the second OECD ministerial conference on science, held in 1966, asked the Secretariat to study "national differences in science and technical potential" between member countries.32 The OECD had, in fact, recently published an experimental international statistical comparison, adding fuel to the debate (by documenting an R&D gap between the United States and Western Europe),33 and was completing the analysis of its first international survey data on R&D, to be published in 1967, the preliminary results of which were presented to the ministers.34 The latter survey would concentrate on the discrepancies between the United States and European countries. It showed that the United States' GERD (Gross Domestic Expenditures on R&D) was highest in absolute terms as well as per capita (p. 15), and that it had the most scientists and engineers working on R&D (p. 17):

There is a great difference between the amount of resources devoted to R&D in the United States and in other individual member countries. None of the latter spend more than one-tenth of the United States' expenditure on R&D (. . .) nor does any one of them employ more than one-third of the equivalent United States number of qualified scientists and technicians,

reported the OECD. (p. 19)

The context within which the OECD introduced its report on R&D was the then-current debate on technological gaps. The organization refused, however, to use either the term "debate" or "gaps": "It is hoped that this report will contribute to the clarification of existing public discussions on this matter, in particular in connection with technological disparities between member countries" (p. 5), that is, between the United States and Western Europe.

A year later, however, the OECD published Gaps in Technology.35 The project started at the end of 1966 and was, according to the OECD, "the first time that a study on the technological differences between Member countries has been undertaken."36 For the OECD, the analysis of the problem could not "be further advanced without intensive study in specific industrial sectors."37 To this end, a working group was set up, chaired by Jacques Spaey from Belgium and composed of representatives from France, Germany, Italy, Norway, the United Kingdom, and the United States, to answer the following three questions:38

● What are the differences between member countries in their scientific and technical potential?
● What is the nature of the differences?
● What action is appropriate to ensure that members' potential will be increased?
32 OECD (1966), The Technological Gap, SP(66) 4. 33 C. Freeman and A. Young (1965), The R&D Effort in Western Europe, North America and the Soviet Union, Paris. 34 OECD (1967), The Overall Level and Structure of R&D Efforts in OECD Member Countries, Paris. 35 OECD (1968), Gaps in Technology: General Report, Paris. 36 OECD (1967), Gaps in Technology Between Member Countries: Check-List, DAS/SPR/67.3, p. 2. 37 OECD (1966), Differences Between the Scientific and Technical Potentials of the Industrially Advanced OECD Member Countries, DAS/SPR/66.13, p. 2. 38 OECD (1966), Working Group on Gaps in Technology Between Member Countries, DAS/SPE/66.16.
At the suggestion of the United States, industrial sectors were chosen for specific studies, and a check list sent to member countries in early 1967 to obtain information on the economic performance of each industrial sector, on the role of R&D and innovation in their economic performance, and on factors which stimulate or hinder R&D and innovation.39 As a result, the OECD produced three types of documents: a synthesis report,40 an analytical report,41 and six sectoral studies.42 Overall, the OECD collected information on three related aspects of the problem of technological disparities: (1) differences in the development of national scientific and technological capabilities; (2) differences in performance in technological innovation; (3) economic effects of 1 and 2.

With regard to S&T capabilities, the OECD looked at graduates, the migration of scientists and engineers, and R&D. Concerning the production of graduates, Gaps in Technology found that "the United States appears to put relatively much more emphasis on pure science than on technology [while] the European effort in technology surpasses the United States' effort in both relative and absolute terms" (p. 12). Turning to the migration of scientists and engineers, the OECD stated: "Europe has lost in recent years approximately 2,000 scientists and engineers annually," but the report immediately added: "significant rates of emigration are, however, limited to a few countries only, and they are, moreover, concerned with one-way flows only" (p. 12). But it was the statistics from the first international survey on R&D that were the main variable used here: "in 1964, the United States devoted 3.4 per cent of GNP to R&D, the economically-advanced European OECD countries together 1.5 per cent, the European Economic Community 1.3 per cent, Canada 1.1 per cent and Japan 1.4 per cent" (p. 13). The largest disparity in R&D was found to be in industry: "no firm in any European country has an R&D programme of this magnitude" (more than $100 million per annum) (p. 13). In basic research,

the United States has a strong position in most fields of fundamental research, but above all in fields where heavy capital and maintenance expenditures, and a large number of highly qualified scientists (above Ph.D. level) are necessary . . . European fundamental research units are generally smaller. (p. 13)
39 OECD (1967), Gaps in Technology Between Member Countries: Check-List, op. cit. 40 OECD (1968), Gaps in Technology: General Report, op. cit. 41 OECD (1970), Gaps in Technology: Comparisons Between Countries in Education, R&D, Technological Innovation, International Economic Exchanges, Paris. 42 Scientific instruments, electronic components, electronic computers, plastics, pharmaceuticals, non-ferrous metals.
Government funding of R&D was also higher in America: "the United States devoted four and a half times as much public money to R&D as industrialized Western Europe," although it is highly concentrated in defense, space and nuclear energy (p. 13). "While it has not been the aim of the United States policy to support industries or products directly for commercial purposes, the indirect commercial effects have been considerable" (p. 14).

On the second item—innovation—the conclusions were similar in tone:

Firms based in the United States have had the highest rate of original innovation over the past 15 to 20 years. Of the 140 innovations studied, they have originated approximately 60 per cent. United States firms also have the largest share of world exports in research-intensive product groups (about 30 per cent), and the largest monetary receipts for patents, licensing agreements, and technological know-how (between 50 and 60 per cent of total OECD receipts). (p. 15)

One conclusion that appears irrefutable: United States firms have turned into commercially successful products the results of fundamental research and invention originating in Europe. Few cases have been found of the reverse process. (p. 17)

With regard to the diffusion of innovation, the report found that "the United States have the highest level of diffusion of new products and processes, but many other member countries have had higher rates of increase in the diffusion of new products and processes over the past 10 to 15 years. However, rates of increase in diffusion have been much higher in Japan (. . .)" (p. 17). But above all, for the OECD, "differences between member countries in performance in originating innovations do not appear to have had any [negative] effects on member countries' overall economic growth performance" (p. 18).

Finally, with regard to the economic impacts (or outcomes) of S&T, the OECD looked at two indicators. First, flows of technology: "The United States' receipts for patents, licenses, etc. account for 57 per cent of total receipts in OECD countries" (p. 19). Second, trade statistics showed that "the United States tends to have a trading advantage over other member countries in newer, more sophisticated products" but, again, "there is no indication that the United States advantage in those goods where scientific capability and innovation skills are important has had deleterious consequences for other countries" (p. 18).

Overall, in the view of the OECD, the causes of the gap were not R&D per se:

scientific and technological capacity is clearly a prerequisite but it is not a sufficient basis for success (. . .). The market—size and homogeneity, including that portion made possible by Government procurement—is in fact a very important factor conditioning the realization of scientific and technological potential (. . .). Nevertheless, a broader market would, in and of itself, not solve the problem. (p. 23)
This was because other factors were equally important, among them: size of firms, role of government support, industrial rather than public support, economic climate, educational and social environment, and management.

The conclusions of the OECD study were reinforced by a second study contracted to Joseph Ben-David.43 Using several indicators,44 Ben-David documented a gap in applied research between Europe and the United States, and suggested that the origins of the gap went back to the beginning of the twentieth century: to the failure in Europe to develop adequate research organizations and effective entrepreneurship in the exploitation of science for practical purposes. Briefly stated, European universities were not oriented enough toward economic and social needs: academics still considered science essentially as a cultural good. To change the situation would, according to Ben-David, require long-term policies involving structural changes.
The American reaction

For the Americans, the problem of Europe was a management problem—applying available technology—and their position may even have influenced the OECD conclusions. D. F. Hornig, special assistant for Science and Technology, appointed in November 1966 by President Johnson to study the issue, stated:45 "McNamara said it was a management gap, some of us said it was an education gap, but Pierre Masse in France, I think put it together best. He said, 'It all adds up to an attitude gap.' We educate more people; we educate them to a higher level; we find our management is more enterprising . . ." To clarify the issue, his colleague I. L. Bennett, assistant director at the OST (Office of Science and Technology), suggested in the newspaper Le Monde:46

what I advise is an organized and concerted initiative to demystify the gap . . . It is only in making the distinction between the real facts and the illusions engendered by an emotional reaction or by political opportunism that we can define the real dimensions of the problem . . . To this end, we have supported with all our heart the major study on industrial sectors which is underway at the OECD.

In general, American officials tended to dismiss the technological gap with Europe as a non-problem, or at least as a problem that the US government could do little to help solve.
43 OECD (1968), Fundamental Research and the Universities: Some Comments on International Differences, Paris. 44 Balance of trade in technological know-how, technological inventions, publications, Nobel prizes. 45 Transcript, D. F. Hornig Oral History, Interview I, 12/4/68, by D. G. McComb, Internet Copy, Lyndon Baines Johnson Library, p. 29. 46 I. L. Bennett (1967), L’écart entre les États-Unis et l’Europe occidentale est un fait réel qu’il importe avant tout de définir, Le Monde Diplomatique, February, p. 5.
While admitting that the United States was ahead of Europe in computers, electronics, aviation, and space, Americans pointed out other areas where the United States was behind—metallurgy, steel, and shipbuilding. They also noted the German superiority in plastics, the Dutch preeminence in cryogenics, and the positive balance of trade for the European Economic Community in synthetic fiber. "If the Atlantic Community nations are really at a technological disadvantage vis-à-vis the United States today, how have most of them managed to outstrip the United States in production growth and in expansion of their foreign trade during the last decade?"47

The views of R. H. Kaufman, Vice-President of the Chase Manhattan Bank, were representative of the American position. At a conference organized by the Atlantic Institute in Rome in 1968, he suggested:48 "Much of the confusion regarding technology stems from conflicting definitions" (p. 15). By this, Kaufman meant that innovation did not originate solely, or even mainly, in R&D, but that management, marketing and the use of technologies were, for example, equally important. According to Kaufman, there were more lags than a gap:

A gap suggests an inequality at one point in time—a vacuum that must somehow be filled. However, this is not completely accurate, for there has never been a uniform technological level between peoples . . . Leads and lags are normal phenomena [and] change hands many times. (p. 17)

"There is nothing new about Europe being technologically behind the United States in a number of fields," wrote Kaufman. "What is new is the mounting concern about a current or potential threat that these technological lags may pose for Europe, in particular, as well as for the whole world" (p. 22). "Europe's technological lags have been confined to certain industries; and up to now, they have hindered neither the region's economic growth, nor its balance of payments, nor its capacity to innovate" (p. 22). But "why is there such a wide disparity between these findings and the strong feeling of many Europeans," asked Kaufman (p. 37)? He offered three explanations. First, the "popular tendency to extrapolate developments in the spectacular industries [like computers and electronics] to the rest of the economy" (p. 37).
47 Science (1966), Hornig Committee: Beginning of a Technological Marshall Plan?, December 9, pp. 1307–1309. 48 R. H. Kaufman (1970), Technology and the Atlantic Community, in The Atlantic Institute, The Technology Gap: US and Europe, New York: Praeger, pp. 13–101. The Atlantic Institute has published extensively on Western Europe/United States economic relationships since 1966: C. Layton (1966), Trans-Atlantic Investments, Atlantic Institute; Atlantic Institute (1966), Atlantic Cooperation and Economic Growth I, report of a Conference held at Fontainebleau; Atlantic Institute (1966), Atlantic Cooperation and Economic Growth II—Planning for the 1970s, Report of a Conference held in Geneva; A. T. Knoppers (1967), The Role of Science and Technology in Atlantic Relationships, Atlantic Institute.
foreign direct investments. Third, social and political concerns: “European opinion is concerned that the world’s productive effort may be undergoing a reallocation, with all advanced techniques and productivity improvements emanating from the United States . . . Many Europeans resent the fact that US companies dominate certain of their industries” (p. 47). Other aspects of European anxieties identified by Kaufman related to the brain drain—a “highly emotional term invented by the British” (p. 48)—and to the nuclear age, where “a strong technological base is conducive to military power” (p. 49). The lag is being used as an excuse to make improvements in Europe’s educational structure, its management practices, its salary scales for scientists and engineers, its industrial structure through mergers and consolidations, and its expenditures for instrumentation in R&D departments. And, of course, Britain has used the problem to bolster its case for joining the EEC (European Economic Community). (p. 50) For Kaufman, the real causes of the European technological lag were: economic (labor shortage, small market, small companies, lack of competitive climate), technological (emphasis on basic rather than applied research), management (bad training of managers, lack of commercialization), policies (tax policy, patent system), and social (attitudes toward business, educational system) (pp. 52–80). Other American authors offered similar analyses. For R. R. Nelson,49 gaps between US and Europe were a long-standing phenomenon that had existed for over 100 years, but concern was “greatly sharpened in the early post World War II years when, as a result of the war, disparities between US and European economic capabilities were particularly great” (p. 12). “What is new is a far sharper awareness of the situation, and, among at least some Europeans, a relatively new deep-seated concern about its significance” (p. 15). Nelson offered four basic reasons for European concern: trade, international direct investment, science, and military strength (pp. 15–19). These “led some people to view certain consequences as inseparable—loss of foreign policy autonomy in certain key respects, reduced national control over the domestic economic system, and a threat to national economic well-being and growth” (p. 19). But, “to a considerable extent the power of the European economy to produce goods and services is as high as it is because of the technological progressivity of the United States” (p. 21). For Nelson, the debate was rather political: “Not being behind technologically in the most revolutionary fields has been, or is becoming, an aspect of national sovereignty” (p. 33), and equivalent to “assigning high value to independence options, and underestimating the price” (p. 34).
49 R. R. Nelson (1967), The Technology Gap: Analysis and Appraisal, P-3694-1, Santa Monica, California: RAND. See also: R. R. Nelson (1971), World Leadership, the Technological Gap and National Science Policy, Minerva, 9 (3), pp. 386–399.
R. S. Morse from MIT held similar views:50 “Discussions about the technological gap are often undertaken by individuals who understand neither science and technology nor the problem associated with its application” (p. 84). “The United States has a greater total capability in advanced technology than any other country, but there is little evidence that such technology, per se, is solely responsible for its economic growth rate or standard of living” (p. 84). “If there is some gap between the US and Europe to which Europeans should direct their attention, it is not the technological gap, but rather a management gap” (pp. 85–86). Morse then went on to list a number of factors that seemed fundamental to rapid progress: cooperative environment (between university, government, and business), personnel mobility (between sectors), attitude of top management, new enterprises, venture capital, and competition.51 Several Europeans agreed with the diagnosis. C. Freeman, author of the first OECD analyses of international statistics on R&D, concluded:52 “To describe or to understand a ‘technology gap’, one must go beyond comparisons of R&D inputs” (p. 464): “it is clearly possible to have a highly productive R&D system but a disproportionately small flow of economically successful innovations and a slow rate of diffusion” (p. 464), because “successful innovations often demand management qualities of a higher order” (p. 466). In accordance with these specifications, Freeman concluded: there are some grounds for believing that, both in the Soviet Union and in Britain (though for rather different reasons), the flow of profitable innovations and the speed of their diffusion has been somewhat disappointing in relation to the input of resources into growth-oriented R&D, and probably also in relation to the output of R&D. (p. 465) J.-J. Salomon also admitted that there were disparities between the United States and Europe:53 “If there is a greater aptitude among American businesses to take advantage of the products of research, it is due to factors of design and of management as much as, if not more than, to factors of measurement (. . .). The technological gap is in large part a managerial gap.” J.-P. Poullier, consultant at the French National Center for Information on Productivity in Business and co-author of an influential study by Edward Denison which calculated that education and technology were responsible for 60 percent
50 R. S. Morse (1967), The Technological Gap, Industrial Management Review, Spring, pp. 83–89. 51 For more and similar arguments from Americans, see: J. B. Quinn (1966), Technological Competition: Europe vs. USA, Harvard Business Review, July/August, pp. 113–130; G. E. Bradley (1966), Building a Bigger Atlantic Community Market, Harvard Business Review, May/June, pp. 79–90; A. Kramish (1967), Europe’s Enigmatic Gap, P-3651, RAND, Santa Monica, California; J. Diebold (1968), Is the Gap Technological?, Foreign Affairs, January, pp. 276–291. 52 C. Freeman (1967), Research Comparisons, Science, 158, October 27, pp. 463–468. 53 J.-J. Sorel (1967), Le retard technologique de l’Europe, op. cit., p. 764.
of the differences in growth rates between America and Europe,54 concluded what only a European could have publicly said: If a major objective of Europe is to catch up with the income and productivity of the United States, a high degree of emulation of the American pattern is unavoidable, for economics responds to a certain rigor and discipline. Europe may choose not to pay the price America paid, but then it must accept without infantile recriminations a level of income second to that of the United States. Frankly stated, a large number of comments and explanations of the technological gap are unworthy of the great cultural and intellectual environment on which Europeans like to pride themselves. (J.-P. Poullier (1970), The Myth and Challenge of the Technological Gap, in The Atlantic Institute, The Technology Gap: US and Europe, New York: Praeger, p. 125) Finally, A. Albonetti, director of international affairs and economic studies at the National Committee for Nuclear Energy (CNEN), using several statistics, “demonstrated” that there was a gap between Europe and the United States, but over time “there exists a parallel trend which tends to minimize this gap” and “scientific research is, for the time being, rather confined to the smoothing of this disparity.”55 The above authors were only some of the individuals who took part in the debate, and only the first to analyze the issue.56 In the 1980s and 1990s, scholars would continue debating the issue, although with new theoretical frameworks.57
The official response
The constant talk about technological gaps, including that at the OECD, strongly angered the United States, and had at least two impacts on the American government. First, the US President created a committee to study the issue and report rapidly to him on actions to be taken. Second, the Department of Commerce (DOC) started publishing a series of statistics on technology-intensive industries, which gave rise to indicators on high technology.
54 E. E. Denison and J.-P. Poullier (1967), Why Growth Rates Differ: Postwar Experience in Nine Western Countries, Washington: Brookings Institution. 55 A. Albonetti (1967), The Technological Gap: Proposals and Documents, Lo Spettatore Internazionale Rome (English edition), Part 1, Vol. 2 (2–3), p. 264. 56 For other authors of the time who held the same discourse, see: E. Moonman (ed.) (1968), Science and Technology in Europe, Harmondsworth: Penguin; R. Gilpin (1968), France in the Age of the Scientific Estate, Princeton: Princeton University Press; C. Layton (1969), European Advanced Technology: A Programme for Integration, London: Allen and Unwin. 57 For the recent literature, see: Research Policy, special issue, 16, 1987; J. Fagerberg (1994), Technology and International Differences in Growth Rates, Journal of Economic Literature, 32, pp. 1147–1175; J. Fagerberg, B. Verspagen, and N. von Tunzelmann (1994), The Economics of Convergence and Divergence: An Overview, in The Dynamics of Technology, Trade and Growth, Hants: Edward Elgar, pp. 1–20; J. Fagerberg and B. Verspagen (2002), Technology-Gaps, Innovation-Diffusion and Transformation: An Evolutionary Interpretation, Research Policy, 31, pp. 1291–1304.
The Interdepartmental Committee on the Technological Gaps
In November 1966, the US Government set up an Interdepartmental Committee on the Technological Gap to examine the problem of disparities between the United States and Western Europe.58 The committee had discussions with key European governmental, industrial and university leaders, consulted what little empirical literature there was on the subject, cooperated with the OECD working group, above all on the sector studies, and conducted a survey of American direct investments in Europe. It delivered its report to the President in December 1967.59 The committee admitted that there was a technological gap: disparities in R&D (p. 5) and innovation (pp. 6, 10–11) between the United States and Europe, and American “control” (80 percent), by way of direct investment, of European technology-intensive industries (pp. 13–14). However, the committee added that “there is a growing consensus between ourselves and the Europeans on the real nature of the technological gap” (p. 3): “the problem of the technological gap is only partly technological. Psychological, political, economic, and social factors are probably more important” (p. i). “The Europeans are coming to understand that they need to solve a complex series of problems involving education, productivity, capital markets, managerial attitudes and procedures, economies of scale, mechanization, restrictive business practices, and generally inefficient work habits” (p. 16). For the committee, “the technological gap problem is a current manifestation of the historical differences between Europe and the United States in aggressiveness and dynamism, reflecting the American frontier past and its restless quest for progress and change” (pp. ii and 12). It is “one aspect of the broad disparities in power and economic strength between the United States and a fragmented Europe which will be a recurrent problem for a long time to come” (p. iii). Briefly stated, the position of the committee was the following:
● The European lag in technological know-how is largely in a few sectors of advanced technology or technology-intensive industries.
● An economically more significant lag is in European abilities to utilize available technology.
● This lag is due to a number of long-standing structural factors such as underinvestment in education, less aggressive and skilled management, less profit-oriented social customs and work habits, slowness in industrial modernization, small size of firms and national markets, conservative investment attitudes, lack of mobility and an inadequate number of highly trained personnel.
58 The committee was composed of representatives from the following organizations: Department of State, Department of Defense, Department of Commerce, NASA, Council of Economic Advisers and Atomic Energy Commission. In addition, observers from three organizations were invited to attend the meetings: the Department of the Treasury, the Department of Justice, and the Special Representative for Trade Negotiations. 59 Report of the Interdepartmental Committee on the Technological Gap, Report submitted to the President, December 22, 1967, White House. Declassified 10-07-2002 (National Archives).
According to the committee, Europeans actually faced a dilemma: European countries are anxious to benefit to the maximum extent from US technological advances while avoiding the possibility of American technological/industrial domination. This combination of aims has resulted in an ambivalent approach. On the one hand, they are considering essentially protective measures. On the other, they would like the broadest access to the results of US government-financed R&D. (p. 9) What did the committee recommend as the American strategy? Its suggestions were first of all motivated by the fact that “although European concerns about the technological gap may be exaggerated, they may nonetheless result in European counteractions to discriminate against American firms or products and in other measures that would pose political and economic difficulties for the United States” (p. iii). “This political sensitivity must be taken seriously” (p. 10). “The continuing problem for the United States is to assure that movement toward Europeanism does not develop into a force with political and economic goals that are inimical to those of the United States” (p. 19). “Our policy planning should deal with the possibility that some European countries react to the prospect of American industrial take-over and technological domination by imposing restrictive measures” (p. 20). The committee was also motivated by another idea: “the only long-range cure for the disparities problem lies in actions which must be taken by Europeans themselves (. . .). There is little that the US government can or should do by way of direct assistance” (p. 73). But “this does not mean inaction on our part” (p. 72), added the committee. The committee suggested adopting the attitude of a friendly neighbor: “the US government must view the problem as an important one and adopt a posture and policies that do not feed the exploitation of these concerns abroad” (p. 5). “The United States can play a significant complementary role—primarily through promoting scientific and technological cooperation and through the mutual reduction of obstacles to the flow of technology and related trade” (p. 73). Among other things, the United States should (p. 74ss)
● Stress that the United States and Europe have a joint stake in technological and economic progress; that our future prosperity is mutually interdependent; and that all stand to gain by promoting an open technological market, the international flow of scientific and technological advances, as well as management and organizational skills.
● Acknowledge (in low-key) that there is a United States–European gap in ability to utilize technological know-how, and to a certain extent in technological know-how per se, but the basic actions to strengthen Europe must be taken by the Europeans themselves.
● Cooperate in R&D activities with Western Europe.
● Emphasize that the technological gap issue reveals an essential need for effective integration of Western Europe.
High technology indicators
At the request of the interdepartmental committee, the Department of Commerce conducted one of the first surveys of American investments and operations in Europe. The report served as a background document to the final report of the committee, and was entitled The Nature and Causes of the Technological Gap between the United States and Western Europe.60 As a follow-up, the committee recommended that the Department of Commerce “conduct on a continuing basis in-depth analytical studies on the economic and technological questions related to technological disparities and to the international flow of technology, trade, and investments” (p. v). The Department of Commerce indeed responded with further studies and reports that brought to the fore the concept of high technology and the claim that the United States was declining in these industries. M. T. Boretsky, director of the Technological Gap Study Program (1967–1969) at the Department of Commerce, launched the research program. The concept of high technology goes back to early OECD work—and before. Up to then, the OECD had defined “research-intensive industries” as those with a high ratio of R&D expenditure to sales.61 Boretsky used three measures to construct his category of high technology products:62 R&D, S&T manpower, and skills. The following were thus identified as “technology-intensive products”: chemicals, non-electrical machinery, electrical machinery and apparatus (including electronics), transportation equipment (including automobiles and aircraft), and scientific and professional instruments and controls. The industries responsible for these products represented 14 percent of GNP in the United States, employed 60 percent of all scientific and engineering manpower, and performed 80 percent of non-defense industrial R&D. Boretsky’s calculations showed that, in the early 1970s, the United States was in danger of losing its preeminence in advanced technologies, particularly those that were important in world trade. American exports of technology-intensive manufactured products were leveling off, according to Boretsky, mainly because the gap with other OECD countries was narrowing and because these countries were growing faster. Ironically, “if, in the 1960s, any country’s economically-relevant R&D performance could be described as having had the characteristics of a gap, the description should have been accorded to the United States rather than to the major countries of Europe, or to Japan,” concluded Boretsky.63
60 As recommended by the committee, the document was to be published if possible prior to the OECD ministerial meeting (March 1968), but never was. Furthermore, the accompanying copy to the final report has been lost. 61 OECD (1963), Science, Economic Growth and Government Policy, Paris, pp. 29–33. 62 M. Boretsky (1971), Concerns About the Present American Position in International Trade, Washington: National Academy of Engineering, pp. 18–66; M. Boretsky (1975), Trends in US Technology: A Political Economist’s View, American Scientist, 63, pp. 70–82; Science (1971), Technology and World Trade: Is There Cause for Alarm, 172 (3978), pp. 37–41; M. Boretsky (1973), US Technology: Trends and Policy Issues, Revised version of a paper presented at a seminar sponsored by the Graduate Program in Science, Technology and Public Policy of the George Washington University, Washington. 63 M. Boretsky (1973), US Technology: Trends and Policy Issues, op. cit., p. 85.
The Department of Commerce continued to develop and improve the indicator in the following years,64 and its use soon spread to other countries and to the OECD.65 In the 1980s, it became a highly contested but much-used indicator among official statisticians and governments.
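Read in modern terms, the research-intensity criterion used by the OECD, which Boretsky then supplemented with manpower and skill measures, amounts to a simple classification rule. The sketch below illustrates only that logic; the figures, the industry list, and the 4 percent cut-off are invented for illustration and do not correspond to any official OECD or Department of Commerce values.

```python
# Minimal sketch of the research-intensity criterion: classify industries by
# their R&D-to-sales ratio. All figures and the threshold are hypothetical.

industries = {
    # industry: (R&D expenditure, sales), in the same currency unit
    "aircraft": (900, 9000),
    "electronics": (650, 10000),
    "chemicals": (400, 11000),
    "textiles": (30, 6000),
    "steel": (45, 9000),
}

THRESHOLD = 0.04  # hypothetical cut-off for "research-intensive"

def rd_intensity(rd: float, sales: float) -> float:
    """Return R&D expenditure as a share of sales."""
    return rd / sales

for name, (rd, sales) in industries.items():
    share = rd_intensity(rd, sales)
    label = "research-intensive" if share >= THRESHOLD else "other"
    print(f"{name:12s} R&D/sales = {share:5.1%} -> {label}")
```

A ratio of this kind captures only the first of Boretsky's three measures; his actual category also weighed S&T manpower and the skill composition of the workforce.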
Conclusion
Technological gaps have been one of the principal historical factors that have influenced national and international work on S&T policies and statistics. Everyone found something in the data to document their own case. Numbers were cited by the pro-gap theorists—mainly Europeans who reminded people that the United States’ effort, at 3.4 percent of GNP, was well above Europe’s—by the skeptics, who proposed the idea that American superiority was due only to defense (62 percent of R&D) and not to civil R&D, and by the Americans themselves: US performance came mainly from the efforts of industry on development (over 65 percent of R&D), which Europe could emulate. These comparisons led to the current practice of ranking countries, and of assessing their performance against that of the United States. Whether the statistics helped shape policy agendas and priorities remains to be documented, but they certainly shaped political discourses, policy documents, and analytical studies. It is probably inevitable that international comparisons and, above all, international statistics lead to such discourses. Emulation between countries, mimicry, and convergence probably have to be accepted as indirect effects of statistical standardization. And indeed, the OECD had a major influence on the most recalcitrant country: the United States. Following the OECD study on technological gaps, the United States began nourishing some fears and apprehensions of its own.66 Today, such fears are qualified as a case of “statistical myopia”: there was in fact no long-run slowdown.67
64 M. Boretsky (1971), Concerns About the Present American Position in International Trade, in National Academy of Engineering, Technology and International Trade, Washington; R. K. Kelly (1976), Alternative Measurements of Technology-Intensive Trade, Office of International Economic Research, Department of Commerce; R. K. Kelly (1977), The Impact of Technology Innovation on International Trade Patterns, US Department of Commerce, Washington; US Department of Commerce (1983), An Assessment of US Competitiveness in High Technology Industries, International Trade Administration; L. A. Davis (1982), Technology Intensity of US Output and Trade, US Department of Commerce, International Trade Administration, Washington; V. L. Hatter (1985), US High Technology Trade and Competitiveness, US Department of Commerce, International Trade Administration, Washington; L. A. Davis (1988), Technology Intensity of US, Canadian and Japanese Manufacturers Output and Exports, Office of Trade and Investment Analysis, Department of Commerce. 65 See: Chapter 7. 66 Besides Boretsky, see: H. Brooks (1972), What’s Happening to the US Lead in Technology, Harvard Business Review, May–June, pp. 110–118. For an analysis of the debate, see: R. R. Nelson (1990), US Technological Leadership Where Did It Come From and Where Did It Go, Research Policy, 19, pp. 117–132; R. R. Nelson and D. Wright (1992), The Rise and Fall of American Technological Leadership: The Postwar Era in Historical Perspective, Journal of Economic Literature, 30, pp. 1931–1964. 67 M. Darby (1984), The US Productivity Slowdown: A Case of Statistical Myopia, American Economic Review, 74, pp. 301–322.
There has undoubtedly been a protracted fall off from the early postwar peak and it certainly was pronounced. But it is that peak which looks like the aberration, and the decline from it may well prove to be a return to historical growth rates in labor productivity. (W. J. Baumol (1986), Productivity Growth, Convergence, and Welfare: What the Long-Run Data Show, American Economic Review, 76 (5), p. 1081)
The technology gap issue also had an important impact on the emergence of a European S&T policy.68 Together with the Action Committee for a United States of Europe founded by Jean Monnet, France was an aggressive promoter for a European science policy in the 1960s. What the French had in mind, however, “was not merely cooperation in science and technology but eventually a common policy toward American economic policies and, especially, investments.” In 1965, the French proposed two studies as a first step toward a common policy. The first was to be a comparison of public and private civilian scientific research programs already carried out by members of the EEC (European Economic Community). Presumably, such an inventory would provide the basis for a European division of scientific labor. Secondly, the French proposed that there should be a determination of which industrial sectors of the EEC countries were most vulnerable to foreign competition or takeover, due to the inadequacy of their research effort vis-à-vis that of outside countries, namely the United States. (R. Gilpin (1968), France in the Age of the Scientific Estate, op. cit., pp. 418–419) In October 1967, the European science ministers selected six areas of cooperation in S&T, and agreed that concrete steps be taken to develop a science policy for the EEC. Besides policy, the technological gaps issue also considerably influenced the statistical work of the EEC, to the point that it is the European Commission which most faithfully pursues work on productivity and technological gaps between Europe and the United States today, within its annual Innovation Scoreboard 69 and its annual Competitiveness report,70 among others. According to the Commission, “the average research effort in the Union is only 1.8 per cent of Europe’s GDP, as against 2.8 percent in the United States and 2.9 per cent in Japan. What is more, this gap seems to be on the increase.”71
68 R. Gilpin (1968), France in the Age of the Scientific Estate, op. cit., pp. 415–420; L. Guzzetti (1995), A Brief History of European Union Research Policy, Brussels: European Commission, pp. 35–38. 69 CEC (2000), Innovation in a Knowledge-Driven Economy, COM(2000) 567. See also: CEC (2002), More Research for Europe: Towards 3 per cent of GDP, COM(2002) 499. 70 EC (2001), European Competitiveness Report, Luxembourg. 71 CEC (2000), Towards a European Research Area, COM(2000) 6, January 18, pp. 4–5.
13 Highly qualified personnel
Should we really believe in shortages?
The measurement of research and development (R&D) is composed of two basic sets of data: money spent on R&D, and human resources devoted to R&D. The previous chapters have dealt at length with the former. For several people, however, above all some of the pioneers of S&T statistics (C. Freeman, R. N. Anthony, W. H. Shapley, and C. Falk), human resources are much more appropriate than money as a measure of S&T activities.1 This idea goes back, at least, to the US National Research Council (NRC) surveys on industrial research in the early 1930s. But it also owes its importance to the US President’s Scientific Research Board: “the ceiling on research and development activities is fixed by the availability of trained personnel, rather than by the amounts of money available. The limiting resource at the moment is manpower.”2 Several problems surrounding the measurement of scientific and technical personnel are similar to those encountered in the measurement of R&D expenditures, since both types of statistics share basic categories, and are broken down according to the same institutional classifications. However, two methodological problems are specific to the measurement of human resources devoted to R&D. First, there is the problem of definition: what is a scientist?3 In fact, the response 1 R. N. Anthony (1951), Selected Operating Data for Industrial Research Laboratories, Boston, MA: Harvard Business School, pp. 3–4: “In view of these difficulties [accounting methods and definitions], we decided to collect only a few dollar figures (. . .) and to place most of our emphasis on the number of persons”; W. H. Shapley (1959), in NSF, Methodological Aspects of Statistics on R&D: Costs and Manpower, op. cit., p. 13: “Manpower rather than dollars may be a preferable and more meaningful unit of measurement”; C. Freeman (1962), Research and Development: A Comparison Between British and American Industry, National Institute Economic Review, 20, May, p. 24: “The figures of scientific manpower are probably more reliable than those of expenditures”; C. Falk, and A. Fechter (1981), The Importance of Scientific and Technical Personnel Data and Data Collection Methods Used in the United States, Paper presented for the OECD Workshop on the Measurement of Stocks of Scientific and Technical Personnel, October 12–13, p. 2: “At the current time STP data seem to be the only feasible indicator of overall scientific and technical potential and capability and as such represent a most valuable, if not essential, tool for S&T policy formulation and planning.” 2 President’s Scientific Research Board (1947), Science and Public Policy, New York: Arno Press, 1980, p. 15. 3 NSF (1999), Counting the S&E Workforce: It’s Not That Easy, Issue Brief, NSF 99-344, SRS Division, Washington.
varies according to whether a country measures qualifications or occupations. A scientist is someone who works in research (occupation), but statisticians often satisfied themselves with measuring people who graduated in science, even if they do not work in the field. Second, there is the problem of measurement: should we measure by head-counts or full-time equivalents (FTEs)?4 The answers to these questions were standardized in the OECD Frascati manual, although countries interpret them differently. This chapter is concerned with the origins of statistics on scientists and engineers in OECD countries and their relationship to science policy issues.5 It argues that early debates about human resources in S&T created two fictions: the shortages of scientists and engineers, and the brain drain. It shows how the discourses on personnel shortages and the brain drain owe their existence to statistics, and how statistics—with their deficiencies and methodological difficulties—generated controversies. Incomplete statistics, however, never prevented people from taking firm positions on scientific and technical human resources issues. The first part of this chapter discusses how human resources related to S&T came to be measured as a result of World War II or, as some called it, the “war drain.”6 Throughout the post-war period, the United States was involved in a discourse on the shortages of scientists and engineers while Great Britain was involved in a discourse on the brain drain. This part seeks to understand how and why these discourses developed. The second part extends the argument to (other European countries and) the OECD, showing how its preoccupation with the reconstruction of Europe after the war was the driving force behind its early efforts at measuring S&T. The third part documents the recent shift in the OECD Directorate of Science, Technology, and Industry (DSTI) measurement from statistics about researchers involved in R&D (occupation) to new indicators on the supply and demand of scientists and engineers (qualification).
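The head-count versus full-time-equivalent question raised above can be made concrete with a small sketch. The occupations and time shares below are invented, and the convention of summing R&D time fractions into person-years follows the general spirit of the Frascati manual rather than any particular country's rules.

```python
# Illustrative head-count vs full-time-equivalent (FTE) comparison.
# Each person is recorded with the share of working time devoted to R&D;
# the occupations and shares are hypothetical.

rd_time_share = {
    "university professor": 0.3,   # teaching takes most of the year
    "industrial researcher": 1.0,  # full-time on R&D
    "graduate assistant": 0.5,
    "lab technician": 0.8,
}

headcount = len(rd_time_share)     # everyone doing any R&D counts once
fte = sum(rd_time_share.values())  # fractions of a person-year are summed

print(f"Head count: {headcount} persons")
print(f"FTE:        {fte:.1f} person-years devoted to R&D")
# Head count: 4 persons, FTE: 2.6 person-years. The two measures can diverge
# substantially, which is why international comparisons depend on which
# convention a country applies.
```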
Reminiscences of war
The measurement of scientific and technical personnel in OECD countries was from the start motivated by the consequences of World War II on the number of qualified personnel. On these matters, the first collections and uses of statistics in public debates came mainly from the United States and Great Britain. Other countries—Canada for example—also documented the phenomenon, but nowhere was the impact of these debates more important than in the United States and Great Britain.
Deficits and shortages in the American workforce
World War II had an enormous impact on science in the United States. Not only had the war demonstrated the importance of government support for scientific research—a fact well documented in the literature—but it also, according to some, slowed the production of scientists and engineers in the country. World War II had absorbed nearly all physically-fit American graduate students into the armed forces. In 1945, V. Bush commented: “We have drawn too heavily for nonscientific purposes upon the great natural resource which resides in our young trained scientists and engineers. For the general good of the country too many such men have gone into uniform (. . .). There is thus an accumulating deficit of trained research personnel which will continue for many years.”7 Because it would take at least six years after the war ended before research scientists would begin to emerge from the graduate schools in significant numbers, one of Bush’s committees,8 in collaboration with the American Institute of Physics, predicted (without giving details on the methodology used) a deficit of 150,000 bachelor’s degree holders and 17,000 advanced degree holders for 1955.9 “Neither our allies nor, as far as is known, our enemies have permitted such condition to develop,” stated the committee.10 Two years later, the President’s Scientific Research Board report agreed with this assessment. Based on a large number of different sources,11 the board estimated that the American “manpower pool today is smaller by 90,000 bachelors and 5,000 doctors of science than it would have been had pre-war trends continued.”12 The net loss to the country, however, was estimated to be 40,000 bachelors and 7,600 at the PhD level.13 The board’s numbers were smaller than Bush’s because the former estimated that normally only 90 percent of doctors and a third of bachelors enter careers in research or teaching. For the board, these numbers were estimates of shortages rather than of deficits (which are larger), for they represented “the number of students who probably would have graduated in science and made careers in science if the war had not forced them out of school.”14 Two causes were identified for the shortages. Besides the wartime demands themselves, the board discussed the increase in demand for American R&D that began before the war and which was increased by the destruction and disruption of Europe: “The increase in demand occurred so sharply—expenditures tripled and quadrupled within a few years—that no possible training program could have turned out an adequate supply of
7 V. Bush (1945), Science: The Endless Frontier, op. cit., p. 24. 8 Committee on Discovery and Development of Scientific Talent. 9 V. Bush (1945), Science: The Endless Frontier, op. cit., p. 158. 10 Ibid., p. 159. 11 National Research Council, National Resources Planning Board (National Roster of Scientific and Specialized Personnel), Office of Education. 12 President’s Scientific Research Board (1947), Science and Public Policy, op. cit., p. 16. 13 Ibid., Vol. 4, p. 14. 14 Ibid., p. 3.
scientists. It takes an average of ten years’ training to prepare for independent scientific research.”15 Now that the rhetorical framework had been set, it was not long before other American scientific institutions, among them the National Science Foundation (NSF), began adopting the same discourses.16 Raymond H. Ewell, head of the Program Analysis Office at the NSF in the 1950s, and probably the first individual to perform an economic analysis linking R&D and GNP, launched the NSF discourse in 1955: “Figures indicate a requirement of 75,000 research scientists and engineers from 1954 to 1960, and a requirement for a net increase of 150,000 from 1954 to 1965. There is substantial doubt whether the required numbers of research scientists and engineers will be available.”17 Thereafter, the NSF developed discourses on the imminent shortage of scientists and engineers in the country (and enrolled industrialists in the crusade by surveying their lamentations on the lack of qualified personnel18). We can identify three steps in the construction of these discourses. First, in the late 1950s, the NSF relied on predictions that had been calculated by others. In 1957, for example, it compared the country’s actual needs in S&T resources with what the President’s Scientific Research Board, ten years previously, had predicted the country would need in 1957.19 The NSF observed that the country was far short of the goals that had been envisioned ten years earlier, at least with respect to basic research. The lesson was clear: the federal government must increase its support to basic research. Second, the NSF began developing its own predictions, by projecting past trends into the future. To facilitate this task, it launched an information program on the supply and demand of scientific and technical personnel, for which the Bureau of Labor Statistics conducted forecasting studies.20 It projected a doubling of science and engineering doctorates by 1970.21 15 Ibid. 16 The 1950s literature on the topic is voluminous. The following list contains only a few important texts. For the United States, see: National Manpower Council (1953), A Policy for Scientific and Professional Manpower, New York: Columbia University Press; D. Wolfe (1954), America’s Resources of Specialized Talent: A Current Appraisal and Look Ahead, Report of the Commission on Human Resources and Advanced Training, New York: Harper and Row; D. M. Blank and G. J. Stigler (1957), The Demand and Supply of Scientific Personnel, National Bureau of Economic Research, New York; A. A. Alchian, K. J. Arrow, and W. M. Capron (1958), An Economic Analysis of the Market for Scientists and Engineers, RAND Corporation, RM-2190-RC. For Great Britain, see: J. Alexander (1959), Scientific Manpower, London: Hilger and Watts; G. L. Payne (1960), Britain’s Scientific and Technological Manpower, London: Oxford University Press. 17 R. H. Ewell (1955), Role of Research in Economic Growth, Chemical and Engineering News, 33 (29), p. 2982. 18 See for example: NSF (1956), Science and Engineering in American Industry, 1953–54 Survey, NSF 5616, Washington, pp. 53–54; NSF (1955), Shortages of Scientists and Engineers in Industrial Research, Scientific Manpower Bulletin, No. 6, August. 19 National Science Foundation (1957), Basic Research: A National Resource, Washington, pp. 46–47.
20 NSF (1961), The Long-Range Demand for Scientific and Technical Personnel: A Methodological Study, NSF 61–65, Washington; NSF (1963), Scientists, Engineers, and Technicians in the 1960s: Requirements and Supply, NSF 63-34, Washington. 21 NSF (1961), Investing in Scientific Progress, NSF 61-27,Washington, p. 15; NSF (1967), The Prospective Manpower Situation for Science and Engineering Staff in Universities and Colleges: 1969–1975, Washington.
Third, the NSF developed its own tools—surveys and databases on science and engineering doctorates—with which it produced a regular series of statistics. The first tool it used was a roster on scientific and specialized personnel created at the suggestion of the NRC during World War II. The American roster was intended to facilitate the recruitment of specialists for war research.22 In 1940, the National Resources Planning Board (NRPB), following the recommendation of its Committee on Wartime Requirements for Specialized Personnel, established the plan for the national roster, and operated the latter jointly with the Civil Service Commission until it was transferred to the War Manpower Commission in 1942.23 Several organizations collaborated in the effort, among them the NRC, which set up, on Bush’s request, an Office of Scientific Personnel. The NRC had in fact already begun compiling directories on specialized personnel several years before, as discussed previously. The roster was intended to “make proper contact between the right man and the right job (. . .) where acute manpower shortages have been found to exist.”24 “The task was an enormous one—to compile a list of all Americans with special technical competence, to record what those qualifications were, and to keep a current address and occupation for each person. (. . .) Questionnaires were sent out, using the memberships lists of professional societies and subscription lists of technical journals, and the data were coded and placed on punched cards for quick reference.”25 By 1944, the roster had detailed punch-card data on 690,000 individuals.26 Considered of little practical use by many, and inoperative since 1947, the roster, along with a national scientific register that had been operated by the Office of Education since 1950, was transferred to the NSF in 1952.27 The agency developed the roster further28 as required by the law setting up the NSF, and used it to produce statistical analyses over the course of a decade.29 The NSF finally abandoned the roster in 1971,30 by which time surveys had begun to systematically replace directories.
22 NRPB (1942), National Roster of Scientific and Specialized Personnel, Washington. 23 The roster was again transferred to the Department of Labor in 1945. Other rosters were also established in the Office of Naval Research (ONR): one on top scientific personnel (1948) and another on engineering personnel (1949). 24 NRPB (1942), National Roster of Scientific and Specialized Personnel, op. cit., p. 1. 25 C. Pursell (1979), Science Agencies in World War II: The OSRD and its Challenges, in N. Reingold, The Sciences in the American Context, Washington: Smithsonian, p. 367–368. 26 R. C. Cochrane (1978), The National Academy of Sciences: The First Hundred Years 1863–1963, Washington: National Academy of Sciences, p. 406; For statistical studies based on the roster, see: NRPB (1941), Statistical Survey of the Learned World, Washington; L. Carmichael (1943), The Number of Scientific Men Engaged in War Work, Science, 98 (2537), pp. 144–145; Department of Labor (1946), Directory of Colleges and Universities Offering Graduate Degrees and some Form of Graduate Aid, Washington. 27 National Science Board (1951), Minutes of the 5th meeting, April 6; National Science Board (1951), Minutes of the 6th meeting, May 11. 28 NSF (1961), The National Register of Scientific and Technical Personnel, NSF 61-46, Washington; NSF (1964), National Register of S&T Personnel, NSF 64-16, Washington. 29 A publication entitled American Science Manpower was published periodically from the mid-1950s to 1968. 30 National Science Board (1971), Minutes of the 142nd meeting, November 14–15.
In 1958, following a request by the Bureau of the Budget, the NSF, together with the President’s Committee on Scientists and Engineers, which found itself handicapped by a lack of data, recommended a program for national information on scientific and technical personnel.31 This would lead to the second tool developed by the NSF. The organization gradually developed a whole system of surveys—the Scientific and Technical Personnel Data System (STPDS)—for tracking the supply of graduates, their occupations and their geographical mobility. The system was revised in the early 1990s to better measure occupations, among other things.32 Before then, taxicab drivers with advanced degrees in physics (qualification) were officially classified as physicists (occupation), producing enormous counting differences with other agencies like the Bureau of Labor Statistics or the Census Bureau. According to many, the NSF’s data on scientists and engineers were and remain unique among OECD countries: The number and distribution of scientists and engineers were recognized to be important indicators of a nation’s S&T potential when the first S&T statistics were being designed in the 1960s. However, only the United States set up and has systematically maintained a coherent system for monitoring stocks and flows of scientists and engineers. Other countries have generally expressed a need for international data only in the context of short-term policy issues such as the brain drain or ageing. (OECD (1994), Statistics and Indicators for Innovation and Technology, DSTI/STP/TIP (94)2/ANN1, p. 14) It was using this type of statistical data that the NSF lobbied for more resources year after year. H. A. Averch has documented the argumentative strategy as follows:33
1 Since the potential of scientific discoveries is unlimited, there should be a continually increasing flow of manpower for research.
2 Market forces do not deliver researchers in sufficient quantity or quality to meet national needs.
3 Therefore, the government should secure enough money to supply the right number of scientists and engineers.
Enter statistics: the “right number” of scientists and engineers was to be determined by the statisticians and their users—the President’s Science Advisory 31 NSF (1958), A Program for National Information on Scientific and Technical Personnel, NSF 58-28, NSF and President’s Committee on Scientists and Engineers, Washington. 32 NRC (1989), Surveying the Nation’s Scientists and Engineers: A Data System for the 1990’s, Washington; NSF (1989), The Scientific and Technical Personnel Data System: The Plan for the Nineties, SRS Division, NSF, Washington. 33 H. A. Averch (1985), A Strategic Analysis of Science and Technology Policy, Baltimore: Johns Hopkins University Press, Chapter 4.
Committee,34 the National Academy of Sciences,35 the Association of American Universities,36 the Bureau of Labor Statistics,37 . . ., and the NSF.38 These arguments worked for a while. Then, in 1989, the NSF published a highly controversial study forecasting a shortage of 675,000 scientists and engineers in the next two decades.39 The study was swiftly and widely criticized for extrapolating from simplistic demographic trends,40 to the point that the NRC recently recommended that: “The NSF should not produce or sponsor official forecasts of supply and demand of scientists and engineers (. . .). The NSF should limit itself to data collection and dissemination.”41 Despite the warning, Congress again asked the organization to predict how many high-tech workers the United States will need over the next decade.42 All in all, the success of American predictions on the supply and demand of scientists and engineers was about zero:43 by 1968, all predictions of shortages had proved incorrect. But this did not deter the NSF nor any other organizations from pursuing and refining the same general discourses. When arguments based
34 President’s Science Advisory Committee (1962), Meeting Manpower Needs in Science and Technology, Washington. 35 NRC (1979), Research Excellence Through the Year 2000, Washington, Study conducted at the request of the NSF; NRC (1985), Engineering Education and Practice in the United States, Study conducted at the request of the NSF, Washington. 36 J. C. Vaughn and R. M. Rosenzweig (1990), Heading Off a PhD Shortage, Issues in Science and Technology, Winter, pp. 66–73. 37 In the early sixties, the Bureau conducted studies for the NSF (see: Chapter 13, footnote 20). For a critical assessment, see: L. Hansen (1984), Labour Market Conditions for Engineers: Is There a Shortage?, Washington: National Research Council, pp. 75–98; W. L. Hansen (1965), Labour Force and Occupational Projections, Proceedings of the 18th Annual Winter Meeting, December 28–29, Industrial Relations Research Association, Madison, Wisconsin, pp. 10–30. 38 NSF (1969), Science and Engineering Doctorate Supply and Utilization, 1968–1980, NSF (69) 37; NSF (1971), 1969 and 1980 Science and Engineering Doctorate Supply and Utilization, NSF (71) 20; NSF (1975), Projections of Science and Engineering Doctorate Supply and Utilization, 1980 and 1985, NSF (75) 301; NSF (1979), Projections of Science and Engineering Doctorate Supply and Utilization, 1982 and 1987, NSF (79) 303; NSF (1984), Projected Responses of the Science, Engineering and Technicians Labour Market to Defense and Non-defense Needs: 1982–87, NSF (84) 304. 39 NSF (1989), Future Scarcity of Scientists and Engineers: Problems and Solutions, Washington; NSF (1990), The State of Academic Science and Engineering, Washington, pp. 189–232. 40 US House Subcommittee on Investigations and Oversight of the Committee on Science, Space and Technology (1992), Projecting Scientific and Engineering Personnel Requirements for the 1990s, 102nd Congress, 2nd session, April 8; National Research Council (2000), Forecasting Demand and Supply of Doctoral Scientists and Engineers: Report of a Workshop on Methodology, Washington. For a synthesis of the debate see: D. S. Greenberg (2001), Science, Money, and Politics: Political Triumph and Ethical Erosion, Chicago: University of Chicago Press, Chapters 7–9. 41 National Research Council (2000), Forecasting Demand and Supply of Doctoral Scientists and Engineers: Report of a Workshop on Methodology, op. cit., pp. 55–56. 42 Science (1998), Forecast: Fog Ahead on Job Front, 282, December 4, p. 1795. 43 Office of Technology Assessment (1985), Demographic Trends and the Scientific and Engineering Work Force: A Technical Memorandum, Washington; NRC (1984), Labour-Market Conditions for Engineers: Is There a Shortage?, Office of Scientific and Engineering Personnel, Washington; W. L. Hansen (1967), The Economics of Scientific and Engineering Manpower, Journal of Human Resources, 2 (2), pp. 191–220.
on quantity lost their persuasive appeal, the discourse turned to alarm about the quality of researchers: there is always a shortage of exceptionally able scientists.44 The rhetorical resources of scientists and their representatives were in fact infinite, especially when people were driven by political goals. As D. S. Greenberg recently commented: “Lacking any real political power (. . .) science employed desperate appeals in which precision took second place to propaganda.”45
The British brain drain
The American debate on scientists and engineers centered on shortages. In fact, the brain drain issue was nonexistent in the country, because the United States was a net importer of scientists and engineers.46 Americans, for instance, spoke of brain circulation instead of brain drain.47 The situation was different in Europe, however. After the United States, concern over personnel shortages was greatest in Great Britain (followed by Canada48).49 According to H. G. Johnson, the term brain drain originated in the United Kingdom because of government policies that kept salaries from rising too rapidly, resulting in the emigration of scientists to North America.50 The British government even considered banning foreign recruitment advertising in the late 1960s.51 In the 1950s, the British Advisory Council on Science Policy (ACSP), through its committee on scientific manpower, had pioneered the collection of statistics on the supply of scientists and engineers in Great Britain. Its work involved not only assessing the then-current supply of scientists and engineers, but also forecasting the demand for them. The numbers produced were published regularly until 1963–1964, and were followed by reports in 1966 and 1968 by the Committee on Manpower Resources for Science and Technology.
44 H. A. Averch, A Strategic Analysis of Science and Technology Policy, op. cit., p. 76. 45 D. S. Greenberg (2001), Science, Money, and Politics: Political Triumph and Ethical Erosion, Chicago: University of Chicago Press, p. 77. 46 The Brain Drain into the United States of Scientists, Engineers, and Physicians, House of Representatives, Committee on Government Operations, 90th Congress, 1st Session, Washington: USGPO, 1967. 47 NSF (1998), International Mobility of Scientists and Engineers to the United States: Brain Drain or Brain Circulation, Issue Brief, NSF (98) 316, June 22. 48 L. Parai (1965), Immigration and Emigration of Professional and Skilled Manpower During the Post-War Period, Special study No. 1, Economic Council of Canada: Ottawa. 49 For developing countries, see for example: S. Dedijer (1961), Why Did Daedalus Leave?, Science, 133, June 30, pp. 2047–2052; United Nations (1968), Outflow of Trained Personnel from Developing Countries, New York, 68-24459; UNESCO (1968), The Problem of Emigration of Scientists and Technologists, SC/WS/57, Paris; UNESCO (1971), Scientists Abroad: A Study of the International Movement of Persons in Science and Technology, COM.70/D.60/A, Paris; S. Watanabe (1969), The Brain Drain from Developing to Developed Countries, International Labour Review, pp. 401–433; Education and World Affairs (1970), The International Migration of High-Level Manpower, Committee on the International Migration of Talent, New York: Praeger. 50 H. G. Johnson (1965), The Economics of the Brain Drain: The Canadian Case, Minerva, 3 (3), p. 299. 51 Nature (2000), UK Discussed Ban on Foreign Job Ads in 1960s, 403, January 13, p. 121.
Early British official statistical studies on S&T personnel
Scientific Man-Power, Report of a Committee Appointed by the Lord President of the Council (Barlow Committee), Cmd. 6824, 1946.
Report on the Recruitment of Scientists and Engineers by the Engineering Industry, ACSP, Committee on Scientific Manpower, 1955.
Scientific and Engineering Manpower in Great Britain, ACSP and Ministry of Labour & National Service, 1956.
Scientific and Engineering Manpower in Great Britain: 1959, ACSP, Committee on Scientific Manpower, Cmnd. 902, 1959.
The Long-Term Demand for Scientific Manpower, ACSP, Committee on Scientific Manpower, Cmnd. 1490, 1961.
Scientific and Technical Manpower in Great Britain: 1962, ACSP, Committee on Scientific Manpower, Cmnd. 2146, 1963.
Report of the 1965 Triennial Manpower Survey of Engineers, Technologists, Scientists and Technical Supporting Staff, Committee on Manpower Resources for Science and Technology, Secretary of State for Education and Science and Ministry of Technology, Cmnd. 3103, 1966.
The Brain Drain, Report of the Working Group on Migration, Committee on Manpower Resources for Science and Technology, Secretary of State for Education and Science and Ministry of Technology, Cmnd. 3417, 1968.
Enquiry into the Flow of Candidates in Science and Technology into Higher Education, Committee on Manpower Resources for Science and Technology, Secretary of State for Education and Science and Ministry of Technology, Cmnd. 3541, 1968.
The Flow into Employment of Scientists, Engineers and Technologists, Report of the Working Group on Manpower for Scientific Growth, Committee on Manpower Resources for Science and Technology, Secretary of State for Education and Science and Ministry of Technology, Cmnd. 3760, 1968.
The works of the ACSP, as well as a study by the Royal Society,52 soon came under vehement criticism once researchers began looking critically at them: “Its influence on policy was out of all proportion to the quality of its forecasts,” commented K. G. Gannicott and M. Blaug.53 “Instead of making out a convincing case for a shortage of scientists and technologists, with due attention to the swing from science and the brain drain which may have intensified it, the (. . .) Committee’s efforts to develop an integrated picture of scientifically-qualified manpower is simply a mass of contradictions.”54 The main methodological
52 Royal Society (1963), Emigration of Scientists from the United Kingdom, Report of a Committee Appointed by the Council of the Royal Society, London: Royal Society. 53 K. G. Gannicott and M. Blaug (1969), Manpower Forecasting since Robbins: A Science Lobby in Action, Higher Education Review, 2 (1), p. 56. 54 Ibid., p. 57.
limitations were as follows:55
Supply and demand
● Uncritically accepting employers' estimates;56
● Projecting past trends into the future;
● Inadequately defining occupations (as an indicator of demand) and qualifications (as an indicator of supply), as well as the relationship between them;
● Confusing needs (or what ought to happen: the numbers required for the attainment of some economic targets) and demand (what actually occurs: the number who are offered employment);
● Ignoring the operations of the labor market;
● Using different methods in each survey;
● Providing insufficient details on methodology.
Brain drain
● Using only American immigration data;
● Not distinguishing between permanent and temporary employment abroad;
● Neglecting inflows.
These limitations were not specific to Great Britain. They were also documented for almost every national study on the brain drain.57 In fact, there were few statistics available to correctly document the phenomenon. In general, national statistical studies on migration relied essentially on American data, namely data on emigration to the United States.58 These were the only data available to measure the phenomenon. In using these data, however, countries like Great Britain were neglecting inflows into their own territory, thus inventing a phenomenon that did not really exist, or over-dramatizing a situation that was far from catastrophic. Be that as it may, the British surveys had a considerable influence on work by the OECD. Alexander King, the first secretary of the ACSP committee on scientific manpower, soon became the director of the OEEC Office of Scientific 55 Besides Gannicott and Blaug (1969), Manpower Forecasting since Robbins: A Science Lobby in Action, op. cit., see: B. Thomas (1966), The International Circulation of Human Capital, Minerva, 5 (1), pp. 479–506; H. G. Grubel and A. D. Scott (1966), The Immigration of Scientists and Engineers to the United States, 1949–1961, Journal of Political Economy, 74 (4), pp. 368–378; C. A. Moser and P. R. G. Layard (1968), Estimating the Need for Qualified Manpower in Britain, in B. J. McCormick (ed.), Economics of Education, Middlesex: Harmondsworth, V. A. Richardson (1969), A Measurement of Demand for Professional Engineers, British Journal of Industrial Relations, 7 (1), pp. 52–70. 56 S. Zuckerman once admitted: “One of the least reliable ways of finding out what industry wants is to go and ask industry,” cited by Gannicott and Blaug (1969), Manpower Forecasting since Robbins: A Science Lobby in Action, op. cit., p. 59. 57 For similar uses of US data in France, see: L’Émigration des scientifiques et des ingénieurs vers les États-Unis (1966), Le Progrès Scientifique, 93, pp. 38–53. 58 Some influential NSF studies were: NSF (1958), Immigration of Professional Workers to the United States, 1953–56, Scientific Manpower Bulletin, NSF (58) 4, Washington; NSF (1962), Scientific Manpower From Abroad: United States Scientists and Engineers of Foreign Birth and Training, NSF (62) 24, Washington; NSF (1965), Scientists and Engineers From Abroad, NSF, Washington; NSF (1967), Scientists and Engineers From Abroad, 1962–64, NSF (67) 3, Washington.
and Technical Personnel (OSTP) and, later, the first director of the OECD Directorate of Scientific Affairs (DSA).
Internationalizing the discourses
Two issues regarding the economic well-being and reconstruction of Europe challenged European bureaucrats after World War II: productivity and the supply of qualified human resources. These were the two domains in which the OEEC invested most in terms of S&T measurement. The organization, however, got involved in the measurement of personnel before it began measuring monetary investments in S&T.
Documenting gaps between Europe, America, and the USSR
The OEEC's early discussions on S&T were conducted in several committees: Manpower (1948), Scientific and Technical Information (WP3) (1949), Scientific and Technical Matters (1951), Productivity and Applied Research (1952), Applied Research (1954)—as well as the European Productivity Agency (EPA) (1953). It is to the Manpower Committee in particular that we owe the first systematic international measurements of S&T. As early as 1951, it recommended to the Council that the comparability of manpower statistics in general be improved,59 conducted the first international survey on scientific and technical personnel in 1954,60 and published its results in 1955. The committee concluded that "on the whole, shortages do not at present seriously interfere with research or production."61 But, in line with Averch's analysis of the rhetoric, the report further specified that:
Quantity is not the only factor in assessing requirements. Quality is equally important in this field. A merely numerical calculation of shortages could in fact lead to misunderstanding seeing that the shortage of a very small number of highly-qualified specialists may have very important effects on the launching of projects.
(OECD (1955), Shortages and Surpluses of Highly Qualified Scientists and Engineers in Western Europe, Paris, p. 21)
OEEC/OECD structures concerned with S&T personnel
OEEC
Manpower Committee (1948)
WP25 (on shortages of highly qualified and technical manpower) (1957)
Office of Scientific and Technical Personnel (OSTP) (1958)
59 OEEC (1956), Improvement of the Comparability of Manpower Statistics, C (56) 59. 60 Seventeen countries participated. 61 OECD (1955), Shortages and Surpluses of Highly Qualified Scientists and Engineers in Western Europe, Paris, p. 21.
OECD
Directorate of Scientific Affairs
Committee on Scientific and Technical Personnel (CSTP) (1961)
Education Committee (1970)
Directorate of Social Affairs, Manpower, and Education (1974)
The report recommended that countries improve their methods of measurement, and supply data every two years (p. 22). Soon after, the OEEC also tried to persuade member countries to add questions to their censuses: "The forthcoming general census which take place in many countries in 1960 offers an admirable opportunity to obtain information with regard to the qualification and employment of scientifically and technically trained persons."62 In 1966, the OECD published its first analysis based on such data.63 An international conference on the organization and administration of applied research, organized by the EPA in 1956 and devoted to the shortage of research workers, made recommendations that were similar to those made by the Manpower Committee. Regarding the 1954 survey, it reported:
It was difficult to assess shortages at present because some countries had not included questions of an appropriate nature in their census reports until about 1950, and so there was a limited basis for comparisons. National census figures also did not afford a means of determining the quality of the personnel in question.
(European Productivity Agency (1957), Scientific Manpower for Applied Research: Shortage of Research Workers—How to Train and Use Them?, OEEC: Paris, p. 8)
The conference report described the available statistics as "sketchy" and the discussions on the topic as bedeviled by the "different interpretations given to educational qualifications and occupational terminology" (p. 11). It recommended that the OEEC set up a dictionary of terms and encouraged detailed comparative surveys (pp. 11–12). The report nevertheless concluded, in contradiction to the conclusions of the 1954 survey: "It is certain that there is a general lack of qualified personnel" (p. 9). Two years later, the second OEEC survey came to the same conclusion as the EPA conference, but to a different one from that of the first survey:64 "universal shortage is striking" (p. 5), said the report, using data that were hardly comparable between countries. "Shortages exist in virtually all countries and in all branches of science"
62 OEEC (1958), Use of General Census to Gather Information on Scientific and Technical Personnel, C (58) 52, p. 22. 63 OECD (1966), The Education and Utilization of Highly Qualified Personnel: An Analysis of Census Data, Inter-Governmental Conference on the Education and Utilization of Highly Qualified Personnel, DAS/EID/66.53. 64 OEEC (1957), The Problem of Scientific and Technical Manpower in Western Europe, Canada and the United States, Paris.
(p. 22) and "is tending to impede the expansion of production" (p. 6). For the OEEC, the results of the survey were:
a warning that countries of Western Europe, Canada and the United States are behind in their drive to produce scientists and engineers. The danger that this involves must not be underestimated. Technical progress, which is an essential factor in the improvement of living standards and security, depends upon the adequate supply of adequate personnel (. . .). The problem for many years to come will be to train enough qualified scientists and engineers. There will be no danger of training too many.
(OEEC (1957), The Problem of Scientific and Technical Manpower in Western Europe, Canada and the United States, Paris, p. 5)
As a consequence, a working party on scientific and highly-qualified manpower (WP25) was set up and a "vigorous program of action" developed.65 This was motivated by the recognition of a "fundamental change in the pattern of industry [due to] the development of new industries based on recent scientific discovery (. . .) with rich resources in the skill and ability of its people" (p. 4). The program of action was therefore "concerned with scientific manpower questions in general, basic education, university and technical high school training and industrial needs" (p. 3). The Secretariat also suggested, for the first time, a study on the "use of research funds in member countries and the means to improve the allocation of these funds, with the goal of rationalizing government research programs and, thereby, the management of human resources."66 One year later (1958), the OEEC created the OSTP as part of the EPA. The office, pursuing the work of its predecessor—Working Party No. 25—conducted a third survey on scientific and technical personnel in member countries.67 The report admitted that "the problem of the definition of various types of scientific and technical personnel was of major concern to the experts" (p. 19). In the foreword to the publication, A. King nonetheless concluded, using the argument from minimizing limitations, that "although there are large gaps in the data presented in this report, the information assembled certainly represents one of the richest sources now available on the accumulation of stocks of qualified scientific and technical manpower in the OECD area (. . .)" (p. 5). The survey found a growing difference between North America and Europe, and projected larger discrepancies for 1970: "Figures clearly reveal a growing difference between the two parts of the OECD area. While in 1952, the United States and Canada had 215,000 more first degrees in higher education than the European member countries, this difference will most probably be 500,000 in 1970" (p. 28). A similar gap between Europe and the USSR had been documented
65 OEEC (1957), Creation of a Working Party on Scientific and Highly Qualified Manpower, C (57) 137. 66 OEEC (1957), Note du Secrétaire général sur le document C (57) 54, C (57) 66, p. 4. 67 OECD (1963), Resources of Scientific and Technical Personnel in the OECD Area, Paris, p. 28.
by the OEEC, a few years before.68 The study concluded that between 1954 and 1958, “the European countries have achieved less progress in the output of scientists and technologists from universities and equivalent institutions than the United States or Canada (. . .) [and] the Soviet Union has gained a clear lead (. . .). The relative positions will not, if present trends continue, be greatly changed by 1965” (p. 2). The rhetoric of this study (itself influenced by the NSF)69 was carried over into the third survey. The OSTP was abolished in 1961, and CSTP continued its work in the 1960s. The committee would measure, for the first time in history, the migration of scientists and engineers between member countries, the United States, and Canada. The brain drain was a highly popular topic, as we have seen.70 The OECD had previously documented some facets of this supposed brain drain with numbers produced for the Policy Conference on Economic Growth and Investment in Education held in 1961,71 and in the study on R&D produced by C. Freeman and A. Young.72 But the CSTP was now embarking on a huge project. In 1964, it appointed an ad hoc group to determine the feasibility of conducting a comprehensive study of the international movement of scientific and technical manpower. The group agreed that the “common impediment to taking adequate account of migration in policy decisions is the general lack of reliable information. Statistics (. . .) either do not exist at all or are seriously deficient in completeness, accuracy, and detail. The ad hoc studies that have been undertaken in a few countries are (. . .) too narrow in scope and too tentative in their conclusions to provide a sound basis for definitive policy discussions.”73 The committee concluded: “migration is necessarily an international activity that can best be studied on an international basis (. . .); the OECD thus provides a suitable forum for a migration study.”74 A small steering group was subsequently appointed to 68 OEEC (1960), Producing Scientists and Engineers: A Report on the Number of Graduate Scientists and Engineers produced in the OEEC member countries, Canada, the United States and the Soviet Union, Paris, OSTP/60/414. 69 N. De Witt (1955), Soviet Professional Manpower: Its Education, Training, and Supply, Washington: NSF; N. De Witt (1961), Education and Professional Employment in the USSR, NSF (61) 40, Washington: NSF; L. A. Orleans (1961), Professional Education in Communist China, NSF (61) 3, Washington: NSF. 70 By 1967, S. Dedijer estimated that there were 3,000 titles in print: S. Dedijer (1967), Brain Drain or Brain Gain: A Bibliography on Migration of Scientists, Engineers, Doctors and Students, Lund. The following references are to several international conferences that were organized at the time: US Advisory Commission on International Education Affairs/European Research Center (Lausanne, 1967): W. Adams (1968), The Brain Drain, New York: Macmillan, Committee on Research Economics (Stockholm, 1973): The Brain Drain Statistics: Empirical Evidence and Guidelines, Gotab (Stockholm): NFR Editorial Services; EEC/ESF (Strasbourg, 1980): Employment Prospects and Mobility of Scientists in Europe; NATO, NSF and US NRC (Lisbon, 1981): The International Mobility of Scientists and Engineers, NATO. 71 OECD (1962), International Flows of Students, Policy Conference on Economic Growth and Investment in Education, Vol. 5, Paris. 72 C. Freeman and A. 
Young (1965), The Research and Development Effort in Western Europe, North America and the Soviet Union: An Experimental International Comparison of Research Expenditures and Manpower in 1962, op. cit., pp. 57–59. 73 OECD (1964), The International Movement of Scientific and Technical Manpower, Paris, STP (64) 25, p. 2. 74 Ibid., pp. 3–4.
establish guidelines and definitions, and to recommend methodology and sources of data.75 It took five years before the idea of the survey, first suggested in 1964, became reality. The report, two volumes and hundreds of pages long, was never published.76 In general, data were partial in coverage (six countries) and difficult to process. Nevertheless, the OECD estimated that migration had been overestimated, and that it affected only a small part of the total national stock of the scientific and technical workforce: it is the elite who migrate.77 This result was completely at odds with the discourses of member countries. The same message was conveyed in the Technological Gaps study, published in 1968.78 The study documented, among other things, gaps between Europe and North America in the production of graduates,79 but it also included a chapter on the brain drain.80 This chapter brought together readily available data (mainly on immigration to the United States) and showed that only a relatively small proportion of European scientists and engineers migrated to the United States. The proportion of loss to the United States was increasing, but when inflows were taken into account, the net balance was about a quarter of the (outflow) numbers that usually appeared in British studies, for example. The OECD never pursued the work on the brain drain. In fact, in the 1970s the brain drain was no longer a central political issue. However, following an international meeting on brain drain statistics arranged in 1973 by the Committee on Research Economics (Stockholm), Alison Young of the OECD acted as a consultant and drafted a proposal containing guidelines for surveying the international migration of highly-qualified manpower.81 Unfortunately, I have found no evidence of any use of these guidelines in the ensuing years. The OECD did become considerably involved, however, in forecasting human resources in S&T. These efforts were motivated by the realization that "policy makers would be better guided by a more comprehensive and strategic approach than by mere numbers on the shortages of scientists and engineers." The OECD consequently organized symposia on methods of forecasting personnel needs.82
75 OEEC (1965), The International Movement of Scientific and Technical Manpower, Paris, STP (65) 1. 76 OECD (1969), The International Movement of Scientists and Engineers, Paris, STP (69) 3. The document was written by Y. Fabian, G. Muzart, and A. Young. 77 OECD (1970), International Movement of Scientists and Engineers, Paris, STP (70) 20, p. 4. 78 OECD (1968), Gaps in Technology: General Report, Paris. 79 The study found that the overall educational effort of the United States was greater than that of Europe with respect to the production of (pure) scientists. With respect to engineers and “technologists,” however, Europe surpassed the United States in both absolute and relative terms. 80 OECD (1967), A Note on the Brain Drain, DAS/SPR/67.98. 81 A. Young (1974), Guidelines for Surveying International Migration of Highly Qualified Manpower, in Committee on Research Economics, The Brain Drain Statistics: Empirical Evidence and Guidelines, op. cit., The specifications defined for the OECD survey served as a basis for the proposal. See: OECD (1965), International Movement of Scientific and Technical Personnel, STP (65) 19. 82 OEEC (1960), Forecasting Manpower Needs for the Age of Science, op. cit.; OECD (1962), Employment Forecasting: International Seminar on Employment Forecasting Techniques, Paris.
These gave rise to important works on education planning in the 1960s and 1970s, which mobilized the resources of the newly-created Committee on Education.83 These functions, however, were soon (1974) transferred to another directorate (Social Affairs, Manpower, and Education),84 and the DSTI almost completely stopped dealing with human resources—except R&D human resources—until the 1990s.
From UNESCO to Canberra
The early OECD measurements of the scientific and technical workforce were not based on any international standards. The organization collected data from national governments that had their own definitions and collection methods. National data were therefore poorly comparable between countries. Some governments based their estimates on censuses, others on labor force surveys, still others on available and ready-made statistics (administrative records). In all these estimates, data were based on the educational qualifications of the population, as is still done in most surveys, and rarely on the occupations held by scientists and engineers. What surveys measured were graduates, not jobs; supply, not demand. All these problems were identified and discussed systematically for the first time in 1981 at an OECD workshop (which some say was "pushed and pulled by American initiatives"85) on the measurement of stocks of scientists and technical personnel.86 Following the discussions, experimental questionnaires requesting data on total stocks of scientific and technical personnel were addressed to member countries.87 The results were disappointing: "Only half a dozen countries responded and the quality and sparse data received did not allow any serious international comparisons to be made."88 We had to wait until the early 1990s for the OECD member countries to define international standards, as they had done thirty years before with the Frascati manual.89
83 OECD (1980), Education Planning: An Historical Overview of OECD Work, Paris; G. S. Papadopoulos (1994), Education 1960–1990: The OECD Perspective, Paris. 84 In the 1980s, the Social Affairs, Manpower, and Education Directorate (renamed the Directorate for Education, Employment, Labour, and Social Affairs (DEELSA) in the early 1990s) constructed its own indicators and started a publication based thereon: Education at a Glance. First edition: 1992. 85 G. Westholm (1993), Recent Developments in International Science and Technology Personnel Data Collection, Draft Paper presented for the International Conference on Trends in Science and Technology Careers, Brussels, March, 28–31, p. 4. 86 OECD (1981), Summary Record of the OECD Workshop on the Measurement of Stocks of Scientists and Technical Personnel, DSTI/SPR/81.45. 87 OECD (1982), International Survey of the Resources Devoted to R&D in 1979 by OECD member countries: Questionnaire, DSTI/SPR/80.42. 88 OECD (1992), Workshop on the Measurement of Human S&T Resources: Conference Announcement, DSTI/STII (2) 2, p. 1. 89 In the 1960s, following gaps identified in data on scientific and technical personnel, the OECD Secretariat envisaged preparing a manual, but never did. See: OECD (1964), Committee for Scientific Research: Programme of Work for 1966, SR (65) 42, p. 23.
It was really UNESCO that reawakened the interest in data on scientific and technical personnel at the international level. Very early on, in the 1960s, UNESCO began defining new categories of interest and collecting education data. The statistics were used, for example, by the OEEC in its study on gaps with the USSR. But the statistical series issued in the UNESCO Statistical Yearbook had always been too aggregated for detailed analytical purposes. Then, in 1978, UNESCO adopted its recommendation that defined scientific and technological activities (STA) in terms of three broad classes of activities: R&D, scientific and technological services (STS), and scientific and technical training and education (STET). In line with the recommendation, methodological guidelines on STET were published in 1982,90 and guidelines on lifelong training discussed in 1989.91 The guidelines on STET included persons directly engaged in S&T activities (occupation), and excluded those who were not, regardless of whether they had the qualifications for such work. However, the guidelines were never implemented through any substantial data collection: "due to the drastic reduction of personnel in the Division of Statistics, priorities had to be established and unfortunately, this area was not considered a high priority."92 UNESCO's Division of Statistics continued to center on R&D personnel, rather than on the broader measurement of scientists and engineers. As for the OECD: until the 1993 edition, the Frascati manual limited the measurement of personnel to R&D—and to full-time equivalents (FTE) rather than headcounts.93 The former, while a true measure of the volume of R&D, did not allow analysts to count the stocks and flows of physical persons (headcounts), or to compare S&T statistics with population, education, and labor statistics, among others. The interest in the broader concept of S&T personnel came from the Technology-Economy Program (TEP) in the early 1990s, which highlighted, backed with scattered statistics, the key role of "human capital" in the innovation process.94 Firmly deploring the lack of accurate statistical tools, TEP invited the DSTI statistical division to prepare a methodological manual. The Division, in collaboration with Eurostat, therefore undertook a survey of existing data and national practices in the measurement of scientific and technical personnel.95
90 UNESCO (1982), Proposals for a Methodology of Data Collection on Scientific and Technological Education and Training at the Third Level, CSR-S-15. 91 UNESCO (1989), Secretariat Background Paper to the Meeting of Experts on the Methodology of Data Collection on Lifelong Training of Scientists, Engineers and Technicians, ST.89/CONF.602/3. 92 UNESCO (1994), General Background to the Meeting and Points for Discussion, ST.94/CONF.603/5, p. 3. Guidelines were also envisaged on training statistics: UNESCO (1989), Secretariat Background Paper to the Meeting of Experts on the Methodology of Data Collection on Lifelong Training of Scientists, Engineers and Technicians, ST.89/CONF.602/3. The OECD produced such a manual: OECD (1997), Manual for Better Training Statistics: Conceptual, Measurement and Survey Issues, Paris. 93 OECD (1991), Initial United States Proposals for Changes in the R&D Sections of the Frascati Manual to Facilitate the Measurement of Total S&T Personnel, DSTI/STII (91) 26.
94 OECD (1992), Technology and the Economy: The Key Relationships, Paris, Chapter 7. 95 OECD (1993), Results of the OECD/Eurostat Inventory of HRST Data Availability in OECD/EC member countries, DSTI/EAS (93) 9; G. Westholm (1993), Recent Developments in International Science and Technology Personnel Data Collection, op. cit.
After examining several national and international publications, reports, and statistics, it identified about a dozen different concepts concerning scientific and technical human resources.96 Most of these concepts differed in the way they covered occupation, qualification, and field of study. Only a few countries (among them the United States) seemed to undertake some kind of regular human-resources data collection. Above all, there was no clear category for highly-qualified personnel in any of the existing international classifications. Simultaneously, the Division asked R. Pearson from the Institute of Manpower Studies at the University of Sussex (Brighton) to prepare a draft manual on measuring scientific and technical human resources.97 The explicit aim was to "identify possible future mismatches in the demand/supply equation."98 The discussions engaged other OECD directorates and agencies (mainly interested in education), as well as UNESCO and, particularly, the European Commission (DG XIII and Eurostat). Two workshops were held, one in 1992,99 and another in 1993,100 in which countries expressed general agreement with most of the proposed manual, although some debates occurred over issues like the inclusion of the humanities, the minimum level of scientific and technical qualification, and the treatment of managers and the Armed Forces. The manual introduced the concept of "Human Resources for Science and Technology" (HRST), with a rather wide definition embracing both qualifications—completed post-secondary education (level 5 and above)101—and occupation—employment in the scientific and technical professions but without the equivalent qualifications.102 It included the social sciences and humanities, but a system of "priorities" was established with the natural sciences and engineering situated at its core.103 A similar priority was assigned to university (level 6) over technical training.104 Students who were employed in any way and for however short an amount of time were to be included, and students from abroad were to be distinguished systematically from their domestic counterparts. Member countries adopted the manual for the measurement of HRST in 1994 in Canberra. The document was cleared for official publication in 1995.105 The
96 OECD (1993), Measuring Human Resources Devoted to Science and Technology (HRST), DSTI/EAS (93) 17, p. 3. 97 OECD (1992), Draft Manual on the Measurement of S&T Human Resources, DSTI/STII (92) 4. 98 OECD (1992), Workshop on the Measurement of Human S&T Resources: Conference Announcement, op. cit., p. 1. 99 OECD (1993), Summary Record of the Workshop on the Measurement of S&T Human Resources, DSTI/EAS/M (93) 2. 100 OECD (1993), Summary Record of the 1993 Workshop on the Measurement of S&T Human Resources, DSTI/EAS/M (93) 4. 101 ISCED: International Standard Classification of Education. 102 ISCO: International Standard Classification of Occupations. 103 "The Nordic group [of countries] had difficulties in accepting the use of the term "low priority" in connection with the humanities (. . .). It was agreed that the priorities terminology be replaced by coverage": OECD (1994), NESTI: Summary Record of the Meeting Held on 18–20 April 1994 in Canberra, Australia, DSTI/EAS/STP/NESTI/M (94) 1, p. 4. 104 But the PhD level could not be separately distinguished because of identification difficulties. 105 OECD (1995), Manual on the Measurement of Human Resources Devoted to S&T, OECD/GD (95) 77.
manual did not propose a collection of entirely new statistics, but offered guidelines on how various kinds of existing data could be exploited for the construction of S&T indicators. The group of National Experts on Science and Technology Indicators (NESTI) itself emphasized that it was "a compromise between what is possible in the ideal world and what is realistic."106 Three stages in the measurement of human resources immediately followed the adoption of the manual:
1 Surveys of HRST stocks: a first OECD/Eurostat pilot survey was conducted in 1995–1996 to test the viability of the manual's data, concepts, and indicators;107
2 Development of indicators on HRST flows;108
3 Analyses of the international mobility of highly skilled workers.109
On the basis of these experiences, a revision of the manual was begun in 2001.110 Numerous problems were identified and future modifications to the manual suggested, especially on improving definitions and finding appropriate sources of data:111
● There was a relatively high level of national misunderstanding, mis-reporting and unusable data;
● National data came from different sources, and methodologies differed;
● Conversions from national systems to the standardized systems of ISCED and ISCO were very problematic;112
● Coverage of qualification (particularly level 5) and occupation (persons working as HRST but not trained in HRST) led to overestimations;
● There were difficulties in matching educational qualifications with occupations.
106 OECD (1994), Manual on the Measurement of Human Resources in Science and Technology: Discussion Paper, DSTI/EAS/STP/NESTI (94) 3, p. 6. 107 L. Hardy (1997), Evaluation Report on the 1995/96 Pilot Data Collection on HRST Stocks, DSTI/EAS/STP/NESTI (97) 3; Eurostat (1998), Using the Community Labour Force Survey (LFS) as a Source of Data for Measuring the Stocks of HRST, DSTI/EAS/STP/NESTI (98) 1. 108 Eurostat (1996), Basic Indicators for Describing the Flow of HRST: Full Report, DSTI/EAS/STP/ NESTI (96) 14; Eurostat (1997), Basic Indicators for Describing the Flow of HRST: R&D, Methods and Data Analysis, DSTI/EAS/STP/NESTI (97) 8; Eurostat (1998), A Preliminary Analysis of the Flows From the Tertiary Education System and Stocks of HRST, DSTI/EAS/STP/NESTI (98) 2; M. Akerblom (1999), Mobility of Highly Qualified Manpower: A Feasibility Study on the Possibilities to Construct Internationally Comparable Indicators, DSTI/EAS/STP/NESTI (99) 7. 109 Seminar held in June 2001. Published in: OECD (2002), International Mobility of the Highly Skilled, Paris. See also: OECD (2001), Innovative People: Mobility of Skilled Personnel in National Innovation Systems, Paris. 110 Eurostat, The Canberra Manual: Some Preliminary Thoughts on the Revision of the Manual on the Measurement of Human Resources Devoted to Science and Technology, DSTI/EAS/STP/NESTI/RD (99) 7. 111 L. Hardy (1997), op. cit., L. Auriol and J. Sexton (2002), Human Resources in Science and Technology: Measurement Issues and International Mobility, in OECD, International Mobility of the Highly Skilled, op. cit., 13–38. 112 ISCO: International Standard Classification of Occupations; ISCED: International Standard Classification of Education.
As can be seen from this short list, several of today's problems are very similar to those of the 1960s. In fact, between 1960 and 1990, no work had been conducted toward creating internationally-standardized measurements of HRST. The OECD, under the influence of its member countries, is simply picking up where it left off over thirty years ago.
Conclusion
The early measurement of human resources in S&T owes a great deal to war-related issues, first among them the "shortage" of scientists and engineers after World War II, but also the preoccupation of Americans and Europeans with the gaps separating them from the USSR during the Cold War and after. These latter preoccupations were great enough, in 1958, for the NSF to comment: "In recent years the need for information has been emphasized by considerations of national defense, and for this reason seems more pronounced than ever before in national history."113 With regard to the comparative and international measurement of human resources in S&T, this chapter identified three phases. First, the OECD's work in the 1950s on scientific and technical personnel was rooted in the conviction that
increased investment in the training of scientific and technical personnel, and in the basic general education on which such training must be built, is an essential ingredient of forward-looking policy to promote economic and social development, such development being intimately linked with the quality of manpower in the member countries and with the technical sophistication of their methods of production.
(OECD (1968), Review of the Work of the Organization in the Field of Scientific and Technical Personnel, C (68) 68, p. 1)
The OEEC and the OECD conducted several international surveys specifically aimed at measuring the supply and demand of scientists and engineers. Then, in the 1970s and 1980s, the DSA (and its successor, the DSTI) considerably restricted its measurement of human resources: "All member countries are now committed to a policy of unprecedented expansion of their educational system (. . .). This imposes new tasks," stated the OECD.114 The DSTI therefore concentrated on R&D personnel, and left the task of compiling education statistics to another directorate. S&T personnel data would be gathered by the DSTI only on an ad hoc basis over the years, on the occasion of specific studies. Only in the 1990s, as the knowledge-based economy began generating attention (and buzzwords), did the organization's statisticians again shift to measuring the supply and demand of scientists and engineers: "The move toward the knowledge-based economy has placed human capital in S&T at the forefront of the policy debate across OECD countries, not just in the area of education and labour markets but
113 NSF (1958), A Program for National Information on Scientific and Technical Personnel, op. cit., p. 3. 114 OECD (1968), Review of the Work of the Organization in the Field of Scientific and Technical Personnel, C (68) 68, p. 2.
also in science, technology and innovation policy."115 The current concept of "human capital" had in fact already been proposed in the 1960s,116 along with the notion of "investment in human resources," formulated in 1964 by the OECD study group on the economics of education.117 Over the last fifty years, no S&T statistics have caused more debate and controversy than those on HRST. There was no shortage of myths, but the data were always too poor to permit any reliable measurement of personnel shortages or of a brain drain. The data were generally recognized as limited, but the charismatic mystique of numbers remained: "It does not seem that too much importance needs to be attached to this lack of comparability," suggested the OECD's second report on scientific and technical manpower in 1957.118 Similarly, during the debates over the controversial NSF study published in 1989, R. C. Atkinson, president of the American Association for the Advancement of Science (AAAS), stated: "The models used to project supply and demand for scientists and engineers have been subject to criticism. But most of the dispute turns on quantitative details rather than the fundamental conclusion."119 In his recent historical account of education at the OECD, G. S. Papadopoulos, former OECD director, summarized the bureaucrats' state of mind as follows: the reports' "ultimate value lies not so much in the accuracy or otherwise of their quantitative analyses and predictions (. . .) as in the stimulus they provided for a more systematic approach to educational planning and in sensitising public opinion (. . .)."120 Some were doubtless more cautious in their methodologies and political discourses. In the early 1950s, for example, the Bureau of Labor Statistics measured the presumed loss of personnel owing to Reserve and Selective Service calls, and concluded: "factors other than calls to military duty caused the bulk of the separations."121 The Cold War, at least, was not responsible for the shortages of scientists and engineers. Others, like S. Zuckerman in the United Kingdom, made analytical clarifications like the following: "The term [brain drain] itself is a misnomer. In general, educated men throughout the ages have always moved from
115 OECD (2000), Mobilising Human Resources for Innovation, Paris, p. 3. See also: OECD (1996), Measuring What People Know: Human Capital Accounting for the Knowledge Economy, Paris; OECD (1998), Human Capital Investment: An International Comparison, CERI, Paris; OECD (2001), The Well-being of Nations: the Role of Human and Social Capital, Paris. 116 G. S. Becker (1964), Human Capital, New York: National Bureau of Economic Research; T. M. Schultz (1968), Investment in Human Capital, in B. J. McCormick, Economics of Education, Penguin Books: Middlesex: Harmondsworth, 13–33; M. Blaug (1976), The Empirical Status of Human Capital Theory: A Slightly Jaundiced Survey, The Journal of Economic Literature, 14, pp. 827–853. 117 OECD (1964), The Residual Factor and Economic Growth, Paris. 118 OECD (1957), The Problem of Scientific and Technical Manpower in Western Europe, Canada, and the United States, op. cit., p. 9. 119 R. C. Atkinson (1990), Supply and Demand for Scientists and Engineers: A National Crisis in the Making, Science, 248, April 27, p. 427. 120 G. S. Papadopoulos (1994), op. cit., p. 45. 121 Bureau of Labor Statistics (1953), Scientific R&D in American Industry: A Study of Manpower and Costs, Bulletin No. 1148, Washington, p. 38.
areas of lesser to areas of greater opportunity, where they expect to find better resources with which they can apply their talents."122 Finally, in commenting on the first draft manual on HRST, OECD member countries themselves explicitly admitted that "too much attention was given [in Chapter 2] to the brain drain."123 Consequently, at the second OECD workshop on HRST, member countries "agreed that the manual should not deal extensively with the forecasts and projections as this was both a technically complex and politically sensitive issue."124 Nevertheless, politically charged discourses persisted unabated. While lamenting the lack of appropriate statistics, the TEP document confidently proclaimed: there is "a risk of shortages of scientists and engineers in the future years."125 National studies also continued to appear on shortages in specific sectors, like biotechnology126 and the information and communication sector.127 And public debates over the so-called brain drain took place again in the United Kingdom128 and Canada.129 The arguments were the same as those in the 1960s, and the data just as poor.130 The rhetoric persisted because, as the first OECD survey stated: "There will be no danger of training too many scientists." In their criticism of British forecasting, Gannicott and Blaug correctly identified the axiom guiding these discourses: there is always an unsatisfied need. "One cannot go wrong in putting [demand forecasts] forward as minimum estimates, since the real needs must exceed them."131 "We cannot go wrong by producing too much."132 Forty years ago, the OECD concluded: "Because concern about the international movement of scientific and technical personnel has been aroused by the loss of highly qualified people, interest has been centred on immigration and many attempts have been made to count the people who have left."133 But "the situation is much more complicated than a simple examination of the US data would suggest."134 This chapter suggested that it was the US statistics on the
122 S. Zuckerman, Scientist in the Arena, in A. de Reuck, M. Goldsmith, and J. Knight (eds) (1968), Decision Making in National Science Policy, Boston: Little, Brown and Co., p. 14. 123 OECD (1993), Summary Record of the 1993 Workshop on the Measurement of S&T Human Resources, DSTI/EAS/M (93) 4, p. 5. 124 OECD (1994), Manual on the Measurement of Human Resources in Science and Technology: Discussion Paper, DSTI/EAS/STP/NESTI (94) 3, p. 7. 125 OECD (1992), Technology and the Economy: The Key Relationships, Paris, p. 135. 126 R. Pearson and D. Pearson (1983), The Biotechnology Brain Drain, IMS, Brighton. 127 US Department of Commerce (1997), America’s New Deficit: The Shortage of Information Technology Workers, Office of Technology Policy, Washington. 128 ABRC (1985), The Brain Drain: Summary of Findings of Enquiry, London; Royal Society (1987), The Migration of Scientists and Engineers to and from the UK, London. 129 OST (2000), Les flux migratoires de personnel hautement qualifié, Montreal. 130 See for example: W. J. Carrington and E. Detragiache (1998), How Big Is the Brain Drain?, International Monetary Fund, WP98102. 131 K. G. Gannicott and M. Blaug (1969), Manpower Forecasting since Robbins: A Science Lobby in Action, op. cit., p. 63. 132 Ibid., p. 65. 133 OEEC (1965), The International Movement of Scientific and Technical Manpower, op. cit., p. 6. 134 OECD (1970), International Movement of Scientists and Engineers, op. cit., p. 3.
immigration of scientists and engineers into the United States in the 1960s that launched the brain drain debate in Europe, or at least sustained it for a while.135 Most countries tended to "confirm their axioms" using US statistics, creating "hysterical reactions," as A. Young described the debates of the time.136 What people forgot, or chose to ignore, was precisely the fact that these statistics presented only one side of the picture. In this case, numbers served to stir up controversies rather than settle them.
135 For one more example of such use (France), see: Le Progrès scientifique (1966), L’émigration des scientifiques et des ingénieurs vers les États-Unis, 93, February, pp. 38–53. 136 A. Young (1974), Guidelines for Surveying International Migration of Highly Qualified Manpower, op. cit., p. 8.
14 Is there basic research without statistics?
Fundamental research was one of the dimensions of S&T examined during the Gaps exercise. It was and still is a central category of science policy and science measurement. Of all the concepts defined in the first edition of the Frascati manual, the first dealt with fundamental research. While a definition of research itself did not appear until the second edition in 1970, fundamental research was defined explicitly as follows:
Work undertaken primarily for the advancement of scientific knowledge, without a specific practical application in view.
(OECD (1963), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Development, Paris, p. 12)
In the last edition of the manual, the definition is substantially the same as the one in 1963, although the term "basic" is now used instead of "fundamental":
Basic research is experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundation of phenomena and observable facts, without any particular application or use in view.
(OECD (2002), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, Paris, p. 77)
Between 1963 and 1994, all six editions of the manual carried essentially the same definition without any significant changes: basic research is research concerned with knowledge, as contrasted with applied research, which is concerned with the application of knowledge. Over the same period, however, the definition was frequently discussed, criticized and, in some cases, even abandoned. How did the concept originate and why does it persist in discourses, policy documents, and statistics despite almost unanimous dissatisfaction with it? Certainly, the concept of basic research exists because a community defines itself according to it, and because it is a dimension of action (science policy). But the concept is, above all, a category. And, as is often the case with a category, it
acquires social and political existence through numbers.1 In this chapter, I argue that the concept of basic research acquired political stability (partly) because of statistics. The latter helped academics and bureaucrats to convince politicians to fund basic research. However, as soon as the interest of politicians in academic research changed, the concept of basic research came to be questioned. Statistics could not continue to "hold things together," to paraphrase A. Desrosières.2 Although the concept, whatever its name, has existed for centuries in the discourses of philosophers and scientists, basic research was first defined explicitly in a taxonomy in 1934 by J. S. Huxley, and later appropriated by V. Bush. The Bush report Science: The Endless Frontier envisioned a National Research Foundation (NSF) as the main vehicle for funding basic research.3 But it was the President's Scientific Research Board, I argue, that was decisive in crystallizing the concept for political purposes: its report served to institutionalize the current concept of basic research because the latter was measured, rather than merely talked about rhetorically, as Bush had done.4 This chapter outlines the history of the concept of basic research as it relates to measurement, particularly from the 1930s onward.5 The first part presents and discusses the different labels and definitions of basic research that were used before the Bush report. This was a period of searching and fuzziness, which Bush put an end to. The second part shows how the concept crystallized into a specific label and definition as a result of the NSF surveys and the OECD Frascati manual. The last part reviews the alternatives. It shows that even the promoters were dissatisfied with the concept, but that extenuating factors prevented them—or so they believed—from changing it.
Emergence
The Ancients developed a hierarchy of the world in which theoria was valued over practice. This hierarchy rested on a network of dichotomies that were deeply
1 W. Alonso and P. Starr (1987), The Politics of Numbers, New York: Russell Sage; A. Desrosières (1993), op. cit.; A. Desrosières (1990), How to Make Things Which Hold Together: Social Science, Statistics, and the State, in P. Wagner, B. Wittrock, and R. Whitley (eds), Discourses on Society, Kluwer Academic Publishing, pp. 195–218. 2 A. Desrosières (1990), How to Make Things Which Hold Together: Social Science, Statistics and the State, op. cit. 3 V. Bush (1945), Science: The Endless Frontier, op. cit. 4 President’s Scientific Research Board (1947), Science and Public Policy, op. cit. 5 To the best of my knowledge, the literature contains only one article dealing with the history of the concept: R. Kline (1995), Construing Technology as Applied Science: Public Rhetoric of Scientists and Engineers in the United States, 1880–1945, ISIS, 86: 194–221. Layton also touches on the topic from a technological point of view: E. T. Layton (1976), American Ideologies of Science and Engineering, Technology and Culture, 17 (4), pp. 688–700; E. T. Layton (1974), Technology as Knowledge, Technology and Culture, 15 (1), pp. 31–41.
rooted in social practice and intellectual thought.6 A similar hierarchy existed in the discourse of scientists: the superiority of pure over applied research.7 The concept of pure research originated in 1648, according to B. Cohen.8 It was a term used by philosophers to distinguish between science or natural philosophy, which was motivated by the study of abstract notions, and the mixed “disciplines” or subjects, like mixed mathematics, that were concerned with concrete notions.9 The concept came into regular use at the end of the nineteenth century, and was usually accompanied by the contrasting concept of applied research. In the 1930s, the term “fundamental” occasionally began appearing in place of “pure.” I do not deal here with the story of how the word was used by scientists in their discourses. Such a task would go well beyond the scope of the present chapter. I rather concentrate on how the word and concept were inserted into taxonomies, or kinds of research, and on how the word and the concept were related to measurement. The first attempts at defining these terms systematically occurred in Britain in the 1930s, more precisely among those scientists interested in the social aspects of science—the “visible college” as G. Werskey called them,10 among whom were the two British scientists, J. D. Bernal and J. S. Huxley. As we saw previously, J. D. Bernal was one of the first academics to perform measurement of science in a western country. In The Social Function of Science (1939), Bernal used the terms “pure” and “fundamental” interchangeably. He contrasted the ideal of science, or science as pure thought, not mainly with applied science, but with the social use of science for meeting human needs.11 When dealing with numbers, Bernal did not break the research budget down by type of research— such statistics were not available. “The real difficulty (. . .) in economic assessment of science is to draw the line between expenditures on pure and on applied science,” Bernal said.12 He could only present total numbers, sometimes broken down by sector, but he could not figure out how much was allocated to basic research. Five years earlier, J. S. Huxley (1934), who later became UNESCO’s first Director-General (1947–1948), introduced new terms and suggested the first 6 H. Arendt (1958), Condition de l’homme moderne, Paris: Calmann-Lévy, 1983; G. E. R. Lloyd, (1966), Polarity and Analogy: Two Types of Argumentation in Early Greek Thought, Cambridge: Cambridge University Press; N. Lobkowicz (1967), Theory and Practice: History of a Concept From Aristotle to Marx, London: University of Notre Dame. 7 D. A. Hounshell (1980), Edison and the Pure Science Ideal in 19th Century America, Science, 207, pp. 612–617; X. Roqué (1997), Marie Curie and the Radium Industry: A Preliminary Sketch, History and Technology, 13 (4), pp. 267–291; G. H. Daniels (1967), The Pure-Science Ideal and Democratic Culture, Science, 156, pp. 1699–1705. 8 I. B. Cohen (1948), Science Servant of Men, Boston: Little, Brown and Co., p. 56. 9 R. Kline (1995), Construing Technology as Applied Science: Public Rhetoric of Scientists and Engineers in the United States, 1880–1945, op. cit. 10 G. Werskey (1978), The Visible College: The Collective Biography of British Scientific Socialists of the 1930s, New York: Holt, Rinehart, and Winston. 11 J. D. Bernal (1939), The Social Function of Science, op. cit., pp. 3–7, 95–97. 12 Ibid., p. 62.
formal taxonomy of research. The taxonomy had four categories: background, basic, ad hoc, and development.13 To Huxley, ad hoc meant applied research, and development meant more or less what we still mean by the term today. The first two categories defined pure research: background research is research "with no practical objective consciously in view," while basic research is "quite fundamental, but has some distant practical objective (. . .). Those two categories make up what is usually called pure science."14 Despite having these definitions in mind, however, Huxley did not conduct any measurements, and his definitions were not widely adopted.15 The terms pure, fundamental, background, and basic frequently overlapped before V. Bush arrived on the scene. Some analysts were also skeptical of the utility of the terms, and rejected them outright. For example, Research: A National Resource (1938), one of the first government measurements of science in America, explicitly refused to use any categories but research: "There is a disposition in many quarters to draw a distinction between pure, or fundamental, research and practical research (. . .). It did not seem wise in making this survey to draw this distinction."16 The reasons offered were that fundamental and practical research interact, and that both lead to practical and fundamental results. The Bush report itself, although it used the term basic research in the core of the text, also referred to pure research elsewhere in the document: in the Bowman committee report—Appendix 3 of Science: The Endless Frontier—pure research was defined as "research without specific practical ends. It results in general knowledge and understanding of nature and its laws."17 Bush labored over definitions all his life: "A principal problem confronting Bush was public confusion over terms like science, research and engineering, at least according to his views. Throughout the war and for many years afterwards, he tried to clarify their meanings to colleagues and to the public with only modest success."18 In his well-known report, Science: The Endless Frontier (1945), Bush elected to use the term basic research, and defined it as "research performed without thought of practical ends."19 He estimated that the nation invested nearly six times as much in applied research as in basic research.20 The numbers were
13 J. S. Huxley (1934), Scientific Research and Social Needs, op. cit. 14 Ibid., p. 253. 15 Although he did have some influence, as we will see shortly, on Bush (who borrowed the term "basic"), on the President's Scientific Research Board (who adapted Huxley's typology) and on UNESCO and the OECD (who called Huxley's basic research "oriented basic research"). 16 National Resources Committee (1938), Research: A National Resource, op. cit., p. 6. 17 V. Bush (1945), Science: The Endless Frontier, op. cit., p. 81. 18 N. Reingold (1987), V. Bush's New Deal for Research, Historical Studies in the Physical Sciences, 17 (2), p. 304. 19 V. Bush (1945), Science: The Endless Frontier, op. cit., p. 81; N. Reingold (1987), V. Bush's New Deal, op. cit., p. 18. According to R. Kline (1995), Construing Technology as Applied Science: Public Rhetoric of Scientists and Engineers in the United States, 1880–1945, op. cit., pp. 216–217, the term originated from A. Kennelly (Harvard University, engineering) in the mid-1920s, and was popularized by industrialists. 20 V. Bush (1945), Science: The Endless Frontier, op. cit., p.
81; N. Reingold (1987), V. Bush’s New Deal, op. cit., p. 20.
arrived at by equating colleges and universities with basic research, and industrial and governmental research with applied research. More precise numbers appeared in appendices, such as ratios of pure research in different sectors—5 percent in industry, 15 percent in government, and 70 percent in colleges and universities21—but the sources and methodology behind these figures were totally absent from the report. With his report, Bush gave Huxley's term a political flavor by putting "basic research" on governments' political agendas. He argued at length that governments should support basic research on the basis that it is the source of socioeconomic progress and the "pacemaker of technological progress. Basic research (. . .) creates the fund from which the practical applications of knowledge must be drawn. New products and new processes do not appear full-grown. They are founded on new principles and new conceptions, which in turn are painstakingly developed by research in the purest realms of science."22 This was the first formal formulation of the linear model that preoccupied researchers for decades.23 To this rhetoric, the Bowman committee added:
There is a perverse law governing research: under the pressure for immediate results, and unless deliberate policies are set up to guard against this, applied research invariably drives out pure. The moral is clear: it is pure research which deserves and requires special protection and specially assured support.
(V. Bush (1945), Science: The Endless Frontier, op. cit., p. 83)
Crystallization Between 1930 and 1945, then, numerous labels were used for more or less the same concept: pure, fundamental, background, and basic. The same label was sometimes even used to refer to different concepts:24 the term background research was a type of pure research to Huxley, while to the President’s Scientific Research Board it represented what we now call “related scientific activities” (RSA); basic research, on the other hand, was to Huxley what we today call strategic or “oriented research.” By integrating basic research into the NSF bill, however, the
21 Ibid., p. 85. 22 Ibid., p. 19. 23 Although Bush is often credited with “inventing” the linear model, scientists had been using it in public discourse since the end of the nineteenth century (e.g. see: H. Rowland (1883), A Plea for Pure Science, in The Physical Papers of Henry Augustus Rowland, Baltimore: Johns Hopkins University Press, 1902, pp. 593–613). Similarly, industrialists (F. B. Jewett (1937), Communication Engineering, Science, 85, pp. 591–594) and government (National Resources Committee (1938), Research: A National Resource, op. cit., pp. 6–8) used it in the 1930s. The linear model is in fact the spontaneous philosophy of scientists. 24 One even went so far as to use the three terms in the same page. See: G. Perazich and P. M. Field (1940), Reemployment Opportunities and Recent Changes in Industrial Techniques, Works Progress Administration, National Research Project, Philadelphia, p. 3.
government succeeded in imposing the term and its institutional definition in the United States. Science: The Endless Frontier is usually considered the basis of science policy in the United States, particularly the basis for the funding of basic research.25 This is only partly true.26 Bush proposed the idea of an agency that would be responsible for basic research, but the rhetoric he used succeeded mainly with scientists, and much less with policy-makers: President H. Truman vetoed the National Research Foundation bill in 1947, following the recommendation of the Bureau of the Budget against the organizational aspects, which involved an independent board.27 Instead, he asked the President’s Scientific Research Board to prepare a report on what the government should do for science. The executive order stipulated that the board:28
1 Review the current and proposed scientific R&D activities conducted and financed by all government departments and independent establishments to ascertain: (a) the various fields of R&D and the objectives sought; (b) the type and number of personnel required for operating such programs; (c) the extent to and manner in which such R&D is conducted for the federal government by other profit and non-profit institutions; and (d) the costs of such activities.
2 Review using readily available sources: (a) the nature and scope of non-federal scientific R&D activities; (b) the type and number of personnel required for such activities; (c) the facilities for training new scientists; and (d) the amounts of money expended for such R&D.
The Board can be credited, as much as Bush, for having influenced science policy in the United States. Several of the issues and problems with which science policy dealt over the next fifty years were clearly identified by the board report: research expenditures, support for basic research, defense research, human resources, the role of government, inter-departmental coordination, and the international dimension of science. In his inaugural address to the American Association for the Advancement of Science (AAAS) in 1948, President Truman proposed five objectives that were drawn straight out of the board report.29
25 B. L. R. Smith (1990), American Science Policy Since World War II, op. cit. 26 D. M. Hart (1998), Forged Consensus: Science, Technology and Economic Policy in the United States, 1921–1953, Princeton: Princeton University Press; W. A. Blanpied (1999), Science and Public Policy: The Steelman Report and the Politics of Post-World War II Science Policy, in AAAS Science and Technology Policy Yearbook, Washington: AAAS, pp. 305–320. 27 J. M. England (1982), A Patron for Pure Science: The NSF’s Formative Years, 1945–1957, Washington: NSF, p. 82. The veto of the NSF bill was only one manifestation of American politicians’ reluctance to fund basic research before the 1950s. There were three other unsuccessful funding experiments in the 1920s and 1930s, and a long struggle between V. Bush and Senator H. Kilgore between 1942 and 1948 on the appropriate role of the NSF. See B. L. R. Smith (1990), American Science Policy Since World War II, op. cit. 28 PSRB (1947), Science and Public Policy, op. cit., pp. 70–71. 29 H. S. Truman (1948), Address to the Centennial Anniversary, Washington: AAAS Annual Meeting.
Furthermore, the board developed three instruments that helped give basic research a more robust political existence than Bush had given it. First, it conducted the first survey of resources devoted to R&D using precise categories, although these did not make it “possible to arrive at precisely accurate research expenditures” because of the different definitions and accounting practices employed by institutions.30 In the questionnaire it sent to 70 industrial laboratories and 50 universities and foundations, it included a taxonomy of research that was inspired by Huxley’s four categories: fundamental, background, applied, and development.31 The board did not retain Bush’s term, preferring to talk of fundamental research in its taxonomy, though it regularly used “basic” in the text, and defined fundamental research similarly as “theoretical analysis, exploration, or experimentation directed to the extension of knowledge of the general principles governing natural or social phenomena.”32 With this definition, it estimated that basic research accounted for about 4 percent of total R&D expenditure in the United States in 1947.33 Second, based on the numbers obtained in the survey, the board proposed quantified objectives for science policy. For example, it suggested that resources devoted to R&D be doubled in the next ten years, and that basic research be quadrupled.34 This kind of objective, including another to which I shall presently turn, appeared, and still appears, in almost every science policy document in western countries in the following decades. Third, the board introduced into science policy the main science indicator that is still used by governments today: R&D expenditures as a percentage of GNP.35 Unlike Bernal, however, it did not explain how it arrived at the 1 percent goal for 1957. Nevertheless, President Truman subsequently incorporated an objective of 1 percent in his address to the AAAS. While Bush developed an argument for basic research based on science’s promise for the future, the President’s Scientific Research Board developed arguments based on statistics of R&D budgets. Of course, the latter also called on the future promises of science: “scientific progress is the basis for our progress against poverty and disease”36 and basic research is “the quest for fundamental knowledge from which all scientific progress stems,” wrote the board,37 recalling Bush’s rhetoric. But it also developed an argument concerning the balance between basic science and applied research.38 To that end, the board used two kinds of quantitative comparison.
30 PSRB (1947), Science and Public Policy, op. cit., p. 73. 31 Ibid., pp. 299–314. 32 Ibid., p. 300. 33 Ibid., p. 12. 34 Ibid., p. 6. 35 Ibid., p. 6. 36 Ibid., p. 3. 37 Ibid., p. 21. 38 The argument was already present in J. D. Bernal (1939), The Social Function of Science, op. cit., pp. 329–330, but without quantitative evidence.
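As an aside, the indicator the board introduced is easily restated in present-day notation. The label GERD (gross expenditure on R&D) is a modern convenience rather than the board's own vocabulary; the report simply related national R&D expenditures to GNP and set a target of 1 percent by 1957:

\[
\text{R\&D intensity} \;=\; \frac{\text{GERD}}{\text{GNP}} \times 100\%, \qquad \text{target: } \geq 1\% \text{ by } 1957.
\]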
First, it made comparisons with other nations, among them the USSR, which had invested $1.2 billion in R&D in 1947,39 slightly more than the United States ($1.1 billion). It was Europe, however, that served as the main yardstick or target: “We can no longer rely as we once did upon the basic discoveries of Europe.”40 “We shall in the future have to rely upon our own efforts in the basic sciences”:41 As a people, our strength has laid in practical application of scientific principles, rather than in original discoveries. In the past, our country has made less than its proportionate contribution to the progress of basic science. Instead, we have imported our theory from abroad and concentrated on its application to concrete and immediate problems. (PSRB (1947), Science and Public Policy, op. cit., pp. 4–5) One remark should be made about this rationale for investing in basic research. At the time, the fact that other nations were thought to invest more than the United States in basic research was explained by what has been called the “indifference thesis.” Following Alexis de Tocqueville’s Democracy in America (1840), in his chapter titled “Why the Americans are More Addicted to Practical than to Theoretical Science,” some argued that the United States was more interested in applied science than in basic research.42 N. Reingold has aggressively contested this thesis.43 He showed how historians (we should add policy-makers, including the President’s Scientific Research Board) lacked critical scrutiny, and easily reproduced scientists’ complaints and views of colonial science as a golden age and of Europe as a model of aristocratic sympathy for basic science. Reingold also argued that a nationalistic bias supported the discourses of the time: “Not only should the United States participate significantly in the great achievement known as science, but it should lead.”44
39 PSRB (1947), Science and Public Policy, op. cit., p. 5. 40 Ibid., p. 13. 41 Ibid., p. 4. A similar discourse was also developed in V. Bush (1945), Science: The Endless Frontier, op. cit.: Our national preeminence in the fields of applied research and technology should not blind us to the truth that, with respect to pure research—the discovery of fundamental new knowledge and basic scientific principles—America has occupied a secondary place. Our spectacular development of the automobile, the airplane, and radio obscures the fact that they were all based on fundamental discoveries made in nineteenth-century Europe. (p. 78) “A Nation which depends upon others for its new scientific basic knowledge will be slow in its industrial progress and weak in its competitive position in world trade, regardless of its mechanical skill” (p. 19). “We cannot any longer depend upon Europe as a major source of this scientific capital” (p. 6). 42 R. H. Shryock (1948), American Indifference to Basic Research During the Nineteenth Century, Archives Internationales d’Histoire des Sciences, 28, pp. 50–65. 43 N. Reingold (1971), American Indifference to Basic Research: A Reappraisal, in N. Reingold (1991), Science: American Style, New Brunswick and London: Rutgers University Press, pp. 54–75. 44 Ibid., p. 63.
The second kind of comparison the President’s Scientific Research Board made was between university budgets and those of other sectors. The board showed that university research expenditures were far lower than government or industry expenditures, that is, lower than applied research expenditures, which amounted to 90 percent of total R&D.45 Moreover, it showed that university budgets as a percentage of total R&D had declined from 12 percent in 1930 to 4 percent in 1947.46 The board urged the government to redress the imbalance in the “research triangle.” The NSF then seized the tools suggested by the President’s Scientific Research Board for selling basic research to the government and to the public. In 1950, Congress passed the controversial bill that created the NSF.47 The law charged the NSF with funding basic research, but it also gave it, as we saw, a role in science measurement. From the outset, sound data were identified at the NSF as the main vehicle for assessing the state of science, as recommended by W. T. Golden in his memorandum to the National Science Board (NSB).48 Beginning with the first survey it conducted in 1953—on federal government R&D expenditures—the NSF defined basic research as research “which is directed toward the increase of knowledge in science.”49 One year later, the NSF added the following qualification in its survey: “It is research where the primary aim of the investigator is a fuller knowledge or understanding of the subject under study, rather than a practical application thereof.”50 These definitions had to be followed by respondents to the NSF surveys, who classified research projects and money according to the suggested categories. With the definitions, developed for measurement purposes, and with the numbers originating from the surveys, the NSF fought for money and mustered several arguments in its favor. The NSF reiterated to politicians the arguments already put forward by Bush and the President’s Scientific Research Board: knowledge is a cultural asset; university research is so basic that it is the source of all socioeconomic progress; a shortage of scientists prevents the nation from harvesting all the benefits of science; the United States is lagging behind its main competitor, the USSR; and a balance between applied and basic research is needed. All these arguments appeared in Basic Research: A National Resource (1957), a document written to convey in a non-technical manner the meaning of basic research.51 But two new kinds of argument were also put forward. First, Basic Research: A National Resource argued for a new way to strengthen basic research: convince
45 President’s Scientific Research Board (1947), Science and Public Policy, op. cit., p. 21. 46 Ibid., p. 12. 47 Twenty-one bills were introduced in Congress between 1945 and 1950. 48 W. T. Golden (1951), Memorandum on Program for the National Science Foundation, in W. A. Blanpied (ed.), Impacts of the Early Cold War on the Formulation of US Science Policy, Washington: AAAS, pp. 68–72. 49 National Science Foundation (1953), Federal Funds for Science: 1950–51 and 1951–1952, Washington, p. 12. 50 National Science Foundation (1954), Federal Funds for Science: Fiscal Years 1953, 1954 and 1955, Washington, p. 20. 51 National Science Foundation (1957), Basic Research: A National Resource, Washington.
industry to invest more in basic research than it currently does.52 Indeed, the early NSF surveys showed that only a small percentage of industrial R&D was devoted to basic research. Second, the document stated that “the returns (of basic research) are so large that it is hardly necessary to justify or evaluate the investment”53 and that, at any rate, “any attempt at immediate quantitative evaluation is impractical and hence not realistic.”54 Numbers were not judged useful here. All that was necessary was to show the great contributions achieved by science, and to present the important men who were associated with the discoveries. In line with this philosophy, the NSF regularly produced documents showing the unexpected but necessary contribution of basic research to innovation, generally using case studies.55 Besides Basic Research: A National Resource, the NSF published Investing in Scientific Progress (1961), Technology in Retrospect and Critical Events in Science (TRACES) (1968), Interactions of Science and Technology in the Innovation Process (1973), and How Basic Research Reaps Unexpected Rewards (1980). The rhetoric served a particular purpose: to give university research a “political” identity it did not yet have. Indeed, the university contribution to national R&D was small, as the President’s Scientific Research Board had measured. In arguing that basic research was the basis of progress, the rhetoric made university research an item on the political agenda: “Educational institutions and other non profit organizations together performed only 10 percent of all R&D in the natural sciences. But (they) performed half of the Nation’s basic research,” claimed the NSF.56 This rhetoric was soon supported and reinforced by economists, among them economists at the RAND Corporation—the US Air Force’s think tank.57 Economists presented science as a public good, which had of course been advanced as a defining feature of science since the Republic of Science.58 But economists qualified the
52 Ibid., pp. 37–38, 50–51. 53 Ibid., p. 61. 54 Ibid., p. 62. 55 The practice was probably inspired by similar exercises at the Office of Naval Research (ONR), where A. T. Waterman, first director of the NSF, was chief scientist from 1946 to 1951. See H. M. Sapolsky (1990), Science and the Navy: The History of the Office of Naval Research, Princeton: Princeton University Press, pp. 83–85. 56 NSF (1957), Basic Research: A National Resource, op. cit., p. 28. 57 R. R. Nelson (1959), The Simple Economics of Basic Scientific Research, Journal of Political Economy, 67, pp. 297–306; K. J. Arrow (1962), Economic Welfare and the Allocation of Resources for Invention, in National Bureau of Economic Research, The Rate and Direction of Inventive Activity: Economic and Social Factors, Princeton: Princeton University Press, pp. 609–626. 58 R. Hahn (1971), The Anatomy of a Scientific Institution: The Paris Academy of Sciences, 1666–1803, Berkeley: University of California Press, pp. 35–37; F. M. Turner (1980), Public Science in Britain, 1880–1919, ISIS, 71: 589–608; L. Stewart (1992), The Rise of Public Science: Rhetoric, Technology, and Natural Philosophy in Newtonian Britain, 1660–1750, Cambridge: Cambridge University Press; J. Golinski (1992), Science as Public Culture: Chemistry and Enlightenment in Britain, 1760–1820, Cambridge: Cambridge University Press.
public good using their own jargon: “Since Sputnik it has become almost trite to argue that we are not spending as much on basic scientific research as we should. But, though dollar figures have been suggested, they have not been based on economic analysis of what is meant by as much as we should.”59 To economists, science was a public good because knowledge could not be (exclusively) appropriated by its producer, which therefore justified the need for government support. When, at the beginning of the 1960s, the OECD began seriously considering the possibility of conducting measurements of S&T, a large part of the work, then, had already been done. Indeed, several countries had definitions that were in line with those of the NSF, as shown by two studies performed during that time, one by the OEEC,60 and the other by the OECD.61 In fact, the NSF had considerably influenced the Frascati manual because the United States was far in advance of other countries in measurement.62 The entire manual, and particularly the survey and definitions of concepts, was conceived according to the NSF’s experience, and the definition of basic research that was suggested in the 1963 edition of the Frascati manual is still used by most countries today.
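Before turning to these controversies, a brief illustration may help fix ideas about what the NSF and Frascati-type surveys actually produce. The sketch below is purely schematic: the sector names and dollar figures are invented for the example, and only the three-way split by character of work (basic, applied, development) follows the convention described above.

# Illustrative sketch only: hypothetical survey returns classified by
# "character of work", following the basic/applied/development convention
# described in the text. All figures and names are invented.
from collections import defaultdict

# Each record: (performing sector, character of work, expenditure in $ millions)
returns = [
    ("universities", "basic", 120.0),
    ("universities", "applied", 40.0),
    ("government", "basic", 60.0),
    ("government", "applied", 300.0),
    ("government", "development", 240.0),
    ("industry", "basic", 30.0),
    ("industry", "applied", 500.0),
    ("industry", "development", 900.0),
]

totals_by_character = defaultdict(float)   # basic / applied / development totals
basic_by_sector = defaultdict(float)       # basic research only, by performing sector

for sector, character, amount in returns:
    totals_by_character[character] += amount
    if character == "basic":
        basic_by_sector[sector] += amount

total_rd = sum(totals_by_character.values())
print(f"Total R&D expenditure: ${total_rd:,.0f} million")
for character, amount in totals_by_character.items():
    print(f"  {character:<12} ${amount:7,.0f}M  ({100 * amount / total_rd:4.1f}% of total R&D)")

basic_total = totals_by_character["basic"]
for sector, amount in basic_by_sector.items():
    print(f"  {sector:<13} performs {100 * amount / basic_total:4.1f}% of all basic research")

The point of the exercise is precisely the one the next section develops: every percentage in such a tabulation depends on how respondents assign projects to the “basic” label in the first place.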
Contested boundaries (Institutions and) statistics are what gave stability to the fuzzy concept of basic research. Before the NSF and the OECD, the concept of basic research was a free-floating idea, supported only by the rhetoric of scientists. Both organizations succeeded in “selling” basic research as a category thanks to a specific tool: the survey and the numbers it generated. Important controversies raged beneath the consensus of an international community of state statisticians, however. Much effort is still devoted to keeping the concept of basic research on the agenda, a task that has occupied the NSF and OECD from the early 1960s onward. From the beginning, almost everyone had something to say against the definitions of basic research. Academics (particularly social scientists), governments, and industry all rejected the definitions and suggested alternatives. Even the NSF and OECD never really seemed satisfied with the definitions. The criticisms centered around two elements.63 First and foremost, the definitions referred to the researcher’s motives—mainly curiosity (no applications in view)—and were thus
59 R. R. Nelson (1959), The Simple Economics of Basic Scientific Research, op. cit., p. 297. 60 OEEC (1961), Government Expenditures on R&D in France and the United Kingdom, EPA/AR/4209. 61 OECD (1963), Government Expenditures on R&D in the United States of America and Canada: Comparisons with France and the United Kingdom on Definitions Scope and Methods Concerning Measurement, J. C. Gerritsen, J. Perlman, L. A. Seymour, and G. McColm, DAS/PD/63.23. 62 This is admitted in the first edition of the Frascati Manual, OECD (1963), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Development, op. cit., p. 7. 63 The arguments were developed at length at two conferences: National Science Foundation (1980), Categories of Scientific Research, Washington; O. D. Hensley (1988), The Classification of Research, Lubbock: Texas Tech University Press.
said to be subjective:64 the intentions of sponsors and users differed considerably, and different numbers were generated depending on who classified the data:65 Whether or not a particular project is placed under the basic research heading depends on the viewpoint of the persons consulted. For instance, university officials estimate that, during the academic year 1953–54, academic departments of colleges and universities and agricultural experiment stations received about $85 million for basic research from the Federal Government. But Federal officials estimate that they provided barely half that amount to the universities for the same purpose during the same period. A large part—perhaps the major part—of what industry regarded as basic research would be considered to be applied research or development in universities. (NSF (1957), Basic Research: A National Resource, op. cit., p. 25) Motives were also said to be subjective in the following sense: the classification of a research project often changes depending on the policy mood of the time: “Quite solidly justifiable mission-applicable work, labeled applied in the statistics of an earlier time, is now classified as basic, and vice versa. (. . .) Research support data reported by the agencies change in response to a number of fashions, forces and interpretations.”66 In fact, if fundamental research were abandoned as a category, the NSF predicted that “the funding of fundamental research could be viewed by mission agencies as having no political advantage. (. . .) Hence, universities may be adversely affected or have to reclassify some research efforts in order to gain funding for projects.”67 As early as 1938, the US National Resources Committee observed the phenomenon and called it “window dressing”:68 “data are presented in the form which is supposed to be most conducive to favourable action by the Bureau of the Budget and congressional appropriation committees.”69 In sum, the definition emphasized the researcher’s intentions rather than the results or content of research: “In the standard definition, basic research is the
64 C. V. Kidd (1959), Basic Research: Description versus Definition, Science, 129, pp. 368–371. 65 Even charities had always expected more or less concrete results from their grants—grants generally reported to be basic research by the recipients; at the very least, foundations’ motives were usually mixed, combining elements of basic and applied research. See: R. E. Kohler (1991), Partners in Science: Foundations and Natural Scientists 1900–1945, Chicago: University of Chicago Press. See also: J. Schmookler (1962), Catastrophe and Utilitarianism in the Development of Basic Science, in R. A. Tybout (ed.), Economics of R&D, Columbus, Ohio, pp. 19–33. 66 National Science Board (1978), Basic Research in the Mission Agencies: Agency Perspectives on the Conduct and Support of Basic Research, Washington, pp. 286–287. 67 NSF (1989), Report of the Task Force on R&D Taxonomy, Washington, p. 9. See also: OECD (1991), Ventilation fonctionnelle de la R-D par type d’activité, op. cit. 68 The first occurrence of the phenomenon in American history goes back to 1803, when President T. Jefferson asked Congress to support a purely scientific expedition for presumed commercial ends. See: A. H. Dupree (1957), Science in the Federal Government: A History of Policies and Activities to 1940, New York: Harper and Row, p. 26. 69 National Resources Committee (1938), Research: A National Resource, op. cit., p. 63.
pursuit of knowledge without thought of practical application. The first part is true—that science is intended to produce new discoveries—but the implication that this necessarily entails a sharp separation from thoughts of usefulness is just plain wrong.”70 The definition forgot, according to some, to consider the results of research, its substantial content:71 “Basic research discovers uniformities in nature and society and provides new understanding of previously identified uniformities. This conception departs from a prevailing tendency to define basic research in terms of the aims or intent of the investigators. It is a functional, not a motivational definition. It refers to what basic research objectively accomplishes, not to the motivation or intent of those engaged in that research.”72 The problem to which these criticisms refer was already identified in 1929 by J. Dewey in The Quest for Certainty: There is a fatal ambiguity in the conception of philosophy as a purely theoretical or intellectual subject. The ambiguity lies in the fact that the conception is used to cover both the attitude of the inquirer, the thinker, and the character of the subject-matter dealt with. The engineer, the physician, the moralist deal with a subject-matter which is practical; one, that is, which concerns things to be done and the way of doing them. But as far as personal disposition and purpose is concerned, their inquiries are intellectual and cognitive. These men set out to find out certain things; in order to find them out, there has to be a purgation of personal desire and preference, and a willingness to subordinate them to the lead of the subject-matter inquired into. The mind must be purified as far as is humanly possible of bias and of that favoritism for one kind of conclusion rather than another which distorts observation and introduces an extraneous factor into reflection (. . .). It carries no implication (. . .) save that of intellectual honesty. ( J. Dewey (1929), The Quest for Certainty: A Study of the Relation of Knowledge and Action, New York: Milton, Balch and Co., pp. 67–68) It is fair, then, to conclude that the question of the relations of theory and practice to each other, and of philosophy to both of them, has often been compromised by failure to maintain the distinction between the theoretical
70 National Research Council (1995), Allocating Federal Funds for Science and Technology, Committee on Criteria for Federal Support of R&D, Washington: National Academy of Science, p. 77. 71 C. D. Gruender (1971), On Distinguishing Science and Technology, Technology and Culture, 12 (3), pp. 456– 463; OECD (1963), Critères et Catégories de recherche, C. Oger, DAS/PD/63.30; H. K. Nason (1981), Distinctions Between Basic and Applied in Industrial Research, Research Management, May, pp. 23–28. 72 H. Brooks (1963), Basic Research and Potentials of Relevance, American Behavioral Scientist, 6, p. 87.
interest which is another name for intellectual candor and the theoretical interest which defines the nature of the subject-matter. (J. Dewey (1929), The Quest for Certainty: A Study of the Relation of Knowledge and Action, New York: Milton, Balch and Co., pp. 68–69) Elsewhere in the book, Dewey presented the problem in terms of the following fallacy: Independence from any specified application is readily taken to be equivalent to independence from application as such (. . .). The fallacy is especially easy to fall into on the part of intellectual specialists (. . .). It is the origin of that idolatrous attitude toward universals so often recurring in the history of thought. (J. Dewey (1929), The Quest for Certainty: A Study of the Relation of Knowledge and Action, New York: Milton, Balch and Co., p. 154) A second frequently voiced criticism was that motives should be only one of the dimensions for classifying research. Research has multiple dimensions, and any classification system with mutually exclusive categories tends to oversimplify the situation. Basic and applied research can be seen as complementary, rather than opposing, dimensions. Viewed this way, there is no clear-cut boundary between basic and applied research. Instead, there is a spectrum of activities, a continuum, where both types of research overlap and mix.73 Some even argued that there is such a thing as technological research that is basic74 (a contradiction in terms
73 D. Wolfe (1959), The Support of Basic Research: Summary of the Symposium, in Symposium on Basic Research, Washington: AAAS, pp. 249–280; H. Brooks (1967), Applied Research: Definitions, Concepts, Themes, in National Academy of Science, Applied Science and Technological Progress, Washington, pp. 21–55. 74 The term “fundamental technological research” seems to have appeared, to the best of my knowledge, in the 1960s, both at the NSF (1998) (see: D. O. Belanger, Enabling American Innovation: Engineering and the National Science Foundation, West Lafayette: Purdue University Press) and at the OECD (1966) (Technological Forecasting in Perspective, DAS/SPR/66.12, Paris). See also: D. E. Stokes (1997), Pasteur’s Quadrant: Basic Science and Technological Innovation, Washington: Brookings Institution; D. E. Stokes (1982), Perceptions of the Nature of Basic and Applied Science in the United States, in A. Gerstenfeld (ed.), Science Policy Perspectives: USA–Japan, Academic Press, pp. 1–18; D. E. Stokes (1980), Making Sense of the Basic/Applied Distinction: Lessons From Public Policy Programs, in National Science Foundation, Categories of Scientific Research, Washington, pp. 24 –27; L. M. Branscomb (1998), From Science Policy to Research Policy, in L. M. Branscomb and J. H. Keller (eds), Investing in Innovation: Creating a Research Innovation Policy That Works, Cambridge, MA: MIT Press, pp. 112–139; L. M. Branscomb (1993), Targeting Critical Technologies, in L. M. Branscomb (ed.), Empowering Technology: Implementing a US Strategy, Cambridge, MA: MIT Press, pp. 36–63. Pioneers of the idea are historians like E. T. Layton (1974), Technology as Knowledge, op. cit. and W. G. Vincenti (1990), What Engineers Know and How They Know It, Baltimore: Johns Hopkins University Press. For more references, see: J. M. Staudenmaier (1985), Technology’s Storytellers: Reweaving the Human Fabric, Cambridge, MA: MIT Press, Chapter 3.
according to H. Brooks75), and the British government has introduced the concept of basic technology research in its budget documents.76 All these reflections illustrate a long and continuing academic debate on the relationships between science and technology.77 Given the concept’s malleability, several people concluded that the definition was essentially social78 or political,79 and at best needed to protect research from unrealizable expectations.80 Some also argued that the definition rested on moral values. H. Brooks noted, for example, that “there has always been a kind of status hierarchy of the sciences, in order of decreasing abstractness and increasing immediacy of applicability (. . .). Historically a certain snobbery has always existed between pure and applied science.”81 Bernal also talked about snobbery, “a sign of the scientist aping the don and the gentleman. An applied scientist must needs appear somewhat as a tradesman.”82 75 National Science Board (1964), Minutes of the 91st Meeting, January 16–17, NSB-64-4, Attachment 1, p. 4. 76 DTI/OST (2000), Science Budget 2001–02 to 2003–04, London. 77 The literature on the relationship between science and technology is voluminous. For a broad historical overview, see: A. R. Hall (1974), What Did the Industrial Revolution in Britain Owe to Science?, in M. McKendrick (ed.), Historical Perspectives: Studies in English Thought and Society, London: Europa, pp. 129–151; A. Keller (1984), Has Science Created Technology?, Minerva, 22 (2), pp. 160–182; G. Wise (1985), Science and Technology, OSIRIS, 1, pp. 229–246; E. Kranakis (1990), Technology, Industry, and Scientific Development, in T. Frangsmyr (ed.), Solomon’s House Revisited: The Organization and Institutionalization of Science, Canton: Science History Publications, pp. 133–159; P. L. Gardner (1994, 1995), The Relationship Between Technology and Science: Some Historical and Philosophical Reflections, International Journal of Technology and Design Education, Part I (4, pp. 123–153) and Part II (5, pp. 1–33). For a policy perspective, see N. Rosenberg (1991), Critical Issues in Science Policy Research, Science and Public Policy, 18 (6), pp. 335–346; N. Rosenberg (1982), How Exogenous is Science?, in N. Rosenberg, Inside the Black Box: Technology and Economics, Cambridge: Cambridge University Press, pp. 141–159; K. Pavitt (1991), What Makes Basic Research Economically Useful?, Research Policy, 20, pp. 109–119; K. Pavitt (1989), What Do We Know About the Usefulness of Science: The Case for Diversity, SPRU Discussion Paper no. 65; K. Pavitt (1987), The Objectives of Technology Policy, Science and Public Policy, 14 (4), pp. 182–188; H. Brooks, (1994), The Relationship Between Science and Technology, Research Policy, 23, pp. 477–486. 78 N. W. Storer (1964), Basic Versus Applied Research: The Conflict Between Means and Ends in Science, Indian Sociological Bulletin, 2 (1), pp. 34–42. 79 I. B. Cohen (1948), Science Servant of Men, op. cit.; H. A. Shepard (1956), Basic Research and the Social System of Pure Science, Philosophy of Science, 23 (1), pp. 48–57; M. D. Reagan (1967), Basic and Applied Research: A Meaningful Distinction?, op. cit.; G. H. Daniels (1967), The Pure-Science Ideal and Democratic Culture, op. cit.; C. Falk (1973), An Operational, Policy-Oriented Research Categorization Scheme, op. cit.; E. T. Layton (1976), American Ideologies of Science and Engineering, op. cit.; S. 
Toulmin (1980), A Historical Reappraisal, in National Science Foundation (1980), Categories of Scientific Research, Washington, pp. 9–13; T. F. Gieryn (1983), Boundary-Work and the Demarcation of Science From Non-Science: Strains and Interests in Professional Ideologies of Scientists, American Sociological Review, 48, pp. 781–795; R. Kline (1995), Construing Technology as Applied Science: Public Rhetoric of Scientists and Engineers in the United States, 1880–1945, op. cit. 80 H. Brooks (1967), Applied Research: Definition, Concepts, Themes, in H. Brooks (ed.), Applied Science and Technological Progress, Washington: NAS, p. 25. 81 Ibid., p. 51. 82 J. D. Bernal (1939), The Social Function of Science, op. cit., p. 96.
People often denied that they made distinctions between the two types of research, but the arguments were generally fallacious. A common strategy was a variant on the argument from minimizing limitations. For example, A. T. Waterman, the first director of the NSF, noted that “mission-related research is highly desirable and necessary,”83 but recommended looking at “the impressive discoveries (made) solely in the interest of pure science” to appreciate the priority of basic research.84 Similarly, W. Weaver, a member of the NSF’s National Science Board from 1956 to 1960, wrote: “Both types of research are of the highest importance and it is silly to view one as more dignified and worthy than the other (. . .). Yet the whole history of science shows most impressively that scientists who are motivated by curiosity, by a driving desire to know, are usually the ones who make the deepest, the most imaginative, and the most revolutionary discoveries.”85 A symposium on basic research held in New York in 1959 and organized by the National Academy of Sciences (NAS), the AAAS and the Alfred P. Sloan Foundation concluded that no agreement existed on the definition of basic research: “none of these and no other proposed definition survived the criticism of the symposium participants.”86 And an influential report submitted by the NAS in 1965 to the House of Representatives had this to say: the report of the panel on basic research and national goals could only present a diversity of viewpoints rather than a consensus on questions regarding the level of funding basic research deserves from the federal government.87 The alternatives suggested since these discussions have not generated consensus either (see Appendix 22). Brooks suggested classifying research according to its broadness or basic nature.88 Others proposed using terms that corresponded to end-results or use: targeted/non-targeted, autonomous/exogenous, pure/oriented, basic/problem-solving. Ben Martin and J. Irvine, for their part, resurrected the OECD concept of “oriented research,”89 and proposed the term “strategic.”90 Basic research would be distinguished according to whether it was (1) pure or curiosity-oriented, or (2) strategic: “basic research carried out with the expectation that it will produce a broad base of knowledge likely to form the background
83 A. T. Waterman (1965), The Changing Environment of Science, Science, 147, p. 15. 84 Ibid., p. 16. 85 W. Weaver (1960), A Great Age for Science, in Commission on National Goals, Goals for Americans, Columbia University, pp. 107–108. 86 D. Wolfe (1959), The Support of Basic Research: Summary of the Symposium, op. cit., p. 257. 87 National Academy of Sciences (1965), Basic Research and National Goals, Washington. 88 H. Brooks (1980), Basic and Applied Research, in National Science Foundation, Categories of Scientific Research, Washington, pp. 14–18. 89 The term “oriented research” came from the 1960s: according to Freeman et al., fundamental research fell into two categories—free research that is driven by curiosity alone, and oriented research. See: OECD (1963), Science, Economic Growth and Government Policy, Paris, p. 64. 90 Variations on this concept can be found in: G. Holton (1993), On the Jeffersonian Research Program, in Science and Anti-Science, Cambridge, MA: Harvard University Press, pp. 109–125; L. M. Branscomb (1999), The False Dichotomy: Scientific Creativity and Utility, Issues in Science and Technology, Fall, pp. 66–72; D. Stokes (1997), Pasteur’s Quadrant: Basic Science and Technological Innovation, op. cit.
to the solution of recognized current or future practical problems.”91 Still others preferred abandoning the classification and suggested disaggregating research by sector only—university, government, and industry.92 None of these alternatives were unanimously considered advantageous: applied research can be as broad as basic research,93 sectors are often multipurpose,94 as evidenced, for example, by the presence of applied research in universities,95 etc. These were only some of the recent criticisms. The US Society for Research Administrators organized a conference in 1984 to study the topic again.96 The US General Accounting Office (GAO) also looked at the question, and proposed its own taxonomy, separating fundamental research into basic and generic, and adding a mission-targeted category.97 The US Industrial Research Institute (IRI) created an ad hoc Committee on Research Definition that worked between 1971 and 1979.98 IRI concluded that basic research was a category that firms did not use, and suggested replacing basic by exploratory, that is “research which generates or focuses knowledge to provide a concept and an information base for a new development program.”99 How did the NSF and the OECD respond? The NSF took discussions on the limitations of definitions seriously, and was regularly involved in clarification exercises. As early as 1953, it warned its readers about the limitations of the data: Greater caution must be used in interpreting amounts shown for the classifications by character of work [basic/applied] and by scientific category. The complex nature of most Government scientific research and development undertakings, involving as they often do a broad range of fields and
91 J. Irvine and B. R. Martin (1984), Foresight in Science: Picking the Winners, London: Frances Pinter, p. 4. 92 M. D. Reagan (1967), Basic and Applied Research: A Meaningful Distinction?, Science, 155, pp. 1383–1386. 93 D. N. Langenberg (1980), Distinctions Between Basic and Applied Research, in National Science Foundation, Categories of Scientific Research, Washington, pp. 32–36; E. E. David (1980), Some Comments on Research Definitions, in National Science Foundation, Categories of Scientific Research, Washington, pp. 40–42; C. Falk (1973), An Operational, Policy-Oriented Research Categorization Scheme, Research Policy, 2, pp. 186–202. 94 H. Brooks (1980), Basic and Applied Research, op. cit. 95 M. Crow and C. Tucker (2001), The American Research University System as America’s de facto Technology Policy, Science and Public Policy, 28 (1), pp. 2–10; N. Rosenberg and R. Nelson (1994), American Universities and Technical Advance in Industry, Research Policy, 3, pp. 323–348. 96 O. D. Hensley (1988), The Classification of Research, op. cit. 97 General Accounting Office (1987), US Science and Engineering Base: A Synthesis of Concerns About Budget and Policy Development, Washington, pp. 29–30. 98 H. K. Nason (1981), Distinctions Between Basic and Applied in Industrial Research, op. cit.; A. E. Brown (1972), New Definitions for Industrial R&D, Research Management, September, pp. 55–57. 99 Industrial Research Institute (IRI) (1978), Definitions of Research and Development, New York. Thirty years later, IRI definitions are no more than labels: the institute uses NSF data on traditional basic research to talk of “directed” basic research and “discovery-type” research in industry. See C. F. Larson (2000), The Boom in Industry Research, Issues in Science and Technology, Summer, pp. 27–31.
disciplines of science and extending from purely basic to development, do not lend themselves easily to categorization. Judgments employed in making estimates are apt to vary from agency to agency. In addition, points of view of the reporting agencies tend to influence their judgments in certain directions. (NSF (1953), Federal Funds for Science: Federal Funds for Scientific R&D at Nonprofit Institutions, 1950–1951 and 1951–1952, Washington, p. 5) The difficulties of classifying research and development activities by character of work and scientific field are somewhat greater than the original determination of what constitutes R&D in the first instance. As a result the distributions in this section are generally less reliable than amounts shown elsewhere in this report (. . .). Because of these difficulties, the distributions should be taken as indications of relative orders of magnitude rather than accurate measures. (NSF (1953), Federal Funds for Science: The Federal R&D Budget, Fiscal Years 1952 and 1953, Washington, p. 8) The limitations were particularly acute in the case of industry. At the end of the 1980s, for example, only 62 percent of companies reported data on basic research. As a consequence, the NSF had to devise a new method for estimating basic research in industry.100 Second, the NSF deliberated regularly on the problem: it organized a seminar in 1979 on categories of scientific research;101 it studied R&D definitions for tax purposes in the mid-1980s;102 and it created a task force on R&D taxonomy in 1988.103 The task force suggested three categories instead of the standard two—basic and applied: the three were fundamental, strategic, and directed.104 The definitions narrowed the scope of basic research by splitting it into two further types, fundamental and strategic (which amounts to what is called basic research in industry). Also, the term “directed” significantly modified the sense of applied research so that it concerned what we usually call applied research and most of government research. None of these efforts, however, had any consequences for the NSF definitions and surveys. Third, NSF representatives occasionally abandoned the dichotomy between basic and applied research. For example, in the NSF’s first annual report, J. B. Conant, chairman of the NSB, wrote: “we might do well to discard altogether the phrases applied research and fundamental research. In their place I should put
100 NSF (1990), Estimating Basic and Applied R&D in Industry: A Preliminary Review of Survey Procedures, NSF (90) 322, Washington. 101 NSF (1980), Categories of Scientific Research, op. cit. 102 H.R. Hertzfeld (1985), Definitions of Research and Development for Tax Credit Legislation, NSF: Syscon Corporation. 103 NSF (1989), Report of the Task Force on R&D Taxonomy, op. cit. 104 Ibid., p. 3.
the words programmatic research and uncommitted research.”105 Similarly, A. T. Waterman distinguished two kinds of basic research—free and mission-oriented: “Basic research activity may be subdivided into free research undertaken solely for its scientific promise, and mission-related basic research supported primarily because its results are expected to have immediate and foreseen practical usefulness.”106 These liberties on the part of individuals were rather exceptional, however, and again had no consequences. The “NSF’s entire history resonates with the leitmotiv of basic versus applied research.”107 Former NSF director D. N. Langenberg once explained: the NSF “must retain some ability to characterize, even to quantify, the state of the balance between basic and applied research across the Foundation. It must do so in order to manage the balance properly and to assure the Congress and the scientific and engineering community that it is doing so.”108 Finally, what really had a long-lasting effect was the decision to use two definitions of basic research in the surveys instead of one. The first definition is the traditional one, and would thereafter be used in government and university surveys; the second was added specifically for the industrial survey: Research projects which represent original investigation for the advancement of scientific knowledge and which do not have specific commercial objectives, although they may be in the fields of present or potential interest to the reporting company. (National Science Foundation (1959), Science and Engineering in American Industry: Report on a 1956 Survey, Washington, NSF 59–50, p. 14) This was in fact the implicit recognition that only oriented research—and not basic research—existed in industry. If measured according to the standard definition, little money would have been classified as being spent on basic research in industry. As for the OECD, definitions were discussed for each revision of the Frascati manual. The first meeting in 1963 brought together national experts from several countries, chief among them the United States (NSF). K. S. Arnow109 and K. Sanow110 discussed at length the difficulties of defining appropriate concepts for surveys. Indeed, for some time the NSF devoted a full-time person specifically to this task—K. S. Arnow. C. Oger from France (DGRST) discussed
105 National Science Foundation (1951), First Annual Report: 1950–1951, Washington, p. VIII. 106 Waterman (1965), The Changing Environment of Science, op. cit., p. 15. 107 D. O. Belanger (1998), Enabling American Innovation: Engineering and the NSF, op. cit., O. N. Larsen (1992), Milestones and Millstones: Social Science at the NSF, 1945–1991, New Brunswick: Transaction Publishers. 108 D. N. Langenberg (1980), Memorandum for Members of the National Science Board, NSB-80–358, Washington, p. 4. 109 OECD (1963), Some Conceptual Problems Arising in Surveys of Scientific Activities, K. S. Arnow, DAS/PD/63.37. 110 OECD (1963), Survey of Industrial Research and Development in the United States: Its History, Character, Problems, and Analytical Uses of Data, K. Sanow, DAS/PD/63.38.
the limitations of a definition based exclusively on researchers’ motives and suggested alternatives.111 His suggestion appeared without discussion in an appendix to the first edition of the Frascati manual. Discussions continued over the following few years and resulted in the addition of a brief text to the second edition of the manual. In 1970, and in line with a 1961 UNESCO document,112 the OECD discussed a sub-classification of basic research according to whether it was pure or oriented. Pure basic research was defined as research in which “it is generally the scientific interest of the investigator which determines the subject studied.” “In oriented basic research the organization employing the investigator will normally direct his work toward a field of present or potential scientific, economic or social interest.”113 Despite these clarifications, few countries produced numbers according to the new definitions. Discussions resumed in 1973. C. Falk, of the NSF, proposed to the OECD a definition of research with a new dichotomy based on the presence or absence of constraints. He suggested “autonomous” when the researcher was virtually unconstrained and “exogenous” when external constraints were applied to his program.114 He recommended that some form of survey be undertaken by the OECD to test the desirability and practicality of the definitions. He had no success: “the experts (. . .) did not feel that the time was ripe for a wholesale revision of this section of the manual. It was suggested that as an interim measure the present division between basic and applied research might be suppressed.”115 However, the only modification that member countries accepted—to appear in the 1981 edition of the Frascati manual—was that the discussion of pure and oriented basic research be transferred to another chapter, separated from the conventional definitions. Then, in 1992, two governments tried to introduce the term “strategic research” into the Frascati manual—the United Kingdom and Australia, the latter going so far as to delay the publication of the manual:116 “original investigation undertaken to acquire new knowledge which has not yet advanced to the state when eventual applications to its specific practical aim or objective can be clearly specified.”117 After “lively discussions,” as the Portuguese delegate
111 OECD (1963), Critères et Catégories de recherche, DAS/PD/63.30, op. cit. 112 P. Auger (1961), Tendances actuelles de la recherche scientifique, Paris: UNESCO, p. 262. 113 OECD (1970), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, Paris, p. 10. 114 OECD (1973), The Sub-Division of the Research Classification: A Proposal and Future Options for OECD, C. Falk, DAS/SPR/73.95/07. 115 OECD (1973), Results of the Meeting of the Ad Hoc Group of Experts on R&D Statistics, DAS/SPR/73.61, p. 8. 116 This is only one of two discussions concerning the taxonomy of research at the time. A new appendix was also suggested but rejected. It concerned distinguishing between pure and “transfer” sciences. See: OECD (1991), Distinction Between Pure and Transfer Sciences, DST/STII (91) 12; OECD (1991), The Pure and Transfer Sciences, DSTI/STII (91) 27. 117 OECD (1992), Frascati Manual—1992, DSTI/STP (92) 16; OECD (1993), The Importance of Strategic Research Revisited, DSTI/EAS/STP/NESTI (93) 10.
described the meeting,118 they failed to win consensus. We read in the 1993 edition of the Frascati manual that “while it is recognized that an element of applied research can be described as strategic research, the lack of an agreed approach to its separate identification in member countries prevents a recommendation at this stage.”119 The 1992 debate at the OECD centered, among other things, on where to locate strategic research. There were three options. The first was to subdivide the basic research category into pure and strategic, as the OECD suggested. The second was to subdivide the applied research category into strategic and specific, as the British government did. The third was to create an entirely new category (strategic research), as recommended by the Australian delegate.120 In the end, “delegates generally agreed that strategic research was an interesting category for the purposes of S&T policy but most felt that it was very difficult to apply in statistical surveys.”121 In 2001, the question was on the agenda again during the fifth revision of the Frascati manual.122 This time, countries indicated a “strong interest in a better definition of basic research and a breakdown into pure and oriented basic research” but agreed that discussions should be postponed and addressed in a new framework once they had advanced on policy and analytical grounds.123 The United Kingdom was the only country to have openly debated the definitions—with Australia—and to have adopted an alternative to the OECD’s definition of basic research for its surveys on R&D. Twice since the 1970s, the House of Lords Select Committee on Science and Technology has discussed the taxonomy of research, first in response to the green paper on science. In the latter, A Framework for Government R&D (1971), Lord Rothschild chose a simple dichotomy (basic/applied) on the grounds that “much time can be lost in semantic arguments about the nature of basic research, its impact, accidental or otherwise, on applied research, and the difference between them.”124 In fact, Rothschild identified forty-five “varieties” or taxonomies of research in the literature.125 The Select Committee discussed the policy document in 1972, and thought otherwise: the
118 OECD (1993), Treatment of Strategic Research in the Final Version of Frascati Manual—1992, DSTI/EAS/STP/NESTI/RD (93) 5. 119 OECD (1994), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, op. cit., p. 69. 120 See OECD (1991), Ventilation fonctionnelle de la R-D par type d’activité, DSTI/STII (91) 7. 121 OECD (1993), Summary Record of the NESTI Meeting, DSTI/EAS/STP/NESTI/M (93) 1, p. 5. 122 OECD (2000a), Review of the Frascati Manual: Classification by Type of Activity, DSTI/EAS/ STP/NESTI/RD(2000)4; OECD (2000b), Ad Hoc Meeting on the Revision of the Frascati Manual R&D Classifications: Basic Research, DSTI/EAS/STP/NESTI/RD (2000) 24. 123 OECD (2000), Summary Record, DSTI/EAS/STP/NESTI/M (2000) 1, p. 5. A workshop on basic research was therefore held in October 2001 in Oslo: OECD (2002), Workshop on Basic Research: Policy Relevant Definitions and Measurement: Summary Report. 124 HMSO (1971), A Framework for Government Research and Development, London, p. 3. 125 L. Rothschild (1972), Forty-Five Varieties of Research (and Development), Nature, 239, pp. 373–378.
various definitions in existence obscured the real issue, and there was a need for agreement on a standardized definition.126 Upon analysis of the question, the committee asked three funding councils (environment, agriculture, medical) to submit statistics to the Lords using a more refined classification based on the so-called Zuckerman definition: basic, basic-strategic, oriented-strategic, and applied.127 The committee recommended a special study of the problem with a view to drawing up standard definitions.128 In 1990, the committee studied the question again in a session entirely devoted to R&D definitions.129 It noted that the largest defect in the OECD definitions concerned strategic research, and recommended that “the Frascati manual should be amended to cater better to strategic research.”130 The committee did not recommend creating a new category, but rather locating strategic research in either the basic or applied category. There still remained the problem, however, of deciding which category. Today, the United Kingdom is one of the few countries (together with Australia) that publish numbers using the oriented and strategic subclasses.131 Since the 1985 edition of the Annual Review of Government Funded R&D, the British government has produced statistics according to the following classification: (1) basic-pure, (2) basic-oriented, (3) applied-strategic, and (4) applied-specific. Strategic research is defined in the Annual Review as “applied research in a subject area which has not yet advanced to the stage where eventual applications can be clearly specified.”132 It differs, however, from the House of Lords Select Committee on Science and Technology’s definition: “research undertaken with eventual practical applications in mind even though these cannot be clearly specified.”133 In sum, despite official definitions (Frascati manual), governments use their own classification (United Kingdom) or do not use any.134 Departments also have their own definitions: this is the case for Defense and Space, for example.135 In the
126 HMSO (1972), First Report from the Select Committee on Science and Technology, London, pp. XIV–XV. 127 HMSO (1961), The Management and Control of R&D, London: Office of the Minister of Science, pp. 7–8. 128 HMSO (1972), First Report from the Select Committee on Science and Technology, op. cit., p. 15. 129 HMSO (1990), Definitions of R&D, Select Committee on Science and Technology, HL Paper 44, London. 130 Ibid., p. 12. 131 See for example: HMSO (1999), Science, Engineering and Technology Statistics 1999, London: DTI/OST. 132 HMSO (1985), Annual Review of Government Funded R&D, London, p. 183. 133 HMSO (1990), Definitions of R&D, op. cit., p. 11. 134 In fact, since the mid-1970s, governments started to delete the question on basic research from their surveys. 135 NAS (1995), Allocating Federal Funds for Science and Technology, Washington; OECD (1994), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development, op. cit., Chapter 12; H. A. Averch (1991), The Political Economy of R&D Taxonomies, Research Policy, 20, pp. 179–194; HMSO (1990), Definitions of R&D, op. cit.
1970s, the OECD itself deleted the question on basic research from the list of mandatory questions on the R&D questionnaire, and rarely published numbers on basic research except for sector totals, because of the low quality of the data and because too many national governments failed to collect the necessary information.136 All in all, it seems that many no longer judge the current definition of basic research to be useful for policy purposes,137 at least not as useful as the concept was in the 1950s, during the NSF’s crusade for government funding. To D. Stokes, the definitions “have distorted the organization of the research community, the development of science policy, and the efforts to understand the course of scientific research”;138 the dichotomy “has distorted the research agendas of the so-called mission agencies” because it has limited research support to pure applied research139 and constrained the NSF to pure basic research.140 For L. Branscomb, NSF’s “definitions are the source of much of the confusion over the appropriate role for government in the national scientific and technical enterprise.”141 But why, if numbers solidify concepts according to A. Desrosières, has the definition of basic research not gained strength over time? Why has the definition remained in the OECD methodological manual and in national surveys despite serious reservations? The reasons are many. The first is its institutional referent: the fact that basic research is conducted mainly in a specific and dedicated institution (universities) was an important reason for the persistence of the category. The second is statistics themselves. As seen in the discussions that took place during the 1992 OECD meeting of national experts, there has been a desire to preserve historical distinctions and the statistical series associated with them. As a result, experts were encouraged to move “toward how strategic research might be accommodated by drawing distinctions within the basic and applied categories, rather than by cutting across the categories.”142 Since the concept of basic research has a relatively long history—and a politically charged history at that—since the definition was inscribed from the start in
136 The only numbers appear in Basic Science and Technology Statistics, but missing data abound. See: OECD (1964), Some Notes on Expenditures for Fundamental Research, C.S-C1/CC/2/64/3. 137 Neither is it judged useful by industrialists (see: H. K. Nason (1981), Distinctions Between Basic and Applied in Industrial Research, op. cit.), nor governments. According to the NSF itself, industrial representatives “prefer that the NSF not request two separate figures” (basic and applied), but “the Foundation considers it to be extremely important” to distinguish both (K. Sanow (1963), Survey of Industrial R&D in the United States: Its History, Character, Problems, and Analytical Uses of Data, paper presented at the OECD Frascati meeting, DAS/PD/63.38, p. 13). With regard to government representatives, the report of the second OECD users group reported that the least-popular of all the standard indicators were those concerning basic research, applied research, and experimental development: OECD (1978), Report of the Second Ad Hoc Review Group on R&D Statistics, STP (78) 6. 138 D. Stokes (1982), Perceptions of the Nature of Basic and Applied Science in the United States, op. cit., p. 2. 139 Ibid., p. 14. 140 Ibid., p. 15. 141 L. Branscomb (1998), From Science Policy to Research Policy, op. cit., p. 120. 142 D. Stokes (1997), Pasteur’s Quadrant: Basic Science and Technological Innovation, op. cit., p. 69.
the Frascati manual, and since we possess a statistical series running back to the 1960s, it would take strong arguments to counter the inertia.143
Conclusion
Basic research is a central category for the measurement of science. Taxonomies have occupied academics, governments and statisticians for seventy years. In the course of these efforts, the concept passed from a period in which it was only loosely defined to a precise definition, for survey purposes, centered on the motivations of the researchers and the non-application of research results. The concept became institutionalized because agencies were specifically created to fund basic research, but also because of statistics. Without surveys and numbers, the concept would probably never have congealed—or at least not in the way it did, because the criticisms were too numerous and frequent. The history of the concept is not a linear story, however. Even though Huxley and Bush launched the concept in taxonomies and the NSF appropriated and institutionalized it immediately, it nevertheless did not enjoy consensus among countries, institutions, and individuals. The history of the concept and its measurement centers on three stages or periods. The first stage is one in which basic research went by different labels: pure, fundamental, and basic were used interchangeably to refer to a similar object—an object defined with related notions of knowledge, freedom and curiosity. The second stage is that of the institutionalization of the term and of the concept of “basic research.” This emerged because both Bush and the President’s Scientific Research Board argued for it, Bush for political reasons, and the Board for quantitative ones. It was then institutionalized by the NSF and the OECD. The survey was one of the main vehicles for this institutionalization. The third stage, partly overlapping with the second, is one in which the concept was criticized and sometimes even abandoned, even though it persists in several countries. All in all, statistics were influential in helping to give basic research political identity and value. This lasted from 1947 (the President’s Scientific Research Board report) to the beginning of the 1970s. The concept and its measurement remained closely tied as long as the interests of policy-makers and academics were served. “Things described by statistics are solid and hold together (. . .) to the extent that they are linked to hard social facts: institutions, laws, customs, etc.”144 “Clusters [statistics] are justified if they render action possible, if they create things which can act and which can be acted upon.”145 When interests began to clash in the 1970s, however, the stabilizing force of statistics deteriorated. It was a time when basic and applied were increasingly recognized as not mutually exclusive, but 143 There were associated practical reasons as well, such as accounting: institutions collect information for operational purposes, not for statistics. 144 A. Desrosières (1990), How to Make Things Which Hold Together: Social Science, Statistics and the State, op. cit., p. 198. 145 Ibid., p. 200.
also when oriented research began to be seen as far more important for policy-makers than basic research per se. Whether or not research was categorized in a valid manner suddenly made a difference. More and more people began to look seriously at the then-current definitions used for statistical purposes in order to challenge them. The OECD was the platform where such discussions were held. People started using new definitions, and tried to produce appropriate numbers: strategic research (United Kingdom) and, later in the 1990s, innovation. Today, basic research holds second place in the taxonomies of research; it has become the residual. The basic/applied dichotomy, in which basic came first and applied second, has been replaced in R&D statistics (when broken down by socioeconomic objective) by an oriented/non-oriented (basic) distinction, in which non-oriented research is the residual category.146 This is a complete reversal of the traditional hierarchy. Contrary to what Lord Rothschild thought, issues surrounding definitions are not merely semantic. The basic/applied dichotomy has led to numerous debates about where the responsibility of government funding ends and that of industry begins. Categorization is important, as the UK Select Committee argued, “because wrong orientation could have repercussions on funding.”147 Definitions often entail large sums of money. In fact, “once a class of research is identified as potentially helpful (. . .) a funding program usually follows.”148 This is why official definitions and statistics matter.
146 See: European Union (2001), Statistics in Focus, 2, p. 4; OECD (2001), Main Science and Technology Indicators, 1, Paris, p. 48. 147 HMSO (1990), Definitions of R&D, op. cit., pp. 11–12. 148 O. D. Hensley (1988), The Classification of Research, op. cit., p. 9.
15 Are statistics really useful? Myths and politics of science and technology indicators
In 1971, the OECD published its sole official science policy document of the decade: Science, Growth and Society.1 The report departed from the historical OECD emphasis on economic considerations with regard to the objectives of science policy, and suggested aligning those objectives with social ones:2 The science of economics, despite all the refinements it has undergone in this century, has not been able to give policy-makers the kinds of advice they need. (p. 33) During the 1960s, science policy in the OECD countries was considered as an independent variable of policy, only loosely related to the total social and political context. Just as economic growth came to be regarded as an end in itself, rather than as a means to attain certain social goals, so science policy became attached to the “research ratio” as a kind of touch-stone of scientific success independent of the content of R&D activity or its coupling to other policy objectives. (OECD (1971), Science, Growth, and Society: A New Perspective, Paris, p. 45) The Brooks report, as it was called, had few long-term consequences for OECD science policy orientations. It was rather an erreur de parcours (a misstep) in OECD history: economics would continue to drive national science policies and objectives in member countries, feed the main OECD S&T policy documents, and guide S&T statistics. Over the period 1961–2000, the central aim of science policy was, in line with the economic literature of the time, to bend S&T to economic ends.3 Thinking in economic terms meant that empirical data and statistics would be the quintessence of OECD analyses and deliberations.4 Chris Freeman, an 1 OECD (1971), Science, Growth, and Society: A New Perspective, Paris. 2 For similar views, see also: C. Freeman et al. (1971), The Goals of R&D in the 1970s, Science Studies, 1, pp. 357–406. 3 Reconstruction of Europe (1950s), economic growth (1960s), technological gaps (1970s), and innovation (1980s). 4 For historical considerations on the influence of the economy and economics on science and statistics, see: J. Kaye (1998), Economics and Nature in the Fourteenth Century: Money, Market Exchange, and the Emergence of Scientific Thought, Cambridge: Cambridge University Press; M. Poovey (1998), A History of the Modern Fact, op. cit.
economist at the National Institute of Economic and Social Research (London) from 1959 to 1965, where he worked on several studies of research and innovation in industry, was one of the main figures behind this thinking. He was involved in most of the OECD S&T analyses of the 1960s, and it was he who produced the first edition of the Frascati manual aimed at collecting standardized statistics on R&D. In 1966, he founded—and directed until 1982—the Science Policy Research Unit (SPRU) at the University of Sussex, a research center dedicated to quantitative analyses of S&T policies. He also continued to be involved in OECD policy deliberations until the 1990s. This chapter aims to clarify what people understood when they talked about the usefulness of statistics for decision-making. This usefulness generally has to do with the second of four possible uses of statistics: theoretical, practical, ideological/symbolic, and political. With regard to OECD statistics, this chapter argues that the latter two were just as important, because, as R. R. Nelson argued, policy-making goes “considerably beyond questions whether to spend, and if so how and on what.”5 The first part argues that it was economists’ dream of making science policy more “scientific” that explains the development of S&T statistics at the OECD in the 1960s. The period was, in fact, one where “rational” management tools were promoted in every government administration: planning and statistics. Statistics, more particularly, were supposed to make science policy less haphazard and more enlightened.6 The next two parts confront economists’ and state statisticians’ rhetoric with reality. They show how discourses on the usefulness of statistics were often exercises in rhetoric to legitimize statisticians’ work or justify governments’ choices. Either statistics could not be developed to answer fundamental policy questions (second part), or the few existing R&D statistics all appeared after policies were enacted (third part).
Rationalizing science and technology policy
Positivism was an influential doctrine in certain philosophical milieus in the nineteenth and twentieth centuries. The doctrine held up the methodology of the natural sciences, namely quantitative and empirical research, as the model for the conduct of all sciences.7 Positivism was particularly popular in the social sciences: the thinking was that social scientists had to align themselves with the methodology of the natural sciences if they were to be successful in their endeavors.8 “Mathematical measures 5 R. R. Nelson (1977), The Moon and the Ghetto, op. cit., p. 35. 6 For similar arguments from economists involved in early government activities, see: G. Alchon (1985), The Invisible Hand of Planning: Capitalism, Social Science, and the State in the 1920s, Princeton: Princeton University Press. 7 L. Kolakowski (1968), The Alienation of Reason: A History of Positivist Thought, New York: Doubleday; T. Sorell (1991), Scientism: Philosophy and the Infatuation with Science, London: Routledge. 8 See for example: D. Ross (1991), The Origins of American Social Science, Cambridge: Cambridge University Press; J. Heilbron (1995), The Rise of Social Theory, Minneapolis: University of Minnesota Press.
are the most appropriate tools for human reason to comprehend the complexities of the real world,” once wrote the UNESCO Working Group on S&T statistics.9 Karl Popper,10 Friedrich Hayek,11 and John Maynard Keynes12 all criticized the doctrine, as applied to the social sciences, for being too reductionist. To Hayek, scientism was the “slavish imitation of the method and language of Science. (. . .) It involves a mechanical and uncritical application of habits of thought to fields different from those in which they have been formed” (p. 24). Scientism was, according to Hayek, at the heart of social engineering and planning: the “desire to apply engineering techniques to the solution of social problems” (p. 166), “effectively using the available resources to satisfy existing needs” (p. 176). At the OECD, the doctrine was most evident in three of the organization’s work “programs”: institution building, forecasting, and the production of statistics.
Setting up offices of science policy
The first task to which the OECD dedicated itself with regard to S&T policy was persuading member countries to draw up national science policies13 and set up central units or coordinating offices “that can see the problem of both science and the nation from a perspective wider than that available to any individual scientist or organization.”14 The science office envisaged by the Piganiol report was not a science ministry or an executive science agency.15 The office should be an advisory body, without official line of authority in the government structure, and with a supporting staff (p. 36). It should concern itself with questions of the consistency, comprehensiveness, support, organization, evaluation, coordination, and long-term trends and implication of all the nation’s activities in research, development, and scientific education, both in and out of Government, and on the domestic and the international scenes. (. . .) The functions of the Office should be to monitor these several activities, to aid in establishing priorities among them, and to foster the multitude of organizational and operational connections among different agencies and institutions out of which policy ultimately emerges. (p. 37)
9 UNESCO (1972), Considerations on the International Standardization of Science Statistics, COM72/CONF.15/4, p. 4. 10 K. Popper (1957), The Poverty of Historicism, London: Routledge and Kegan Paul. 11 F. von Hayek (1952), The Counter-Revolution in Science: Studies on the Abuse of Science, Indianapolis: Liberty Fund (1979). 12 J. M. Keynes (1938), Letter addressed to R. F. Harrod, in The Collected Writings of J. M. Keynes, Vol. XIV, London: Macmillan, 1973, p. 300. 13 OECD (1960), Co-Operation in Scientific and Technical Research, Paris, p. 24. 14 OECD (1963), Science and the Policies of Government, Paris, p. 34. 15 Ibid., p. 35.
The report continued: “the tasks of a Science and Policy Office will naturally divide into information gathering on the one hand, and advisory and coordinating activities on the other. The latter will require a sound factual basis” (p. 39). The following list of tasks was thus suggested (pp. 40–41):

Information to be collected
1 data, analyses, and evaluations of money and manpower investment in research and development;
2 periodic state-of-the-art surveys in selected major scientific and technical fields;
3 projections of future needs for scientific and technical personnel;
4 data on the organization and management of institutions engaged in research, development, and education;
5 data about trends and activities in research, technology, and education in other countries;
6 studies of factors affecting the training, employment, motivation, and mobility of scientists and engineers;
7 data and case studies on the contributions of research and technology to economic development, social change, national defense, international co-operation, etc.
Coordinating activities
1 determination of, or advice on, the nation’s research and development priorities;
2 recommendation on the size and distribution of the part of the national budget devoted to research and development, including the proportion that should be devoted ab initio to the support of basic research;
3 co-ordination of the scientific plans and policies of government agencies, and advice to individual departments on preparation of their research and development budget submissions to the national treasury;
4 consultation with government departments concerning ways of exploiting scientific opportunities in the formulation of policy;
5 recommending measures to establish or strengthen research institutions and to stimulate increased research and development activity in non-governmental sectors of society;
6 making information, advice, and possibly also some consulting services available on request to any sector of the society engaged in research and development activities;
7 initiation and monitoring of scientific and development programs of national scope;
8 co-ordination of national participation in international scientific activities.
It would not be long before member countries acted. When the OECD organized the first ministerial conference on science in 1963, only four countries had ministries of science. By the second conference in 1966, three-quarters of governments had one.
Forecasting research
The Piganiol report put enormous emphasis on two functions of the office of science policy: planning (by way of determining priorities, establishing the science budget, coordinating other agencies and monitoring programs) and statistics. In line with planning, two activities became buzzwords of OECD science policy thought in the 1960s and 1970s: forecasting and technology assessment.16 In fact, “most governments can support only a fraction of the R&D projects which are proposed to them. (. . .) Governments must, to some extent, determine priorities within science and technology.”17 In order to allocate resources, “research planning is not only possible but inevitable,” reported J.-J. Salomon, head of the OECD Science Policy Division, in his review of the 1967 OECD seminar on the problems of science policy.18 And he continued: “an entirely different process of thought and direction is required: planning entails forecasting.” Some thought differently. To such questions as “is it possible for the economist to say what proportion of GNP should be devoted to R&D?,” J. R. Gass, Deputy Director of the Directorate of Scientific Affairs (DSA), replied: There is no magic percentage figure which enables us to escape the nuts and bolts of relating R&D expenditures to economic and other objectives. (. . .) Such endeavours run into several major obstacles. Firstly, economic objectives are rarely explicitly defined. Secondly, the economic planners have not yet become accustomed to incorporating, or indeed making explicit, the technological assumptions underlying their economic forecasts. Thirdly, many of the decisions relating to R&D are unforeseeable not only by nature, but also because they are made by private entrepreneurs in domains where commercial secrecy is important. (J. R. Gass (1968), Science and the Economy: Introduction, in OECD, Problems of Science Policy, op. cit., p. 52) In a similar vein, H. Brooks from the US President’s Science Advisory Committee maintained that “Many of the current demands for better scientific planning are probably as naïve as the early demand for economic planning.”19 However, he did leave a place for forecasting, adding: “We have to develop a 16 J.-J. Salomon (1970), Science et Politique, Paris: Seuil, pp. 157–228. 17 OECD (1966), Government and the Allocation of Resources to Science, Paris, p. 11. Similar statements can be found in OECD (1965), Ministers Talk About Science, Paris. 18 J.-J. Salomon (1968), A Review of the Seminar, in OECD, Problems of Science Policy, Paris, p. 11. 19 H. Brooks (1968), Can Science Be Planned?, in OECD, Problems of Science Policy, op. cit., p. 111.
much more sophisticated understanding of how the existing system works before we can control it.” E. Jantsch and C. Freeman, both consultants at the OECD, were representative figures of the time with regard to the “scientification” movement in science policy. The former had recently produced a document for the OECD on technological forecasting that analyzed over a hundred basic approaches and techniques.20 At the 1967 OECD seminar, Jantsch suggested: “A recently perfected and potentially most valuable planning tool for science policy is technological forecasting.”21 Although Jantsch took great pains to distinguish forecasting from prediction, the latter being rather deterministic and focused on technical achievements, the message was clear: there was now a “necessity of anticipating advances” (p. 115). “The allocation of funds to fundamental research is one of the classical problems of public science policy. (. . .) Technological forecasting now provides effective tools for translating our future more clearly into structural terms right down to the level of fundamental science” (p. 118). The desire for forecasting was reinforced considerably by the Brooks report, which delivered a sharp criticism of science policies of the 1960s because of their failure to foresee and forestall the present difficulties: the support of a broad range of free basic research had produced a growth of disciplines but not socially useful results, it was argued.22 “The imperative for the coming decade is, then, the management and orientation of technological progress,” claimed the Brooks report (p. 36). “Each government should establish, at Ministerial level or in a manner independent of the Executive, a special structure that would be responsible for anticipating the likely effects, threatening or beneficial, of technological initiatives and developments” (p. 106). Technological assessment, a new social technique developed in the 1960s, particularly in the United States23 (and which led to the establishment of the US Office of Technology Assessment in 1972), would allow governments to evaluate the social costs of existing civilian and military technologies in the form of pollution, social disruptions, infrastructure costs, etc., to anticipate the probable detrimental effects of new technologies, to devise methods of minimizing these costs, and to evaluate the possible benefits of new or alternative technologies in connection with existing or neglected social needs. (p. 82)
20 OECD (1966), Technological Forecasting in Perspective: A Framework for Technological Forecasting, its Techniques and Organization, a Description of Activities and Annotated Bibliography, DAS/SPR/66.12. 21 E. Jantsch (1968), Technological Forecasting: A Tool for a Dynamic Science Policy, in OECD, Problems of Science Policy, op. cit., p. 113. 22 J. Ben-David (1977), The Central Planning of Science, in J. Ben-David (1991), Scientific Growth, Berkeley: University of California Press, p. 269. 23 NAS (1969), Technology: Processes of Assessment and Choice, Washington; NAE, A Study of Technology Assessment, Washington.
Table 15.1 Main OECD Projects concerning “rational” science policies

Technology Assessment (1964–89)
Technological Forecasting
Social Assessment of Technology a
New Urban Transportation Systems
Humanized Working Conditions
Telecommunication Technologies as an Instrument of Regional Planning
Impact of New Technologies on Employment
Societal Impacts of Technology
Systemic Methods in Science Policy and the Problem of Planning the Allocation of Resources (1970–75)
Research Evaluation (1980s)

a The program of work was under the supervision of an advisory group on control and management of technology from 1972 to 1976.
Over three decades, the OECD would promote, through seminars, studies and methodological documents, the ideas of forecasting24 and planning25 as ways to establish socioeconomic goals and priorities and to coordinate efforts toward the attainment of those goals. To these tools, research evaluation would be added in the 1980s and subsequently.
Collecting statistics
C. Freeman was a partisan of operational research, system analysis and technological forecasting: “There is no reason why these methodologies, developed for military purposes but already used with success in such fields as communication and energy, could not be adapted to the needs of civilian industrial technology.”26 In 1971, he suggested a three-stage methodology for technology 24 Technological Forecasting in Perspective (E. Jantsch), 1967; Society and the Assessment of Technology (F. Hetman), 1973; Methodological Guidelines for Technology Assessment Studies, 1974, DAS/SPR/73.83, DAS/SPR/74.1-7, DAS/SPR/74.22; Social Assessment of Technology, 1976, STP (76) 21; Facing the Future: Mastering the Probable and Managing the Unpredictable, 1979; Assessment of the Societal Impacts of Technology, 1981, STP (81) 21. Seminars, symposiums and conferences: Seminar on Technology Assessment (1972); Symposium on Technology Assessment (1989). During the 1990s, the activities of the OECD on these topics were less systematic: a seminar on technology foresight was held in 1994 (followed by a special issue of STI Review in 1996), and the proceedings of a conference (1997) on future technologies were published: OECD (1998), 21st Century Technologies: Promises and Perils of a Dynamic Future, Paris. Another paper, intended for the 1998 issue of Science, Technology and Industry Outlook, was never published: OECD (1997), Technology Foresight: Outlook and Predictions, DSTI/IND/STP (97) 7. 25 Analytical Methods in Government Science Policy: An Evaluation (DAS/SPR/70.53); Allocation of R&D Resources: A Systemic Approach (STP (73) 20); Prospective Analysis and Strategic Planning (STP (75) 18); Planning and Anticipatory Capacity in Government (STP (80) 14); Medium and Long-Term R&D Expenditures Planning in OECD Countries (STP (83) 8). Seminar: Methods of Structural Analysis (1973). 26 OECD (1963), Science, Economic Growth and Government Policy, C. Freeman, R. Poignant and I. Svennilson, op. cit., p. 73; see also: C. Freeman (1971), Technology Assessment and its Social Context, Studium Generale, 24, pp. 1038–1050.
assessment that should start with “economic mathematical methods for rendering explicit the value judgments which are implicit in our present institutional and legal system of controlling technology (. . .).”27 At the 1967 OECD seminar, however, Freeman talked about R&D statistics as “the” tool for rational management of science policy. Trying to follow a science policy, to choose objectives and to count the costs of alternative objectives, without such statistics is equivalent to trying to follow a full employment policy in the economy without statistics of investment or employment. It is an almost impossible undertaking. The chances of getting rational decision-making are very low without such statistics. (C. Freeman (1968), Science and Economy at the National Level, in OECD, Problems of Science Policy, op. cit., p. 58) H. Roderick, head of the OECD Division for Research Co-operation in the 1960s, unambiguously shared this enthusiasm: “you have to think in quantitative terms, you have to take into account measurements, no matter how poor or how crude your estimates are.”28 As we have already discussed, in the early days of the OECD, thinking on science policy had been driven by economic considerations,29 and therefore by empirical data. This had a lasting influence on the organization. As early as 1963, C. Freeman et al., in a document that synthesized the results of the DSA’s program of studies on the “economy of research” and served as a background to the first ministerial conference held in 1963, made the following assessment: most countries have more reliable statistics on their poultry and egg production than on their scientific effort and their output of discoveries and inventions. (. . .) The statistics available for analysis of technical change may be compared with those for national income before the Keynesian revolution.30 (OECD (1963), Science, Economic Growth and Government Policy, Paris, pp. 21–22) A pity, since the Piganiol report stated: “Informed policy decisions (. . .) must be based on accurate information about the extent and forms of investment in research, technological development, and scientific education. (. . .) Provision for 27 C. Freeman et al. (1971), Technology Assessment and its Social Context, op. cit., p. 393. 28 H. Roderick (1968), Fundamental Research and Applied R&D: Introduction, in OECD, Problems of Science Policy, op. cit., p. 92. 29 Although one can often read sentences like the following in OECD reports: “The formulation of a national research policy must take into account non-economic objectives as well as economic ones; and the former may even sometimes take precedence, and will in any case have a major impact on the scale and direction of R&D”. OECD (1963), Science, Economic Growth and Government Policy, Paris, p. 20. In fact, this “ambivalence” reflected a continuous tension at the DSA between quantitative (and economic) and qualitative (and social) points of view. 30 The same citation (more or less) can be found on p. 5 of the first edition of the Frascati manual.
compilation of such data is an indispensable prerequisite to formulating an effective national policy for science.”31 The story of the “scientification” of policies, as recalled by Freeman et al., went like this: Governments have been loath to recognize their responsibilities concerning the level and balance of the national R&D effort. Government policies have evolved somewhat haphazardly, being influenced at times by the special interests of government departments, at times by lines of thought advocated in influential scientific circles. (OECD (1963), Science, Economic Growth and Government Policy, Paris, p. 49) Now, governments in some countries have begun to set up a top-level science service or department which is called upon to (1) compile basic data on the research effort; (2) conduct enquiries and convene groups to evaluate scientific and technological trends, reveal gaps, and estimate the medium and long-term needs for research and development in the different sectors of economic activity (. . .). (pp. 51–52) The lesson was clear: numbers enlighten. It is very difficult, even for a group of specialists, to have a clear view of all the problems in such a complex area and to decide upon priorities with absolute certainty. With the data and information available at present in most countries, the only possible attitude is a pragmatic one. The best procedure seems to be as follows: the first step is to make as thorough an analysis as possible of each economic sector with regard to its needs for R&D (. . .). (p. 70) At the national level, this meant governments needed, first, an annual science budget that “enables particular proposals for scientific activity to be examined in the context of total government spending.”32 Second, governments needed to maintain a comprehensive inventory of the total and distribution of national scientific resources.33 It was only on this basis that, third, planning was a possible and necessary step for the development and optimum deployment of resources.34 The scheme Freeman et al. suggested was in fact the one the OECD adopted in the early 1960s, via its methodological manual on R&D surveys: The first elementary step towards improving the rationality of this process [science policy], towards making these choices more conscious and more
31 OECD (1963), Science and the Policies of Government, Paris, p. 24. 32 OECD (1966), Government and the Allocation of Resources to Science, Paris, p. 37. 33 Ibid., p. 42. 34 Ibid., p. 46.
All in all, it seemed that the OECD had found the solution to the most difficult questions of science policy: how to allocate funds to S&T. According to Freeman et al., statistics would be the ideal yardstick. The reality would be very different, however. In fact, science policy is an art, an art that can certainly be informed, but that is still an art. As R. R. Nelson pointed out: “The science of science policy is very soft.”35
Controlling research Statistics can be put to four possible uses: theoretical, practical, ideological/ symbolic, and political (see Table 15.2). While collecting R&D statistics, governments were certainly not interested in knowledge per se—theoretical use—although the OECD program on the economy of research in the early 1960s, as well as the first edition of the Frascati manual, had this as one of their objectives. Certainly also, understanding R&D was one of the prime results of R&D surveys. Summarizing twenty years of surveys, Y. Fabian, director of the OECD Science and Technology Indicator Unit (STIU), identified two main trends in recent history. First, growth in R&D spending slowed down in the OECD area in the 1970s compared with the 1960s. A modest rate of growth would follow in the 1980s. Second, there was a swing from public to private support for R&D. University R&D leveled off in most countries, and industrial R&D was given particularly high priority.36 One of the main uses of national and OECD statistics in recent history, then, had been to document trends in R&D and accompany analytical documents.37 But it was the belief and wish of economists and state statisticians that statistics would also be practical. Whether the theoretical results served this end is dealt with in the following matter. For the moment, discuss a related thesis, one very popular in academic circles, particularly in the literature on the history of (social) statistics: governments produced statistics in order to control populations. In the
35 R. R. Nelson (1977), The Moon and the Ghetto, op. cit., p. 59. 36 Y. Fabian (1984), The OECD International Science and Technology Indicators System, Science and Public Policy, February: 4–6. 37 On such uses, see OECD (1963), Science, Economic Growth and Government Policy, Chapter 2; OECD (1980), Technical Change and Economic Policy, Chapter 3; OECD (1991), Technology in a Changing World, pp. 50–64. See also: the series Science, Technology and Industry Outlook from 1992.
Table 15.2 Uses of science and technology statistics

Theoretical
Understanding and learning about science and technology
Comparing countries (benchmarking)
Forecasting

Practical
Managing (planning and allocating resources, assessing priorities)
Orienting research
Monitoring
Evaluating

Ideological/Symbolic
Displaying performance
Objectifying decisions
Justifying choices

Political
Awakening and alerting
Mobilizing people
Lobbying for funds
Persuading politicians
case of S&T, this thesis says that governments entered the field of S&T measurement to control R&D expenses, because “substantial annual increases in government spending can no longer be taken for granted.”38 “It is in any case self-evident that the present rate of expansion cannot continue indefinitely (. . .). A choice must be made,” suggested C. Freeman and A. Young as early as 1965.39 The control thesis, with regard to S&T statistics at least, certainly has to be qualified. First, the notion of control may include several meanings that are not always clear in the literature on social statistics.40 The first meaning, associated with Max Weber but more recently with Michel Foucault, refers to the disciplining, policing and regulating of individuals.41 Social statistics were techniques used by governments to submit individuals to moral and social goals. This is a strong definition of control, but a definition which is also found in less radical form in the literature. In fact, a second way of looking at the impact of statistics on individuals refers to how classifications and measurements inadvertently shape individuals by suggesting new ways in which to behave and think about themselves or, at the very least, how categories create ways of describing human beings which, by looping effects (feedback), 38 OECD (1966), Government and the Allocation of Resources to Science, op. cit., p. 50. 39 C. Freeman and A. Young (1965), The R&D Effort in Western Europe, North America and the Soviet Union, op. cit., p. 15. 40 For a short history of the concept of control, see: M. R. Levin (ed.) (2000), Contexts of Control, Cultures of Control, Amsterdam: Harwood Academic Publisher, pp. 13–39. 41 P. Miller, and T. O’Leary (1987), Accounting and the Construction of the Governable Person, Accounting Organizations and Society, 12 (3), pp. 235–265; P. Miller, and N. Rose (1990), Governing Economic Life, Economy and Society, 19 (1), pp. 1–31; N. Rose, (1988), Calculable Minds and Manageable Individuals, History of the Human Sciences, 1 (2), pp. 179–200.
affect behavior and actions.42 I submit that a third sense refers to the means by which statistics enable governments to intervene in the social sphere, not necessarily for the purpose of control, but to achieve a predetermined goal.43 In the case of S&T statistics, I would definitely opt for this last sense: the original goal was funding and orienting research. Here, the term control is a misnomer.
Orienting fundamental research
In 1966, the OECD produced a series of documents for the second ministerial conference on science, among them one on fundamental research.44 The report recalled that in fundamental research (. . .) the very notion of planning seems to be a contradiction (. . .). In fact, it is often suggested that the policy of an enlightened government toward fundamental research can only be to provide ample financial resources and encourage the training of research workers (. . .). [However,] in these days of rising research expenditures by governments and of a too facile appreciation of its promise of practical applications, [science for its own sake] is impossible to sustain. (p. 18) But the report continued: “it would be a great mistake, and in the end detrimental to both science and economic growth, for governments to base their policies of support for research only on lines of fundamental investigation which from the beginning appeared promising in terms of application” (p. 24). “By neglecting fundamental research, a country would be condemning its own industry to obsolescence” (p. 25). The ministers agreed generally with the diagnosis of the report—fundamental research should be regarded as a long-term investment—but were unwilling to accept fully its institutional recommendations, which were judged too timid. In fact, the report did not really consider changes in the structure of the universities, the funding mechanisms, or the academic “mentalities” as solutions to the problem of research as applied to socioeconomic objectives. The OECD was hence requested to continue its examination of the subject. Joseph Ben-David was therefore invited to examine the implications of the report.45 He argued that academics still considered science essentially as a cultural good. They were not
42 N. Goodman (1978), Ways of Worldmaking, Indianapolis (Illinois): Hackett Publishing; Hacking, I. (1995), The Looping Effects of Human Kinds, in D. Sperber et al. (eds), Causal Cognition: A Multidisciplinary Debate, Oxford: Clarendon Press, pp. 351–383. 43 This way of conceptualizing “rationality” is more in line with J. Habermas than M. Foucault. See: J. Habermas (1984), The Theory of Communicative Action, Vol. I, Boston: Beacon Press. 44 OECD (1966), Fundamental Research and the Policies of Government, Paris. Members of the group of experts were: A. Maréchal, E. Amaldi, S. Bergstrom, F. Lynen, and C. H. Waddington. 45 OECD (1968), Fundamental Research and the Universities: Some Comments on International Differences, Paris, p. 45.
alone, however. Governments thought similarly: The relationship between government and science is neither as simple nor as readily accepted as the relationship between government and most other social activities that make claims on public resources. In fact, governments often show a marked diffidence in their dealings with science, largely because of the alleged uniqueness of R&D activities and the autonomy of the scientific community (OECD (1966), Government and the Allocation of Resources to Science, Paris, pp. 11–12) For example, Belgium argued that science policy was part of educational policy, not economic policy, and the Netherlands denounced such an economic view of research as the prostitution of science.46 Thereafter, what the OECD would understand by planning research was, among other things, how to allocate resources in order to get the right “balance” between fundamental and applied research, knowing, on one hand, that academics were autonomous and “uncontrollable”47 and, on the other hand, that more focused or oriented research was much needed. According to Freeman et al., fundamental research fell into two categories, free research driven by curiosity alone and oriented research: “the government bodies responsible for science policies must see to it that the right balance is maintained.”48 A second document on the allocation of resources, produced for the ministerial conference, also held the same vision: “All countries should attempt to maintain some competence in a wide range of sectors in both fundamental research and applied research and development in order to be able to identify, absorb and adapt relevant foreign technologies.”49 The vision was in fact a qualified response to the way the problem of planning had been framed by academics since M. Polanyi.50 Until then, the problem was discussed in terms of the freedom of the scientist: “any social interference with the autonomous workings of science would slow scientific progress”—not because scientists have a right to freedom—but because freedom was said to be the best means for the efficiency of the science system.51 Others held a more moderate view: fundamental research deserved at least an unquestioned and adequate proportion of R&D expenditures, but no one, as Brooks noted, gave much basis for quantitative criteria.52 46 A. King, Memoirs, op. cit., Chapter 24, pp. 7 and 14. 47 “It is important to preserve the absolute priority of fundamental research and to uphold the freedom of universities”: OECD (1960), Co-operation in Scientific and Technical Research, Paris: 20. See also: OECD (1963), Science and the Policies of Governments, Paris, pp. 24–25. 48 OECD (1963), Science, Economic Growth and Government Policy, op. cit., p. 64. 49 OECD (1966), Government and the Allocation of Resources to Science, op. cit., p. 54. Members of the group of experts were: H. Brooks, C. Freeman, L. Gunn, J. Saint-Geours, and J. Spaey. 50 M. Polanyi (1962), The Republic of Science, Minerva, 1, pp. 54–73. 51 H. Brooks (1968), Can Science Be Planned?, in OECD (1968), Problems of Science Policy, op. cit., p. 100. 52 Ibid.
In fact, how could such a balance be found except by employing statistics? The OECD committee responsible for the report on fundamental research suggested that basic research should receive 20 percent of total research expenditures: Support of basic research in the applied laboratory is therefore to be encouraged to the extent of allowing each applied research organization to spend up to, say, 20 percent of their efforts on original investigation. Such an average should be considered as an institutional average. (. . .) This is relevant both to industrial and government laboratories. (OECD (1966), Fundamental Research and the Policies of Government, op. cit., pp. 32–33) Where did such a ratio (which varied from 5 to 20 percent, depending on the author and the organization) come from? Certainly, one can find similar numbers going back to the 1940s, in the US President’s Scientific Research Board report, for example. It suggested quadrupling basic research expenditures, to reach 20 percent of total R&D by 1957.53 Or take the example of the scientists at the US Naval Research Advisory Committee in the 1950s, who proposed that the Navy invest between 5 and 10 percent of its R&D budget in basic research (and when that level was achieved, that it be doubled).54 To several authors, however, the ratios were a historically contingent matter. For example, the notion that 5 percent of total R&D be devoted to basic research in enterprises came from the fact that a “twentieth is the highest still inapplicable rate of taxation on social investment in advanced technological enterprise,”55 which was the allocation of government contracts to cover company overheads and research. For others, like J.-J. Salomon, such criteria were articles of faith. “In most countries, it is an unwritten usage that requires that appropriations to fundamental research shall be not less than 10 percent of the total R&D.”56 Salomon suggested that the origin of the norm was to be found in Condorcet’s Fragments sur l’Atlantide, which made the provision of resources to what he called la société des savants subject to the condition that “one-tenth of the subscription, let us say, shall always be set aside to serve the general interests of this société in order to ensure that its utility extends to the whole system of human knowledge.” The argument also appeared in 1974 in
53 President’s Scientific Research Board (1947), Science and Public Policy, op. cit., p. 28. 54 See: H. M. Sapolsky (1979), Academic Science and the Military: The Years Since the Second World War, in N. Reingold (ed.), The Sciences in the American Context: New Perspectives, Washington: Smithsonian Institution Press, p. 389; A. T. Waterman (1959), R&D and Economic Growth, Address Before the Joint Economic Committee of the 86th Congress, Reviews of Data on R&D, 13, March, NSF 59–17, p. 3. 55 P. Forman (1990), Behind Quantum Electronics: National Security as Basis for Physical Research in the United States, Historical Studies in the Physical and Biological Sciences, 19, pp. 198–199. 56 J.-J. Salomon (2003), Social Sciences, Science Policy Studies, Science Policy-Making, in R. Arvanitis (ed.), Science and Technology Policy: A Section of the Encyclopedia for the Life Supporting Systems, EOLSS and UNESCO Publishers, to be published, pp. 7–8.
a three-volume OECD study on the research system—in which J.-J. Salomon participated: There appears to be little more than folklore, custom and even a bit of magic to the belief (and practice) that a certain percentage of the total research budgets should be devoted to fundamental science. Even the famous 10 per cent figure for fundamental research funding has little to sustain it empirically. (OECD (1974), The Research System, Vol. 3, op. cit., p. 188) At about the same time as the OECD expert group’s 20 percent suggestion, the US National Academy of Sciences (NAS) was more pessimistic about our capacity to estimate precisely the ratio of basic to applied research. It organized two conferences and produced two reports for Congress, one devoted to basic research and national goals,57 the other on applied science and technological progress.58 Both reports addressed the question of how much research the government ought to support, and what criteria could or should be used in arriving at a proper balance of support between basic research and applied research and development. The experts could not produce any direct answers, and the NAS either preferred to offer a diversity of viewpoints by presenting individual papers, as in the 1965 report, or simply took note of actual ratios between basic and applied research.59 In sum, either numbers could not be suggested (NAS), or, when they were, they were not “rationally” demonstrated (OECD). As the Brooks report once concluded: “The search for the ‘optimum’, whether in relation to the aggregate amount allocated to R&D activities or to methods of management and execution, is more an art than a science.”60 At the same time, K. Pavitt made similar remarks: no government has worked out a satisfactory, rational method for allocation of R&D resources amongst a variety of national objectives for two very good reasons: first, that the major choices (and even many of the minor ones) affecting how much and where national R&D resources should go are political choices which cannot be reduced simply to hard, economic calculations; second, that nobody (not even industrial firms) has yet succeeded in developing a satisfactory method of measuring ex ante rates of return on R&D activities. (K. Pavitt (1971), Technology in Europe’s Future, Research Policy, 1, p. 251)
57 NAS (1965), Basic Research and National Goals: A Report to the Committee on Science and Astronautics (US House of Representatives), Washington. 58 NAS (1967), Applied Science and Technological Progress: A Report to the Committee on Science and Astronautics (US House of Representatives), Washington. 59 Both reports also mentioned, although briefly, the need for more statistics and indicators: NAS (1965), p. 22; NAS (1967), p. 8. 60 OECD (1971), Science, Growth and Society, op. cit., p. 47.
Providing relevant statistics was, however, supposed to be one of the purposes of the Frascati manual. Of the five objectives the manual should serve, C. Freeman mentioned “management control” of research. By this he meant that survey data would allow one to allocate resources “to attain the optimum development,” evaluate the productivity of research centers, and determine the balance between types of research.61 In the end, however, the OECD never really needed numbers on the balance issue. From the beginning, the (OEEC and) OECD had clearly decided to act in favor of applied and oriented research rather than fundamental research.62 It is essential to maintain the proper balance between fundamental and applied research. Having said this, it can then be stated that the greater emphasis should be on applied research which is not developed to the same extent in Europe as in the United States, said the first policy document of the OECD on science policy.63 The “proper balance” argument and the few good words on fundamental research in OECD documents were rarely in favor of fundamental research for its own sake, but because fundamental research was thought to be at the origin of (all) applied research.64 The balance issue, nevertheless, had echoes in the Frascati manual. First, the term “oriented research” was introduced in the second edition, as a type of research activity that deserves attention (but without any recommendation as to its measurement): between basic and applied research, there was now a place for a type of fundamental research aligned toward the resolution of specific problems. To date, however, very few countries collect numbers on oriented research, as we discussed previously.65 Second, and following the example of the European Commission, the OECD introduced, in the third edition of the Frascati manual, a classification of government R&D by socioeconomic objective. The classification would, in principle, allow one to link the allocation of R&D resources to national and social needs. The overly aggregated level of the statistics, however, prevented the classification from being truly useful for that purpose.66
61 OECD (1963), The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Development, op. cit., p. 10. 62 The early Working Parties of the OEEC (Scientific and Technical Information; Productivity and Applied Research) as well as the EPA were mainly concerned with applied research: productivity centers, cooperative applied research groups, networks of technical information services, management of research. The OECD early Scientific Research Committee (1961) comes, moreover, from the OEEC Applied Research Committee (in 1966, it was divided into two new committees: Science Policy and Research Cooperation). 63 OECD (1960), Co-operation in Scientific and Technical Research, op. cit., p. 26. 64 From the 1950s to today, the argument has been quite different in the United States (NSF and NAS), where increased funding of fundamental research has been directly targeted. 65 See: Chapter 14. 66 See: Chapter 10.
Managing industrial R&D

Before turning to the political uses of S&T statistics, let us look briefly at another case of the assumed practical usefulness of R&D statistics. Could national surveys of R&D be more useful to enterprises in their quest to “control” industrial R&D laboratories? “There seems to be no way for measuring quantitatively the performance of a research laboratory. [But] a comparison of figures for one laboratory with figures for some other laboratory (. . .) may lead the laboratory administrator to ask questions about his own laboratory,”67 wrote R. N. Anthony, author of the first survey on industrial R&D in the United States.

Besides the survey, Anthony used the statistics he produced for the Department of Defense (Office of Naval Research) to publish a book titled Management Controls in Industrial Research Organizations. The term control in the title could lead us to believe, at first glance, that the book would discuss methods developed to control research activities and scientists in industrial R&D laboratories. Behind the term, however, what one finds is a reference to the need for firms to manage their research laboratories, which were relatively new creatures and not yet well understood.68 “To some people, the word control has an unpleasant, or even a sinister, connotation: indeed, some of the synonyms given in Webster’s dictionary—to dominate, to subject, to overpower—support such an interpretation. As used here, control has no such meaning,” wrote Anthony.69 The book dealt rather with the administrative aspects of industrial research: technical programs, service and support activities, money, facilities, organization and personnel, basic policy decisions, short-range planning, operation decisions and actions, and checking up on what has been done.

Certainly, firms needed ways to plan research activities, increase the effectiveness of their investments, and “control” expenditures, but they also needed ways to stimulate new activities in light of the bureaucratic conservatism of certain divisions.70 However, “very little control is exercised after the decision has been made to proceed with work in a certain area and of a certain order of magnitude,” as Anthony showed with his interviews of over 200 laboratory directors.71 The first NSF survey of industrial R&D also documented the fact that formulas were rarely used except as a rough guide by managers to decide on the size of budgets: R&D expenditures are determined mainly by judgmental appraisals.72

67 R. N. Anthony (1952), Management Controls in Industrial Research Organizations, op. cit., p. 288. 68 C. C. Furnas (1948), Research in Industry: Its Organization and Management, Lancaster: Lancaster Press; C. E. K. Mees and J. A. Leermakers (1950), The Organization of Industrial Scientific Research, New York: McGraw-Hill. For a good summary of the literature of the time, see: A. H. Rubenstein (1957), Looking Around, Harvard Business Review, 35 (3), pp. 133–146. 69 R. N. Anthony (1952), Management Controls in Industrial Research Organizations, op. cit., p. 3. 70 R. Seybold (1930), Controlling the Cost of Research, Design and Development, Production Series, New York: American Management Association, p. 8. 71 R. N. Anthony (1952), Management Controls in Industrial Research Organizations, op. cit., p. 27. 72 NSF (1956), Science and Engineering in American Industry: Final Report on a 1953–1954 Survey, NSF (56)16, Washington, pp. 46–47. See also: NSF (1964), Decision-Making on R&D in the Business Firm, Reviews of Data on R&D, 44, February, NSF (64) 6.

R. R. Nelson, finally, pointed out that few companies relied on
formulas to allocate budgets to research divisions, or on specific formal plans for selecting projects: “despite talks of close controls, budgetary and otherwise, much industrial research is conducted under very loose control.”73 “Tremendous uncertainties involved in making any major technological breakthrough preclude either the routinization of invention or the precise prediction of invention.”74 “Let the division manage their own affairs with a minimum of surveillance, so long as they maintained good numbers,” that is, an adequate return on investment.75

Certainly, the issue of the freedom (as opposed to the “control”) of the industrial scientist—as compared to the academic researcher—was an important one at the time. A case similar to the one discussed above with respect to fundamental research was encountered here. According to Anthony, “research workers must have freedom, and management must manage. (. . .) The central problem is to find the proper balance between these two opposing principles.”76 At the time, relative freedom for scientists was thought to be essential for research, even in industry, if only to recruit the best scientists:

it is neither possible nor desirable to supervise research activities as closely as, say, production activities are supervised. (. . .) It is the essence of fundamental research that no one can know in advance what the results are likely to be, or even whether there will be any results. (. . .) An attempt to exercise too tight a control over [research] will defeat the purpose of control.77

If industrialists really wanted to “control” their scientists, they did not need national R&D surveys at all.78 There were other means to that end. What firms needed were “scientific” tools to assess the value of their R&D projects and to decide where to invest, “to detect and stop unsuccessful work as promptly as possible.”79 Hence the rise of the literature on planning industrial research and of “rational” techniques such as cost/benefit analyses.
73 R. Nelson (1959), The Economics of Invention: A Survey of the Literature, The Journal of Business, 32 (2), p. 101; see also: A. H. Rubenstein (1957), Setting Criteria for R&D, Harvard Business Review, January–February, pp. 95–104. 74 R. R. Nelson, The Economics of Invention: A Survey of the Literature, op. cit., p. 115. 75 D. Brown (1927), Centralized Control with Decentralized Responsibility, New York: American Managers Association. Cited in T. M. Porter (1992), Quantification and the Accounting Ideal in Science, Social Studies of Science, 22, p. 643. 76 R. N. Anthony (1952), Management Controls in Industrial Research Organizations, op. cit., p. 15. 77 Ibid., p. 27. 78 Some firms were even totally opposed to R&D surveys: “A few people, in commenting on the idea of the questionnaire, questioned the wisdom of undertaking the project at all because, they felt, an unwise use of the figures contained in this report could have a dangerous effect on the atmosphere, and perhaps even on the output, of certain laboratories”: R. N. Anthony (1952), Management Controls in Industrial Research Organizations, op. cit., p. 450. 79 R. N. Anthony (1952), Management Controls in Industrial Research Organizations, op. cit., p. 28.
A lobby in action

When discussing the usefulness of statistics for practical ends,80 official statisticians and users of statistics in fact often confused statistics in general with national and OECD S&T data. If one looks at the uses of statistics mentioned by member countries in a recent OECD inquiry, there is ample evidence of statistics’ “usefulness,” according to experts, but rarely were the data from national or OECD S&T surveys mentioned. The reasons generally centered around three limitations that we discussed previously: the data are generally too aggregated and not detailed enough for policy purposes, are non-comparable (between countries), or are out of date (time lags).81

I suggest that national and OECD data are first-level (macro) statistics, that is, contextual indicators, and are usually used in this sense in government reports: to paint a picture of the international context in order to compare one’s country to other countries or rhetorically align policies to those of the best performer, generally the United States. Unlike other statistics, S&T data are not embedded in mandatory or institutional rules (policies, programs, legislation) as several economic and social statistics are. The Consumer Price Index (CPI), for example, serves to index salaries as well as to define monetary policies; unemployment rates are constructed in order to define who would get allowances; demographic statistics are used for political representation and distribution. No such “regulations” exist behind S&T statistics. National S&T surveys are certainly helpful a priori, but are rarely mandatory for policy decisions.

In fact, national S&T policies have generally been developed before official statistics became available, or simply without recourse to statistics at all.82 Such was the case for innovation policies. Innovation became a priority of government policy in the early 1970s, but we had to wait until the 1990s for innovation to be properly and systematically measured. In the meantime, governments (and academics) used patents or R&D as proxies for measuring innovation. Similarly, policies of the 1980s with regard to new technologies developed on very shaky empirical ground: only recently did statistics become, still quite imperfectly, available. We could say the same for early R&D policies: the first policy analyses, those of the OECD, for example, were developed with rather poorly comparable
80 Current taxonomies on the uses of statistics all center on practical objectives. See OECD (1980), Science and Technology Indicators: A Background for the Discussion, DSTI/SPR/80.29; OECD (1990), Current Problems Relating to Science, Technology and Industry Indicators, DSTI/STIID; J. van Steen (1995), Science and Technology Indicators: Communication as Condition for Diffusion, Workshop on the Implementation of OECD Methodologies for the Collection and Compilation of R&D/S&T Statistics in the Partners in Transition Countries and the Russian Federation; B. van der Meulen (1998), The Use of S&T Indicators in Policy: Analysing the OECD Questionnaire, DSTI/EAS/STP/NESTI/RD (98) 6. 81 OECD (1998), The Use of S&T Indicators in Policy: Analyzing the OECD Questionnaire, DSTI/EAS/ STP/NESTI/RD (98) 6, pp. 21–23. 82 For a similar argument for economic policies, see: G. Stigler (1965), Essays in the History of Economics, Chicago: University of Chicago Press, p. 5; D. N. McCloskey (1985), The Rhetoric of Economics, Madison: University of Wisconsin Press.
statistics, as Freeman himself documented.83 Similarly, the OECD had defined precise policy options for solving the technological gap issue some years before the results of its surveys became available.84 In fact, I suggest that the understanding of S&T issues, as a goal of statistics, had generally already been provided in studies by academics,85 well before official statisticians came on the scene. Official S&T surveys rather served other purposes not very different from nineteenth-century statistics: “in all instances, the specification of the data to be collected and the matrices to be adopted, whether by private individuals and associations or government agencies, was elaborated with a view to providing numerical proof of usually fairly well-formulated pre-existing hypotheses (. . .).”86 Some drew drastic conclusions from this state of affairs: data on research and development “are of limited usefulness in policy decisions and offer no guidance with respect to balance among fields, effects of R&D, or the accomplishments and value of R&D.”87 One also read the following in OECD documents: “the data currently available for many OECD countries do not permit evaluation of whether the real amount of resources available is growing or declining, let alone whether it is sufficient,”88 or “current indicators offer little assistance in addressing the overall impact of science on the economy or for evaluating how funding allocations should be made between newly developing and established fields of investigation.”89 In the same spirit, but in a more nuanced way, an OECD study of the 1970s suggested: It is one thing to come to a decision that a certain field is much more important than another. It is yet another question to say that the field is, let us assume, nine times as important as the next one inasmuch as it would cost nine times as much to sustain research in that field (. . .). Quantitative indicators can only be one of many sets of inputs into the entire science policy formulation process, and perhaps, in the final analysis, not even the most important ones. (OECD (1974), The Research System, Vol. 3, op. cit., pp. 190–191) Ministers and their very senior staff are rarely direct consumers of indicators because their decisions are based on qualitative political considerations. (. . .) 83 OECD (1963), Science, Economic Growth and Government Policy, Paris, op. cit., pp. 21–22; C. Freeman and A. Young (1965), The R&D Effort in Western Europe, North America and the Soviet Union, op. cit. 84 OECD (1966), Differences Between the Scientific and Technical Potentials of the Industrially Advanced OECD Member Countries, DAS/SPR/66.13, pp. 7ss. 85 Among whom were some working as consultants for the OECD, like C. Freeman. 86 S. Woolf (1989), Statistics in the Modern State, Comparative Studies in Society and History, 31, p. 590. 87 K. Arnow (1959), Financial Data on R&D: Their Uses and Limitations, in NSF, Methodological Aspects of Statistics on R&D, Costs and Manpower, NSF 59-36, Washington, p. 47. 88 OECD (1994), Statistics and Indicators for Innovation and Technology: Annex I, DSTI/STP/TIP (94) 2/ANN 1, p. 12. 89 OECD (1996), The Knowledge-Based Economy, in Science, Technology and Industry Outlook, Paris, p. 276.
Private decision-makers have their own key internal indicators in terms of monetary accounting systems. (. . .) Others seek indicators in order to justify their projects and the needs for more resources.
(OECD (1994), Statistics and Indicators for Innovation and Technology: Annex I, DSTI/STP/TIP (94) 2/ANN 1, pp. 4–5)

Another possible conclusion one might draw is that the data are certainly useful, but in two other senses. First, the discourses on the usefulness of OECD indicators are part of the rhetoric of statisticians to legitimize their own work. In the same spirit as R. Gass’ comment (p. 291 above), the OECD itself admitted:

Faced with the possibility of leveling-off, some scientists have reacted by calling for total spending on scientific activities to be allotted a fixed percentage of the GNP, or to be related to some base total which is not susceptible to sharp fluctuations. (. . .) In practice, no government follows so elaborate a formula (. . .).
(OECD (1966), Government and the Allocation of Resources to Science, op. cit., p. 50)

In fact, it has always been part of policy-makers’ and statisticians’ rhetoric to overvalue their statistics: “Unless national data on R&D activity could be improved, the establishment of national science policies, and work on the relationships between scientific research and economic growth, would be seriously hindered,” stated R. Gass in 1962.90 After forty years, the same rhetoric is still being used: “An extended and high-quality set of quantitative indicators is necessary to the design and evaluation of science and technology policy.”91 Indicators “are needed by governments, to evaluate their programs and researchers, they are needed by firms, who want to assess the contribution of R&D to their global achievement.”92

Better information on R&D and innovation would inform federal budget decisions regarding the allocation of funds in support of science and technology (. . .). First, should the federal budget be increased to pay for more R&D? Second, within the context of a fixed overall budget, should R&D be preferred to other, non-R&D budget expenditures? Third, given a fixed R&D budget, how much should be spent on R&D in one area relative to R&D in other areas?
(T. Brennan, senior economist on the staff of the US Council of Economic Advisers. Cited in National Research Council (1997), Industrial Research and Innovation Indicators, Board on Science, Technology and Economic Policy, Washington: National Academy Press, pp. 9–10)

90 OECD (1962), Committee for Scientific Research: Minutes of the 5th Session, SR/M (62) 3, p. 15. 91 D. Guellec (2001), Introduction, STI Review, Special Issue on S&T Indicators, 27, p. 7. 92 Ibid., p. 11.
According to some, such statements were self-promotional, serving only the needs of economists and statisticians and offering little value to policy-makers. We saw how, in the 1980s, for example, statisticians stressed the “importance of [OECD analytical] reports, not only because of the trends they revealed, but because their preparation highlighted problems with the quality and comparability of the data.”93 This bias would lead to the reorientation of R&D statistics toward policy problems in the 1990s: statistics had to enlighten policies, not lag behind them.

Second, governments certainly used statistics, but more often for symbolic and ideological aims. It is worth citing T. M. Porter here on the history of social statistics:94

One cannot say that bureaucracies in democratic societies tend by their very nature to absolve themselves of all responsibility by blindly applying mechanical decision procedures. The converse, however, is not so far from the truth: formalized quantitative techniques tend to be used mainly when there is occasion to disguise accountability, or at least to depersonalize the exercise of authority. Career civil servants usually know too much about the problems they confront to be content with the inevitable abstractions and simplifying assumptions that go into a formal economic analysis. What they want from such studies is justification for a course of action (p. 45). The objectivity of quantitative policy studies has more to do with their fairness and impartiality than with their truth (p. 29). It matters not whether forecasts prophesy well or poorly, for their true function does not depend on their accuracy (. . .): the prestige of forecasts owes more to their disinterestedness than to their trustworthiness.
(T. M. Porter (1992), Objectivity as Standardization: The Rhetoric of Impersonality in Measurement, Statistics, and Cost–Benefit Analysis, Annals of Scholarship, 9, p. 40)

In brief, accuracy rarely mattered: “inconsistency in data collected over time is better than precise data that does not have a history,” felt many participants at a recent NSF workshop.95 Despite the limitations of statistics, nothing ever prevented policy-makers from using them for ideological, symbolic, and political purposes. The GERD/GNP ratio was only one example of statistics being used for political ends (see Chapters 11 and 12). Other examples followed. The NSF developed a whole series of statistics with regard to the support of basic research for instrumental ends, namely to influence congressmen about the necessity of redressing the balance between basic and applied research. Similarly, statistics on the shortages of scientists and engineers, and on gaps versus the USSR, were offered in order to
93 OECD (1987), Summary of the Meeting of NESTI, STP (87) 8, p. 5. 94 T. M. Porter (1992), Objectivity as Standardization: The Rhetoric of Impersonality in Measurement, Statistics, and Cost–Benefit Analysis, Annals of Scholarship, 9, pp. 19–59. 95 M. E. Davey and R. E. Rowberg (2000), Challenges in Collecting and Reporting Federal R&D Data, Washington: Congressional Research Service, p. 19.
convince politicians to increase funding for academic research (see Chapter 13). The United Kingdom introduced a new measurement in its R&D statistics in order to please politicians who were more and more receptive to the relevance of research. A category of research was integrated between basic and applied: strategic research. At OECD meetings, the UK statisticians tried to persuade other countries to adopt and standardize the practice, but without real success (see Chapter 14).

In recent history, however, it was Canada that provided the most eloquent example of a political use of statistics.96 Basically, Quebec had been complaining since the beginning of the 1980s of a significant gap between Ontario and Quebec when it came to federal investment in S&T. Ontario received almost 60 percent of the federal government largesse, compared to just 14 percent for Quebec. Under the (probable) influence of its supervisory ministry (Industry Canada), Statistics Canada developed the idea of producing its statistics in such a way as to reduce the gap between the two provinces. They removed from the statistics the share of federal expenditures allotted to the National Capital Region (NCR), a region that straddles the two provinces, and where—in its Ontario portion—the federal laboratories are concentrated. This statistical artifice had the effect of reducing the gap between Ontario and Quebec to just 8 percent. Furthermore, Quebec suddenly found itself with a ratio of R&D to GDP higher than Ontario’s, an occurrence unprecedented in history.

How was it possible that statistical agencies, reputed for their objectivity, moved into politics? This has partly to do with the way statistical agencies managed their position and their relations with government departments and policy divisions concerned with science policy. The following question was a perennial one in the organization of statistics: should the statistical work be performed in an autonomous agency, or should it be connected to the policies and conducted in a government department? For some, autonomy was a gauge of objectivity.97 For others, statistics were useless if not aligned to users’ needs.98 The organization of S&T statistics in western countries has oscillated between two patterns in recent history (see Table 15.3). In one, as in Canada, statistics were produced by an autonomous agency. In other cases, as in the OECD, statistics were part of a policy division. In still other cases, as in the United States, S&T statistics had a mixed status: they were part of neither an autonomous statistical agency nor one located within a department, but were in fact the outgrowth of an arm’s-length agency.
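The arithmetic behind the National Capital Region adjustment described above is easy to reproduce. The short Python sketch below uses purely hypothetical dollar figures, chosen only to mirror the orders of magnitude cited in the text (roughly 60 percent versus 14 percent, falling to a gap of about 8 points once the NCR is excluded); it illustrates the mechanics of the adjustment and is not a reconstruction of Statistics Canada’s actual series.

```python
# Hypothetical figures (in $ millions), chosen only to mirror the orders of
# magnitude cited in the text; they are NOT Statistics Canada data.
ontario = 3000           # includes the Ontario side of the National Capital Region (NCR)
quebec = 700
other_provinces = 1300
ncr_ontario_side = 2065  # hypothetical NCR portion of Ontario's total

total = ontario + quebec + other_provinces

def share(part, whole):
    """Share of federal S&T expenditures, in percent."""
    return 100.0 * part / whole

# Shares computed the conventional way
print(f"NCR included : Ontario {share(ontario, total):.0f}%, Quebec {share(quebec, total):.0f}%")

# The adjustment described above: drop the NCR from Ontario's total and from the base
ontario_adj = ontario - ncr_ontario_side
total_adj = total - ncr_ontario_side
gap = share(ontario_adj, total_adj) - share(quebec, total_adj)
print(f"NCR excluded : Ontario {share(ontario_adj, total_adj):.0f}%, "
      f"Quebec {share(quebec, total_adj):.0f}% (gap of about {gap:.0f} points)")
```

The design point is simply that the choice of denominator and of what counts as a province’s expenditure is itself a political decision dressed up as a technical one.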
96 B. Godin (2000), La distribution des ressources fédérales et la construction statistique d’un territoire: la Région de la Capitale Nationale (RCN), Revue canadienne de science politique, 33 (2), pp. 333–358. 97 National Research Council (1992), Principles and Practices for a Federal Statistical Agency, Washington; United Nations Statistical Commission (1994), Fundamental Principles of Official Statistics, Official Records of the Economic and Social Council, Supplement No. 9. 98 J. L. Norwood (1975), Should Those Who Produce Statistics Analyze Them? How Far Should Analysis Go? An American View, Bulletin of the International Statistical Institute, 46, pp. 420–432.
Table 15.3 Agencies reporting R&D data to OECD (1995)

National statistical offices (producers): Australia, Austria, Canada, Finland, Italy, Japan, Netherlands, Spain, Sweden, Switzerland, Turkey, United Kingdom

S&T ministries and agencies (users): Belgium, Denmark, France, Germany, Greece, Iceland, Ireland, Korea, New Zealand, Norway, Portugal, United States

Source: OECD (1995), Discussion of Science and Technology Statistics at the 4th Ad Hoc Meeting of National Statistical Offices of OECD Member Countries, DSTI/EAS/STP/NESTI (95) 34.
Charles Falk, director of the NSF division of Science Resources Studies (SRS) from 1970 to 1985, identified four factors that should affect the organizational location of S&T statistics: credibility, ability to identify important current policy issues, ready and early access to statistical data, and capacity to attract the right kind of staff.99 Of these, the first—credibility—was for him the most important: the organization should be “relatively immune to political and special interest pressures,” he wrote. To Falk, credibility depended on the organization’s reputation. At the same time, however, he added that these organizations should not be purely statistical organizations, because they would be too far removed from policy discussions, nor should they be purely analytical study groups, too remote from and too unfamiliar with data sources: “Hopefully, some central organizations can be found which contain both elements and the science and technology indicators unit’s location within this organization should make possible close interaction with both types of groups.” Falk’s implicit and ideal model was, of course, the NSF, and his recommendations were thus not disinterested.

The history of statistics proves, however, that these desiderata are illusions. Autonomy does not necessarily mean neutrality.100 Despite its reputation and its location, an organization can produce legitimate statistics and use them for political purposes, to the extent that some recently maintained that, to avoid conflicts of interest, the NSF’s SRS division should be transformed into an autonomous statistical agency. It would, they add, at least avoid poor scientific analyses like

99 C. Falk (1980), Factors to Be Considered in Starting Science and Technology Indicators Activities, Paper presented at the OECD Science and Technology Conference, September 15–19, STIC/80.14, Paris; C. Falk (1984), Guidelines for Science and Technology Indicators Projects, Science and Public Policy, February, pp. 37–39. 100 T. L. Haskell (1998), Objectivity is Not Neutrality: Explanatory Schemes in History, Baltimore: Johns Hopkins University Press.
those devoted to predicting shortages of scientists and engineers in the late 1980s.101
Conclusion

The view of statistics and indicators as information for decision-making derives its power and legitimacy from economic theory: the belief that people will act rationally if they have perfect information on which to base their decisions. It was not rare, however, to find skeptical remarks concerning this belief in the literature. In one of the first assessments of the NSF’s Science Indicators (SI), H. Averch stated: “SI-76 does not now contribute explicitly toward the identification of major policy issues, provide predictions of potential ills and goods from science and technology, or relate the impact of science and technology to social and economic variables.”102 “I cannot deduce from the information in SI-76 what the level of incentives should be or the efficacy and effectiveness of various proposed options.”103 To Averch, “policy options and effects do not flow from indicators (. . .).”104

Other commentators made similar remarks. Authors generally recognized that statistics and indicators do play some role. First, “the OECD has been successful in reshaping the statistical systems of its member countries (. . .).”105 Second, the OECD statistics indirectly shaped policy agendas and priorities by ranking countries: countries are drawn “into a single comparative field which pivots around certain normative assumptions about provision and performance.”106

Inevitably, the establishment of a single playing field sets the stage for constructing league tables, whatever the somewhat disingenuous claims to the contrary. Visually, tables or figures of comparative performance against an OECD or a country mean carry normative overtones (. . .). To be below or on a par with the OECD average invites simplistic or politically motivated comments.
(M. Henry, B. Lingard, F. Rizvi, S. Taylor (2001), The OECD, Globalization and Education Policy, Kidlington (Oxford): IAU Press and Elsevier, p. 96)

But indicators “fall far short of providing data to government officials for [what they are said to do, i.e.:] making social investment decisions.”107 “Whether the

101 National Research Council (2000), Measuring the Science and Engineering Enterprise: Priorities for the Division of Science Resources Studies, Washington, p. 105; National Research Council (2000), Forecasting Demand and Supply of Doctoral Scientists and Engineers, Washington, pp. 55–56. 102 H. Averch (1980), Science Indicators and Policy Analysis, Scientometrics, 2 (5–6), p. 340. 103 Ibid., p. 343. 104 Ibid., p. 345. 105 M. Henry, B. Lingard, F. Rizvi, S. Taylor (2001), The OECD, Globalization and Education Policy, Kidlington, Oxford: IAU Press and Elsevier, p. 84. 106 Ibid., p. 95. 107 J. Spring (1998), Education and the Rise of the Global Economy, Mahwah, New Jersey: L. Erlbaum Ass., p. 173.
data collection requirements have also influenced member countries to rearrange their policy priorities, however indirectly, can at present only be speculated upon.”108

What can indicators do, then? To Averch, they can help “to shape lines of argument and policy reasoning.”109 “[Indicators] can serve as checks (. . .), they are only part of what is needed (. . .).”110 For others, indicators cannot be used to set goals and priorities or to evaluate programs, but “what they can do is to describe and state problems more clearly, signal new problems more quickly, and obtain clues about promising new endeavors.”111

This whole question was the subject of a recent debate between T. M. Porter and E. Levy. According to Levy, Porter portrayed quantification as a replacement for judgment in his book Trust in Numbers:112 “Quantification overcomes lack of trust in human judgment, and as such it overcomes weaknesses in scientific communities and in public life by replacing human judgment with numbers.”113 To Levy, statistics serve rather as guidance (p. 730): it is “less a replacement of judgment than the medium and framework in which analysis takes place and judgment is exercised” (p. 735). The use of statistics is not (generally) mechanical. They “require the massive application of judgment on a continuous basis” (p. 735). They have “vastly improved the role of human judgment in decision-making” (p. 736). Porter recognized that Trust in Numbers emphasized “too singlemindedly the drive to turn calculation into an automatic source of objectivity,”114 but these cases exist, he insisted: “the value of numbers as information, raw material on which judgment can be exercised, must be distinguished from reliance on routines of calculation in an effort to make decisions purely automatic” (p. 741).

With regard to surveys of S&T, particularly the official R&D survey, these were obviously directed toward some form of instrumental action. Governments (American and Canadian, at least) wanted to mobilize researchers for war. Some departments, among them the US Bureau of Budget, wanted to limit expenses on basic research. Still others thought that statistics could contribute to planning science activities.115 But rarely did government have success. First, the tools, whether registers or surveys, were not detailed enough for this purpose. This was probably the main limitation of S&T statistics for policy purposes. According

108 Henry et al. (2001), The OECD, Globalization and Education Policy, op. cit., p. 95. 109 H. Averch (1980), Science Indicators and Policy Analysis, op. cit., p. 344. 110 Ibid., p. 345. 111 T. Wyatt (1994), Education Indicators: A Review of the Literature, in OECD, Making Education Count: Developing and Using International Indicators, Paris, p. 109. 112 T. M. Porter (1995), Trust in Numbers: The Pursuit of Objectivity in Science and Public Life, Princeton: Princeton University Press. 113 E. Levy (2001), Quantification, Mandated Science and Judgment, Studies in the History and Philosophy of Science, 32 (4), p. 724. 114 T. M. Porter (2001), On the Virtues and Disadvantage of Quantification for Democratic Life, Studies in the History and Philosophy of Science, 32 (4), p. 740. 115 On the history of US planning agencies that conducted R&D surveys in the 1930s to 1940s, see: A. H. Dupree (1957), Science in the Federal Government: A History of Policies and Activities to 1940, New York: Harper and Row, pp. 350–361.
to the OECD itself, “this macro-economic analysis does not have sufficient explanatory value, notably when relating technology and economic growth.”116 Second, statistical units had, or soon claimed, their own autonomy: statistics assumed purposes and meanings quite different from those assigned to them at the beginning.

At the OECD, there were two approaches to S&T policy. One was represented by economists and statisticians like C. Freeman, who wrote that

in order to make R&D data more useful as a basis for science policy, [. . .] it is necessary to measure a variety of related scientific and technical activities. [A] quantitative approach is necessary for resolving some of the issues involved in the complex chain leading from research to innovation.
(C. Freeman (1967), Research Comparisons, op. cit., p. 466)

The other approach was more qualitative. It was represented by J.-J. Salomon, who viewed, for example, the technological gap between the United States and Europe as follows: “il est fait d’un ensemble de retards, d’inadaptations et de lacunes dont les sources sont si diverses—historique, économique, politique, sociologique, culturelle—qu’il faut renoncer à les identifier en termes quantitatifs” [it is made up of a set of lags, maladjustments, and gaps whose sources are so diverse—historical, economic, political, sociological, cultural—that one must give up trying to identify them in quantitative terms].117 In general, the Policy Division of the DSTI was rather qualitative-minded, at least between 1970 and 1990. Following the Brooks report (1971), The Research System (1974) rejected statistics based on the GERD/GNP ratio, and Technical Change and Economic Policy (1980) criticized surveys of major innovations (the output approach), as well as economists’ work on identifying the contribution of S&T to productivity:

To attempt to attribute so much experienced economic growth to technical advance, so much to capital formation, and so much to increased educational attainments of the work force, is like trying to distribute the credit for the flavour of a cake between the butter, the eggs and the sugar. All are essential and complementary ingredients.
(OECD (1980), Technical Change and Economic Policy, Paris, p. 65. The same example appeared in R. R. Nelson (1981), Research on Productivity Growth and Productivity Differences: Dead Ends and New Departures, op. cit., p. 1054)

This tension at the DSA (and DSTI) between the quantitative and qualitative points of view thus resulted in a paradoxical situation in which the main R&D indicator (GERD/GNP) was not really useful for policy purposes but served rhetorical ends instead, while the main policy problem (the allocation of resources) never acquired quantitative criteria.
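The “cake” passage refers to growth-accounting exercises in the tradition of Solow and Denison. As a point of reference only, the sketch below shows the textbook decomposition being criticized: output growth is split into share-weighted contributions of capital and labour, and the residual is credited to “technical advance.” The figures are hypothetical, and the code illustrates the general method rather than any particular OECD or Nelson calculation.

```python
# Textbook growth-accounting decomposition (hypothetical numbers).
# Output growth is attributed to capital and labour in proportion to their
# income shares; the leftover residual is conventionally credited to
# "technical advance" -- the attribution the OECD quotation above likens
# to crediting the flavour of a cake to its individual ingredients.

g_output = 0.040    # annual output growth (4.0%)
g_capital = 0.050   # growth of the capital stock
g_labour = 0.010    # growth of labour input
alpha = 0.3         # capital's income share; labour's share is 1 - alpha

contrib_capital = alpha * g_capital
contrib_labour = (1 - alpha) * g_labour
residual = g_output - contrib_capital - contrib_labour

for name, value in [("capital", contrib_capital),
                    ("labour", contrib_labour),
                    ("residual ('technical advance')", residual)]:
    print(f"{name:32s} {value:.3f}  ({100 * value / g_output:.1f}% of output growth)")
```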
116 OECD (1994), Statistics and Indicators for Innovation and Technology: Annex I, DSTI/STP/TIP (94) 2/ANN 1, p. 11. 117 J.-J. Salomon (1967), Le retard technologique de l’Europe, op. cit., p. 917.
Conclusion
Official statistics on S&T are, like any other statistics, constructed.1 Naming a “reality” or social fact, defining, classifying, and measuring it—these are decisions: separating concepts as unequivocal, exhaustive, and discrete, and choosing what is relevant and what is extraneous.2 Such construction abstracts one property to better compare objects, suppressing differences in order to create similarities. The Frascati manual is a perfect example of social constructivism, and the OECD statistical tables the epitome of the output that arises from it.

Statistics as decisions are embedded in, and draw on, current conceptions of society. First of all, they reflect values: statistics and their categories value some points of view and silence others. S&T statistics are full of such hierarchies and dichotomies that reproduce social and cultural values: R&D versus RSA, basic versus applied research, high technology versus low technology, natural sciences versus social sciences and humanities.

Statistics, second, reflect the interests of people or groups of people, organizations, and countries. Economists, very well represented at the OECD by C. Freeman, were one of those groups of people whose way of looking at the world was decisive. R. R. Nelson has argued that economists’ influence on science policy rested on a powerful theoretical structure.3 I would suggest rather that it was the mystique of numbers that was at play in this case. Numbers have always seduced bureaucrats,4 and it was economists, not sociologists or political scientists, who were reputed to produce them, hired as consultants, and emulated inside the DSTI. Another, related mystique was just as important: that of economic growth. In the context of the reconstruction of Europe, productivity was a keyword (if not a buzzword), and it was naturally to economists that governments turned to quantify their economic performance and the contribution of S&T to economic growth.
1 W. Alonso and P. Starr (1987), The Politics of Numbers, New York: Russell Sage Foundation; J. Best (2001), Damned Lies and Statistics: Untangling Numbers From the Media, Politicians and Activists, Berkeley: University of California Press. 2 M. Douglas and D. Hull (1992), How Classification Works: Nelson Goodman Among the Social Sciences, Edinburgh: Edinburgh University Press. 3 R. R. Nelson (1977), The Moon and the Ghetto, op. cit., p. 45. 4 T. M. Porter (1995), Trust in Numbers: The Pursuit of Objectivity in Science and Public Life, Princeton: Princeton University Press.
Organizations have also carried their own interests into the field of S&T statistics. The NSF was a very influential organization, not only as a producer of statistics, but also as a user. It exported, via C. Freeman, author of the first edition of the Frascati manual, its methodologies to the OECD and its member countries. The NSF also acted as an important user of its own statistics when it lobbied the US government year after year for more funds based on quantitative analyses of university science. It also produced numbers that uncritically fed European governments’ discourses on the brain drain. Similarly, governments themselves and the interests of their statistical offices exercised considerable pressure to control the field of S&T statistics in their respective countries, and also at the international level through NESTI. All in all, one can reasonably conclude that S&T statistics were, as Pierre Bourdieu said, “un enjeu de lutte”: a stake in the struggle among groups to impose their view of the world.5 In S&T statistics, officials have won over academics, although the latter have always contributed, as experts or consultants, to the construction of official statistics.

Statistics, finally, reflect ideologies. Two of these were very influential in the case of S&T: economism and the autonomy of university research. Linking S&T to economics at the OECD in the 1960s6 enabled the organization to define and impose—in economic terms—an important new challenge that governments had hardly begun to master: science policy.7 It was also a deliberate initiative to give S&T a permanent place within the new organization. From the start, the Directorate for Scientific Affairs (DSA) developed policy discourses and quantitative analyses aimed at showing how S&T could participate in productivity and economic growth (remember the OECD 50 percent growth objective of the time). Such an orientation was not easily accepted. The OECD had to admit that “though the importance of research for economic development was increasingly recognized, this concept had not been fully realized by the research workers (. . .).”8 Accordingly, the Committee for Scientific Research (CSR) regularly defended itself as follows: “The injection of economic criteria into the decisions governing the direction of fundamental research is to be avoided (. . .).”9 The role of statistics was, according to the committee, only to fix the overall level of resources. A year later, it added: “Economic growth is not the sole purpose of a science policy”: social welfare, advancement of knowledge for its own sake, military strength, and political prestige are important goals and priorities for government R&D investments.10 It is not surprising, therefore, that at the same time, and often in the same documents, the OECD defended the autonomy of university research.
5 P. Bourdieu (1985), The Social Space and the Genesis of Groups, Theory and Society, 14, pp. 723–744. 6 Indeed, science and technology were mentioned as a means to economic development in article 2 of the founding 1960 Convention of the OECD. 7 B. Godin (2002), Outlines for a History of Science Measurement, op. cit. 8 OECD (1962), Committee for Scientific Research: Minutes of the 3rd Session, SR/M (62) 1, p. 7. 9 OECD (1962), Economics of Research and Technology, SR (62) 15, p. 6. 10 OECD (1963), Economics of Science and Technology, SR (63) 33, p. 2.
T. Lefebvre, Prime Minister of Belgium and chairman of the first OECD ministerial meeting in 1963, emphasized what country representatives would repeat during the two-day conference: “all science policy should include safeguards for the freedom of fundamental research.”11 This second ideology is important to explain why, despite its economic focus, the OECD never directly promoted the “control” of research activities in its science policy thinking. The conceptual model of the time (the linear model) clearly suggested that the means for economic progress was through basic science as it existed, namely research freely conducted by researchers. V. Bush was eager to suggest, in 1945, that applied research necessarily depends upon basic research. He recognized a very specific role for government, however: the role of funding research, especially basic research conducted in universities. This ideology was reflected in statistics: basic research constituted for a while a central category of the measurement of S&T. Above all, the measurement of inputs always received greater effort than scientific outputs and impacts (or outcomes), and no survey of university R&D was ever conducted in most countries. Do not disturb university researchers with questionnaires, but hope and wait for the output and impacts of their research—such was the motto of the time. The two ideologies—economism and autonomy of university research—were thus not contradictory. Besides reflecting already-held views or ideologies, statistics look forward, serving two functions. First, statistics influence and determine the way people look at S&T in three senses. Statistics provide and help shape identities. We saw that this was the case for statistics on basic research, which helped universities defending their identity and lobbying for more basic research funds. It was also statistics that contributed to the European Union’s discourse on S&T: the classification on socioeconomic objectives (NABS) in the 1970s and the innovation survey in the 1990s gave the European Union a clear place in a field dominated by the OECD, producing facts that fed its discourses for the construction of Europe. UNESCO followed a similar “strategy”: the concept of scientific and technical activities (STA) allowed extension of the measurement to developing countries, but also gave UNESCO a niche in a field where the OECD frequently congratulated itself that the Frascati manual “attracted considerable interest in other international organizations and in member countries (. . .).”12 Second, statistics allowed people and organizations to define issues in their own terms. We saw how the debate on technological gaps in the 1960s was discussed in terms of disparities in R&D efforts between countries. Although several people tried to bring forth qualitative arguments to the contrary, the issue remained framed in terms of R&D statistics. Similarly, the OECD tried to (or believed it could) influence policies by ranking countries according to indicators. A target, generally the performance level of the United States for a given indicator, was defined statistically as a norm for other countries to emulate.
11 OECD (1965), Ministers Talk About Science, Paris, p. 124. 12 OECD (1964), Committee for Scientific Research: Minutes of the 11th Session, SR/M (64) 3, p. 11.
Finally, statistics invent totally new problems. Forecasting the shortages of scientists and engineers and measuring the “brain drain” were two issues that drove the energies of statisticians for decades. The models developed and the facts produced were regularly recommended to “planners and policy-makers as tools to anticipate potential problems and opportunities [and to] students as general indicators of the types of future opportunities available in broad fields of science and engineering.”13 The efforts of statisticians, however, never led to any systematic and general confirmation of the phenomena under study. The second forward-looking function of statistics was to serve decision and action. Here, however, we have to depart from rationalism to properly understand this role. Official statistics generally came after policies were implemented, or were not detailed enough to enlighten the policy process. They never indicated the best alternative to choose. As R. R. Nelson once stated about the statistical links between science, technology, and productivity: “Attempts by governments to influence growth rates are likely to be shallow until the connections among the variables are better understood. And, indeed, I am impressed by the shallowness of most of the prescriptions for faster growth. It is easy enough to recommend that rates of physical investment be increased, or that industrial R&D be expanded, or that time horizons of executives be extended, or that labor and management be more cooperative and less adversarial. But if the prescription stops here, it is hard to see what one actually is to do.”14 What end, then, did statistics serve? It helped the decision-maker and the politician construct discourses aimed at convincing the citizens about a course of action already chosen or taken. This is why users never bothered about the limitations of statistics: they already knew what to do, or had already done it. For policy-makers and politicians, standardization sufficed: because it was arrived at by consensus, standardization became evidence of accuracy. But standardization does not mean either accuracy or consensus. It was rather a panoply of tangled definitions held together by an international organization,15 and the latter had no control over the way definitions were implemented by member countries. This last point is obviously an important area for further study: to what extent do countries respect international norms, or develop and supplement them with their own? I have already discussed how footnotes allow one to take the pulse of incomplete standardization in international statistics. But national governments generally produce more data than those requested and published by the OECD. A literature on national histories of S&T statistics, however, is completely non-existent. I have concentrated here on international organizations and, to a lesser extent, on the United States, the United Kingdom, and Canada. These
13 C. Falk (1979), Preface, in NSF, Projections of Science and Engineering Doctorate Supply and Utilization, 1982 and 1987, NSF 79-303. 14 R. R. Nelson (1991), A Conference Overview: Retrospect and Prospect, in OECD, Technology and Productivity: The Challenge for Economic Policy, Paris, p. 584. 15 G. C. Bowker and S. L. Starr (1999), Sorting Things Out: Classification and Its Consequences, Cambridge, MA: MIT Press, p. 21.
[Figure C.1 The organization of the science measurement system: the six categories of producers (national statistical agencies, S&T departments (government), transnational organizations, S&T specialized agencies, academics, and firms), arranged between a pole concerned with inputs, surveys, and raw data and a pole concerned with outputs, databases, and statistics.]
three countries were the forerunners, namely countries that developed some influential methodologies or statistical series before the OECD Frascati manual. But the contemporary measurement of S&T is not limited to official statistics. It is the result of a system composed of multiple participants. This system comprises six categories of producers (see Figure C.1): (1) transnational organizations like the OECD, UNESCO, and the European Union; (2) national statistics agencies; (3) government departments; (4) organizations involved in the field of S&T (like the NSF); (5) university researchers; and (6) private firms. These participants play specific yet complementary roles in S&T measurement, and the system is characterized by a relatively clear-cut division of labor. This division tends to have the effect of pitting government departments against autonomous producers of statistics, rather than against their own national statistical agencies, with whom they share objectives. National statistical agencies and government departments specialize in the production of data based on input measurements obtained in surveys. As we have seen, their commitment is inspired by the OECD, and by the historical need to develop informed policies on S&T. At the opposite end of the spectrum are university researchers and private firms, who specialize in output measurement using databases originally constructed for bibliographic purposes, for example. Unlike national statistical agencies and government departments, their business is often the production of “sophisticated” statistics rather than raw data. Their entry into the field of science measurement roughly coincided with that of national governments and the OECD, but they had a different aim in mind: university researchers were attracted by the
implications of these empirical measurement tools for an emerging sociology of science.16 Finally, a third group, composed of specialized agencies and inter-governmental organizations, plays an intermediary role between the two previously mentioned groups of organizations. They buy, commission or simply use the information produced by various sources and organizations (with the exception of the NSF, which conducts its own surveys), which they then analyze and organize into summary documents. I call the institutions in this group “clearing houses.”17 Clearing houses play an important role within the functions usually associated with statistics agencies, in that they are concerned with the organization of information. This activity is especially important because the producers of S&T statistics are usually too captivated by their own methodologies to pay attention to what others are doing. Thus, as per OECD works, government organizations measure mostly inputs and, since they conduct their own surveys to that end, their work tends to be concerned only with information that they themselves produce, that is, raw data, which for the most part is distinct from statistics proper.18 The few government organizations that have attempted output measurement— for example, Statistics Canada with its experiment in bibliometrics during the 1980s19—now refuse to repeat the experience. University researchers, on the other hand, rarely work with official micro-data on R&D, partly because of the difficulties involved in ensuring confidentiality. They rely instead upon private secondary databases which, furthermore, allow them to transcend the study of both input data and factual information. The measurement of scientific outputs (publications), which has given rise to the new field of bibliometrics, allows one to go beyond inputs, in that the researchers’ commitment is to discover laws (Price’s law of exponential development,20 Lotka’s law21), to construct indicators (the impact factor22), and to analyze scientific networks.23
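As a rough illustration of the kind of indicator-building referred to here, the sketch below computes two classic bibliometric quantities from an invented toy dataset: a Lotka-style author-productivity distribution (Lotka’s law predicts that the number of authors with n papers falls off roughly as 1/n²) and a Garfield-type journal impact factor (citations received in a given year by items published in the two preceding years, divided by the number of those items). The data are made up for illustration, and the formulas are the standard textbook versions rather than reconstructions of any specific study cited in the notes.

```python
from collections import Counter

# Invented toy data: number of papers per author in a small specialty.
papers_per_author = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 4, 5]

# Lotka-style productivity distribution: how many authors wrote n papers.
# Lotka's law predicts roughly (authors with 1 paper) / n**2 authors with n papers.
distribution = Counter(papers_per_author)
singletons = distribution[1]
for n in sorted(distribution):
    print(f"authors with {n} paper(s): {distribution[n]:2d} "
          f"(Lotka prediction ~ {singletons / n**2:.1f})")

# Garfield-type impact factor for year Y: citations received in Y by items
# published in Y-1 and Y-2, divided by the number of those citable items.
items_published = {2001: 40, 2002: 50}      # citable items, by publication year
citations_in_2003 = {2001: 40, 2002: 80}    # citations received in 2003, by cited year
impact_factor = sum(citations_in_2003.values()) / sum(items_published.values())
print(f"2003 impact factor: {impact_factor:.2f}")
```

Price’s law of exponential growth, also mentioned above, concerns the doubling of the scientific literature over time; it would be fitted to a time series rather than computed from a single cross-section, so it is left out of this sketch.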
16 D. D. S. Price (1963), Little Science, Big Science, New York: Columbia University Press; D. D. S. Price (1961), Science since Babylon, New Haven: Yale University Press. 17 B. Godin (2002), Outlines for a History of Science Measurement, op. cit. 18 On the distinction between data and statistics, see: G. Holton (1978), “Can Science Be Measured?,” in Y. Elkana et al., Towards a Metric of Science: The Advent of Science Indicators, New York: Wiley & Sons, pp. 52–53; G. N. Gilbert and S. Woolgar (1974), “The Quantitative Study of Science: An Examination of the Literature,” Science Studies, 4, pp. 279–294. 19 K. Walker (1988), Indicators of Canadian Research Output (1984), Ottawa: Statistics Canada; J. B. MacAulay (1985), Un indicateur de l’excellence de la recherche scientifique au Canada, Ottawa: Statistique Canada. 20 D. D. S. Price (1956), “The Exponential Curve of Science,” Discovery, 17, pp. 240–243; D. D. S. Price (1951), “Quantitative Measures of the Development of Science,” Archives internationales d’histoire des sciences, 5, pp. 85–93. 21 A. J. Lotka (1926), “The Frequency Distribution of Scientific Productivity,” Journal of the Washington Academy of Sciences, 16 (12), pp. 317–323. 22 E. Garfield (1972), “Citation Analysis as a Tool in Journal Evaluation,” Science, 178, pp. 471–479. 23 H. Small and B. C. Griffith (1974), “The Structure of Scientific Literature: Identifying and Graphing Specialties,” Science Studies, 4, pp. 339–365.
Clearing houses serve as a bridge between the two major types of producers. Using information from various sources, their goal is to draw up a complete cartography of S&T by way of the publication of compendia or repertoires of statistical indicators. Most of these are published every two years, and are in their fifteenth edition in the case of the NSF,24 the sixth for the French OST,25 and the third in the case of the European Union.26 Notwithstanding the presence of clearing houses, the overall structure of S&T measurement is entirely founded upon a two-fold division. First, there is a conceptual division, whereby the data on S&T are distributed in accordance with the “input/output” model. Second, there is an institutional division, in which each side of this conceptual dichotomy corresponds to a type of participant and methodology: on one side are the national statistics organizations and government departments that refuse to take measures of university outputs into account, for example, and on the other are the university researchers and private firms, which do. One final dichotomy, in itself a reflection of bureaucratic jurisdictions, and one which clearing houses also strive to transcend, deserves mention: that of education versus R&D. As a rule, statistics on education are rarely found alongside statistics on S&T. They are produced either by a separate division within statistical organizations, or by departments distinct from those concerned with S&T. The systematic integration of education and R&D statistics appears mainly in comprehensive documents produced by clearing houses. Over the period 1930–2000, very few national statistical offices invested in measuring S&T using instruments other than the survey. It was usually ministries and “clearing houses” that developed indicators using other sources of data. When the multiple dimensions of S&T are taken into consideration, however, statistical offices would thus seem to rely on a highly-specific, and therefore limited, range of expertise. The OECD acknowledged the limitations of many of the statistics it produced, among them output indicators, and listed them in a number of documents. According to the OECD, the cause of these limitations was the instrument used to produce them. The OECD literature, however, does not contain similar discussions regarding official R&D surveys. Limitations are of course regularly discussed—and the revisions of the Frascati manual are specifically devoted to increasing the accuracy of surveys—but according to the OECD, the main limitation of R&D surveys is the lack of international comparability: countries have different practices that make comparisons difficult. Hence the publication of Sources and Methods, devoted to reporting discrepancies in the data. But the instrument itself—the survey—was always taken for granted and considered as the main, if not the only, reliable source of data on S&T.
24 Science and Engineering Indicators, 2002. 25 Science et Technologie: Indicateurs, 2002. 26 Third European Report on Science and Technology Indicators, 2003.
Three factors help explain the dominant role of the official R&D survey in the measurement of S&T:

● Legitimacy of the state: The legitimacy of the survey as a method of data collection is intimately linked to the legitimacy of the state itself. Government has a relative monopoly on the survey because it is government which produces official national data and which defines the standards. Government therefore imposes its own view of the world upon its users.

● Money: The official survey concentrates on a statistic that is easy to measure, comparable with other government data and readily understood by everyone: money. As Daniel S. Greenberg recently argued: “A one-to-one relationship between money going in and science coming out has never been established. The volume of money, however, is countable, and comprehensible to scientists, politicians, and the public. Understood by all is the necessity of money for the training and well-being of scientists and the nurturing and advance of science.”27

● First-mover advantage: R&D statistics were the first to be systematically developed by governments in the history of S&T measurement. It will therefore take time and resources before other forms of statistics acquire a similar status.
At least three themes might attract the attention of historians interested in documenting countries' experiences, or in pursuing follow-ups to the present work.

First, the link between statistics and policy remains to be properly assessed. I suggested that the relationship was exactly the opposite of that usually offered in economists' and statisticians' rhetoric: it is policy which informs statistics, rather than the reverse. But how did this come about? For which specific rhetorical purposes did governments use statistics in policy documents and white papers on S&T? What was the extent of the influence of OECD statistical and analytical documents on member countries' policies, if any? In addition to official and international statistics, what other kinds of statistics were used, if any, in the implementation, management, and evaluation of government programs?

A second theme for historical study concerns the role of academics in the field of S&T statistics. Scholars are obviously both producers and users of statistics. But how did they use national and international statistical series? Were they more skeptical than policy-makers with regard to the accuracy of the series? As producers, how did scholars participate in OECD exercises? They were invited either as consultants or experts, but what was their message? What specific expertise did they bring to standardization? Above all, how has the field of bibliometrics developed? Bibliometrics, a field entirely "controlled" by academics, is still waiting for its history to be written.28
27 D. S. Greenberg (2001), Science, Money, and Politics: Political Triumph and Ethical Erosion, Chicago: University of Chicago Press, p. 59.
28 P. Wouters (1999), The Citation Culture, PhD Dissertation, University of Amsterdam.
To document the role of scholars in S&T statistics, I suggest that one look specifically at the economic literature. Economists developed several themes that were similar to the official statisticians' program of work, generally preceding the official statisticians, with different degrees of influence between the two groups.

The first theme was productivity. Most academics identify R. Solow and E. Denison as the first individuals to have attempted an integration of S&T into economic growth theories. They forget, however, that one of the first such analyses was conducted, although very imperfectly, by R. H. Ewell from the NSF, with preliminary data taken from the first industrial R&D survey in the United States.29 Economists' productivity models exercised considerable influence on the OECD's work, above all in the last ten years, but they never led to the development of any valid indicator linking R&D and productivity.

Second, in the literature on technology and international trade, economists generally ignored the terms of the public debate on technological gaps as well as the pioneering work of bureaucrats like M. Boretsky from the US Department of Commerce.30 Economists discussed the problem using their own models, but debated the existence of the problem just as much as policy-makers did.31 Similarly, economists developed their own understanding of how to measure the impact (or outcomes) of S&T: the social rate of return on R&D investments.32 Official statisticians, however, never really developed or used these kinds of measures.

A third and final theme of study for historians of S&T statistics could be the use firms made of the statistics. It is clear that national statistics were of little help to firms in the management of their research laboratories. But industries developed several other quantitative tools to that end. An important literature, coming mainly from management science, discussed these methods in the 1970s. In general, it concluded that formulas "give no more insight than does mature judgment by management."33 Several large firms nevertheless developed accounting systems, if not to select among projects, at least to track R&D expenditures. How did these systems appear and develop? What use was made of them? What has been the influence of the official statisticians' (often compulsory) questionnaire on accounting practices?34

29 R. H. Ewell (1955), Role of Research in Economic Growth, op. cit.
30 See for example: P. Krugman (1995), Technological Change in International Trade, in P. Stoneman, Handbook of the Economics of Innovation and Technological Change, Oxford: Blackwell, pp. 342–365; G. Dosi and L. Soete (1988), Technical Change and International Trade, in G. Dosi et al., Technical Change and Economic Theory, London: Pinter, pp. 401–431.
31 For an example of the state of the debate in the 1980s, see the special issue of Research Policy in honor of Y. Fabian, particularly: P. Patel and K. Pavitt (1987), Is Western Europe Losing the Technological Race?, Research Policy, 16, pp. 59–85.
32 Z. Griliches (1958), Research Costs and Social Returns: Hybrid Corn and Related Innovations, Journal of Political Economy, 66 (5), pp. 419–431; E. Mansfield, J. Rapoport, A. Romeo, S. Wagner, and G. Beardsley (1977), Social and Private Rates of Return from Industrial Innovations, Quarterly Journal of Economics, May, pp. 221–240.
33 D. L. Meadows (1968), Estimate Accuracy and Project Selection Models in Industrial Research, Industrial Management Review, 9, p. 105. See also: W. E. Souder (1972), Comparative Analysis of R&D Investment Models, AIIE Transactions, 4 (1), pp. 57–64; T. E. Clarke (1974), Decision-Making in Technologically Based Organizations: A Literature Survey of Present Practice, IEEE Transactions on Engineering Management, EM-21 (1), pp. 9–23.
Official S&T statistics occupy a special place in the whole series of statistics produced by the state: they are neither social nor economic statistics, and they have not, up to now, been satisfactorily linked with either field; they are not mandatory or rule-oriented, but are usually produced for understanding S&T and used rhetorically by governments; they are intimately meshed with public issues, and were very often located within a government department and produced under the guidance of policy-makers; and they became standardized at the international level very early on. By themselves, these characteristics suffice to call for further studies on S&T statistics.

But there is another reason: S&T statistics are one of the most recent types of statistics to have appeared in the history of statistics. Social statistics, which are over 200 years old, are quite well studied in the literature. Economic statistics, developed mainly in the twentieth century, are produced and used almost daily by governments, financial institutions, and the newspapers. S&T statistics are of more recent origin, but spread worldwide very early on and quite rapidly. This is because S&T has been said to be, since the 1950s at least, at the very heart of economic progress. Governments actually count on S&T for the well-being of the nation, and one way to convince their citizens of this idea and to make their efforts visible is to count S&T resources and regularly publish the statistics, thereby contributing to the legitimacy both of S&T and of the statistics themselves.
34 K. Robson (1994), Connecting Science to the Economic: Accounting Calculation and the Visibility of R&D, Science in Context, 7 (3), pp. 497–514.
Appendices
Appendix 1: major OEEC/OECD science policy documents

● International Cooperation in Scientific and Technical Research (1960)
● Science and the Policies of Government (1963)
● Science, Economic Growth, and Government Policy (1963)
● First Ministerial Meeting (1963). Ministers Talk About Science (1965)
● Second Ministerial Meeting (1966)
  1 Fundamental Research and the Policies of Governments
  2 Government and the Allocation of Resources to Science
  3 Government and Technical Innovation
  4 The Social Sciences and the Politics of Governments.
● Third Ministerial Meeting (1968). Gaps in Technology in Member Countries (1970)
● Science, Growth, and Society (1971)
● The Research System (1974)
● Technical Change and Economic Policy (1980)
● Science and Technology Policy for the 1980s (1981)
● New Technologies in the 1990s: a Socioeconomic Strategy (1988)
● Science, Technology, and Industry Outlook (1988—biennial)
● Choosing Priorities in Science and Technology (1991)
● Technology in a Changing World (1991)
● Technology, Productivity, and Job Creation: the OECD Job Strategy (1996)
● Managing National Innovation Systems (1999)
● A New Economy? The Changing Role of Innovation and Information Technology in Growth (2000).
Appendix 2: early experiments in official measurement of R&D (first editions)

[Table: year of the first edition of each organization's official R&D measurement, by sector covered (Industry, Government, University, Others, All).]

United States: National Research Council; Works Progress Administration; National Resources Committee; Bush (Bowman report); Kilgore; OSRD; President's Scientific Research Board; Bureau of Budget; Department of Defense; Bureau of Labor Statistics; National Science Foundation
Canada: National Research Council; Department of Reconstruction; Dominion Bureau of Statistics
United Kingdom: ACSP; DSIR

Notes
a Conducted by the National Research Council.
b Includes data on universities.
c Harvard Business School.
d Ibid.
e On the work conditions of scientists.
f On the salaries of scientists.
g Including the non-profit sector.
Appendix 3: early directories on S&T (first editions)

[Table: year of the first edition of each directory, by coverage (Personnel) and by sector (Industry, Government, University, Others).]

United States: American Men of Science; National Research Council; NRPB (Roster); Engineering College Research Council; National Science Foundation
Canada: National Research Council
United Kingdom: Association of British Workers; Royal Society

Notes
a Doctorates.
b Fellowships.
c Societies.
d Independent Commercial Laboratories.
e The roster is transferred to the NSF.
f Appeared in the US NRC repertory (1927).
Appendix 4: early NSF statistical publications (1950–1960)

Totals for the economy
1 Reviews of Data on R&D: Expenditures for R&D in the United States 1953, NSF 56-28.
2 Reviews of Data on R&D: Funds for R&D in the United States, 1953–1959, NSF 59-65.

Federal government
1 Funds for Scientific Activities in the Federal Government 1953–1954, NSF 58-14.
2 Scientific Manpower in the Federal Government—1954, 1957.
3 Federal Funds for Science (Federal R&D Budget), 1952–1959.

Industry
1 R&D by Nonprofit Research Institutes and Commercial Laboratories, 1953.
2 Research by Cooperative Organizations: A Survey of Scientific Research by Trade Associations, Professional, and Technical Societies, and Other Cooperative Groups, 1953.
3 Science and Engineering in American Industry, 1956.

Nonprofit institutions
1 Scientific Research Expenditures by Large Private Foundations, 1956.
2 R&D by Nonprofit Research Institutes and Commercial Laboratories—1955, 1956.
3 Research Expenditures of Foundations and Other Nonprofit Institutions—1953–1954, 1957.

Colleges and universities
1 Scientific R&D in Colleges and Universities—1953–1954, 1959.
Appendix 5: OEEC/OECD committees

Manpower (1948)
Scientific and Technical Information (WP3) (1949)
Scientific and Technical Matters (1951)
Productivity and Applied Research Committee (1952)
European Productivity Agency (1953)
Applied Research (1954)
Shortages of Highly Qualified and Technical Manpower (WP25) (1957)
Office of Scientific and Technical Personnel (OSTP) (1958)
Scientific and Technical Personnel (STP) (1961)
Scientific Research (1961)
Science Policy (1966)
Research Cooperation (1966)
Education (1970)
Science and Technology Policy (1970)
Appendix 6: DSTI seminars, workshops, and conferences on S&T statistics

Workshops and conferences
1 R&D Deflators (1977)
2 Human S&T Resources
  Aging of Scientific and Technical Personnel (1977)
  Measurement of Stocks of Scientists and Technical Personnel (1981)
  Assessing the Availability of and Need for Research Manpower (1988)
  Measurement of S&T Human Resources (1992)
  Measurement of S&T Human Resources (1993)
  Science and Technology Labour Markets (1999)
Output Indicators (1978, 1979, 1980)
Technological Balance of Payments (1981, 1987)
3 Innovation and patents
  Patents, Invention, and Innovation (1982)
  Innovation Statistics (1986)
  Joint EC/OECD Seminar on Innovation Surveys (1993)
  Innovation, Patents, and Technological Strategies (1994)
  Joint EC/OECD Seminar on Innovation Surveys (1999)
4 High technology trade
  Technology Indicators and the Measurement of Performance in Industrial Trade (1983)
  High-Technology Industries and Products Indicators (1993)
Higher Education (1985)
Development of Indicators for the TEP Program (1990)
5 Non-member countries
  S&T Indicators for Non-member countries (1991)
  S&T Indicators in Central and European Countries (1993)
  Application of OECD Methodologies in PIT Countries and the Russian Federation (1995)
  Implementation of OECD Methodologies for R&D/S&T Statistics in Central and Eastern European Countries (1997)
New S&T Indicators for Knowledge-based Economy (1996, 1998)
Use of S&T Indicators for Decision-making and Priority Setting (1997)
Biotechnology Statistics (2000, 2001, 2002)
Health-related R&D (2000)

Other workshops where NESTI was involved
1 Intangible Investment (1992)
2 Economics of the Information Society (1995–97)
3 S&T Labour Markets (1999)
4 Mobility of Highly Qualified Manpower (2001)
Appendix 7: DSA/DSTI publications on S&T statistics

Analytical reports
1967 The Overall Level and Structure of R&D Efforts in OECD Member Countries.
1971 R&D in OECD Member Countries: Trends and Objectives.
1975 Patterns of Resources Devoted to R&D in the OECD Area, 1963–1971.
1975 Changing Priorities for Government R&D: An Experimental Study of Trends in the Objectives of Government R&D Funding in 12 OECD Member Countries, 1961–1972.
1979 Trends in Industrial R&D in Selected OECD Countries, 1967–1975.
1979 Trends in R&D in the Higher Education Sector in OECD Member Countries Since 1965 and Their Impact on National Basic Research Efforts.
1984 Science and Technology Indicators—1.
1986 Science and Technology Indicators—2.
1989 Science and Technology Indicators—3.

Statistical series
1 International Survey of the Resources Devoted to R&D by OECD Member Countries (1967–1983; biennial)
  1967–1973 Four publications for each survey (one per sector and one general)
  1975–1983 Fascicles by country (+ International Volume for one year only)
2 "Recent Results" and "Basic Statistical Series" (1980–1983). The two documents would lead to the next two publications:
3 Main Science and Technology Indicators (1988–present: twice yearly)
4 Basic Science and Technology Statistics (1991, 1997, 2000, 2001)
5 Research and Development Expenditure in Industry (1995, 1996, 1997, 1999, 2001)
6 Science, Technology, and Industry Scoreboard of Indicators (1995, 1997, 1999, 2001)

Methodological manuals (First editions)
1963 The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Development (Frascati manual).
1989 The Measurement of Scientific and Technical Activities: Proposed Standard Practice for Surveys of Research and Experimental Development (Supplement to the Frascati manual).
1990 The Measurement of Scientific and Technical Activities: Proposed Standard Practice for the Collection and Interpretation of Data on the Technological Balance of Payments.
1992 Proposed Guidelines for Collecting and Interpreting Technological Innovation Data (Oslo Manual).
1994 The Measurement of Scientific and Technical Activities: Data on Patents and Their Utilization as Science and Technology Indicators.
1995 Manual on the Measurement of Human Resources in Science and Technology, Paris.

Periodicals
Science Resources Newsletter (published between 1976 and 1988).
STI Review (1986–2002).
Appendix 8: mandates of the ad hoc review group

First review1
(i) To assess the needs and priorities of member countries and of the OECD itself for R&D statistics.
(ii) To assess the importance of R&D statistics for the 1973 program of the Committee for Scientific and Technological Policy.
(iii) To assess current methods and operational practices of the Secretariat in the field of R&D statistics.
(iv) To establish the precise effects of the proposed cuts on the Secretariat's capacity to meet the needs identified earlier.
(v) To examine the relevant efforts of other international organizations in order to avoid unnecessary duplication, and particularly to encourage the sharing of common reporting responsibilities.

Second review2
(vi) To identify the actual and potential users of OECD R&D statistics.
(vii) To assess their needs for internationally-comparable R&D data and other S&T statistics.
(viii) To test the adequacy and timeliness of current information.
(ix) To establish a list of priorities for future work on R&D statistics at OECD, taking into account the capacity of producers to supply the necessary statistics.
(x) To assess the current methods and operational practices of the Secretariat in the field of R&D statistics by examining the relevant practices of member countries.
(xi) To examine the relationship of the STI Unit with other statistical services within the OECD as well as with other international bodies, including the Commission of the European Communities, in order to ensure the best possible linkages.

Third review3
(xii) To identify the needs of actual and potential users of OECD S&T indicators for internationally-comparable data.
(xiii) To assess the comprehensiveness and timeliness of existing indicators, the adequacy of their presentation and dissemination, and the desirability of new indicators.
(xiv) To advise the CSTP on priorities for future work while taking into account current and suggested future projects, resource constraints, and other related criteria.
1 OECD (1972), Summary Record of the Second Session of the CSTP, STP/M (72) 2. 2 OECD (1976), Summary Record of the 13th Session of the CSTP, STP/M (76) 3. 3 OECD (1984), Summary Record of the 37th Session of the CSTP, STP/M (84).
Appendix 9: some definitions of research

US National Resources Committee (1938)
Investigations in both the natural and social sciences, and their applications, including the collection, compilation, and analysis of statistical, mapping, and other data that will probably result in new knowledge of wider usefulness than an aid in one administrative decision applying to a single case (p. 62).

US National Research Council (1941)
Organized and systematic search for new scientific facts and principles which may be applicable to the creation of new wealth, and presupposes the employment of men educated in the various scientific disciplines (p. 6).

Canadian Department of Reconstruction and Supply (1947)
Purposeful seeking of knowledge or new ways of applying knowledge, through careful consideration, experimentation, and study (p. 11).

Federation of British Industries (1947)
Organized experimental investigations into materials, processes and products, and scientific principles in connection to industry, and also development work, but excluding purely routine testing (p. 4).

Harvard Business School (1953)
Activities carried on by persons trained, either formally or by experience, in the disciplines and techniques of the physical sciences including related engineering, and the biological sciences including medicine but excluding psychology, if the purpose of such activity is to do one or more of the following things: (1) pursue a planned search for new knowledge, whether or not the search has reference to a specific application; (2) apply existing knowledge to problems involved in the creation of a new product or process, including work required to evaluate possible uses; (3) apply existing knowledge to problems involved in the improvement of a present product or process (p. 92).

NSF (1953)
Systematic, intensive study directed toward fuller knowledge of the subject studied and the systematic use of that knowledge for the production of useful materials, systems, methods, or processes (p. 3).
OECD (1970)
Creative work undertaken on a systematic basis to increase the stock of scientific and technical knowledge4 and to use this stock of knowledge to devise new applications (p. 31).

UNESCO (1978)
Any systematic and creative work undertaken in order to increase the stock of knowledge, including knowledge of man, culture, and society, and the use of this knowledge to devise new applications.
4 “including knowledge of man, culture and society” was added in 1976.
Appendix 10: activities to be excluded from R&D (Frascati manual)

1963
1 Related activities
2 Non-scientific activities.
1970
1 Related activities
2 industrial production and distribution of goods and services and the various allied technical services.
1976
1 activities using the disciplines of the social sciences such as market studies.
1981
1 education and training
2 other related scientific and technological activities
3 other industrial activities.
1993
1 R&D administration and indirect support activities.

Related activities
1963
1 scientific information
2 training and education
3 data collection
4 testing and standardization.
1970
1 scientific education
2 scientific and technical information
3 general purpose data collection
4 testing and standardization
5 feasibility studies for engineering projects
6 specialized medical care
7 patent and license work.
1976
1 policy-related studies.
1981
1 scientific and technical information services
2 general purpose data collection
3 testing and standardization
4 feasibility studies
5 specialized medical care
6 patent and license work
7 policy-related studies.
1993
1 routine software development.

Non-scientific activities
1963
1 legal and administrative work for patents
2 routine testing and analysis
3 other technical services.

Industrial production
1963
1 prototypes and trial production
2 design and drawing
3 pilot plant.
1970
1 prototypes
2 pilot plant
3 trial production, trouble-shooting, and engineering follow-through.
1976
1 prototypes
2 pilot plant
3 trial production, trouble-shooting, and engineering follow-through.
1981
1 innovation
2 production and related technical services (see specific cases).
1993
1 innovation
2 production and related technical services (see specific cases).

Innovation
1981
1 R&D
2 new product marketing
3 patent work
4 financial and organizational changes
5 final product or design engineering
6 tooling and industrial engineering
7 manufacturing start-up
8 demonstration.
1993
1 R&D
2 tooling-up and industrial engineering
3 manufacturing start-up and preproduction development
4 marketing for new products
5 acquisition of disembodied technology
6 acquisition of embodied technology
7 design
8 demonstration.

Specific cases
1981
1 prototypes
2 pilot plants
3 very costly pilot plants and prototypes
4 trial production
5 trouble-shooting
6 feed-back R&D.
1993
1 industrial design
2 tooling up and industrial engineering.

Administration and other supporting activities
1993
1 purely R&D financing activities
2 indirect supporting activities.
Appendix 11: OECD/ICCP publications

1 Transborder Data Flows and the Protection of Privacy, 1979
2 The Usage of International Data Networks in Europe, 1979
3 Policy Implications of Data Network Developments in the OECD Area, 1980
4 Handbook of Information, Computer and Communications Activities of Major International Organisations, 1980
5 Microelectronics Productivity and Employment, 1981
6 Information Activities, Electronics and Telecommunications Technologies, 1981. Volume 1: Impact on Employment, Growth and Trade; Volume 2: Experts' Reports ("Background Papers" Series)
7 Microelectronics, Robotics and Jobs, 1982
8 An Exploration of Legal Issues in Information and Communication Technologies, 1983
9 Software: An Emerging Industry, 1985
10 Computer-Related Crime, Analysis of Legal Policy, 1986
11 Trends in Information Economy, 1986
12 Information Technology and Economic Prospects, 1987
13 Trends in Change in Telecommunications Policy, 1987
14 The Telecommunications Industry: The Challenges of Structural Change, 1988
15 Satellites and Fibre Optics—Competition Complementarity, 1988
16 New Telecommunications Services—Videotex Development Strategies, 1989
17 The Internationalization of Software and Computer Services, 1989
18 Telecommunication Network-Based Services: Policy Implications, 1989
19 Information Technology and New Growth Opportunities, 1989
20 Major R&D Programmes for Information Technology, 1989
21 Trade in Information, Computers and Communications Services, 1990
22 Performance Indicators for Public Telecommunications Operators, 1990
23 Universal Service and Rate Restructuring in Telecommunications, 1991
24 Telecommunications Equipment: Changing Materials and Trade Structures, 1991
25 Information Technology Standards: The Economic Dimension, 1991
26 Software Engineering: The Policy Challenge, 1991
27 Telecommunications Type Approval: Policies and Procedures for Material Access, 1992
28 Convergence Between Communications Technologies: Case Studies for North America and Western Europe, 1992
29 Telecommunications and Broadcasting: Convergence or Collision? 1992
30 Information Networks and New Technologies: Opportunities and Policy Implications for the 1990s, 1992
31 Usage Indicators: A New Foundation for Information Technology Policies, 1993
32 Economy and Trade Issues in the Computerized Database Market, 1993
33 The Economics of Radio Frequency Allocation, 1993
34 International Telecommunications Tariffs: Charging Practices and Procedures, 1994
35 Telecommunications Infrastructure: The Benefits of Competition, 1995
36 International Telecommunications Pricing Practices and Principles: A Progress Review, 1995
37 Price Caps for Telecommunications: Policies and Experiences, 1995
38 Universal Service Obligations in a Competitive Telecommunications Environment, 1995
39 Mobile Cellular Communication: Pricing Strategies and Competition, 1996.
Appendix 12: UNESCO conferences and meetings on S&T statistics

Meetings of experts
1 June 15–17, 1966
2 May 6–8, 1968
3 November 1969
4 September 1970
5 November 17–20, 1971.

Joint UNESCO/EEC meetings on the development of S&T statistics
1 June 2–6, 1969
2 November 27–December 1, 1972
3 January 19–23, 1976
4 May 4–7, 1981.

Other meetings and workshops
1 Quantification of S&T Related to Development, 1973
2 Indicators of S&T Development, 1974
3 Higher Education Sector, 1974
4 Problems at the National Level in Science Statistics, 1975
5 Statistics of Science and Technology, 1976
6 International Technology Flows, 1976
7 Standardization, 1977
8 International Technology Flows, 1977
9 Draft Recommendation, June 1978
10 Development of S&T Statistics, 19805
11 Education and Training, 1982
12 STID, October 1–3, 1985
13 Lifelong Training, 1989
14 Improvements of Coverage, Reliability, Concepts, Definitions and Classifications in the Field of Science and Technology Statistics, 1994.
5 Mainly devoted to technology transfer.
Appendix 13: UNESCO documents on S&T statistics

1960 Requirements and Resources of Scientific and Technical Personnel in Ten Asian Countries, ST/S/6
1968 A Provisional Guide to the Collection of Science Statistics, COM/MD/3
1969 The Measurement of Scientific and Technical Activities, ST/S/15
1970 World Summary of Statistics on Science and Technology, ST/S/17
1970 Measurement of Output of Research and Experimental Development, ST/S/16
1970 Manual for Surveying National Scientific and Technological Potential, NS/SPS/15
1971 The Measurement of Scientific Activities in the Social Sciences and the Humanities, CSR-S-1
1974 The Quantitative Measurement of Scientific and Technological Activities Related to Research and Experimental Development, CSR-S-2
1976 R&D Activities in International Organizations, CSR-S-3
1976 Statistics on Science and Technology in Latin America: Experience with UNESCO Pilot Projects 1972–1974
1977 The Statistical Measurement of Scientific and Technological Activities Related to Research and Experimental Development: Feasibility Study, CSR-S-4
1977 Guide to the Collection of Statistics on Science and Technology (second edition), ST-77/WS/4
1978 Development in Human and Financial Resources for Science and Technology, CSR-S-5
1978 Recommendation Concerning the International Standardization of Statistics on Science and Technology
1979 Statistics on Research and Experimental Development in the European and North American Region, CSR-S-6
1979 Estimation of Human and Financial Resources Devoted to R&D at the World and Regional Level, CSR-S-7
1980 National Statistics Systems for Collection of Data on Scientific and Technological Activities in the Countries of Latin America, Part I: Venezuela, Colombia, Mexico and Cuba, ST-80/WS/18
1980 National Statistics Systems for Collection of Data on Scientific and Technological Activities in the Countries of Latin America, Part II: Brazil and Peru, ST-80/WS/29
1980 Statistics on Science and Technology, CSR-S-8
1980 Participation of Women in R&D: A Statistical Study, CSR-S-9
1980 Statistics on Science and Technology, CSR-S-10
1980 Manual for Statistics on Scientific and Technological Activities (provisional), ST-80/WS/38
1981 National Statistics Systems for Collection of Data on Scientific and Technological Activities in the Countries of Latin America, Part III: Uruguay, Argentina and Chile, ST-81/WS/14
1981 Statistics on Science and Technology, CSR-S-11
1982 Human and Financial Resources for Research and Experimental Development in the Productive Sector, CSR-S-12
1982 Trends in Human and Financial Resources for Research and Experimental Development, CSR-S-13
1982 Statistics on Science and Technology, CSR-S-14
1982 Proposal for a Methodology of Data Collection on Scientific and Technological Education and Training at the Third Level, CSR-S-15
1983 Human and Financial Resources for Research and Experimental Development in Agriculture, CSR-S-16
1984 Estimated World Resources for Research and Experimental Development: 1970–1980, CSR-S-17
1984 Manual for Statistics on Scientific and Technological Activities, ST-84/WS/12
1984 Guide to Statistics on Science and Technology (third edition), ST-84/WS/19
1984 Guide to Statistics on Scientific and Technological Information and Documentation (STID), ST-84/WS/18
1984 Manual on the National Budgeting of Scientific and Technological Activities, Science and Policy Studies and Documents no. 48
1985 Estimate of Potential Qualified Graduates from Higher Education, CSR-S-19 (ST-85/WS/16)
1986 Science and Technology for Development: Scandinavian Efforts to Foster Development Research and Transfer Resources for Research and Experimental Development to Developing Countries, CSR-S-20 (ST-86/WS/7)
1986 Integrated Approach to Indicators for Science and Technology, CSR-S-21 (ST-86/WS/8)
1986 Human and Financial Resources for Research and Experimental Development in the Medical Sciences, CSR-S-22
1988 Financial Resources for Fundamental Research, CSR-S-23 (ST-88/WS/4)
1988 Human and Financial Resources for Research and Experimental Development in the Higher Education Sector, CSR-S-24 (ST-89/WS/1)
1990 Manual For Surveying National Scientific and Technological Potential (Revised Edition)
1991 Estimation of World Resources Devoted to Research and Experimental Development: 1980 and 1985, CSR-S-25 (ST-90/WS/9).
Appendix 14: NSF committee's choice of indicators (1971)

Each indicator is followed by its score and its feasibility rating (see Notes below).

A. Scientific output measure
1. Number of papers in top quality, refereed journals (50; N1)
6. Utility of knowledge (45; D, N1)
30. Number of referenced articles; citations (38; N1)
32. Number of refereed publications originating from particular research grants or projects and estimated cost per paper (35; N2)
34. Longitudinal number of patents/population 22–64 years (35; D)

B. Activity measures
2. Ratio of basic research funds to total investment in R&D (50; D)
3. Federal support of total research by field of science (50; D)
4. Ratio of number of scientific research project support proposals warranting support to number of grants awarded by field of science (NSF and NIH only) (50; D)
7. Ratio of applied research funds to total R&D (45; D)
8. Ratio of development funds to total R&D (45; D)
9. Ratio of Federal R&D funds to total Federal expenditures for such functions as health, transportation, defense, etc. (45; D)
10. Federal basic research dollars by field (45; D)
11. Total funding of academic R&D (expenditures) and Federal funding of academic science (obligations) (45; D)
21. Basic research, applied research, development, and total R&D dollars by source and performer (40; D)
22. Split of Federal research support between academic young and senior investigators (40; D)
23. Industrial R&D for R&D performing companies as a percent of sales dollars (40; D)
24. R&D dollars in industry by type of industry (40; D)
33. Federal academic science support by agency (35; D)
35. Non-profit R&D, by source (35; D)
43. Geographic distribution of R&D (30; D)
44. Industrial R&D funding, by source (30; D)

C. Science education measures
12. Percent of freshmen selecting science careers (45; D)
13. Distribution of new bachelors, masters, and doctorates by field (45; D)
14. Number of science and engineering degrees as a percent of total degrees (45; D)
15. Stipend support of full-time graduate students by: field, type of support (45; D)
31. Ratio of percentage of science and engineering freshmen enrolments and doctorates per geographic origin of students to percentage of total population of that region (38; D)
36. Enrolments in science and math courses in public high schools (35; D)
37. Postdoctoral training plans of doctorates by field (35; D)
38. Ratio of science faculty to degrees and to graduate enrolments, by field of science (35; D)
42. Distribution of Freshmen science and engineering probable majors by H. S. grades, class standing and test scores (31; D)

D. Attitudes toward and interest in science
25. Prestige ratings of scientific occupations vs. ratings of other fields of endeavor according to public opinion polls (40; N1)
26. Poll of views about science on part of students (40; N1)
39. Poll of views about science on part of public at large (35; N1)

E. Manpower measures
16. Relative and absolute employment of scientists and engineers by sector, degree, and field of science (45; D)
27. Percentage of scientists and engineers unemployed by degree and field of science compared with equivalent ratios for other areas of professional employment (40; N1)

F. Extent of new thrusts
5. Major new frontiers of science opened up during a specific year (50; N2)
17. Major "frontier" facilities in various areas of science which are feasible and are not being constructed. Comparison with a similar list developed for the rest of the world (45; N2)

G. International
18. Ratio of US scientific publications to world total (45; N2)
19. Relationship of US R&D/GNP per capita among various nations (45; D)
20. R&D scientists and engineers per 10,000 population in different countries (45; D)
28. R&D/GNP in different countries (40; D)
29. Scientific and engineering personnel per 10,000 population in different countries (40; D)
40. Nobel (and other) prizes per capita won by US each year compared with other countries (35; N2)

Notes
D Basic data in hand.
N1 New data to be developed (with comparative ease).
N2 New data to be developed (with comparative difficulty).
Indicators considered but not recommended

Federal intramural R&D funding, by agency.
Percent of science drop-outs during college career.
Number of people taking science courses where there are no such requirements.
Nationality of invited speakers at large international meetings.
R&D expenditures per capita for different countries.
Total numbers of papers produced by US scientists per year.
Geographic distribution of academic science dollars for various groupings of institutions (Magnitude of Federal academic science dollars, number of science and engineering bachelors, number of science PhDs, etc.).
Increase in number of scientific category jobs in the Department of Labor's Dictionary of Occupational Titles.
Relationship of US scientific papers to world papers as compared with US GNP against world's GNP.
Number of technicians/scientists and engineers in different countries.
Longitudinal studies of the publication history of a sample of PhDs in a variety of fields from a variety of institutions.
Number of people who choose to visit science exhibitions or natural science museums.
Ratio of number of federally supported articles to federal research funds allocated, by field.
Types of instrumentation and techniques cited in the papers.
Degrees and graduate enrolments by average GRE score of Masters and PhDs.
Projections of supply and utilization for all scientists and engineers as well as doctorates by field and activity.
Projections by degree and field of science.
Balance of payments over time.
Growth in cubic footage in university, government, and private research laboratories.
Percent of university budgets allocated to scientific departments vs. other departments.
Annual average percentage of front-page stories in the New York Times that deal with scientific subjects.
Salaries commanded by those in "scientific" job categories vs. those in nonscience categories.
Membership in professional societies as a percent of total working population.
Percent of those listed in Who's Who who have scientific backgrounds.
Attendance at scientific symposia, etc.
Subscriptions (per capita) to science magazines and science book purchases.
List of facilities in various areas of science which are feasible and are not being constructed. Comparison with a similar list developed for the rest of the world.
Percentage of utilization by facility as compared to maximum possible utilization in terms of shifts of operation, number of experiments being performed, etc.
Appendix 15: coverage of activities in early R&D surveys
Government R&D National Resources Committee (1938) Kilgore (1945)
Canadian DRS (1947)
NSF (1953)
Dominion Bureau (1960)
Inclusions
Exclusions
Collection of data
Routine work
Pilot plant
Social sciences Routine Exploration Design Routine work
Surveys Analysis Administration Dissemination Social science Indirect costs
Planning and administration Capital Data collection Scientific information Scholarship and fellowship
Industrial R&D FBI(1947) Harvard Business School(1953)
Scale, Pilot plant Design Information
Dominion Bureau(1955)
NSF(1956)
DSIR(1958)
Routine work Mapping and surveys Exploration Dissemination Training
Pilot plant Design Laboratory scale Prototypes Design Prototypes
Routine work Testing Exploration Market research Economic research Legal work Market research Routine Patent work Advertising Social science Exploration Market research Economic studies Legal work Technical services Routine work Tooling up Market research
Appendix 16: differences in definition and coverage according to Freeman and Young (1965)

Expenditures
(a) Not all countries included the social sciences in the survey (United Kingdom), and when they did it did not cover the enterprise sector.
(b) Some countries included depreciation of capital expenditures (United States, Germany).
(c) Some countries (France) included contributions to international research organizations in their national statistics.
(d) Only Norway estimated related scientific activities in order to exclude them from R&D.
(e) The French derived R&D expenditures funded by government from the source of funds rather than from the performer.
(f) High rate of non-responses from industry (France).

Manpower
(a) Varying definitions of scientists and engineers (some are based on qualifications, others on occupations).
(b) Only a few countries (United States) estimated full-time equivalent personnel for the higher education sector.
(c) A more liberal definition of support personnel in the United States than in Europe.

Others
(a) Varying definition of sectors according to countries.
(b) Difficulties in estimating the basic/applied dimensions of research.
(c) Classification of enterprises by industry rather than by product.
(d) Exchange rates missing.
Appendix 17: OECD standard footnotes6

(a) Break in series with previous year for which data is available
(b) Secretariat estimate or projection based on national sources
(c) National estimate or projection adjusted, if necessary, by the Secretariat to meet OECD norms
(d) (Note used only for internal OECD data-processing)
(e) National results adjusted by the Secretariat to meet OECD norms
(f) Including R&D in the social sciences and humanities
(g) Excluding the social sciences and humanities
(h) Federal or central government only
(i) Excludes data from the R&D content of general payment to the higher education sector for combined education and research (public GUF)
(j) Excludes most or all capital expenditures
(k) Total intramural R&D expenditures instead of current intramural R&D expenditures
(l) Overestimated or based on overestimated data
(m) Underestimated or based on underestimated data
(n) Included elsewhere
(o) Includes other classes
(p) Provisional
(q) At current exchange rate and not at current purchasing power parity
(r) Including international patent applications
(s) Unrevised breakdown not adding to the revised total
(t) Do not correspond exactly to the OECD recommendations
(u) Includes extramural R&D expenditures
(v) The sum of the breakdown does not add to the total.
6 OECD (2000), Main Science and Technology Indicators (2), Paris.
Appendix 18: GERD and its footnotes (millions of current $)*

[Table: gross domestic expenditure on R&D, 1993–1999, for Australia, Austria, Belgium, Canada, the Czech Republic, Denmark, Finland, France, Germany, Greece, Hungary, Iceland, Ireland, Italy, Japan, Korea, Mexico, the Netherlands, New Zealand, Norway, Poland, Portugal, Spain, Sweden, Switzerland, Turkey, the United Kingdom and the USA, with totals for the OECD, North America, the European Union and the Nordic countries; individual figures carry the OECD standard footnotes.]

* Notes' definitions appear in Appendix 17.
Appendix 19: industries (ISIC)

Agriculture, hunting, and forestry
Mining
Manufacturing
Food, beverages, and tobacco
Food products and beverages
Tobacco products
Textiles, wearing apparel, fur and leather
Textiles
Wearing apparel and fur
Leather products and footwear
Wood, paper, printing, publishing
Wood and cork (not furniture)
Pulp, paper, and paper products
Publishing, printing, and reproduction of recorded media
Coke, petroleum, nuclear fuel, chemicals and products, rubber, and plastics
Coke, refined petroleum products, and nuclear fuel
Chemicals and chemical products
Chemicals and chemical products (less pharmaceuticals)
Pharmaceuticals
Rubber and plastic products
Non-metallic mineral products ("Stone, clay and glass")
Basic Metals
Basic metals, ferrous
Basic metals, non-ferrous
Fabricated metal products (except machinery and equipment)
Machinery equipment, instruments, and transport equipment
Machinery
Office, accounting, and computing machinery
Electrical machinery
Electronic equipment (radio, TV, and communications)
Television, radio and communications equipment
Medical, precision and optical instruments, watches and clocks (instruments)
Motor vehicles
Other transport equipment
Ships
Aerospace
Other transport
Furniture, other manufacturing
Furniture
Other manufacturing
Recycling
Electricity, Gas, and Water Supply (Utilities)
Construction
Service Sector
Wholesale, retail trade, and motor vehicle, etc., repair
Hotels and restaurants
Transport and storage
Communications
Post
Telecommunications
Financial intermediation (including insurance)
Real estate, renting, and business activities
Computer and related activities
Software consultancy
Other computer services
Research and development
Other business activities
Community, social and personal service activities, etc.
Appendix 20: fields of science (FOS)

1. Natural sciences
1.1 Mathematics and computer sciences; mathematics and other allied fields: computer sciences and other allied subjects (software development only; hardware development should be classified with the engineering fields)
1.2 Physical sciences (astronomy and space sciences, physics, other allied subjects)
1.3 Chemical sciences (chemistry, other allied subjects)
1.4 Earth and related environmental sciences (geology, geophysics, mineralogy, physical geography and other geosciences, meteorology and other atmospheric sciences including climatic research, oceanography, vulcanology, palaeoecology, other allied sciences)
1.5 Biological sciences (biology, botany, bacteriology, microbiology, zoology, entomology, genetics, biochemistry, biophysics, other allied sciences, excluding clinical and veterinary sciences)

2. Engineering and technology
2.1 Civil engineering (architecture engineering, building science and engineering, construction engineering, municipal and structural engineering and other allied subjects)
2.2 Electrical engineering, electronics (electrical engineering, electronics, communication engineering and systems, computer engineering (hardware only) and other allied subjects)
2.3 Other engineering sciences (such as chemical, aeronautical and space, mechanical, metallurgical and materials engineering, and their specialized subdivisions; forest products; applied sciences such as geodesy, industry chemistry, etc.; the science and technology of food production; specialized technologies of interdisciplinary fields, e.g.: systems analysis, metallurgy, mining, textile technology, other allied subjects).

3. Medical sciences
3.1 Basic medicine (anatomy, cytology, physiology, genetics, pharmacy, pharmacology, toxicology, immunology and immunohaematology, clinical microbiology, pathology)
3.2 Clinical medicine (anaesthesiology, paediatrics, obstetrics and gynaecology, internal medicine, surgery, dentistry, neurology, psychiatry, radiology, therapeutics, otorhinolaryngology, ophthalmology)
3.3 Health sciences (public health services, social medicine, hygiene, nursing, epidemiology).

4. Agricultural sciences
4.1 Agriculture, forestry, fisheries and allied sciences (agronomy, animal husbandry, fisheries, forestry, horticulture, other allied subjects)
4.2 Veterinary medicine.

5. Social sciences
5.1 Psychology
5.2 Economics
5.3 Educational sciences (education and training and other allied subjects)
5.4 Other social sciences [anthropology (social and cultural) and ethnology, demography, geography (human, economic and social), town and country planning, management, law, linguistics, political sciences, sociology, organization and methods, miscellaneous social sciences and interdisciplinary, methodological and historical S & T activities relating to subjects in this group. Physical anthropology, physical geography and psychophysiology should normally be classified with the natural sciences].

6. Humanities
6.1 History (history, prehistory and history, together with auxiliary historical disciplines such as archaeology, numismatics, palaeography, genealogy, etc.)
6.2 Languages and literature (ancient and modern languages and literatures)
6.3 Other humanities [philosophy (including the history of science and technology), arts, history of art, art criticism, painting, sculpture, musicology, dramatic art excluding artistic "research" of any kind, religion, theology, other fields and subjects pertaining to the humanities, methodological, historical and other S & T activities relating to the subjects in this group].
Appendix 21: socioeconomic objectives (SEO)

OECD categories
1 Development of agriculture, forestry and fishing
2 Promotion of industrial development technology
3 Production and rational use of energy
4 Development of the infrastructure
4.1 Transport and telecommunications
4.2 Urban and rural planning
5 Control and care of the environment
5.1 The prevention of pollution
5.2 Identification and treatment of pollution
6 Health (excluding pollution)
7 Social development and services
8 Exploration and exploitation of Earth and atmosphere
9 General advancement of knowledge
9.1 Advancement of research
9.2 General university funds
10 Civil space
11 Defense
12 Not specified

NABS categories
Agricultural production and technology
Industrial production and technology
Production, distribution, and rational utilization of energy
Infrastructure and general planning of land use
Transport systems
Telecommunication systems
General infrastructure and land planning research, construction and planning of buildings, water supplies, infrastructure R&D
Control of environmental pollution
Protection and improvement of human health
Social structures and relationships
Exploration and exploitation of the Earth
Non-oriented research
Research financed from general university funds
Exploration and exploitation of space
Defense
Other civil research
Appendix 22: taxonomies of research

Taxonomies of R&D
Huxley (1934) background/basic/ad hoc/development
Bernal (1939) pure (and fundamental)/applied
Bush (1945) basic/applied
Bowman (in Bush, 1945) pure/background/applied and development
PSRB (1947) fundamental/background/applied/development
NSF (1953) basic/applied/development
Carter and Williams (1959) basic/background-applied/product-directed/development
OECD (1963) fundamental/applied/development

Other labels used for pure, fundamental, and basic research
Autonomous (Falk, 1973)
Curiosity-driven (Irvine and Martin, 1984)
Exploratory (IRI, 1978)
Free (Waterman, 1965)
Intensive (Weisskopf, 1965)
Long term (Langenberg, 1980)7
Non-mission oriented (NAS, 196?)
Non-oriented (OECD, 1991)8
Non-programmatic (Carey, 19??)
Uncommitted (Conant, in NSF, 1951; Harvard Business School, 1953).

Sub-classes for basic research
Generic (GAO, 1987)
Objective (Office of the Minister for Science, 1961)
Oriented (UNESCO, 1961; OECD, 1970; UK Government, 1985)
Strategic (House of Lords, 1972; 1990; Irvine and Martin, 1984; NSF Task Force, 1989).

Extensions of the concept of basic research
Basic technological research (Stokes, 1997; Branscomb, 1998; DTI/OST, 2000).
7 D. N. Langenberg (1980), Memorandum for Members of the National Science Board, NSB-80-358, Washington. 8 Main Science and Technology Indicators.
Index
Anthony, R. N. 26, 66, 161, 239, 303; see also USA: Harvard Business School Argument from minimizing limitations 162–3, 171, 180, 251, 277 Background research 76, 77, 265, 266 Basic research 28, 54, 137, 153, 199, 227, 242, 262–86, 298–302, 316; Alternatives 277–8, NSF 278–80, OECD 280–2, UK 282–3; First official measurements: Bush 265–6, NSF 270–1, PSRB 267–70; Origin of the taxonomy 264, Bernal 264, Bush 265–7, Huxley 264–5; Problems with the definition 15, 262–3, 272–7 Ben-David, J. 229 Bernal, J. D. 202, 264, 276 Bibliometrics 2, 132–5, 197, 319, 321 Boretsky, M. T. 236, 322 Brooks, H. 276, 291 Bush, V. 24, 67, 70, 112, 203, 241, 263, 265, 266, 268, 285, 316 Canada: Department of Reconstruction and Supply 31, 60, 77; Dominion Bureau of Statistics 30–1, 159, 163; National Research Council 30–1, 67; Statistics Canada 9, 138, 145, 309, 319 Cognard, P. 223, 224 Eastern countries 29, 44, 49, 85, 92, 93, 102 European Commission 42–4, 49, 105, 256, 302; Classification on socioeconomic objective (SEO) 188;
Current statistics 43; First surveys 42; Gaps 42, 238; Innovation 42, 147 European Productivity Agency (EPA) 32, 220–3, 250; Committee of Applied Research (CAR) 33, 223; Office of Scientific and Technical Personnel (OSTP) 33, 248–52, 251; Productivity gaps see Gaps; Working Party 3 (WP3) 220, 249; Working Party 26 (WP26) 222 Fabian, Y. 39, 121 Falk, C. 110, 239, 281, 310 France: Délégation Générale de la Recherche, de la Science et de la Technologie (DGRST) 116, 223, 280; Gaps see Gaps Freeman, C. 5, 32, 85, 94, 95, 115, 118, 134, 164, 198, 208, 232, 239, 252, 287, 292, 294, 295, 297, 299, 302, 306, 313, 315 Fundamental research see Basic research Gaps 37; Personnel 33–4, 249–54; Productivity 6, 116, 219–23; Technology 17, 37, 105, 116–18, 125, 129, 142–3, 150, 198, 199, 211, 218–19, 233, 306, 313, 316, 322, European Commission 42, 238, France 223–5, OECD 225–9, United States 229–37, USSR 34, 42 Gass, J. R. 291, 307 Germany: High technology and Heidelberg Studiengruppe fur Systemsforschung 80; Related Scientific Activities (RSA) and Fraunhofer Institute for Systems and Innovation Research 130 Gerritsen, J. C. 33
Great Britain: Advisory Council on Science Policy (ACSP) 31–2, 220, 246, 248; Association of Scientific Workers (ASW) 31, 202; Department of Scientific and Industrial Research (DSIR) 31, 67, 70, 159; Federation of British Industries (FBI) 32, 64, 140; House of Lords Select Committee on Science and Technology 282, 283
Grégoire, R. 221
Gross Domestic Expenditures on R&D (GERD) 9, 57, 201–2; Country rankings 36, 198, 207, 212; Matrix of flows 185, 205–7; National science budget: Bernal 202–3, Bush 203, National Science Foundation (NSF) 203–4, President’s Scientific Research Board (PSRB) 203; OECD 207–25
High technology 127–31, 236–7
Huxley, J. S. 264, 265, 266, 268, 285
Indicators 105–6, 236; Definition 106–8; Input 39, 40, 48, 57, 111, 114, 120, 135–6, 194; National Science Foundation (NSF) 38, 50, 78, 79, 105, 108–13, 143, 199, 204, 311; OECD 113–27; Output 40, 51, 57, 120, 124, 134, 138, 146, 194, 197, Beginning 38–40, 108, 112, 114, OECD 121–35; Policy-oriented indicators 40–2, 288
Innovation 141–3; Activity approach 14, 87, 138, 140, 141, 143–52; Oslo manual 42, 138, 147–50; Output approach 141–3; Proxies 16, 117, 124, 139–40, 305
King, A. 117, 248, 251
National Science Foundation (NSF) 22, 27, 65, 78, 108, 141, 270; Basic research see Basic research; Indicators 38, 50, 79, 105, 109, 118, 119, 143, 183, 199, 204; Lobbying for funds 29–30; Role in official statistics 28–9, 164, 315
OECD 32; ad hoc review groups 33, 47, 49, 98, 1st group 37–8, 50, 54, 2nd group 38, 49, 113, 122, 166, 3rd group 51, 4th review 39–40; Committee for Science Policy (CSP) 37, 38; Committee for Scientific and Technological Policy (CSTP) 10, 16, 33, 48, 113, 128, 252; Committee on Scientific Research (CSR) 4, 35, 121, 223, 315; Databases 39, 52, 53, 114, 115, 123, 171, 175, 193, Analytical Business Enterprise R&D (ANBERD) 53, 171, Main Science and Technology Indicators (MSTI) 105, 114, 121, 123, 124, 125, 127, 129, 131, 177, 199, Structural Analysis (STAN) 53, 171, 193; Directorate for Science, Technology and Industry (DSTI) 10, 40, 48, 114, 189, 193, 211, 214, 240, 254, 258, 313; Directorate for Scientific Affairs (DSA) 10, 16, 34, 37, 40, 50, 80, 121, 199, 218, 249, 258, 291, 315; Economic Analysis and Statistics Division (EAS) 41, 48; Information, Computer and Communication Policy committee (ICCP) 81; Information Policy Group (IPG) 80, 81; Manuals 4, 16, 115, Bibliometrics 132, Government R&D and socio-economic objectives 175, Higher education 132, 169–70, Human resources (Canberra) 86, 198, 256–7, Innovation (Oslo) 42, 87, 138, 147, Patents 123, R&D (Frascati) 15, 32, 36, 45, Scientific and Technical Information (STI) 80, Technological Balance of Payments (TBP) 125; National Experts on Science and Technology Indicators (NESTI) 40, 47–54, 99, 147, 179, 188, 257, 315; Science and Technology Indicators Unit (STIU) 38, 39, 51, 167, 181, 296; Science Resources Unit (SRU) 34, 37, 50, 121; Science, Technology and Industry Indicators Division (STIID) 40, 41; Technology/Economy Program (TEP) 8, 40, 51, 255
Organization for European Economic Co-Operation (OEEC) 4, 33, 34, 80, 220, 222, 249, 250, 251, 258, 302
Patents 123–5, 133
Pavitt, K. 117, 118, 140, 141, 142, 301
Policy and statistics: Administrative organization 309–11, 318; Economics 34–8, 294; Rational management 293–6, Balancing R&D funding 298–302, Controlling R&D expenses 296–8, 303–4; Rhetoric 28, 305–11
Pure research see Basic research
R&D surveys 68; All sectors: National Science Foundation (NSF) 22, 27, 158, 204, OECD see Gross Domestic Expenditures on R&D, President’s Scientific Research Board (PSRB) 25, UK 32–3; Government R&D 31, 160, 172, Research classification: Beginning 21, 23–7; Industrial R&D 26, 31, 32, 64, 68, 137, 303, Internationalization 37; Official rationale 26–7, 30, 312; University R&D 22–7, 167; see also Basic research, Research classification, Survey methodology
Related Scientific Activities (RSA) 13, 45, 72; First measurements: Canada 88, NSF 77–80; OECD 15, 80–2, 88; UNESCO 82–6; see also Scientific and Technical Information and Documentation
Research: Classification: Classification problems 191–4, Government R&D 188–91, Industrial R&D 185–6, University R&D 186–8; Definition 159, 272, Contrasted to Related Scientific Activities (RSA) 74–5, R&D 66–8, Systematic research 58–68, 70; Origin of the term 58; Taxonomies 74, 159–60, 265, 268, 279, 299
Salomon, J.-J. 218, 232, 291, 300, 313
S&T activities 72; Contrasted to R&D 84, 95; Early measurement 21, Canadian Department of Reconstruction and Supply 77, NSF 78, US National Resources Committee 76; Non-scientific activities 74, 86–7; OECD 75–6; UNESCO 94–7, 96
S&T personnel: Brain drain 33, 97, 240, 246–9, 259, 317; Early measurement 25, 29, 240, 258, OECD 33, UK 32; Human Resources for Science and Technology (HRST): Canberra manual 256–7, OECD 254–8, UNESCO 254–6; International surveys 249–54, 258; Rosters: UK 31, US 23, 243; Shortages 25, 29, 33, 198, 240, 241, 245, 258, 317
Schmookler, J. J. 2, 123, 124, 139
Schumpeter, J. 64, 139
Scientific and Technical Information and Documentation (STID): NSF 78–80; OECD 80–1; UNESCO 45, 82, 86, 95, 100
Servan-Schreiber, J.-J. 117, 223, 225
Social sciences and humanities 61, 73, 89, 100, 256
Steelman, J. R. 24; Steelman Report see USA: President’s Scientific Research Board
Survey methodology 52–3; Accounting problems 160–2; Differences between countries 164–7; Estimates 52, 166, 171; First methodological meeting 27–8; Metadata 176–9; Pre-standard problems 158–64; Problems with data: Government R&D 172–6, Industrial R&D 170–2, University R&D 167–70; R&D boundaries 159–60; Rhetoric see Argument from minimizing limitations; Standards see OECD: manuals; Timeliness of data 51–2
System of National Accounts (SNA) 15, 16, 59, 98, 183–5, 206–7
Technological Balance of Payments (TBP) 120, 125–7
UNESCO 44, 86, 97, 100; First surveys 44, 91–2; Manuals 44–5; Recommendation on S&T activities 44–5, 72, 86, 90, 95; Related Scientific Activities (RSA) 13, 45, 76, 83–6; Relationship with OECD 97–8; Scientific and Technical Information and Documentation (STID) 45, 86, 100; Scientific and Technological Potential (STP) 82, 97; STA see S&T activities
USA: Bureau of the Budget (BoB) 25, 29, 108; Bureau of the Census (BoC) 27, 161, 183; Department of Commerce (DoC) 118, High technology 236–7, Steacie report 144–5; Department of Defense (DoD) 25, 65, 203, 303; General Accounting Office (GAO) 110, 174, 183, 278; Harvard Business School 26, 65, 160, 161, see also Anthony, R. N.; Industrial Research Institute (IRI) 26, 278; National Research Council (NRC) 22, 62–3, 67, 139, 159, 182, 239, 243; National Resources Committee 24, 76, 273; National Resources Planning Board (NRPB) 23, 63, 159, 243;
USA (continued): National Science Board (NSB) 105, 109, 115, 270, 279; NSF see National Science Foundation (NSF); Office of Education 23, 243; Office of Science and Technology (OST) 109, 110; Office of Scientific Research and Development (OSRD) 25, 67; President’s Scientific Research Board (PSRB) 24, 203, 267; Social Science Research Council (SSRC) 39, 106; Works Progress Administration (WPA) 63–4
USSR 4, 34, 42, 85, 198, 249, 258, 269
Waterman, A. T. 30, 109, 277, 280