Venturing into the Bioeconomy
Also by Alexander Styhre and Mats Sundgren MANAGING CREATIVITY IN ORGANIZATIONS: Critique and Practice Also by Alexander Styhre PERCEPTION AND ORGANIZATION: Arts, Music, Media SCIENCE-BASED INNOVATION: From Modest Witnessing to Pipeline Thinking
Venturing into the Bioeconomy
Professions, Innovation, Identity

Alexander Styhre and Mats Sundgren
© Alexander Styhre and Mats Sundgren 2011

All rights reserved. No reproduction, copy or transmission of this publication may be made without written permission. No portion of this publication may be reproduced, copied or transmitted save with written permission or in accordance with the provisions of the Copyright, Designs and Patents Act 1988, or under the terms of any licence permitting limited copying issued by the Copyright Licensing Agency, Saffron House, 6–10 Kirby Street, London EC1N 8TS. Any person who does any unauthorized act in relation to this publication may be liable to criminal prosecution and civil claims for damages.

The authors have asserted their rights to be identified as the authors of this work in accordance with the Copyright, Designs and Patents Act 1988.

First published 2011 by PALGRAVE MACMILLAN

Palgrave Macmillan in the UK is an imprint of Macmillan Publishers Limited, registered in England, company number 785998, of Houndmills, Basingstoke, Hampshire RG21 6XS. Palgrave Macmillan in the US is a division of St Martin’s Press LLC, 175 Fifth Avenue, New York, NY 10010. Palgrave Macmillan is the global academic imprint of the above companies and has companies and representatives throughout the world. Palgrave® and Macmillan® are registered trademarks in the United States, the United Kingdom, Europe and other countries.

ISBN 978–0–230–23836–7

This book is printed on paper suitable for recycling and made from fully managed and sustained forest sources. Logging, pulping and manufacturing processes are expected to conform to the environmental regulations of the country of origin.

A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication Data
Styhre, Alexander.
Venturing into the bioeconomy : professions, innovation, identity / Alexander Styhre and Mats Sundgren.
p. cm.
Includes bibliographical references and index.
ISBN–13: 978–0–230–23836–7
ISBN–10: 0–230–23836–X
1. Pharmaceutical industry—Case studies. 2. Biotechnology industries—Case studies. I. Sundgren, Mats, 1959– II. Title.
HD9665.5.S79 2011
338.4’76151—dc22
2011012477

10 9 8 7 6 5 4 3 2 1
20 19 18 17 16 15 14 13 12 11

Printed and bound in Great Britain by CPI Antony Rowe, Chippenham and Eastbourne
Contents

List of Tables and Figures vii
Preface viii
Acknowledgements xii

Introduction: Studying the Organization of the Bioeconomy 1
  The body in parts 1
  Some historical antecedents of biopharmatechnology 5
  Studying the organization of the bioeconomy 13
  Outline of the book 15

1 Professional Ideologies and Identities and Innovation Work 18
  Introduction 18
  Populating the bioeconomy 19
  Summary and conclusions 40

2 The Bioeconomy, Biocapital and the New Regime of Science-based Innovation 42
  Introduction 42
  The regime of the bioeconomy 46
  Images of the body 56
  The concept of biomedicalization 61
  The tissue economy 70
  Genetics, genomics and pharmacogenomics: the new technoscientific regime 78
  Summary and conclusion 97

3 Innovation Work in a Major Pharmaceutical Company 100
  Introduction 100
  The new drug development process 102
  Setting the stage from 2008 to 2009: great cash-flow, but many concerns 106
  Coping with uncertainty in new drug discovery: epistemic objects and the culture of prediction 111
  Summary and conclusion 150

4 The Craft of Research in Biotech Companies 152
  Introduction 152
  The emergence and growth of the biotech sector 153
  Biotechnology entrepreneurs and the potentiality of the life sciences 160
  Summary and conclusion 190

5 Exploring Life in the University Setting 192
  Introduction 192
  Innovation work in the university setting 193
  The field of research: systems biology 201
  Academic researchers in the field of systems biology 206
  Summary and conclusion 229

6 Managing and Organizing the Bioeconomy 233
  Introduction 233
  Technoscience and its ‘impurity’ 234
  Professionalism in the bioeconomy 239
  Studying the bioeconomy on the basis of organization theory 243
  Managerial implications 246
  The future of the life sciences 249
  Summary and conclusion 255

Appendix: On Methodology and Data Collection and Analysis 257
Bibliography 266
Index 287

List of Tables and Figures

Tables
1.1 Significant characteristics in the three phases of industrial innovation 38
2.1 International R&D intensities, 1998 92

Figure
3.1 The drug discovery research process 103
Preface

One of the most suitable metaphors for book writing is to think of the book-in-the-making as a coral reef, gradually emerging over the course of time; new quotes and passages are added to the old ones and, slowly but seemingly inevitably, the text grows. The initial assumptions and beliefs need to be re-examined as new data and new insights are generated, at times a painful grappling with what initially looked like good research ideas and propositions but, further down the road, are slightly embarrassing reminders of how little one knew or understood from the outset. At the same time, these ‘starting conditions’ are always present and traceable in the coral reef of the book-in-the-making; original structures and key terms are often too dear to the writer(s) to be abandoned. The writer being told by ruthless editors and reviewers to ‘kill their darlings’ experiences feelings like those Abraham must have felt on the way to kill his beloved Isaac to prove his faith in God – being torn between two incompatible objectives: faith in what he or she honestly believes is right, and what he or she is told to do. However, just like God intervening in the cruel act and safeguarding arguably the first ‘Hollywood ending’ in the Christian credo, the writer working through his or her text (no Hollywood ending in sight in this case, despite the fact that editors may erroneously believe they are a god) generally notices that the text is, in fact, becoming more readable, less messy and more to the point than they initially thought would be possible.

This book is one of the outcomes of the collaborative efforts of the two authors stretching back to the beginning of the new millennium. Over the years we have studied and written about new drug development in a number of journal articles and in the book Managing Creativity in Organizations (Palgrave Macmillan, 2005).
While Alexander has remained faithful – by virtue or by indolence – to the academic camp, Mats has been able to maintain one foot in both the industry and academic camps. For both of us, this ability to jointly keep one eye on the practice of the biopharmaceutical industry and one on scholarly debates and concerns has been highly fruitful, since it has given us a chance to make contributions in both domains. This book, Venturing into the Bioeconomy: Professions, Innovation, Identity, is another attempt at taking advantage of our diverse interests and responsibilities. While Mats has been engaged in various international pharmaceutical projects commissioned by, for
instance, the European Union, we have maintained a conversation regarding the future of the industry and the influence of new technoscientific frameworks. This book is, in other words, propelled by a general curiosity regarding what will come next in the field of the life sciences in the so-called ‘post-genomic era’. The book is also written from the perspective of the social and economic sciences and, more specifically, an organization theory perspective. Alexander’s background in the institution of the business school is thus complemented by Mats’ more than 25 years of experience in the field of new drug development and the pharmaceutical industry. For some readers, business school faculty, having only limited expertise in the field, should not believe themselves capable of understanding the intricacies of the life sciences; for others, their work may represent a highly relevant field of research in an economy characterized by an increased weight for science-based innovation work. No matter what position is taken, the expertise of one of the authors in the field will hopefully iron out the shortcomings of the less insightful writer of the book. Having made all these excuses for lacking all the desirable expertise, it is our belief and hope that this kind of research will broaden the scope and depth of the discussion of the role of the life sciences as the primus motor of the future economy. In the period of ‘high modernity’ (say, 1870–1970), social life and the economy were driven by technological developments on the level of artefacts and large-scale technological systems: cars, highways, airline systems, wired electricity and so forth essentially defined the period. The period since the 1970s has operated on another scale: that of the virtualization of financial capital, of the microbiological, molecular level of biological systems, and of microelectronics and the digitalization of information and media.
Today, accomplishments in engineering, medicine, the sciences and the economy are less salient – they take place on what may be called an ‘infra-empirical level’, largely invisible to the untrained eye. The building of a highway is a conspicuous accomplishment, immediately observable and at least partially understood, but the change from, for example, the second to the third generation of mobile phone systems rests on engineering competence that is relatively obscure to common-sense thinking and does not lead to such straightforward social changes as did the construction of the highways. In this period, knowledge, expertise and professional skills are becoming increasingly complex to decode, interrelate and understand. This book is thus an attempt at understanding some of the changes in the life sciences and their principal industries, those of pharmaceuticals and biotechnology.
Almost daily, human beings paying at least a minimum of attention to news reporting hear, in passing, about new advancements in the life sciences. The general public is, for example, informed about the relation between certain genes and diseases, about clinical research efforts leading to recommendations regarding vitamin intake from various groceries, and about other social practices (e.g., working out, jogging or meditation) that supposedly lead to positive health effects. This science journalism may make sense for the layman, but there is, in fact, a stream of similar instructions, in some cases contradictory, making the life sciences an essentially fragmented and diverse field from which a variety of statements apparently may be legitimately drawn. This book attempts to examine how all these new technoscientific approaches and analytical frameworks are ‘put into use’ in the field of the sciences. The book does not seek to provide final answers to predefined questions so much as it seeks to capture at least some of the diversity in views, opinions and practices in what is referred to under the imprecise term ‘life sciences’. In accomplishing this task, we have interviewed scientists, researchers and decision-makers in the pharmaceutical industry, the biotechnology sector and academic research universities. The image of the life sciences constructed on the basis of these encounters, and the opinions and hopes for the future articulated by these interviewees, is far from unified or coherent. Technoscientific approaches and analytical procedures are invented and put to use, become hyped or fall from grace, eventually find their useful applications, or disappear. The uses of various methods and approaches are highly contingent on specific research interests, and what may work for one specific researcher may be totally irrelevant for another.
Speaking in terms of ecology as a root metaphor or master narrative for the life sciences, the field is highly sophisticated, enabling an amazing variety of species and life forms to co-exist and co-evolutionary development to occur. There are also possibilities – just as in any advanced ecosystem – to nourish highly specialized competences and take positions in narrow niches of the industry. Hopefully, the book is capable of providing a sense of this diversity and of the manifold nature of these ‘life forms’. Speaking in terms of the organization theory framework, the studies reported here are positioned within the so-called knowledge economy and the study of knowledge-intensive professional work and innovation management. Even though this body of literature is not reviewed in detail in the theoretical framework of the book, this field of research is the anchoring point of the study of all the work being conducted in
the bioeconomy. The fields of organization theory and management studies are thus, we assume, relevant for the commercialization of life science know-how.

ALEXANDER STYHRE
MATS SUNDGREN
Acknowledgements

This work has been accomplished on the basis of a research grant from the Bank of Sweden’s Tercentenary Fund. The authors would like to thank all the interviewees in the pharmaceutical company, the biotechnology firms and the research universities that participated in the research. We would also like to thank Dr Johan Gottfries for helping us to contact some of the interviewees participating in the study. Alexander Styhre would like to thank colleagues at the School of Business, Economics and Law, University of Gothenburg, and the Department of Technology Management and Economics, Chalmers University of Technology, for fruitful and engaging discussions. Mats Sundgren would especially like to thank Sverker Ljunghall, Head of Science Relations, AstraZeneca R&D, for his support and interest in this area.
Introduction: Studying the Organization of the Bioeconomy
The body in parts

In the Canadian movie Jésus de Montréal (1989), a young actor, engaged in a play about the life of Jesus, is killed in an accident. However, as his organs are donated for transplantation, his second coming is not as a resurrection but in the form of a distributed life; his organs are brought back to life in many different hospitals all over Montreal. The Canadian Jesus is thus ‘returning’ from death not in the form of a person coming alive but as a sort of inverted Frankenstein; in Mary Shelley’s imaginary, life emerges as a set of organs patched together and brought to life by means of electricity, but the Canadian Jesus, in contrast, comes to life through being distributed, a set of parts that, on their own, can bring back or safeguard life. Another curious second coming is the case of Joseph Paul Jernigan, of Waco, Texas, placed on ‘death row’ in 1981 after being convicted of burglary and murder. In August 1993, Jernigan was executed by injection with a lethal dose of potassium chloride (Waldby, 2000: 1). Jernigan had agreed to donate his remains to the Visible Human Project, an attempt at creating an internet-based service providing a comprehensive set of images of the human anatomy. During his life, Jernigan was described as a ‘cruel and murderous drunk’, a ‘mad dog’. After death, Jernigan was described in different terms. In an article in the Chronicle of Higher Education, Jernigan was described as an ‘internet angel’: ‘In his life he took a life, in his death he may end up saving a few,’ the Chronicle speculated (cited in ibid.: 54). ‘As murderer’, Waldby (ibid.) argues, ‘Jernigan steps outside of the social contract, but as raw material for the Visible Human Project he is understood to redeem himself by making a direct contribution to biovalue, preserving the life and integrity of bodies more valuable than his own.’ Jernigan’s
appearance on the internet was a scientific event with unmistakably theological overtones. Besides the theological and ethical questions touched upon by Jésus de Montréal and the Visible Human Project, the cases also provide an indication of the value (economic, political, social and humanitarian) of human tissues and the role that access to human tissues can play. In the emerging bioeconomy, not only ‘advanced’ human organs, such as hearts and livers, are useful; increasingly, a great variety of human tissues, ranging from sperm to stem cells and cell lines grown from tissues collected from individuals with specific health conditions, also either have direct economic and social value or could potentially attain such value (Kent et al., 2006; Lock, 2001). In the past, death was, if not meaningless, then definitely certain. In the present regime of the bioeconomy, death is always someone’s death, but it may also produce life elsewhere, as tissues and organs may be moved from organism to organism, circulated from place to place. For critics (Habermas, 2003), in many cases nourishing beliefs regarding where a line of demarcation between what is normal and what is artificial occurs, such forms of ‘bioengineering’ are problematic because they violate certain norms, either directly (as in the case of organ theft, unethical organ trade or organ tourism) or by potentially creating new concerns for individual persons, policy-makers and professional and occupational groups in the health care sector (see, e.g., Konrad, 2004). For instance, Prainsack et al. (2008: 352–3) speak about the controversies around stem cell research as ‘stem cell wars’ and suggest that ‘the stem cell wars provide a powerful demonstration of the ways in which science and society are co-produced, always mutually influencing and constituting each other, rather than developing independently’.
They continue: ‘[S]tem cell science has a powerful symbolic currency of the remaking of human life and the manipulation of human origins. This science stands in for diverse social, religious and historical agendas – from the debates concerning abortion, to the legacies of the Second World War’ (ibid.: 356). For others, taking a more affirmative view, the new biopharmaceutical advances are opening up unprecedented possibilities; the line of demarcation between life and death becomes less definite and subject to technoscientific manipulations. These various discussions and discourses have been growing over the last 15 years, and have been brought to the public’s attention especially through the announcement of the human genome project and, more recently, various stem cell research programmes (Waldby, 2002; Franklin, 2005; Salter and Salter, 2007; Hoeyer et al., 2009). It is commonplace for analysts and
media pundits to declare that the bioeconomy will be the next primus motor for the world economy, playing a role similar to that of the automotive industry in the twentieth century; in the bioeconomy, what Rabinow (2006: 136) calls the ‘two universalised products’ of Western bourgeois culture, technoscience and modern rationalized capitalism, are more closely intertwined than ever before in history. The bioeconomy is the economic regime of accumulation in which technoscientific know-how developed in the life sciences is capable of making the lived body a principal surface of economic value creation. The history of the life sciences, the pharmaceutical industry and biotechnology stretches back to medieval times; great progress took place in the early nineteenth century, when physicians such as Xavier Bichat and other pioneers of modern medicine advocated a more scientific understanding of the human body (Foucault, 1973). Prior to the late eighteenth century, medicine, in fact, relied on ancient thinking and was generally rather ineffective in curing illnesses that are today easily handled – the black death in the fourteenth century, for instance, which wiped out significant parts of the European population and caused an economic downturn that took centuries to overcome, would have been easily cured by today’s medical know-how. In addition, the enormous growth and advancement of various technologies enabling new forms of investigation of the human body and more accurate and detailed diagnoses have helped to make medicine a prestigious domain of expertise. The medical profession, even today, accrues more status and is capable of accomplishing more than ever before in history. At the same time, some commentators claim, we are on the verge of a biorevolution, and the twenty-first century is very likely to offer a set of new concepts in terms of how to manage not only health but also ‘life itself’ (Rose, 2007).
From the evangelical texts we learn that the resurrection of Christ was regarded as a major, indeed a paradigmatic (if this term can be used in this context), event in Christian theology. The theme has been the subject of an endless number of artistic and literary accounts and discussions over the centuries in the Christian world. As suggested by the movie Jésus de Montréal, the second coming may not be quite such an extraordinary event in the emerging bioeconomy; as the carrier of a donor card, allowing organs to be passed on after an untimely death, I know that, as an organism, I may perish, but my heart, my liver, my kidneys and other tissues may live on elsewhere after I am long since gone. Already, we are capable of transplanting organs between organisms such as human beings, and we are increasingly capable of growing organs based on biological materials (e.g., stem
cells) or replacing organs with material resources. All these changes call into question what the renowned physicist Erwin Schrödinger (1944) addressed in the 1940s by asking: ‘What is life?’ If life is no longer strictly a theological matter, a gift from God or some divine force of nature, but a process to be managed and monitored, how should we, as humans, relate to this great task? What are the ethical and practical rules that should be enacted and adhered to, and how should we be able to follow these rules in a world filled with technoscientific possibilities? It is little wonder that recent decades have seen the emergence of a new professional category: the bioethicists. This book is an attempt to examine the changes in the field of the life sciences (here a rather broad term, including health care, the pharmaceutical industry and various parts of the biotechnology sector) and how these are affecting innovation work in organizations and companies in this sector. It should be noted that this book is written from the perspective of organization theory or within a management studies tradition, and does not make any claims to cover the entire field of changes in the life sciences. However, while a great number of researchers, both internal and external to the life sciences, have debated the changes sketched above, there have been relatively few discussions of these changes from an organization theory perspective. Yet the conventional pointing at ‘missing spots on the map’, so popular with academic researchers when justifying a research project, may not necessarily make a topic of study interesting per se.
Still, when speaking from an organization theory perspective (and, more specifically, the intersection between a number of theories, including innovation management, the literature on professionals and professionalization, and institutional theory), there are interesting changes in the field that may be of relevance for a broader understanding of how institutional changes affect professional identities and, consequently, managerial practice. This book therefore aims, first, to summarize the rather substantial and heterogeneous literature on the bioeconomy and the various concepts attached to this notoriously elusive term and, thereafter, to report a number of empirical studies in organizations and companies actively engaging in this bioeconomy. The ambitions and objectives are thus rather conventional for the discipline we hope to represent: to offer, if not an exhaustive, at least a comprehensive literature review and to account for empirical material collected in the domain of relevance for the targeted literature. At the same time, the book provides some insights into a variety of empirical domains (a large multinational pharmaceutical company, a number of biotech companies, academic research groups)
that are often not brought together and that work in different domains within the life sciences but nevertheless operate within a similar epistemological, theoretical, methodological and technical framework. While the connections between these spheres of the life sciences are intricate and manifold, they still operate in relative isolation, responding to their own individual demands and idiosyncratic challenges. The book thus seeks to provide some insight into the breadth and scope of the bioeconomy and the emerging life sciences that will play a key role in shaping human lives and human societies in the future.
Some historical antecedents of biopharmatechnology

By nature all things must have a crude origin.
Vico (1744/1999), p. 138

Underlying the rationalist history of the sciences, staged as the gradual advancement and triumph of scientific modes of thinking over theological dogmatism and traditional common-sense thinking, and manifested in a series of scientific and technological marvels such as the telescope, the microscope or the steam-engine, there is a history of what we today deem (at best) pseudo-science or mere mysticism. Underlying the advancement of the key figures of the pantheon of science such as Kepler, Galilei, Newton and Einstein, there is a history of the production of systematic forms of knowledge that have not been able to stand the tests of time and the emerging standards of scientific rigour (Zielinski, 2006; Tomlinson, 1993). Alchemy, various forms of mysticism, parascientific speculations and so forth are today dismissed as embarrassing reminiscences of an age less sophisticated, indulging in archaic forms of thinking. However, as Žižek (2008: 1) insists, one must resist the temptation to take ‘an arrogant position of ourselves as judges of the past’; instead, one should pay homage to the advancements that subsequently led to more viable forms of thinking. For instance, Bensaude-Vincent and Stengers (1996) suggest that modern chemistry – perhaps the first truly academic discipline, established by the mid-nineteenth century, capable of producing both theoretical and practical knowledge – is closely bound up with the history of alchemy. The desire and ambition to turn various less valuable resources into precious gold – one must not underestimate the influence of greed in social progress – produced a long series of techniques and tools for accomplishing such a task.1 In addition, the structuring of a field of knowledge into written documents and publications set a standard for how to relate to and explore a domain of
interest, and also for scientific rigour, later to be used in other endeavours. The second mistake one may make when seeking to understand the history of the sciences and human thinking more broadly is to expect pockets of pure genius to be immune to ‘para-scientific thinking’. However, some of those whom we today regard as great thinkers and sound minds may, during their lifetimes, have engaged in activities that, in hindsight, we would find less impressive. For instance, Empedocles (490–430 BC), one of the greatest pre-Socratic thinkers and one of the paragons of the antique period, was reported by Diogenes Laertius (third century AD) to possess the skill of rain-making (Gehlen, 1980: 12). In the early modern period, noted scientists like Jan Baptist van Helmont, the discoverer of carbon dioxide and the man who introduced the concept of gas to chemistry, nourished a belief in vitalism – suggesting that life is capable of appearing anywhere, anytime – and published an account of how a mouse could be brought to life on the basis of a lump of organic material, such as a piece of old bread. In the modern period, a widely admired inventor and entrepreneur, Thomas Alva Edison, hinted during his later years (albeit possibly as a way to create a buzz around his inventions) that he was ‘experimenting with electric ways to communicate with the dead’ (Nye, 1990: 147). What qualifies as a sound and legitimate claim to knowledge is, thus, highly contingent; some ideas may be abandoned and buried beneath a heap of similarly antiquated and slightly embarrassing ideas, while others may be greeted as major achievements. Some ideas, like Mendelian genetics,2 may be brought back into public and scientific awareness when ‘their time has come’. As has been pointed out many times, history tends to unfold as a ‘success story’; what remains is a relatively unexplored yet highly rich and diverse tradition of para-, pseudo- and quasi-science, regarded, at best, as curiosities.
In the case of the biopharmaceutical industry, the primus motor for the industry has been not so much ancient and medieval medicine, relying on Galen’s influential but obsolete work on the excess and lack of fluids in the human body, as chemistry and, more specifically, synthesis chemistry (Sunder Rajan, 2006: 16). It is important to keep in mind that, even though medicine has been held in esteem for various reasons, as a scientific discipline it was rather archaic, resting on ‘virtually no valid expertise at all’ (Collins, 1979: 139) until at least the end of the eighteenth century, when Jenner’s smallpox vaccination was developed (1798). When tracing the roots of the biopharmaceutical industry, one should therefore examine the history of chemistry. Bensaude-Vincent and Stengers (1996) offer an intriguing introduction to this history. Unlike many other scientific endeavours, chemistry has, from the
outset, been instrumental in terms of seeking to accomplish practical effects. Therefore, to understand the genesis of modern chemistry is to understand its instrumental techniques, Bensaude-Vincent and Stengers (ibid.: 7) suggest. The very term ‘chemistry’ is etymologically derived from the Egyptian word for ‘black’ (from the ‘black earth of Egypt’) or from the Greek word cheo, ‘to pour a liquid or cast a metal’ (ibid.: 13). However, this etymology is disputed. In any case, there is a clear Arabic origin of the discipline, and Alexandria, one of the intellectual hubs of the Arab world, was the centre of Arabic chemistry. In the period 800–1000 AD, chemistry thrived in Alexandria and elsewhere, leaving valuable information about experimental findings in written documents and books:

The techniques are described with care and precision, the quantity of reagents and their degree of purity are specified, and signs indicating how to recognize the different stages are given. In brief, Arab scholars devoted themselves to the production and transmission of reproducible, practical knowledge, whether we call it secular chemistry or not. (Bensaude-Vincent and Stengers, 1996: 16)

Among the key figures in the Alexandrian community of practising chemists was Mary the Jewess, an Alexandrian scholar who made many seminal contributions to the field. However, as the Arabic civilization declined, its accomplishments were forgotten, and in medieval Christian Europe, preoccupied with sorting out which beliefs and scientific practices were adequately pious and in harmony with the religious scriptures, much of chemistry was denounced as heretical knowledge.
For instance, in 1417, Lucretius’s De rerum natura, a text differentiating between primary and secondary qualities (i.e., qualities of an object independent and dependent on the seer, respectively), was rediscovered (ibid.: 28), but the religious authorities of the time regarded this work as overtly materialistic and the followers of Lucretius were suspected of atheism. However, during the entire medieval period, various mystics and amateurs conducted alchemic experiments and produced a substantial literature on the matter. Even in the eighteenth century, the domain of chemistry suffered from a lack of standards and a unified and coherent terminology, preventing a more systematic field of research from being consolidated. Bensaude-Vincent and Stengers colourfully testify to the difficulties of reading a chemistry thesis from this period: One becomes lost, without landmarks, in a jungle of exotic and obscure terminology. Some products were named after their inventor
Introduction
(Glauber’s salt, Libavius’s liquor), others after their origin (Roman vitriol, Hungarian vitriol), others after their medicinal effects, and yet others after their method of preparation (flower of sulfur, obtained by condensing vapor, precipitate of sulfur, spirit or oil of vitriol, depending on whether the acid was more or less concentrated). On every page: what is he talking about? What substance is he describing? (Ibid.: 46) However, in the eighteenth century, chemistry was gradually professionalized and developed in tandem with the mining industry in countries like Germany and Sweden, while in France chemistry remained a ‘science of amateurs’, similar to the study of electricity, undertaken in the form of ‘experimental demonstrations in salons or public and private courses’ (ibid.: 64). Because of its practical use in industrial production, the future of chemistry was bright and it was, in fact, the first science to ‘go public’. Even before electricity was introduced into the public forum in the last decades of the nineteenth century (Nye, 1990), chemistry fed the literature on the marvels of science and industry and became a university discipline, in the modern sense of the term, quite early. For instance, chemistry was the first discipline, Bensaude-Vincent and Stengers (ibid.: 95) note, to organize an international congress for specialists and researchers, at Karlsruhe in 1860. As in other sciences, chemistry had its own ‘founding fathers’, such as Antoine Laurent de Lavoisier (whose life was ended by the guillotine in the aftermath of the French revolution, thereby providing an excellent historical case in support of the argument that the new regime was barbaric and unable to appreciate and value scientific talent), the Prussian court physician Georg Ernst Stahl and, in Sweden, Jöns Jacob Berzelius.
These founding fathers pursued pioneering work, identifying and systematizing the so-called ‘simple bodies’, or what Robert Boyle had earlier referred to as ‘elements’. A table that Lavoisier compiled in 1789 contained 33 ‘simple substances’; in 1854, ‘Thénard named 54 simple bodies in his textbook, and in 1869 Dmitri Mendeleyev counted 70’ (ibid.: 111). Mendeleyev’s ‘periodic system’ remains the bedrock of elementary chemistry training to this day. In its formative period, from the end of the eighteenth century to the early twentieth century, chemistry formulated ‘an arsenal of laws’ (ibid.: 119). From the very outset, chemistry has been highly practically oriented and largely devoid of theological speculations or pressing ethical concerns to address – that is, prior to the more recent debate over ‘sustainable development’ and the role of chemicals in the pollution of the environment – and has therefore played
an immense role in constituting modern society and modern life. The classic Du Pont slogan ‘Better living through chemistry’ certainly holds a kernel of truth. If chemistry and synthesis chemistry, both underlying the pharmaceutical industry, have a vibrant history, the history of biotechnology is just as diverse and manifold. Bud (1993: 3) points to concurrent developments in a number of geographically dispersed places such as Copenhagen, Budapest and Chicago. The concept of biotechnology was first addressed in terms of zymotechnology, a term suggested by Georg Ernst Stahl in his Zymotechnia Fundamentalis (1697) and etymologically derived from the Greek word for ‘leaven’, zymo. Zymotechnology, for Bud (ibid.), represents a ‘vital stage in bridging the gap between biotechnology’s ancient heritage and its modernist associations’. The term ‘zymotechnology’ was used by Stahl and his followers to denote all forms of ‘industrial fermentation’. From the outset, zymotechnology examined how, for instance, the brewing of beer could be conducted more effectively if specific forms of yeast were added in the process. Zymotechnology thus conceived of biological processes of fermentation as open to influence through the use of various organic resources. However, in the course of the nineteenth century, the line of demarcation between ‘natural’ and ‘synthetic’ substances was gradually eroded. In 1828, the German chemist Friedrich Wöhler managed to synthesize urea (ibid.: 10), thereby paving the way for more advanced scientific analyses of organic material. In Budapest, a major agricultural centre in the last third of the nineteenth century, Karl Ereky coined and popularized the concept of biotechnology. The idea of refining crops in order to enhance the amount of seed produced was one of the starting points for a more systematic use of biotechnology.
In the USA, MIT started its first ‘biological engineering’ and ‘biotechnology’ unit in 1939; UCLA followed in 1947 (ibid.: 32–7). In the decades before World War II, pioneering work such as that of Barbara McClintock (a future Nobel laureate unable to find a position because Cornell University, where she conducted her first research at the end of the 1920s, did not hire female professors until 1947) showed that crops such as maize demonstrated remarkable features in their hereditary material. In the course of its development, biotechnology has moved from a focus on agricultural interests to more generic interests in understanding not only the hereditary material of crops, animals and humans, but also rather practical concerns such as how to regulate and control reproduction processes (Clarke, 1998). While it is possible to trace the history of chemistry back to the first centuries after Christ, biotechnology is a more modern
concern. However, both sciences, increasingly intertwined in the biochemical industry, had their period of consolidation in the nineteenth century. Bud (1993) claims that the idea of biotechnology and beliefs in what it may accomplish have been fundamentally influenced by, for instance, the philosophy of Henri Bergson, who advanced his ideas about biological creation in his Creative Evolution, first published in France in 1907. Bergson’s work created a craze that resulted in no fewer than 417 books and articles being published on Bergson in France alone by 1912, and, by 1914, Creative Evolution had been through no fewer than 16 editions (ibid.: 54). Bergson’s ideas about the manifold and inherently creative and constantly evolving quality of biological organisms must have struck a chord in the community of scientists and the general public at the time. While Charles Darwin had caused much controversy a few decades earlier with his Origin of Species, published in 1859, leading theological authorities to harshly reject his ideas about the ‘descent of man’ (possibly from primates) as an intolerable proposition, Bergson’s ideas about innate creativity and vitality did not provoke any similar response. The Bergsonian credo of the malleability and plasticity of nature perhaps culminated in 1988, when Harvard University managed, after several years of struggle with American and European patent offices regarding how to interpret the relevant laws, to patent the first transgenic mouse (ibid.: 216) – the (in)famous OncoMouse (Haraway, 1997), a designed creature perhaps only rivalled by Dolly the sheep, the first cloned animal (Franklin, 2007), in terms of being the most famous ‘fabricated organism’. These organisms are both fascinating and, to some extent, alarming contributions from the life sciences that partially rely on the idea that specimens of nature can be shaped and formed by technoscientific procedures.
The history of the biopharmaceutical sciences is long-standing and diverse, and accomplishments such as the OncoMouse or Dolly the sheep are preceded by centuries of practical and laboratory-based work to understand how to affect biological specimens on the level of the phenotype, or on the cellular, molecular or genetic level. In more practical and institutional terms, the modern pharmaceutical company is an outgrowth of the European community of pharmacies that existed throughout the populated parts of the continent in the late eighteenth and early nineteenth centuries; ‘Small but well-outfitted laboratories often were a part of the better pharmacies at this time,’ Swann (1988: 19) notes. However, the modern research-oriented pharmaceutical industry started to take shape in the second half of the nineteenth century, first in Germany and eventually elsewhere, as a by-product
of the coal-tar dye industry (ibid.: 20). Even though the universities in Europe and in the USA at the time had the competence to mutually support one another, it was not until the end of the nineteenth century, and especially in the inter-war period, that more fruitful, collaborative relationships between the universities and the pharmaceutical companies were established. These collaborations were grounded in the mutual need for ‘intellectual, technical, and economic support’ (ibid.: 25). Early attempts to bridge the worlds of academe and industry were impeded by the idea of universities conducting ‘fundamental research’ – one of the pillars of the Humboldtian university, eventually imported to the USA and first implemented at Johns Hopkins University in Baltimore – and therefore being unfit to address more practical concerns. Today, a century and a few decades after the initial collaborations started, the relationships between industry and university are intimate and manifold – perhaps to the point where it is complicated to separate scientific work from marketing activities (Healy, 2004; Sismondo, 2004; Fishman, 2004; Washburn, 2005; Mirowski and van Horn, 2005). Powell et al. (2005) emphasize the continuing intimate relations between the universities and industry in the field of the life sciences: A number of factors undergird the collaborative division of labor in the life sciences. No single organization has been able to internally master and control all the competencies to develop a new medicine. The breakneck pace of technological advance has rendered it difficult for any organization to stay abreast on so many fronts; thus linkages to universities and research institutes of basic science are necessary.
(Ibid.: 1142) Similar results are reported by Gottinger and Umali (2008: 597), who claim that ‘a strong, statistically significant, positive correlation exists between the collaboration rate of large pharmaceutical firms and their performance in terms of market valuation and total return over the long run’. Powell et al. (2005) also stress that while, in other, technology-based industries, the role of the university is gradually diminishing as the technology matures, in the life sciences and in the biopharmaceutical industry, university–industry collaborations remain a central facet of the field: [S]ome aspects of the life sciences are rather idiosyncratic. There are a wide range of diverse forms of organizations that exert influence on the development of the field. In many other technologically
advanced industries, universities were critical in early stage discovery efforts, but as the technology matured, the importance of basic science receded. In biotech, universities continue to be consequential, and career mobility back and forth between university and industry is now commonplace. (Powell et al., 2005: 1190) As university–industry collaborations were established on a more regular basis, a multinational and highly successful biopharmaceutical industry – with respect to both financial performance and therapeutic developments – grew on the basis of a number of scientific advancements. Advances in and acceptance of the germ theory of disease at the turn of the twentieth century and what Galambos and Sturchio (1998: 251) call the ‘chemo-therapeutic revolution’ of the 1930s and 1940s further reinforced the role of the pharmaceutical industry. In the 1940s and 1950s, the progress of virology provided a new set of entrepreneurial opportunities, later to be followed by breakthroughs in microbial biochemistry and enzymology, serving as the ground for new drug development for the rest of the century (ibid.: 252).3 Not until the end of the century did the well-established new drug development model, based on microbiology, the synthesis of small molecules and large-scale clinical testing, fail to deliver the targeted blockbuster drugs. In the first decade of the new millennium, there is a need for new thinking and new practices regarding innovation work in the life sciences. In summary, the history of the biopharmaceutical industry thus stretches back to medieval times and the inception of the sciences as a systematic endeavour to understand or even explain the world, and brings us into the modern period of highly sophisticated technoscientific life sciences capable of accomplishing the most astonishing things (see, e.g., Thacker, 2004).
While history may appear, in common-sense thinking, linear and constituted by sequential steps, like beads on a string, it may in fact be understood through a variety of geometrical metaphors; it may very well be conceived of as a curved (non-Euclidean) space where the past is always already present as a virtuality – not centuries away, but playing an active role in the duration of the sciences. Seen from this view, medieval modes of thinking are not once and for all ‘embarrassing reminiscences’ (as Nietzsche put it in Thus Spoke Zarathustra), but are lively components of everyday thinking (see, e.g., Sconce, 2000). This view of history may not be helpful when trying to understand the technoscientific procedures and outcomes themselves, but it is much more helpful regarding the reception and wider debates surrounding the possibilities enabled by the biopharmaceutical sciences.
Studying the organization of the bioeconomy

While there are many intriguing, fascinating and mind-boggling actual accomplishments and future possibilities which can be derived from the totality of the biopharmaceutical industry and the life sciences, offering a great many inroads to systematic research, business school research should examine organizational and managerial problems and challenges and not scientific practice per se. One of the central traditions of research in business schools is the field of innovation. Pledging allegiance to what may be called a ‘Schumpeterian’ tradition of thinking, conceiving of the capitalist economic regime as being volatile, fluxing and changing, and demonstrating an insatiable demand for new innovations (in the form of either goods or services or in the form of new practices, so-called ‘process innovations’), organizations are by and large positioned as principal sites for innovation. Innovation may appear in the form of traditional R&D departments, in joint ventures such as alliances or collaborations, or it may be outsourced or in-housed through mergers and acquisitions, but there is always an emphasis on producing innovations. Needless to say, the literary corpus on innovation is massive (some of which will be discussed in Chapter 1), including a variety of theoretical orientations and methodological approaches. Before outlining such positions and propositions for this work, we may turn to Deborah Dougherty’s recent concern regarding the nature of innovation research in the pharmaceutical industry. Dougherty (2007), one of the most renowned researchers in the field of innovation management, is critical of the tendency to use the same analytical models when studying science-based innovation as when studying technology-based innovation.
Since technology – to simplify the argument, keeping in mind that technology is by no means a trivial matter (Winner, 1977; Simondon, 1980; Bijker et al., 1987; Bijker, 1995; Stiegler, 1998) – demonstrates certain features (i.e., being composed of separate elements), it is assumed that science-based innovation work is what Dougherty (2007: 265) calls ‘decomposable’ and that products are ‘scalable’. While innovation work in manufacturing industry (say, the automotive industry) could be examined meaningfully as a series of transformational events and occurrences leading to the final output, the new car model to be launched on the market, in the pharmaceutical industry, for instance, no such reductionist approach is adequate, Dougherty argues. However, many major multinational pharmaceutical companies have invested substantial resources into what Dougherty (ibid.: 266) calls ‘mega-technologies’ such as ‘rational drug design,
high-throughput screening, combinatorial chemistry, imaging technologies or genomics’. That is, in Dougherty’s view, industry representatives have treated drug discovery as a ‘technological problem’, leading to the misconceived idea that as long as one is capable of bringing in ‘more machinery, devices, automation, assays and other scale-ups to do more things faster’ (ibid.), then innovations are expected to be generated. Dougherty (ibid.) dismisses such beliefs as ‘techno-hype’ that prevents both industry representatives and researchers from understanding the nature of innovation and the knowledge and skills demanded to produce radical innovations. Rather than simply being produced by means of advanced technology and machinery, innovation in the pharmaceutical industry is based on non-decomposable processes mobilizing compounds that interact with ‘[t]he complex life system in unique ways’ (ibid.: 270) and therefore there ‘can be no simplifying frameworks’. In Dougherty’s view, innovation research has for too long relied on generic models that are relatively insensitive to the local conditions and specific features of science-based innovation work. Assuming that Dougherty (ibid.) is here pointing at significant features of both the field of research on innovation management and the nature of innovation in the biopharmaceutical industry, then the very idea of innovation needs to be broadened. This book is an attempt at conceiving of innovation work as something that takes place within a grid of specific technological, economic, cultural and social settings whose intricate relationships constitute a rich texture that needs to be examined in detail to fully enable an understanding of innovation work.
Therefore, rather than examining the innovation processes as such, outlined as a series of events, occurrences, practices, decision-points and so forth, leading forward to the point where a new product is delivered or launched in the market, concepts such as professionalism, professional ideology and professional identity are central to the study and understanding of science-based innovation work. In this book, innovation work is examined in three complementary yet distinct and interrelated domains of research, namely the major pharmaceutical company, the smaller biotech company and the academic university department. These three domains are all staged as being part of what has been called the bioeconomy (Rose, 2007), a specific economic regime of accumulation that in various ways is centred on the possibilities and accomplishments of the biopharmaceutical industry and the research conducted in the life sciences in universities and research institutes. Rather than thinking of innovation as what is produced through adherence to prescribed and standardized innovation management
models, innovation is what is produced on the basis of values, norms, practices, beliefs and aspirations established on a social level; innovation work is then not to be conceived of as the very chopping up and cooking of the groceries but must take into account the broader framework wherein – to maintain the gastronomic metaphor – groceries are grown, harvested, distributed and marketed before they enter the kitchen, where they are finally turned into delicate dishes in the hands of the skilled chef. That is to say, rather than engaging in what may be intriguing speculation about the potential of and concerns regarding the bioeconomy, on the basis of a transdisciplinary body of literature, this book sets itself the task of examining the organizational and managerial aspects of the new economic regime. That is, what is examined in the following chapters is not only of interest for the field of social theory, the life sciences themselves, the domains of business ethics and so forth; we also study what has organizational and managerial implications. We hope to show that innovation work in the bioeconomy is strongly shaped and formed by forms of professionalism and professional ideologies, and that these conditions suggest that students of innovation work must follow scholars like Dougherty (2007) in dismissing ‘techno-hype’. Technology is constitutive of modern life and certainly so for the technosciences (as suggested by the very term); it is not an autonomous force but rather a tool in the hands of professionals. Expressed differently, innovation work is an organizational and managerial matter involving technology alongside a variety of other resources. At the bottom of the innovation process lies its very organization, the integration of a variety of practices.
Outline of the book

This book is composed of six chapters. The first two chapters constitute the theoretical framework of the study. In Chapter 1, the concepts of profession, professional ideologies and professional identities will be examined. ‘Professionalism’ is a key term in the social sciences and in this setting, that of the advanced technoscientific life sciences, the concept of profession plays a key role in organizing and structuring the field of expertise. In Chapter 2, the concept of the bioeconomy and its various activities, practices and institutionalized modes of operation will be discussed. The bioeconomy is characterized by the belief that biological entities and specimens may be translated into biocapital and, further, into financial capital. That is, the bioeconomy is the pursuit of making life a source of economic activity. The chapter demonstrates
the manifold activities in the life sciences and how there is an economic interest in biological entities that previously attracted little such interest. Chapter 3 is the first of three empirical chapters, reporting a study of innovation work in a major multinational pharmaceutical company. Based on a case study methodology including interviews and observations, the chapter demonstrates how major pharmaceutical companies are struggling to effectively implement and use the new technoscientific approaches and frameworks being developed in the genomic and post-genomic periods. Chapter 4 presents a study of biotechnology firms and how representatives of this industry conceive of the possibilities of venturing into the bioeconomy. While the biotechnology industry has been widely recognized and hailed as the future of the life sciences, there is evidence of relatively limited therapeutic output from the research activities. The chapter suggests that, while biotechnology firms have made substantial contributions to the life sciences on a methodological level, the new therapies are still relatively few. Chapter 5 examines the work of life scientists in the university setting, underlining the key role of the academic research setting in the bioeconomy. Contrary to common belief, academic researchers, even in highly practical and industry-related fields of research (so-called ‘applied science’), are concerned with maintaining an academic identity. Examining the concept of systems biology, many academic researchers point to some of the merits of wedding computer science and information management approaches with more traditional ‘wet lab biology’, but still think of the life sciences as being on the verge of major breakthroughs. The final chapter, Chapter 6, addresses some of the practical and theoretical concerns that the three studies give rise to.
For instance, what are the roles of professional ideologies and identities in the bioeconomy, an economic regime characterized by the life sciences as the primus motor for economic activity in Western, late modern societies?
Notes

1. Similarly, only natural and expensive dyestuffs existed until the mid-nineteenth century, when German scientists made it possible to synthesize cheap organic dyestuffs – really the birth of the modern pharmaceutical industry. 2. Merton (1973: 456–7) adds a few examples of such unfortunate scientific careers: ‘The history of science abounds in instances of basic papers having been written by comparatively unknown scientists, only to be neglected for
years. Consider the case of Waterson, whose classic paper on molecular velocity was rejected by the Royal Society as “nothing but nonsense”; or of Mendel, who, deeply disappointed by the lack of response to his historic papers on heredity, refused to publish the results of the research; or of Fourier, whose classic paper on the propagation of heat had to wait thirteen years before being finally published by the French Academy.’ 3. In their review of innovations in the pharmaceutical industry over the period 1802–1993, Achilladis and Antonakis (2001: 545) identify five ‘waves’ of rapid advancement in innovation: the period 1802–80, when alkaloids and organic chemicals were developed; 1881–1930, when analgesics/antipyretics were invented; 1931–60, when vitamins, sex hormones and antihistamines were produced; 1961–80, dominated by antihypertensives/diuretics, tranquilizers and antidepressants; and 1981–93, which brought calcium ion channel antagonists, ACE inhibitors, serotonin inhibitors and drugs for gastric and duodenal ulcers. In Achilladis and Antonakis’s (ibid.) account, pharmaceutical innovation is path-dependent and emerges in clusters of interrelated drugs (see also Nightingale and Mahdi, 2006).
1 Professional Ideologies and Identities and Innovation Work
Introduction

In this first chapter, the research reported in the empirical chapters of the book will be situated in an organization theory and management studies context. That is, rather than being a more general social theory analysis and critique of the biopharmaceutical industry and the life sciences, the book aims to point at the organizational and managerial concerns of venturing into the bioeconomy. Speaking in such terms, the operative vocabulary of the three studies reported in Chapters 3–5 is centred on three concepts: profession, identity and innovation. In the organization theory literature, these are three key concepts that have been used in a variety of research efforts and across different industries and settings. First, the concept of profession has been part of the sociology literature since the inception of the discipline in the second half of the nineteenth century. The professions have been a central organizing principle in what today is referred to as knowledge-intensive work, mediating between organizational goals and objectives (the structure of knowledge work) and individual interests and concerns (the actor in knowledge work). The professions have, in short, served a key role in advocating standards and routines for how to organize and evaluate work that demands specialist skills and know-how. The concept of identity derives from the behavioural sciences but has increasingly been discussed in the organization theory and management literature as a key component in the regulation and control of knowledge-intensive industries and firms. Merging the two terms, one may speak of professional identities regulating professional work in terms of imposing standard operating procedures and rules of conduct in a professional field. Finally, the concept of innovation is of great importance in the
contemporary capitalist regime of accumulation, wherein new goods, services and principles for organizing work are constantly being produced and launched. The contemporary economic regime is fundamentally shaped by the demand for novel products and services, and consequently the sub-discipline of innovation management is attracting substantial interest both in academic circles and in industry. The chapter thus intends to outline an organization theory and management framework that complements the more sociological perspective on the bioeconomy that is discussed in the next chapter.
Populating the bioeconomy

Professionals and professional communities

In this book, a variety of professional workers active in different domains of the bioeconomy will be examined. Therefore, the concept of the professional is targeted as one of the central operational terms in the book and the literature on professionals will be examined in some detail. Profession is one of the central concepts in the sociology of work and is also one of the most debated topics. By professionals we mean occupational groups whose domain of expertise is in various ways ‘monopolized’ or sheltered from competition by formal or semi-formal entry-barriers such as formal education or training or membership of professional organizations (Empson, 2008). Leicht and Fennell (2008) define professional work: as occupational incumbents: (a) whose work is defined by the application of theoretical and scientific knowledge to tasks tied to core societal values (health, justice, financial status, etc.), (b) where the terms and conditions of work traditionally command considerable autonomy and freedom from oversight, except by peer representatives of the professional occupation, and (c) where claims to exclusive or nearly exclusive control over a task domain are linked to the application of the knowledge imparted to professionals as part of their training. (Ibid.: 431) In terms of everyday work, professional groups belong to the aristocracy of workers, accruing prestige, social influence, high pay and other fringe benefits. Commonplace examples of professionals are medical doctors, lawyers, engineers, university professors and scientists. The literature on professionals is substantial and the topic has been examined from a variety of perspectives. The Weberian tradition of thinking conceives
of professional groups as communities that have been relatively more successful than other, comparable communities in monopolizing and erecting entry-barriers around their domain of expertise. Attaining the status of a professional community is then, per se, a joint accomplishment whose historical, social and economic conditions must be examined in detail. In this book, such a Weberian view of professionals is taken. Larson (1977: 74) suggests that the ‘professional project’ is part of an organizational project; without the organization of the production of professionals and the transactions of services for the market, there would be no professionals. The professional project culminates, Larson says, in the establishment of ‘distinctive organization’ such as the professional school and the professional association. Access to tertiary (university) education and eventually membership in professional associations regulates the output of specific professionals. For instance, in order to serve as a practising medical doctor, one needs to be able to demonstrate formal credentials from a legitimate medical school and to receive one’s formal documentation. Larson suggests that the establishment of such a monopolizing educational system emerges in two distinct phases: The achievement of this monopoly of instruction depends on two related historical processes: the first is the process by which an organization of professional producers agrees upon a cognitive base and imposes a predominant definition of professional commodity. The second is the rise and consolidation of national systems of education. (Ibid.: 211) In other words, first there need to be reasonably shared ideas of what constitutes the boundary between legitimate and non-legitimate knowledge claims; what is, for instance, proper medical knowledge and what is pseudo-science or mere quackery.
As soon as there is agreement on theory, practices, technologies and other resources making up the professionals’ everyday work, more formalized systems may be established. Expressed differently, professionals are, in the first place, as Scott (2008: 223) suggests, ‘institutional agents’, that is ‘definers, interpreters, and appliers of institutional elements’. Professionals are therefore, in Scott’s mind, the most influential contemporary creators of institutions. Institutions not only protect professionals’ jurisdictional claims but also help transform and translate professional authority into new domains and areas. One may say that professionals and institutions are two sides of the same coin, enabling professional jurisdiction
Professional Ideologies and Identities and Innovation Work
21
to be maintained over time and space (Fourcade, 2006). While professionals are supported by the various institutions established over time, there is always an ongoing struggle over professional boundaries and jurisdictional claims (Gieryn, 1983; Bechky, 2003); professionals always have to defend themselves against competing professional groups and against groups seeking to be part of the domain of expertise claimed by the professional group. A significant body of studies has examined such ‘boundary-work’ (Gieryn, 1983) and these studies show that professions are in fact dynamic, continually restructuring and reconfiguring social categories (Abbott, 1988). Since the most influential and prestigious professional communities mobilize substantial economic, social and symbolic resources to maintain their social status and role in society, one may expect a rather tight coupling between formal education and the practices of professionals. However, such common-sense thinking is not supported by empirical studies. Collins (1979), for instance, found a surprisingly weak correlation between the requirements of educational credentials and the skills/knowledge requirements of jobs: Education is often irrelevant to on-the-job productivity, and is sometimes counterproductive. Specifically vocational training seems to be derived primarily from work experience rather than from formal school training. The actual performance of schools themselves, the nature of the grading system and its lack of relationship to occupational success, and the dominant ethos among students suggests that schooling is very inefficient as a means of training for work skills. (Ibid.: 21) For instance, in the case of professional managers, a study of 76 companies conducted in 1952 reported that 90 per cent of the managers dismissed from their jobs lacked the desired personal traits rather than lacking adequate technical skills (ibid.: 32).
Such empirical findings further advance the Weberian perspective on professionals, suggesting that professional communities are primarily politically grounded communities rather than communities based primarily on scientific or technical expertise. Professional communities should therefore be defined in organizational rather than epistemological terms; to be a professional is not always to be in possession of superior know-how but to belong to a privileged group sheltered by credentials and jurisdictional claims (Timmermans, 2008). Another consequence of this perspective is that, rather than being oriented towards the tasks conducted,
a professional group is often defined as ‘an occupation which tends to be colleague-oriented, rather than client oriented’ (Larson, 1977: 226; emphasis in original). That is, a member of a professional group may be more concerned about what other professionals think than about the general public’s opinion; a scientist may be more eager to hear the response from the leading peers in the field of expertise; a string quartet may be more interested in performing at the peak of its capacity than in being appreciated by (potentially lesser-knowing) audiences (Murnighan and Conlon, 1991), and so forth. In summary, professional communities are important organizational units in the contemporary knowledge society, effectively organizing and structuring forms of know-how into operational communities with clear jurisdictional claims. Even though the conflicts and controversies between professional groups – surgeons and radiologists, for instance (Burri, 2008; Golan, 2004) – may be time-consuming and daunting for individuals seeking to broaden their scope of work, professional communities still optimize the maintenance and reproduction of knowledge and knowledge-claims in structured forms. However, as suggested by, for instance, Abbott (1988), professionals must be understood as operating in open, ecological systems under the influence of external conditions such as technological changes or new market conditions. Therefore, as exogenous conditions change, one may expect professional ideologies and professional identities to be modified at least to the extent that the new conditions are accommodated in the professions and work can continue, if not exactly as before, at least in a similar manner. In the next section, the concepts of professional ideologies and professional identities will be examined in some detail.
Professional ideologies and professional identities

Just as the concept of professionals is a central entry in the encyclopaedia of the social sciences, so is the concept of ideology (Hawkes, 1996). ‘Ideology’ is a term that has taken on many meanings. In the Marxist, critical tradition, ideology means something like ‘deceiving ideas’ that prevent individuals and communities of individuals from seeing their life world situation correctly; ideology represents beliefs that serve to veil the real world. Sarah Kofman (1999: 14) characterizes this tradition of thinking: ‘Ideology represents real relationships veiled, under cover. It functions, not as a transparent copy obeying the laws of perspective, but, rather, as a simulacrum: it distinguishes, burlesques, blurs real relationships.’ Žižek (1994: 7) is critical of such a view and claims that ideology must be examined in less mystified terms, having
little to do with ‘illusion’ as a ‘distorted representation of its social content’ – what Žižek calls a ‘representationist problematic’. Instead, ideology is part of the operative social reality being reproduced in everyday life. At the same time, Žižek maintains that there is a need for an ‘ideology critique’ but that such a critique must not think of ideology as smoke and mirrors – as illusion. Žižek (ibid.) is thus close to what Pierre Bourdieu speaks of as ‘doxa’ – ‘that there are many things people accept without knowing’ (Pierre Bourdieu, in Bourdieu and Eagleton, 1994: 268). Ideology operates on the level of everyday thinking and consciousness, in the very actions and beliefs that are reproduced on an everyday basis. In more recent social theory, ideology has been used in a less grand manner as that which helps individuals and communities of individuals to make sense out of their life situations and their practical undertakings. The anthropologist Clifford Geertz (1973) talks about a ‘strain theory of ideology’, a theory that emphasizes everyday common-sense thinking rather than conceiving of ideology as advanced machineries of smoke and mirrors set up to dominate certain groups in society. In this tradition of thinking, ideology is not, in Galloway’s (2006: 317) formulation, ‘something that can be solved like a puzzle, or cured like a disease’. Instead, ideology is to be understood as a ‘problematic’, a ‘[s]ite in which theoretical problems arise and are generated and sustained precisely as problems in themselves’ (ibid.). For instance, why do certain groups entertain specific beliefs under determinate conditions? Such a perspective on ideology is more productive in terms of lending itself to empirical investigations.
In the tradition of Louis Althusser (1984), the contemporary philosopher or social theorist who perhaps made the greatest contribution in turning the Marxist concept of ideology into an operative term in the social science vocabulary, ideology operates on the level of what Émile Durkheim (1933) called ‘the collective consciousness’, as what is taken for granted and instituted as common-sense thinking; ‘Ideology never says: “I am ideological”,’ Althusser (1984: 49) says. At the same time, remaining true to the Marxist tradition of thinking, Althusser (ibid.: 32) defines ideology as ‘[t]he system of the ideas and representations which dominate the mind of a man or a social group’. When operationalizing ideology, one may have recourse to mainstream theory. For instance, speaking within the social sciences, Mir et al. (2005: 170) define ideology (with reference to Louis Althusser) as a ‘process by which dominant social groups in communities and societies control oppressed groups with a minimum of conflict, through recourse to a putative “common sense”’. They continue: ‘This common sense is produced through the management of a framework
of symbols and values that legitimize the current order.’ Although Mir et al. (2005) fail to leave the traditional view of ideology behind altogether – speaking about ‘control[ling] oppressed groups with a minimum of conflict’, implying a certain belief in ideology as that which needs to be cured – the key term here is ‘common sense’. Common sense is what structures everyday life and wards off any critical accounts as being ‘irrelevant’, ‘overly abstract’, or any other argument in favour of a continuation of common-sense thinking. In more recent thinking about ideology, propelled by the voluminous work of the renowned Slovenian philosopher and social theorist Slavoj Žižek, ideology is no longer conceived of as oppressive and deceiving but is rather positioned as providing an illusion of openness, of leniency, of alternatives. In this view, ideology is neither ‘smoke and mirrors’, as in the traditional Marxist view, nor a communal cultural and cognitive order (i.e., common-sense thinking) that must not be violated, but a sense of unrestrained and costless possibility. Žižek explains: Ideology is not the closure as such but rather the illusion of openness, the illusion that ‘it could happen otherwise’ which ignores how the very texture of the universe precludes a different course of events . . . Contrary to the vulgar pseudo-Brechtian version, the basic matrix of ideology does not consist in conferring the form of unavoidable necessity upon what is actually dependent on a contingent set of concrete circumstances: the supreme lure of ideology is to procure the illusion of ‘openness’ by rendering invisible the underlying structural necessity (the catastrophic ending of the traditional ‘realist’ novel or the successful final deduction of the whodunit ‘works’ only if it is experienced as the outcome of a series of [un]fortunate contingencies).
(Žižek, 1992: 241–2) Bourdieu (2005) has advocated the term ‘illusio’, which shares much with Žižek’s (1992) thinking (without drawing any further conclusions regarding these two different theorists), to denote the degree of self-deceit that must exist in any society for it to function properly: for instance, the belief in norms such as ‘hard work pays off’, that the juridical system functions properly, that education is a worthwhile investment, and so forth – in short, a belief maybe not in all the rules of the game but in most of them, and certainly in the value of the game itself. The idea of ‘freedom of choice’ and the American dream of being a self-made man or woman are excellent examples of ideological workings of this kind; such ideas do not, in the first place, impose the idea of the importance of hard work and
diligence but position the subject in a situation where he or she is expected to be able to shape his or her own future. This ideology does not present an idea about society but an idea of the enterprising subject, facing many challenges, all of which could be overcome if one only really, really wants to succeed. For Žižek (1992), ideology operates on the level of the psychological apparatus, in the domain of what Lacan calls desire, and therefore the ideology of a certain society is regarded not as oppressive but as liberating and enabling (Roberts, 2005). As Foucault (1980: 59) once pointed out in a much-cited passage, power would be ‘a fragile thing’ if its only function was to repress. Instead, Foucault says, power is strong because ‘it produces effects at the level of desire’. This is what Žižek (1992) emphasizes: that power is, in liberal and democratic societies, manifested not in repressive practices but in the sense of being in a position to accomplish one’s desires. Ideology, then, appears in the form of making the individual believe that ‘even though the situation is like it is, it could be completely different’. For instance, Washburn (2005: 208) reports that, at the top 146 American colleges and universities, ‘75 percent of the students come from the top income quartile of families, and just 3 percent hail from the bottom quartile’. Since tertiary education is widely regarded as what qualifies one for high income, prestigious work, social security and a long list of other desirable outcomes, admittance to Harvard, Princeton, Yale, or Stanford is more or less an entry ticket to middle-class society. Belonging to the bottom quartile, one may take comfort in thinking that ‘I was not admitted to the elite university but it could just have been completely different’, even though this belief is poorly supported by empirical studies.
In the repressive regime of power, ideology states flatly that ‘only the top income groups are admitted at the elite universities’, while in the new regime of power, wherein ideology serves to create a sense of possibility, it is announced that ‘anyone with the right qualities and the right ambition and energy is capable of making it into Harvard’, even though empirical studies suggest otherwise. In summary, ideology is, in Žižek’s (1992) perspective, no longer the cunning use of manipulative devices, operating like fake Potemkin settlements, but instead serves to maintain a sense of openness and possibility: ‘You too can make it, and become successful, happy and prosperous,’ is the message in this regime of power.

Professional ideologies

Professional groups are one of the central organizing principles in contemporary society. In order to qualify as a legitimate member of
a professional community, one not only needs the formal credentials (education diploma and adequate experience) but must also take on identities and nourish beliefs that are shared within the professional community. In the following, the concept of professional ideologies will be examined. Such professional ideologies are naturally acquired in tertiary education. Schleef’s (2006) study of undergraduate students in two elite education programmes – a law school and a business school – offers some insights into the process of becoming a professional. Schleef argues that students enrolling in elite education programmes take on professional ideologies and professional identities in a gradual process of socialization. Since all professional work rests on the ability to think critically, question assumptions and be held accountable for one’s actions, students need to be able to think on their own. At the same time, professional ideologies and identities demand an acceptance and enactment of collectively established professional beliefs and practices. The students, therefore, have to be able to strike a delicate balance between being critical and submitting to the professional tradition. Schleef (ibid.: 4) says that ‘far from being unwilling dupes of ideological indoctrination, students are self-reflective, and they strategically accommodate and resist the ideologies of their education. During professional socialization, they must confront and rationalize their future status as a means of facilitating and thus legitimizing the reproduction of elite privilege.’ Therefore, during the ‘elites-in-training’ programmes, students move from being sceptical outsiders to being professionals-in-the-making, ready to take on their professional tasks and serve society. During training, students ‘contest, rationalize, and ultimately enthusiastically embrace their dominant position in society’, Schleef (ibid.) suggests.
In some cases, the transformation is gradual and seamless while in other cases it is more disruptive and momentous. For instance, Danielle, a law school student at ‘Graham University’ (an American elite university), firmly believed during her first year in law school that lawyers were overpaid and took advantage of their powerful position in society; by the end of her education, she says, without criticism, that ‘lawyers work really, really hard . . . the money is deserved. I think lawyers are really, really smart. I think they are very articulate and on top of things’ (ibid.: 2). Schleef does not suggest that such a change in beliefs is an act of opportunism; it is rather the outcome of a process wherein Danielle is enacting her own mandate to serve society with the authority of a professional. That is, professionals need to believe they embody the qualities and the ethical standards demanded by the profession. Since students need to be able to balance
critical thinking and an enactment of prescribed professional beliefs, ideologies and practices, they face what Bateson (1972) once called a ‘double-bind situation’; if they maintain a critical position, they do not enact a true and sincere belief in their forthcoming professional role, and if they accept all professional ideologies and practices offhand they are not equipped with the adequate skills for critical thinking. Schleef (2006) here introduces the term ‘surface cynicism’ as a process mediating these two positions and objectives. In order to be critical about the professional role and position in society while simultaneously embracing such a role, students direct their critical attention towards the education and training procedures. That is, it is the university training system and all its various routines, practices, didactics, pedagogical features and so forth, that is criticized for being ‘irrelevant’ or ‘counterproductive’ for the future professional work. Law school students are critical of the Socratic method used to interrogate students on specific cases and business school students claim the theoretical training is poorly related to the everyday work of the world of business. Schleef explains the role of surface cynicism in more detail: Surface cynicism is a symbolic resistance that creates and strengthens elite solidarity. Students unite against the elements of their schooling that they can reject, in order to show that they have not been too taken in by school rhetoric and do indeed see behind the façade of professional ideology. At the same time, the dynamics of student resistance actually fortify many aspects of professional ideology and cause students to become more intricately invested in their disciplines . . . Criticism of school is an expected part of the student persona, but total rejection or acceptance of law school rhetoric is not. 
Students can recognize and critique messages about the pedagogy without jeopardizing their investment in the professional hierarchy. (Schleef, 2006: 91) In the course of elite training, students thus move from taking a critical view of the profession to gradually enacting its beliefs, ideologies and practices while becoming critical of the formal training procedures, eventually emerging as full-fledged professionals-in-the-making, ready to serve society through their ability to adhere to professional standards while thinking critically. Speaking in Freudian terms, the education programme serves the role of a ‘transition object’ onto which the students can project all their anxieties and concerns regarding their role as professionals-in-the-making. The surface cynicism is therefore a central
mechanism for helping students reconcile what they regard as opposing objectives. Schleef’s (2006) study is helpful in showing and explaining how professional identities are always shaped and formed by the institutional setting and how professionals have to learn to cope with and accommodate opposing and seemingly irreconcilable positions. As representatives of regimes of knowledge (e.g., medicine, law, engineering, etc.) and authorities in their own right, professional communities have to be maintained by certain ideologies and beliefs to remain stable over time and space. As a consequence, what Strauss et al. (1964) call ‘professional ideologies’ are not only reproduced in tertiary training at the universities but serve a key role in everyday professional work. In their study of the field of psychiatry, Strauss et al. (1964) suggest that, rather than being fixed and coherent categories, professional communities are ‘emergent’ and inextricably bound up (in the case of psychiatrists) with ideologies and treatment practices. In a perspective later to be further developed by Abbott (1988), the specialist orientations within psychiatry are ‘[a]nything but stable entities with relatively fixed boundaries and tasks’ (Strauss et al., 1964: 6). Instead, the boundaries between different categories of psychiatrists are fluid and porous and subject to continual negotiations. Strauss et al. suggest that professional ideologies are what define the boundaries between these groups or ‘schools’. Strauss et al. (ibid.) identify three professional ideologies in psychiatric work. First, there is the somatic ideology, emphasizing the influence of organically based etiology and procedures when examining and treating psychological disorders. For the proponents of somatic therapies, illnesses need to be understood as residing in the materiality of the human body and therapies should focus on re-establishing what is poorly functioning on the level of corporeality.
Second, Strauss et al. speak of a psychotherapeutic ideology, in which the psychological disorder is best treated through psychoanalytical or other psychotherapeutic approaches. In this professional ideology, the human psyche is not to be confused with somatic illnesses; psychological illnesses belong to an entirely different category of illness and need to be treated with specific methods and not with psychopharmacological drugs. Needless to say, the somatic and the psychotherapeutic ideologies in many ways prescribe radically different approaches to professional work, and countless debates between these two approaches are present in the academic and popular literature. On the one hand, the proponents of a somatic ideology enact the psychological disorder as material and embodied; on the other, the psychotherapeutic ideology locates the illness in the cerebral functions, essentially
removed from the somatic order. In addition to these two competing ideological positions, Strauss et al. speak of a milieu therapy position, emphasizing the crucial importance of ‘environmental factors in etiology or treatment’ (ibid.: 8). Rather than solely locating psychological disorder in the body or the cerebral regions, the proponents of milieu therapy bring the wider social context into the analysis and treatment of psychological disorders. To some extent, one may say that the proponents of milieu therapy open up the problems facing certain patients to more sociological or even cultural explanations. For instance, is suicidal patients’ unwillingness to continue their lives to be explained on the level of somatic disorders, in terms of absences or overproduction of certain hormones; should their despair be understood as a strictly psychological disorder to be treated through psychoanalysis; or are they, as suggested by proponents of milieu therapy, also to be understood as members of a society and culture imposing certain objectives, expectations and even desires that may be complicated for the individuals to live up to? For Strauss et al., the complementary and, at times, overlapping professional ideologies in psychiatry are not only classifications that make sense for the external analyst but are what strongly influences everyday work and the choice of therapies on an everyday basis. Strauss et al. suggest that the ‘social structure’ of the psychiatric hospital is derived from three sources: (1) the number and kinds of professionals who work there, (2) the treatment ideologies and professional ideologies of these professionals and (3) the ‘relationship of the institution and its professionals to outside communities, both professional and lay’ (ibid.: 351–2).
Speaking from the perspective of institutional theory (e.g., Meyer and Rowan, 1977: 346), ‘organizations tend to disappear as distinct and bounded units’ but become open systems under the influence of external changes and institutions. In the case of psychiatric hospitals, the academic research conducted and published and the therapies developed elsewhere are important sources affecting ‘social structure’ in the focal psychiatric hospital. However, such external changes are not continuous but occasional and, in everyday work, the established professional ideologies set the boundaries for what practices and beliefs can be tolerated. Strauss et al. (1964) say that any newcomer to a psychiatric hospital has to converge (within reasonable limits) to the predominant professional ideologies: The fieldwork data suggests that institutions are both selective and productive in terms of ideologies. They are selective in that only certain types of ideology can be tolerated or implemented within the
limits set by both institutional necessities and the particular organization of treatment. For example, on the state hospital treatment services, any young psychiatrist whose ideological orientation was basically psychotherapeutic had to develop a scheme of operation drastically modifying the psychotherapeutic approach appropriate in other institutions. (Ibid.: 360) Expressed differently, professional ideologies are not overly theoretical systems of propositions removed from everyday matters but are, instead, ‘abstract systems of ideas’ mediated by what Strauss et al. call operational philosophies. Operational philosophies are ‘[s]ystems of ideas and procedures for implementing therapeutic ideologies under specific institutional conditions’, they (ibid.: 360) say. That is, professional ideologies are transcribed into operational philosophies which are further translated into actual practices. In other words, depending on what professional ideology the psychiatrist adheres to, his or her actual operations will differ notwithstanding similar institutional conditions: ‘[G]iven similar institutional conditions, persons with different ideological positions operate differently, that is, they emphasize different elements of the possible array of services and organize their working conditions accordingly’ (ibid.: 361, emphasis in original). Even though they may work in the same hospital, a psychiatrist following a somatic credo prescribes different therapies than a psychiatrist with a firm belief in psychotherapeutic treatment. Strauss et al. emphasize this connection between ideology and practices: While the future professional is engaged in acquiring the specific skills of his trade and the professional identity that will guide his activity, he also acquires convictions about what is important or basic to treatment and what is proper treatment, he learns treatment ideology as an integral part of his professional training.
(Ibid.: 363) In addition, the professional ideologies adhered to are not only regarded as instrumental scripts for accomplishing the best therapeutic effects; the professional ideologies are also ‘highly charged morally’ (ibid.: 365) – that is, the specific professional ideologies operate not only on the level of functionality but also on the level of values and norms more broadly. In Lakoff’s (2006) study of the use of psychopharmacological drugs in Argentina at the beginning of the new millennium, he found that among the community of Buenos Aires psychoanalysts, often dedicated Lacanians, there was a generally critical view of such therapies because
psychoanalysis was regarded as being ‘anti-capitalist’ and ‘left-wing’ and essentially in opposition to the mass-produced drugs of big pharma. In the case of the Argentinian ‘psychology community’ – the mundo psi – psychotherapeutic ideologies were ‘morally charged’ in terms of underlining the opposition between (liberating) psychotherapy and (repressive) psychopharmacological drugs. In the same manner, the psychiatrists studied by Strauss et al. (1964) pointed to the connection between professional and personal ideologies. In summary, Strauss et al. show the connection between professional ideology and everyday practice. ‘Ideologies provide frameworks for judging both how patients should be helped and what is harmful for patients,’ Strauss et al. (ibid.: 365) contend. Ideologies are not otherworldly speculations but are, in many cases, abstract systems of thought that transcribe themselves into everyday practices and operations. What is of particular interest is that Strauss et al. (1964) point to the strength of professional ideologies while at the same time showing that they are open to external influences and changes; professional ideologies are thus semi-stable regimes of beliefs and practices, entrenched through training or practical experiences or embedded in personal or collective values or norms, that are open to negotiation if new ideas or evidence are provided. Just like any institution, professional ideologies are never carved in stone once and for all, but are rather malleable and changeable under the right conditions (Timmermans, 2008).

Professional identities

Another analytical concept that is helpful when studying professional communities is the concept of identity. Arguably more down to earth and less politically charged than the concept of professional ideology, the concept of professional identity is useful for understanding how professional ideologies are modified or enacted over time and under determinate conditions.
For instance, in their study of American psychiatric hospitals, Strauss et al. (1964) emphasize that professionals ‘follow careers’, not institutions, and that various institutions (e.g., specific hospitals) are little more than ‘waystations’ towards future goals and more prestigious positions. At the same time, such waystations may be more permanent than the individual careerist professional may hope for; consequently, identities are embedded both in local conditions and practices and in aspirations and hopes for the future. Expressed differently, identities are both actual and virtual: actual in terms of being based on practices and conditions at hand, present in everyday work; virtual in terms of being enacted on the basis of ‘what
may come’.1 Think, for instance, of the identity of Madame Bovary in Gustave Flaubert’s novel, shaped both by the tedious everyday life with a boring husband, a life too small and too insignificant for someone with Emma Bovary’s ambitions, and by the possibilities that may open up if only things were different. Most human beings endure a life situation where their durée comprises past experiences, present undertakings and forthcoming and anticipated events; our life world is based on a temporality that extends in all directions and consequently our identities are a blend of what has been, what is and what may eventually become. A great number of social theorists have discussed the present age – a period best termed, after Giddens (1990), ‘late modernity’ – as an age of fluidity, changes and disruptive events, of radical breakage from the past, and of increasingly situated social identities (DeLanda, 2006; Bauman, 2005; Urry, 2000, 2003; Beck, 2000). The so-called post-modern or late-modern subject in this literature is a fragile construct, resting not on century-long occupational traditions and a firm and uncontested family history but instead shaped and formed by individual accomplishments and undertakings (Foucault, 1970; Braidotti, 1994; Poster, 2001; Stavrakakis, 2008). We are no longer born into stable identities but increasingly acquire such identities in the course of our living. The anthropologist James Clifford (1988) suggests that one must replace ‘any transcendental regime of authenticity’ with a more ‘historically contingent’ view of identity: ‘I argue,’ Clifford (ibid.: 10) writes, ‘that identity, considered ethnographically, must always be mixed, relational, and inventive’.
Kosmala and Herrbach (2006: 1394), speaking from the perspective of organization theory, address the same issue: ‘[I]ndividual identity is no longer “passively” derived from one’s position in the social space; rather it is the responsibility of each individual to reflect upon how they choose to exist in a historically and culturally specific time.’ In the organization theory and management studies literature, such a fluid and contingent view of identity has been embraced, and numerous studies suggest that organizations are in fact among the primary sites for the acquisition and maintenance of identities. One of the seminal works in this tradition of research is Kondo’s (1990) study of what she calls the ‘narrative production of the self’ (ibid.: 231) in Japanese society. For Kondo, identity is produced through a continuous process of storytelling about who one is and what role one plays in both the local and the broader society: [P]eople like Ohara-san [artisan in the factory where Kondo worked] are constantly becoming, crafting themselves in particular, located
situations for particular ends. Ohara-san spun out his identity in narrative, to me . . . When he could, he enacted his work identity on the shop floor, using my presence, for example, as an opportunity to assert this identity. Ohara-san’s artisan self was, in short, produced in narrative and in performance, in specific delimited situations. (Ibid.: 257) Being in a state of ‘constantly becoming’ may sound like a poor condition for establishing a coherent and integrated identity for oneself, but what Kondo (1990) suggests is that identities are never simply stabilized; they are processes essentially shaped by events in the course of life. In organization studies, identities and subject-positions (a term derived from the Foucauldian corpus of literature, denoting the crafting of viable subject positions on the basis of regimes of what Foucault calls savoir and connaissance, structured and legitimate forms of knowledge) are seen as inextricably entangled with managerial practices. The subject is then not in opposition to managerial practices but is instead shaped and formed by the very operational procedures of organizations: a salesman takes on the identity of the salesman on the basis of everyday sales work; the teacher identifies with day-to-day practices in the classroom and in discussions with colleagues, and so forth (see, e.g., Leidner, 1993). ‘[P]ractices are not just what people do’, Chia and Holt (2006: 640) argue: ‘Rather, practices are social sites in which events, entities and meaning help compose one another . . . Practices are identity-forming and strategy-setting activities.’ Practices and identities are, unsurprisingly, closely associated; what you do has implications for who you think you are or hope to become. Professional identities are therefore shaped by a number of factors and conditions, including institutional, gendered, cultural and practical conditions influencing one’s work. 
Professional identities are never fully stabilized but are always subject to negotiation on the basis of the aspirations of competing professional or occupational communities, technological changes and shifts, and other institutional changes. In addition, personal biographies, accomplishments and disappointments play a role in the individual’s identity. The possibilities for creating a coherent and meaningful subject-position or identity vary between different occupations, industries and even companies, and the literature on identity-making offers a wide variety of examples of the contingencies in this social process. In some cases, formal education and training are both enabling and constitutive of identities. For instance, Faulkner (2007: 337) suggests
that, in the case of engineers, ‘their educational grounding in mathematics and science allows engineers to claim an identity in the material and (mostly?) predictable phenomena governed by the laws of nature, backed up by a faith in cause-and-effects reasoning’. The engineer’s identity is thus grounded in the shared belief that the world is to be understood mathematically and scientifically, and that such an understanding demands certain skills and experiences. In other cases, identities are weaker and less firmly anchored in institutional milieus, making identity-work more cumbersome and more vulnerable to criticism. Clegg et al. (2007: 511) show, for instance, that individuals working in executive coaching, as personal advisers to managers and executives, struggle to construct a robust identity for themselves. Since executive coaches are a rather recent species in the organizational fauna, more or less derived from management consulting – a field of expertise often pinpointed as problematic in terms of its scientific grounding, practical implications and ideological underpinnings – members of this community often define themselves rather defensively, in terms of what they are not: they are neither conventional consultants, nor are they counsellors. Clegg et al. (ibid.) conclude their study: As an industry, business coaching is ill-defined, contradictory and ambiguous. Indeed, it is this apparent lack of an established order within which coaches work that enables them to try to construct their organizational identities. By this account, organizational identity is not an essence or a substance fleshed out by characteristics; rather, organizational identity is enacted and embedded in a field of differences. (Ibid.) In general, the better instituted the professional or occupational group is, the easier it is to construct an identity on the basis of its work. 
More recently, gender theorists have suggested that the very idea of a coherent and well-integrated identity is what needs to be undermined, being in itself an ideological notion (e.g., Braidotti, 1994). Rather than unified constructs, identities are always of necessity assemblages, multiplicities of different social positions and roles: Gender identity, understood this way as rhizomatic or having the qualities of a rhizome, does not originate in multiplicity or acquire multiplicity – it is multiplicity, although the sense of being implied by the word ‘is’ should not be understood as stability, but as constant change of becoming. (Linstead and Pullen, 2006: 1291)
In this view, identities are never more than transitory points in the course of life; like beads on a string, our lives are made up of a number of complementary and competing positions that, in their totality, arguably constitute our perceived selves. At the other end of the spectrum, identity is positioned not as primarily constructed by the individual subject but as serving the functional role of organizing individuals into operative units. Benhabib (2002: 72) introduces the term ‘corporate identities’ to refer to group identities that are ‘officially recognized by the state and its institutions’. While Benhabib (2002) speaks about minorities formally recognized by the state, the concept is useful for understanding how identities are formed, and even imposed, by corporations and organizations. For instance, the gendered professional identity of the stewardess working in an airline company is negotiable to some extent, but it is also shaped by gendered beliefs about women as caring and nurturing and ‘naturally’ inclined to look after the clients’ best interests (Tyler and Abbott, 1998; Tyler and Taylor, 1998; Hochschild, 1983). Women who are unwilling to take on such images of the ‘hyperwoman’ (Borgerson and Rehn, 2004) have to resist such an identity with the means available (e.g., through cynicism). In some cases, corporate identities aim to reconcile opposing identities or objectives, leaving the individual with few chances to fully accommodate such positions. For instance, Henry’s (2006: 278) study of so-called post-bureaucratic organizations shows that middle managers are expected to operate both as ‘morally neutral technician’ and as ‘self-interested market entrepreneur’ – two roles that are, if not contradictory, at least complicated to bring into harmony under one unified and stable identity. 
In other cases, the identity of a professional or occupational community may be so homogeneous that it is complicated to deviate from the norm and take on alternative identities. For instance, in Saxenian’s (1994) study of the Silicon Valley IT cluster in the San Francisco Bay area, the majority of the professional computer experts belonged to a reasonably stable and homogeneous category of people: The collective identity was strengthened by the homogeneity of Silicon Valley’s founders. Virtually all were white men; most were in their early twenties. Many had studied engineering at Stanford or MIT, and most had no industrial experience. None had roots in the region; a surprising number of the community’s major figures had grown up in small towns in the Midwest and shared a distrust for established East Coast institutions and attitudes. (Ibid.: 30)
Even though Saxenian contributes to the reproduction of a highly romantic narrative of a silicon-based utopia drenched in the California sun, where creativity flows freely and with little resistance from narrow-minded or uninformed suit-clad executives representing ‘old East Coast money’, the corporate identity of the Silicon Valley entrepreneur or professional is portrayed in rather homogeneous terms. Saxenian (1994) makes no major point about, for instance, the under-representation of women in Silicon Valley. In summary, then, professional identities constitute the sense of an ‘imagined community’ (Anderson, 1983) that members of a professional community share with other members of the profession. This professional identity helps to regulate behaviour and practices and thus contributes to the sorting out of specific privileges and duties in the professional field.
The concept of innovation
This book examines the professional ideologies and identities of professionals engaging in innovation work within the emerging regime of the bioeconomy. Studies of innovation constitute one of the central domains of research in the organization theory and management studies literature. Since innovation lies at the very heart of the capitalist regime of accumulation – new goods and services are expected to be continuously brought to the market in the ceaseless circulation of capital – it is often claimed that firms and organizations must innovate or eventually perish. While this may be true in many markets, there are also industries and companies that live well and prosperously off a few ‘cash-cow products’. In any case, the literature on innovation management or, as we will call it in this context, innovation work, is massive, and several academic journals exclusively target innovation. 
‘A growing number of “innovation studies” show little allegiance to any particular discipline, and widely disparate theories and methods coexist in relevant journals and handbooks,’ Pavitt (2005: 87) remarks. The literature is not only voluminous but also ‘disparate’. While the literature review offered here is far from exhaustive, it provides a few illustrations of how the contemporary discussion on innovation is structured. The starting point for many studies of innovation management practices, and for many conceptual contributions, is that the degree of innovation in a particular firm is related to long-term competitive advantage. For instance, Bogner and Bansal (2007: 166) found, first, that firms that ‘[g]enerate impactful innovation experience above-average growth and profitability’ and, second, that firms that build disproportionately on
their own knowledge (i.e., knowledge created in-house) experience ‘above-average growth and profitability’. The ability to access and effectively manage knowledge apparently plays a central role in innovation management. Keith Pavitt (2005: 86) identifies three processes of innovation: ‘The production of knowledge; the transformation of knowledge into artifacts – by which we mean products, systems, processes, and services; and the continuous matching of the latter to market need and demands.’ In this view, the starting point for any innovation process is the creation of a solid knowledge base that enables the innovation work team to translate that knowledge into an artefact or service. The last phase is here cast as a marketing activity wherein supply and demand are kept in equilibrium. The accumulation of knowledge is, in other words, a central activity in Pavitt’s (2005) innovation model. However, as innovation management researchers have increasingly emphasized (e.g., Dodgson et al., 2005), innovation processes are rarely as linear and straightforward as suggested by Pavitt’s (2005) conceptual model. Most innovation work, at least in technology innovation, emerges as a ‘garden of forking paths’ (Williams and Edge, 1996: 866) – ‘different routes are available, potentially leading to different technological outcomes’. Being able to endure such uncertainties is one of the major challenges for both innovation team leaders and the members of such teams. The innovator’s work is disruptive, non-linear and turbulent rather than a smooth transition between clearly demarcated phases. However, innovation work may look different depending on the stage of maturity of the specific industry. 
James Utterback (1994) identifies three development stages of an industry, which he refers to as the fluid phase – the early stage, characterized by much turbulence and quick change; the transitional phase – when the industry is reaching some maturity and more stability; and the specific phase – when the industry has grown mature and the speed of innovation is slowing down. Utterback claims that, depending on which stage an industry is in at present, innovation work will look different. For instance, in the fluid phase it is often individual entrepreneurs that account for a substantial part of the innovation work; in fact, it is often the innovation per se that is the driver for the entire industry. In the transitional and specific phases, the individual entrepreneur is replaced by more systematic research activities, including full research teams with a variety of expertise. Many firms also establish R&D functions to be responsible for innovation work. Utterback’s (1994) main arguments are summarized in Table 1.1.
Table 1.1 Significant characteristics in the three phases of industrial innovation

Innovation
Fluid phase: Frequent major product changes
Transitional phase: Major changes required by rising demand
Specific phase: Incremental for product, with cumulative improvements in productivity and quality

Source of innovation
Fluid phase: Industry pioneers; product users
Transitional phase: Manufacturers; users
Specific phase: Often suppliers

Products
Fluid phase: Diverse designs, often customized
Transitional phase: At least one product design, stable enough to have significant production volume
Specific phase: Mostly undifferentiated, standard products

Production processes
Fluid phase: Flexible and inefficient; major changes easily accommodated
Transitional phase: Becoming more rigid, with changes occurring in major steps
Specific phase: Efficient, capital intensive and rigid; cost of change high

R&D
Fluid phase: Focus unspecified because of high degree of technical uncertainty
Transitional phase: Focus on specific product features once dominant design emerges
Specific phase: Focus on incremental product technologies; emphasis on process technology

Equipment
Fluid phase: General-purpose, requiring skilled labour
Transitional phase: Some sub-processes automated, creating islands of automation
Specific phase: Special-purpose, mostly automatic, with labour focused on tending and monitoring equipment

Plant
Fluid phase: Small-scale, located near user or source of innovation
Transitional phase: General-purpose with specialized sections
Specific phase: Large-scale, highly specific to particular products

Cost of process changes
Fluid phase: Low
Transitional phase: Moderate
Specific phase: High

Competitors
Fluid phase: Few, but growing in numbers with widely fluctuating market demand
Transitional phase: Many, but declining in numbers after emergence of dominant design
Specific phase: Few; classic oligopoly with stable markets

Basis of competition
Fluid phase: Functional product performance
Transitional phase: Product variation; fitness for use
Specific phase: Price

Organizational control
Fluid phase: Informal and entrepreneurial
Transitional phase: Through project and task groups
Specific phase: Structure, rules and goals

Vulnerability of industry leaders
Fluid phase: To imitators and patent challenges; to successful product breakthroughs
Transitional phase: To more efficient and higher-quality producers
Specific phase: To technological innovations that present superior product substitutes

Source: Adapted from Utterback, 1994: 94–5.
Utterback (1994) suggests that as firms move from birth and adolescence to maturity, the rate of innovation slows down, or at least the share of radical innovations declines. One recent tendency subject to thorough investigation and research is that innovation work is increasingly organized in a network form. Industries characterized by a high degree of innovation and rapid technological development have benefited particularly from being able to collaborate across organizational boundaries (Powell et al., 1996; Powell, 1998; Young et al., 2001; Harrison and Laberge, 2002; Owen-Smith and Powell, 2004; Powell and Grodal, 2005; Bell, 2005). In summary, venturing into the bioeconomy means both articulating scientific analytical frameworks and constructing professional categories that are capable of translating life science know-how into products and therapies. Since the professional ideologies of scientists may favour long-term engagement with complicated issues, there may be difficulties in nurturing professional identities that seek to fully exploit the entrenched stock of know-how. Expressed differently, orchestrating innovations on the basis of professional expertise in the life sciences is not a trivial matter, since scientists acquire credibility through their contribution to formal knowledge rather than through producing economic wealth. As a consequence, professionalism, identities and innovation must be examined as closely entangled processes within an integrated framework for analysis. In the next chapter, this managerial perspective is complemented and broadened by the social science literature addressing the social implications of the steep growth in life science know-how.
Summary and conclusions
Professionalism, professional ideologies and professional identities strongly determine how highly specialized skills and expertise are exploited in society. A common concern in Europe is that, in comparison with the USA, for example, European university researchers are relatively poor at patenting and commercializing their research findings. The professional ideology of the life science professions emphasizes the contribution of disinterested and formal knowledge while undervaluing commercialization activities. The traditional gap between universities and industry has been subject to much debate and research effort over the last ten years, and there is a firm belief in the need to encourage enterprising and entrepreneurial skills and efforts in the community of university professors. However, centuries of tradition and
carefully negotiated standards for good research practice are not wiped out overnight just because some economist or policy-maker regards university professors’ know-how as a not yet fully exploited resource; the historical role of such professional standards is precisely to mediate between the more fickle movements and initiatives of the state administration or market-based actors. That is, professional standards are negotiated in the face of emerging economic and social conditions and can change substantially over time, but there is an inertia in the system that must neither be underrated nor dismissed as some kind of evidence of professional indulgence or complacency. Seen in a historical perspective, the production of professional and scientific knowledge has been an unprecedented success story, fundamentally altering the human condition. The restless ambition to make more immediate and quick connections between professional expertise and innovations is largely ignorant of such accomplishments, and all too ready to overlook or marginalize the role of professional standards in creating shared stocks of knowledge. Needless to say, professional expertise and its communal beliefs and identities are the infrastructure of the production of scientific know-how; without such an infrastructure, individual research efforts and initiatives would remain largely uncoordinated, undermining the processes for evaluating and judging the knowledge claims of individual researchers.
Note 1. For the philosophical concept of the virtual – in sharp contrast to the technological use of the term – see Bergson’s seminal work Matter and Memory (1988) and Deleuze’s (1988) analysis of the term in Bergson’s work. A number of excellent commentaries have been published by Murphy (1998), Ansell Pearson (2002), Massumi (2002), DeLanda (2002) and Grosz (2004, 2005). A more accessible introduction to the term is provided by Lévy (1998) and Shields (2003). In the field of organization theory, Thanem and Linstead’s (2006) and Linstead and Thanem’s (2007) work is representative of this tradition of thinking.
2 The Bioeconomy, Biocapital and the New Regime of Science-based Innovation
Introduction
In this chapter, a number of central terms and concepts, developed and used both within and outside the biopharmaceutical industry and forming part of the analytical framework used in the empirical chapters of the book, will be discussed. The literature reviewed in this chapter is highly diverse and includes a variety of social science disciplines such as sociology, anthropology, organization theory, science and technology studies and philosophy. This rather heterogeneous body of literature shares an interest in what Rose (2007) calls the bioeconomy, a broad but useful term denoting an economic regime wherein the biopharmaceutical industry and its accompanying and supporting life sciences play a central role, not only for the economic system, in terms of their share of GNP, but also socially and culturally as a predominant paradigm. The twentieth century was, in Bauman’s (2005) characterization, constituted by ‘solids’, immutable engineered artefacts and technological systems such as automobiles, highways, skyscrapers and aeroplanes that helped define the modern period: Engineered, modern space was to be tough, solid, permanent and nonnegotiable. Concrete and steel were to be its flesh, the web of railway tracks and highways its blood vessels. Writers of modern utopias did not distinguish between the social and the architectural order, social and territorial units and divisions; for them – as for their contemporaries in charge of social order – the key to an orderly society was to be found in the organization of space. Social totality was to be a hierarchy of ever larger and more inclusive localities with the supra-local authority of the state perched on the top and surveilling the whole while itself protected from day-to-day invigilation. (Ibid.: 62)
As opposed to this engineered modernity, grounded in the large-scale transformation of nature into commodities and technologies, the twenty-first century is expected – at least by Bauman (2000) – to be dominated by ideas rather than such material objects. Bauman (ibid.: 151) continues: ‘When it comes to making the ideas profitable, the objects of competition are the consumers, not the producers. No wonder that the present-day engagement of capital is primarily with the consumers’ (ibid.). Lanham (2006) here speaks about the attention economy, an economic regime dominated not so much by access to know-how and information, as previously suggested by proponents of concepts such as ‘the knowledge society’, ‘the knowledge economy’ or ‘knowledge capitalism’, but by the ability to attract the attention of significant social groups such as decision-makers and consumers: ‘[I]nformation is not in short supply in the new information economy. We’re drowning in it. What we lack is the human attention needed to make sense of it all. It will be easier to find our place in the new regime if we think of it as an economics of attention. Attention is the commodity in short supply,’ Lanham (2006: xi) announces. ‘In information society,’ the Norwegian anthropologist Thomas Hylland Eriksen (2001: 21) writes, ‘the scarcest resource for people on the supply side of the economy is neither iron ore nor sacks of grain, but the attention of others. Everyone who works in the information field – from weather broadcasters to professors – compete over the same seconds, minutes, and hours of other people’s lives.’ A similar view is advocated by Davenport and Beck (2001). However, the entire complex of the life sciences and the biopharmaceutical industry has been portrayed as the industry that will dominate the economic regime of accumulation in the new century. 
While this complex of know-how, fundamentally resting on the advancement of technoscience into new territories, may be based on fruitful and innovative ideas and the capacity to draw attention – especially when competing for the attention of practising medical doctors and patients – it is also a most sophisticated endeavour to bring scientific savoir-faire, technologies and therapeutic practices into harmony in order to produce new, innovative and life-saving drugs. That is, the organization of the biopharmaceutical industry is not only a matter of being externally oriented towards markets and the public but also, primarily, a matter of internal norms, values, preferences and aspirations. On the other hand, as we may learn from an institutional theory perspective, it is complicated to understand the changes and continuity of an industry or an organization without taking into account the exogenous social, economic, cultural and technological occurrences
in the broader social context. Therefore, when studying the biopharmaceutical industry it is important both to seek to understand how its various actors conceive of their own roles and opportunities and to examine such external conditions for their work. ‘Drugs are among the economically and culturally most important products of science, and they appear to be only growing in importance,’ Sismondo (2004: 157) suggests. While the engineered space of modernity gave us a long line of technological marvels, including the automobile, the washing machine, the television set, the computer, the mp3 player and so forth, all these technologies are gradually being taken for granted and become instituted in everyday social life. For instance, the pioneers of television were anxious to organize the programmes broadcast during daytime following the standard set by radio broadcasting, so that housewives could continue their household work;1 television was not immediately integrated into everyday life but had to fit into the pre-existing social structure (Morley, 2007; Boddy, 2004). However, once new technologies become instituted they are no longer conceived of as amazing novelties but gradually merge into the fabric of everyday life – they become insignificant and ‘infrastructural’. In the present regime of the bioeconomy, by contrast, targeting human health and well-being, there are always opportunities for better, healthier, fitter, more beautiful bodies. The shift of attention from the material Umwelt to the surface and interiority of the body is a central change of perspective in the bioeconomic regime. 
What Sismondo (2004) suggests is that drugs are no longer merely the intersection between esoteric technoscience and the broader public, a meeting point (or ‘trading zone’, in Galison’s [1997] parlance) – a rather marginal connection or passage point between the laboratory and the everyday life of human beings – but increasingly act as a central hub for more and more economically significant activities. At the same time, these new pharmaceuticals do not sell themselves, at least not in the initial phases of the launching process: Angell (2004: 198) reports that, of all industries, the pharmaceutical industry employs the largest number of lobbyists in Washington, DC; direct-to-consumer marketing is substantial – in the USA a majority of TV commercials seem to come from pharmaceutical companies, and the situation is becoming similar in Europe – and the pharmaceutical company sales representative, ‘the detail man’, performing a blend of sales pitches and educational services, has been around since the inter-war period (Greene, 2004). ‘By one 2000 estimate, the drug industry’s 11 Fortune 500 companies devoted 30% of their revenue to marketing and administrative costs and only 12% of their revenue to research and development,’ Mirowski and van
Horn (2005: 533) report. Still, more and more aspects of human lives are being penetrated – in some cases defined – by access to biopharmaceutical therapies (Rosenberg and Golden, 1992). For instance, the domain of reproductive technologies and opportunities has expanded substantially during the last century; as adults in the Western world tend to study longer and invest several years in making careers before they aim to become parents, the growing ‘reproduction industry’ finds opportunities to help couples or single women who discover that they are unable to have children (Clarke, 1998; Thompson, 2005). While the biopharmaceutical industry has been subject to systematic study previously (see, e.g., Braithwaite, 1984; Swann, 1988; Abraham, 1995), it is quite recently, in the first decade of the new millennium, that the industry has attracted more and more attention in the social sciences. In this chapter, some of this literature will be reviewed and discussed. It is noteworthy that the objective here is not to engage in some kind of muckraking, positioning the biopharmaceutical industry as a straw man put up only to be beaten down. Quite the contrary; having extensive industry experience (more than 25 years in the case of Mats Sundgren) and more than ten years of research collaborations with pharmaceutical companies (Alexander Styhre), we regard the biopharmaceutical industry as no less and no more ‘ethical’ (on the upside) or ‘opportunistic’ (on the downside) than any other industry. Executives, managers and co-workers in biopharmaceutical companies are often highly dedicated to the pursuit of producing new therapies for the public, and their work takes place in a highly regulated milieu where the ‘dos and don’ts’ of the work are prescribed in detail and monitored by a number of autonomous national and international regulatory bodies. 
Many significant contributions have been made, helping human beings live longer lives of higher quality, devoid of much of the suffering that previous generations were unfortunate enough to endure. What is aimed at here is rather a form of critique in the Kantian tradition, a practico-theoretical pursuit more recently rehabilitated by Foucault. In Thacker’s (2004: 170) account, critique is what ‘works at the interstices of its object, revealing the points of fissure in the forces that come together to form a given practice, discipline, a given body’. Therefore, critique is not merely the ‘negative work’ done so that a ‘positive resolution’ may follow. Instead, critique is ‘[g]enerative practice at precisely the moment of its negativity: it therefore provides openings, pathways, and alternatives that were previously foreclosed by the structure of a discourse,’ Thacker (2004: 170) suggests. Without proper critical procedures, no discourse may evolve over time, and therefore the
ability to critically examine a social field and a social practice is of vital importance. If the literature addressing the biopharmaceutical industry appears negatively slanted, it is probably because this is, first, the role the social sciences are expected to play, as critical interrogators, and, second, because it is always more intriguing to tell a story with a plot where some critical incident needs to be explored in detail. Success stories may be soothing and educative, but they cannot match the critical account in their capacity to engage the reader. The biopharmaceutical industry is, in every sense, an economically, socially and scientifically significant industry that deserves to be examined in detail and studied from a variety of perspectives. In this book, organization theory and innovation management perspectives are pursued with the ambition of making a contribution to the literature on the bioeconomy. This chapter is organized as follows. First, the concept of the bioeconomy is introduced as the overarching framework for the analysis of a series of interrelated processes, activities and projects. Second, the concept of the body and its central role in the life sciences will be discussed. Third, the literature on biomedicalization will be reviewed to demonstrate that the bioeconomy does not strictly operate on the basis of needs and demands. Fourth, we look at the concept of the tissue economy, in which a variety of biological tissues and specimens are assigned economic value. Fifth, genomic and post-genomic technoscientific approaches and frameworks will be examined and related to practical work in the biopharmaceutical industry and in academic research work.
The regime of the bioeconomy

One of the most central terms in the social science literature seeking to examine the relationship between science, politics and innovation work practices is Michel Foucault’s term ‘biopolitics’.2 The corpus of work addressing this term – including the rather recent publication of his lectures at Collège de France in the latter half of the 1970s (Foucault, 2008; Lemke, 2001) – is regularly cited and referenced in a great number of texts, many of which are cited in this book. The term ‘biopolitics’ is, like many entries in Foucault’s vocabulary, a composite term, worth exploring in some detail (Esposito, 2008). The term is defined as follows in one of Foucault’s texts: ‘[T]he endeavor . . . to rationalize the problems presented to governmental practices by the phenomena characteristic of a group of living human beings constituted as a population: health, sanitation, birthrate, longevity, race’ (Foucault, 1997: 73).
The Bioeconomy and the New Regime of Science-based Innovation
Foucault (2003) suggests in his lecture series entitled Society Must be Defended (held in the academic year of 1975–6) that the eighteenth century was the period when the sciences were consolidated and ‘disciplined’ – that is, a discipline received ‘its own field, criteria for selection that allowed us to eradicate false knowledge or nonknowledge’. This establishment of disciplines also imposed procedures for ‘normalization and homogenization of knowledge-contents’, forms of hierarchization and an ‘internal organization that could centralize knowledges around a sort of de facto axiomatization’ (ibid.: 181). As Foucault emphasized, while science certainly existed prior to the eighteenth century, it did not exist ‘in the singular’; instead, scientists engaged in all sorts of intellectual endeavours and the classic ‘renaissance man’ – Leibniz easily comes to mind – contributed to a wide variety of research. From the eighteenth century, disciplinary boundaries inhibited such omniscient intellectual pursuits – scientists became specialists. In addition, even though amateur researchers continued to play a role in the eighteenth and nineteenth centuries (Charles Darwin, for instance, lacked any affiliation with the universities; the same goes for Søren Kierkegaard), the university gradually replaced the courts and the bourgeois salons as the domain for systematic enquiry. The concept of biopolitics is closely related to these institutional changes: [I]n the seventeenth and eighteenth centuries, we saw the emergence of techniques of power that were essentially centered in the body, on the individual body. They included all devices that were used to ensure the spatial distribution of individual bodies (their separation, their alignment, their serialization, and their surveillance) and the organization, around those individuals, of a whole field of visuality.
(Ibid.: 241–2)

Regulating the population as a number of individual bodies demanded a new regime of discipline, and the sciences contributed to rendering the human body a subject of systematic enquiry. For Foucault, the emergence of biopolitics is representative of a new ‘technology of power’ operating through different means: [W]e see something new emerging in the second half of the eighteenth century: a new technology of power, but this time it is not disciplinary. This technique of power does not exclude the former, does not exclude disciplinary technology, but it does dovetail into it, integrate it, modify it to some extent, and above all, use it by sort of infiltrating it, embedding itself in existing disciplinary techniques.
The new technique does not simply do away with the disciplinary techniques, because it exists on a different level, on a different scale, and because it has a different bearing area, and makes use of very different instruments. (Ibid.: 242) The emergence of biopolitics, for Foucault, represents a major shift in modern society, leading to a series of important changes in social practices, politics, regimes of power and virtually anything we regard as being part of society. For the first time in history, birth, health and death became subject to political (and eventually what may be called managerial) interests. In one of his series of lectures at Collège de France, published as The Birth of Biopolitics, Foucault (2008) traces the beginning of biopolitics to liberalism and liberal politics and economy. Three decades later, Melinda Cooper (2008) makes a similar argument, pointing at the connections between neo-liberalist doctrines in the USA and the emerging bioeconomy, starting in the 1970s (see also Donzelot, 2008; McNay, 2009). Liberalism and neo-liberalism represent a political stance that is sceptical towards any kind of state intervention in economic activities. Laissez-faire politics enables the bioeconomy – a term discussed in greater detail shortly – to thrive, both through acknowledging the importance of letting the market, not the political system, determine what are feasible economic activities, and through the deregulation of the financial markets, enabling venture capital to flow into the emerging life sciences. While Foucault (2008) emphasizes the political effects of liberalism and neo-liberalism, Cooper (2008) underlines the financial consequences: What neoliberalism seeks to impose is not as much the generalized commodification of daily life – the reduction of the extraeconomic to the demands of exchange value – as its financialization.
Its imperative is not so much the measurement of biological time as its incorporation into the nonmeasurable, achronological temporality of financial capital accumulation. (Ibid.: 10) Helmreich (2008) emphasizes both the continuation and the rupture between the concepts of biopolitics and biocapital, underlining the latter term’s importance for contemporary enterprising activities: Biocapital . . . extends Foucault’s concept of biopolitics, that practice of governance that brought ‘life and mechanisms into the realm of explicit . . . Theorists of biocapital posit that such calculations
no longer organize only state, national, and colonial governance, but also increasingly format economic enterprises that take as their object the creation, from biotic material and information, of value, markets, wealth, and profit. The biological entities that inhabit this landscape are also no longer only individuals and populations – the twin poles of Foucault’s biopower – but also cells, molecules, genomes, and genes. (Ibid.: 461) Perhaps this is the single most important shift in focus, propelled by the advancement of the life sciences – the shift from individuals and populations to the governance of biological systems on the cellular and molecular level. At the same time, domains like genomics, reproductive medicine and stem cell research are certainly pervaded by politics and controversies. Still, the economic interests – the bioeconomic interests – in these domains of research are more significant than the biopolitical governance.

Bioeconomy

Rose’s (2007) analysis of what he calls the bioeconomy is largely consonant with Foucault’s and Cooper’s (2008) arguments. For Rose (2007), the new bioeconomic regime is propelled both by political laissez-faire doctrines and by access to venture capital, but what is really helping constitute the new regime is the advancement of the life sciences in general and more specifically the ‘geneticization’ of medicine. Rose thus sketches a rather broad change in contemporary society, from the regulation of health to a ‘politics of life itself’: At the risk of simplification, one may say that the vital politics of the eighteenth and nineteenth centuries was a politics of health – of rates of birth and death, of diseases and epidemics, of the policing of water, sewage, foodstuffs, graveyards, and of the vitality of those agglomerated in towns and cities . . . [t]he vital politics of our own century looks quite different.
It is neither delimited by the poles of illness and death, nor focused on eliminating pathology to protect the destiny of the nation. Rather, it is concerned with our growing capacities to control, manage, engineer, reshape, and modulate the very vital capacities of human beings as living creatures. It is, I suggest, a politics of life itself. (Ibid.: 3) Without reducing the argument to simple binary terms, the regulation of health regimes is representative of what Foucault (1977) called the
disciplinary society, the society wherein deviances and abnormalities (in its non-pejorative, Canguilhemian sense of the word) are detected and ‘corrected’ (i.e., health is restored through various therapies); the politics of life regime, by contrast, is closer to what Deleuze (1992) spoke of as ‘the society of control’, a society where control is seamless and continuous rather than discrete and architectural. In the politics of life regime, there are never any stable states of equilibrium, pockets of health that do not demand any intervention; instead, life is what is always at stake: one can always live healthier, eat better, get rid of vices such as smoking, engage in more physical exercise and so forth. Life itself is a matter that needs detailed strategies and tactics; it is what should be paid ceaseless attention. Just as in the society of control, the politics of life itself is never at rest. This change in perspective – from life as gift to life as accomplishment – is, Rose (2007) suggests, derived from a number of tendencies and shifts. First, there is a general tendency in the life sciences to envisage life as what operates on the molecular level – that is, ‘as a set of intelligible vital mechanisms among molecular entities that can be identified, isolated, manipulated, mobilized, recombined, in new practices of intervention, which are no longer constrained by the apparent normativity of a natural vital order’ (ibid.: 5–6). Life is what is constituted qua interrelated elementary mechanisms to be observed at the molecular level: Molecularization strips tissues, proteins, molecules, and drugs of their specific affinities – to a disease, to an organ, to an individual, to a species – and enables them to be regarded, in many respects, as manipulable and transferable elements or units, which can be delocalized – moved from place to place, from organism to organism, from disease to disease, from person to person.
(Ibid.: 15) Second, there is a belief that one should ‘optimize’ one’s life, primarily in terms of avoiding illnesses and ‘lifestyles’ that potentially threaten the quality of life: ‘Technologies of life not only seek to reveal these invisible pathologies, but intervene upon them in order to optimize the life chances of the individual’ (ibid.: 19). Third, the experience of health and related matters is increasingly treated as being part of one’s social role and identity. Rose talks about ‘subjectification’ as the process in which individuals cast themselves as enterprising subjects in the regime of the bioeconomy: ‘Biomedicine, throughout the twentieth century and into our own [time], has thus not simply changed our relations to health and illness but has modified the things we think we might hope
for and the objectives we aspire to’ (ibid.: 25). Fourth, Rose points at the growth of what he calls ‘somatic expertise’, a new range of experts and knowledgeable actors taking the position of advising the general public or political or private bodies on how to relate to the emerging opportunities and choices provided in the bioeconomy. Fifth and finally, Rose (ibid.: 6) says that ‘economies of vitality’ are constituted, a ‘new economic space has been delineated – the bioeconomy – and a new form of capital – biocapital’. This bioeconomy, regulated by biocapital (a term denoting both scientific know-how and financial capital), is characterized by a biomedicalization of illnesses; here ‘medical jurisdiction extended beyond accidents, illness, and disease, to the management of chronic illness and death, the administration of reproduction, the assessment and government of “risk,” and the maintenance and optimization of the healthy body’, Rose (ibid.: 10) suggests. Sismondo (2004: 157) expresses the same idea: ‘Whereas bodies once were understood as normatively healthy and sometimes ill, they are now understood as inherently ill, and only to be brought towards health. The treatment of risk factors for illness, and not just illness, is a development linked to prospects of dramatically increased sales of drugs.’ While this change in perspective, driven by both financial interests and scientific possibilities, is not problematic per se, it opens up a range of possibilities, choices and trade-offs that may be controversial or worthy of discussion.
What is problematic for Rose (2007) is the role played by biotech and pharmaceutical companies in terms of shaping not only the individual human being’s perception of health and quality of life but also the political and scientific agenda: [B]iotech companies do not merely ‘apply’ or ‘market’ scientific discoveries: the laboratory and the factory are intrinsically interlinked – the pharmaceutical industry has been central to research in neurochemistry, the biotech industry to research on cloning, genetech firms to the sequencing of the human genome. Thus we need to adopt a ‘path dependent’ perspective on biomedical truths. (Ibid.: 31) Expressed differently, basic or applied research – a line of demarcation increasingly complicated to maintain – and economic interests are no longer separated in time and space; drugs are no longer – it is questionable if they ever were – strictly developed to satisfy therapeutic demands but are, on the contrary, developed first and only later associated with illnesses and diseases. Researchers in the life sciences
are, in other words, engaging in what Fujimura (1996) calls ‘doable problems’, developing drugs that may have a therapeutic value and a financial raison d’être rather than venturing beyond such practical and financial interests. In addition, as financial interests grow, a variety of juridical matters comes with them, and consequently the idea of a ‘free science’, instituted since medieval times, becomes more complicated to maintain: Basic and applied biological research – whether conducted in biotech companies or in the universities – has become bound up with the generation of intellectual property; illness and health have become major fields of corporate activity and the generation of shareholder value. (Rose, 2007: 11) Another consequence of financial interest in the bioeconomy is that risk-taking is lowered as biotech and pharmaceutical companies are evaluated by financial analysts with relatively little understanding of the scientific procedures underlying the economic values created. As has been emphasized by a number of critics (e.g., Angell, 2004), new drugs launched in the market are increasingly based on modifications of known substances. So-called ‘me-too drugs’, imitations of financially successful drugs, are flooding the market, while some severe and therapeutically complex illnesses are not regarded as being economically feasible to explore further. In 1996, 53 innovative drugs were launched; 27 in 2000; 21 in 2003. This is a substantial reduction ‘[e]ven as the industry had nearly doubled in spending on development over the interval [1996–2003]’ (Mirowski and van Horn, 2005: 533). In Rose’s (2007) account, the bioeconomy represents a new economic regime where health is no longer simply something to be monitored and kept within predefined limits, but is instead something to be optimized. Life is therefore a sort of ongoing and ceaseless project that the individual human being is expected to engage with.
The new bioeconomic regime also produces a set of social, cultural, political and – as is suggested here – managerial and organizational implications that deserve proper attention. Sunder Rajan (2006), an anthropologist, addresses the same set of conditions and complements Rose’s (2007) analysis in a number of respects. For Sunder Rajan (2006), changes in the life sciences derive from rapid advances in genomics and related technologies such as pharmacogenomics. One of the most significant implications of this shift in focus from the study of higher orders of organisms to the molecular level and the hereditary material is that the life sciences are
becoming ‘information sciences’ (Sunder Rajan, 2006: 3). Sunder Rajan explicates this position: [T]he idea that life is information has been very much part of the central dogma of molecular biology, which signifies the mechanisms of life as being a series of coding operations, where DNA gets transcribed into RNA, which gets translated into proteins – an algorithmic conception of life that has been prominent within molecular biology since at least the 1950s. The difference now is that genomics allows the metaphor of life-as-information to become material reality that can be commodified. (Sunder Rajan, 2006: 16, emphasis in original) However, as Parry (2004: 50) points out, agreeing with Sunder Rajan (2006) that ‘bio-informational metaphors’ (e.g., the genome as a form of ‘software’ for the biological system) are used abundantly in both the popular press and in academia, it is ‘[s]urprising that relatively few attempts have yet been made to critically assess or refine the use of these bio-informational metaphors or to reflect on the complexities that attend their use’. Claiming that biological systems are in the first place to be understood as bio-informational structures is not a nonpolitical claim or a statement devoid of ontological and epistemological assumptions. Parry (2004: 50) continues: ‘What exactly do we mean when we say that biological or genetic materials are a type of information? Is this to say that genetic or biochemical matter is analogous to other types of information or that it is now actually a type of information? In other words, are such terms employed metaphorically or literally?’ The entire genomics programme, based largely on the bio-informational metaphor, relies on the pioneering work of eminent scientists like the physicist Erwin Schrödinger and Norbert Wiener, the founder of cybernetics. Schrödinger argued in his book What is Life? (1944) that one could identify a ‘code-script’ underlying all forms of life.
Schrödinger is, therefore, often regarded as a ‘founding father’ of the new discipline of genetics in biology (Kay, 2000: 59). Wiener, for his part, insisted in his The Human Use of Human Beings, published in 1950, a few years after Schrödinger’s lectures, on treating human bodies not so much as intricate organic systems of flesh, bone and blood, nerves and synapses, but as ‘patterns of organization’ (Hayles, 1999: 104). Wiener pointed out that, in the course of the organism’s lifetime, the cells composing the body are changed many times over, and therefore identity
does not consist of the materiality proper. Consequently, in order to understand human beings, one needs to understand the patterns of information they embody. Both Schrödinger and Wiener pointed at the informational constitution of the biological organism. Because the period since the 1940s and 1950s, when Schrödinger’s and Wiener’s works were published, has seen a remarkable advancement in the study of the hereditary material – that is, the informational organization of the organism – we are today reaching a point where the dominant working concepts, such as the idea of the gene, are becoming increasingly complicated, and new ideas stretching beyond what Francis Crick called the ‘central dogma’ of genetics (that DNA gets transcribed into RNA, which gets translated into proteins) are being formulated. For instance, the proteomics research programme emphasizes the multiplicity of proteins being produced by individual gene sequences. However, Sunder Rajan’s (2006) emphasis on seeing the human organism as, in essence, an informational structure remains a viable framework for a variety of research programmes. Sunder Rajan (ibid.) complements Rose’s (2007) concept of the bioeconomy with the term ‘biocapital’, a term that aims to capture the process of commodifying life. Sunder Rajan draws on Marx’s concept of commodity fetishism and suggests that the primus motor of the bioeconomy is to produce new forms of biocapital, life inscribed into commodities: Biocapital is creating a series of cultural transformations in the materiality and exchangeability of what we call ‘life.’ These transformations are created through shifting and variable use of market commodification versus public commons or public good formation, both of which are disciplined by new forms of capitalist logic, conforming neither to those of industrial capitalism nor to those of so-called postmodern information capitalism.
This is the rationale for the term ‘biocapital,’ which asks the question of how life gets redefined through the contradictory processes of commodification. (Sunder Rajan, 2006: 47) At the very heart of biocapital is the striving to transform technoscientific know-how into commodities and marketable products: ‘Biocapital is the articulation of a technoscientific regime, having to do with the life sciences and drug development, with an economic regime, overdetermined by the market’ (ibid.: 111). Seen from this point of view, the bioeconomy and biocapital are not representative of any new economic regime but are rather capitalism pursued by other
(technoscientific) means. Adhering to a Marxist analytical framework, Sunder Rajan stresses that contemporary technoscience is capable of producing new output that generates economic value. There is, therefore, nothing exceptional in the biocapital generated; on the contrary, it testifies to the continuity and inherent flexibility and adaptability of capitalist production. Sunder Rajan (ibid.: 42) contends that ‘Corporate biotech is a form of high-tech capitalism. Three defining features consequently mark it: the importance of innovation; the role of fact production; and the centrality of information.’ Although Sunder Rajan underlines the continuity between the bioeconomy and previous forms of capitalist accumulation, there are, in fact, a few differences between preceding modes of production and innovation work in the emerging regime. While the old regime emphasized mass production, the new regime is more inclined to emphasize creativity. Sunder Rajan expresses this idea in somewhat dense and almost cryptic terms: Innovation is a qualitatively different (albeit related) concept from the Industrial Revolution or Marxian concept of surplus value generation. It implies not just the generation of infinitely greater amounts of things that already exist (capital or commodity), which itself, as Marx shows, is a mystical and magical generative force, the source of capitalism itself . . . The magic of technoscientific capitalism is not the magic of the endless pot of gold but the magic of being able to pull rabbits out of hats. Therefore one side of the ethos, authority, and magic of technoscientific capitalism has to do not just with capitalism’s generative potential but with its creative potential. 
(Ibid.: 113–14, emphasis in original) The mass production regime relied on long production series, optimization of resources and stable demand; the biocapital regime, instead, operates in a milieu characterized by an overwhelming amount of information and number of possibilities. With the ‘informatization’ of the life sciences comes also what Thacker (2005: 128) calls a ‘tsunami of data’, a massive inflow of material to handle and examine: The pervasive rhetoric surrounding such rapid information generation is, not surprisingly, almost one of breathlessness, conveying a sense of being overwhelmed with a huge amount of (presumably) valuable data that is virtually impossible to keep up with . . . while nobody quite knows the biological significance of even a fraction of it, any piece of information in this haystack could turn out to
be extremely valuable, therapeutically and commercially. (Sunder Rajan, 2006: 43) Also empirical studies of the pharmaceutical industry suggest that researchers are expected to navigate terrains where vast resources of data are to be stored, sorted out and examined, and this mind-boggling abundance of possibilities are often experienced as stressful. It is then little wonder that Sunder Rajan (2006) references Georges Bataille’s (1988) idea about the general economy to theoretically grapple with the new regime producing an abundance of data: ‘Georges Bataille argues in Principles of General Economy that excess is a fundamental impulse of capitalism . . . What is particularly interesting here is the way in which excess gets value – seen as a source of surplus value, and valued as a moral system,’ Sunder Rajan (2006: 113) says. The generation of biocapital, in certain respects, shares more with Bataille’s general economy than with the restricted economy of the mass production regime preceding the dominance of the life sciences. We learn from Rose (2007) and Sunder Rajan (2006) that the new economic regime, the bioeconomy regulated by biocapital, is potentially a major shift in the capitalist regime of accumulation. While mass produced commodities, ultimately dependent on the transformation of natural resources into material artefacts, dominated the twentieth century, the new millennium may be more concerned with the transformation of biological resources on the molecular level and the level of the hereditary material. The changes preceding and accompanying the new economic regime are substantial and it is naturally beyond the scope of this book and the competence of its authors to fully account for all technoscientific, juridical and political debates and changes taking place. However, these changes do also have organizational and managerial ramifications and, in the empirical sections of the book, these will be examined in more detail.
Images of the body

I used to think that the brain was the most wonderful organ in the body. Then I asked myself, ‘Who is telling me this?’
Comedian Emo Philips (cited in Hayles, 2005: 135)

It is complicated to understand the rise of the bioeconomy without addressing the history of the human body in Western societies (for an
overview, see Turner, 1992, 1996; Falk, 1994; Crossley, 2001; Holliday and Hassard, 2001). In medieval times, the human body belonged to the lower levels of human existence: ‘The body, though not deprived of traces of the divine, belonged to a lower level that often conflicted with the soul’s aspirations. Indeed, it soon came to be held responsible for the inclination to evil. From there it was only one step to identify evil with bodiliness,’ Dupre (1993: 168) writes. As suggested by, for instance, Mikhail Bakhtin in his Rabelais and His World (1968), the human body played an important role in the symbolism of the earthly, the mundane, common sense and so forth, all in contrast with more scholarly and divine pursuits represented by the mind. The body and embodiment were thus central elements in a material order that stood in contrast to ‘higher’ and more refined human endeavours: The grotesque body . . . is a body of the act of becoming. It is never finished, never completed; it is continually built, created, and builds and creates another body . . . Eating, drinking, defecation and other elimination (sweating, blowing the nose, sneezing), as well as copulation, pregnancy, dismemberment, swallowing up by another body – all these acts are performed on the confines of the body and the outer world, or on the confines of the old and new body. In all these events the beginning and end of life are closely linked and interwoven. (Ibid.: 317) The first more formalist models of the human body in modern times were articulated by Descartes, who advanced a mechanical model strictly separating the materiality of the body and the cognitive capacities. In contrast to the Cartesian model of the body, always split into the res extensa and res cogitans, the material and the thinking substance, Spinoza’s ontology did not assume such a separation into material and cognitive substrata.
Instead, mind and body are part of the same substance, which is ‘single and indivisible’: ‘Body and mind enjoy only a modal distance and may be understood as “expressions” or modifications of the attributes of substance, that is, extension and thought, respectively,’ the feminist philosopher Moira Gatens (1996: 109) says. This tension between the functionalist separation of mind and body and a more integrative and coherent model is a standing debate in theories of the body. For instance, Ian Hacking (2007) has quite recently suggested that we are increasingly returning to a Cartesian ‘mechanical view of the body’ wherein its parts (e.g., liver, kidneys) could be exchanged for either donated new organic materials or technical devices. ‘The surface of the body was always pretty much a thing, an object that we could decorate
or mutilate. But we could not get inside effectively except by eating and drinking. Now we can,’ Hacking (2007: 79) argues. Notwithstanding the contributions of major thinkers such as Descartes and Spinoza, it was not until Immanuel Kant that one of the first ‘modern’ definitions of the organism was provided, in his Critique of Judgment (Keller, 2000: 106). Here, the organism is defined as follows: ‘An organized natural product is one in which every part is reciprocally both end and means. In such a product nothing is in vain, without end, or to be ascribed to a blind mechanism of nature’ (Kant, Critique of Judgment, cited in ibid.: 107, emphasis in original). This definition has stood the test of time and is, mutatis mutandis, still useful. For instance, Keller (ibid.: 108) defines an organism accordingly: ‘What is an organism? It is a bounded, physiochemical body capable not only of self-regulation – self-steering – but also, and perhaps most important, of self-formation. An organism is a material entity that is transformed into an autonomous and self-generating “self” by virtue of its peculiar and particular organization.’ Over the course of the centuries, with the progress of the sciences, the body has been transformed from what is essentially enclosed and mysterious to what is an ongoing project and an individual accomplishment. This enclosed and mysterious body was only possible to regulate through rather obscure and in many cases directly counterproductive ‘therapies’ such as bloodletting and other medical procedures derived from Galen’s ancient medical doctrines, which dominated medical practice for centuries until finally being gradually displaced in the nineteenth century. Clarke et al. (2003: 181) say that ‘the body is no longer viewed as relatively static, immutable, and the focus of control, but instead as flexible, capable of being reconfigured and transformed’.
The body has also been subject not only to scientific investigations but also to theoretical elaboration in the social sciences and the humanities. For instance, feminist theorists have been particularly eager to offer new and potentially liberating accounts of what a body is, may be, or can do – Witz (2000) even suggests a ‘corporeal turn’ in the social sciences and humanities spearheaded by feminist thinking. Women have, feminists claim, to a larger extent than men been associated with their bodies, with the capacity to produce life, and therefore feminist theorists have been concerned to defamiliarize such common-sense views of the female body and open up a new sphere of possibilities. Elizabeth Grosz (2004: 3), one of the most renowned post-structuralist feminist theorists, claims, for instance, that we need to understand the body ‘[n]ot as an organism or entity in itself, but as a system, or a series of open-ended systems, functioning within other huge systems it cannot control, through which it can access and
The Bioeconomy and the New Regime of Science-based Innovation 59
acquire its abilities and capacities’ (ibid.). Speaking in a similar manner, Rosi Braidotti (2002: 21) writes: ‘I take the body as the complex interplay of highly constructed social and symbolic forces: it is not an essence, let alone a biological substance, but a play of forces, a surface of intensities, pure simulacra without originals.’ Both Grosz (2004) and Braidotti (2002) aim to release the body from its essentialistic connotations, its close connections to its brute materiality: ‘The body in question here is far from a biological essence; it is a crossroad of intensive forces; it is a surface of inscriptions of social codes,’ Braidotti (ibid.: 244) argues. Speaking of the concept of the lived body, Grosz (1994: 18) suggests that ‘[t]he lived body is neither brute nor passive but is interwoven with and constitutive of systems of meaning, signification, and representation’. Elsewhere, Grosz (1995: 33) describes ‘the body as a surface of inscription’. Taylor (2005: 747) suggests that the body is ‘[n]either . . . an object nor . . . a text, nor only . . . a locus of subjectivity, but rather . . . a contingent configuration, a surface that is made but never in a static or permanent form’. This view of the body as shaped and formed by social as well as biomedical technoscientific practices is consonant with Judith Butler’s (1993) seminal work on embodied matter, suggesting the body as ‘[n]ot a site or surface, but as a process of materialization that stabilizes over time to produce the effects of boundary, fixity, and surface we call matter’ (Butler, 1993: 9, emphasis in original). From this perspective, the body is a composite of both material and symbolic or discursive resources; no body is per se or an sich but is always inscribed with social, cultural and technological qualities and possibilities. The obese body is deemed ‘poorly controlled’; the ‘skinny body’ may be attributed to eating disorders; and so forth.
The body is always subject to normative monitoring and evaluations. ‘[W]oman’s body is always mediated by language; the human body is a text, a sign, not just a piece of fleshy matter,’ Dallery (1989: 54) says. Having a body is to walk the tightrope where the ‘normal’ is always under threat from the ‘pathological’. Feminist theorists argue that women are more exposed to such normative control than men: Women and their bodies are the symbolic-cultural site upon which human societies inscribe their moral order. In virtue of their capacity for sexual reproduction, women mediate between nature and culture, between the animal species to which we all belong and the symbolic order that makes us into cultural beings. (Benhabib, 2002: 84) At the same time as the symbolic or discursive components in embodiment are recognized, a number of theorists emphasize the irreducible
nature of the human body; blood vessels, amino acids and enzymes operate without paying much attention to what humans think of them. For instance, Shilling (1993: 81) argues that ‘the body may be surrounded by and perceived through discourses, but it is irreducible to discourse’. The problem is that it is increasingly complicated to determine what this ‘non-discursive body’ is and what a strictly material, ‘unmediated’ body would be. With the growth of potentialities for human interventions into the materiality of the body, such a distinction is no longer easily maintained. Today, it is possible to influence and shape the body in a wide variety of ways: ‘Almost daily, we are bombarded with news of innovative technologies capable of repairing bodily injuries (e.g., laser surgery), replacing body parts (e.g. prostheses), and now cloning animals to create genetically identical but anatomically distinct beings’ (Weiss, 1999, cited in Grosz, 1999). For instance, the much-debated social practice of plastic surgery, offering a great variety of possibilities for sculpting the body (see, e.g., Hogle, 2005), is one such procedure that blurs the boundaries between the ‘natural body’ and the ‘socially constructed body’. Plastic (from Greek, plastikos, ‘fit for modelling’), cosmetic, or aesthetic surgery has been used since at least the early nineteenth century, when ‘syphilitic noses’ – the ‘sunken nose’ hidden behind a mask by Gaston Leroux’s (1868–1927) phantom of the opera – were modified to help the victims of syphilis escape the social stigma of having the disease (Gilman, 1999: 12). Throughout the nineteenth century, surgery was gradually established as a medical field in its own right, strongly supported by new advances in antiseptics and anaesthesia. Surgery is, then, not an exclusive late-modern phenomenon, even though the growth in aesthetic surgery is significant.
In addition to the possibilities for surgery, the advancement of new forms of technology on both the micro and nano levels has radically transformed the image of the body and what it is capable of being and accomplishing: ‘Because human embodiment no longer coincides with the boundaries of the human body, a disembodiment of the body forms the condition of possibility for a collective (re)embodiment through technics. The human today is embodied in and through technics,’ Hansen (2006: 95) suggests. Similarly, Milburn (2004) sketches a future where nanotechnology takes humans to the stage of a ‘cyborg logic’, where technical and organic materials collaborate for the benefit of human health and well-being (see also Thacker, 2004): Nanologic is a cyborg logic, imploding the separation between the biological and the technological, the body and the machine . . . one
of the arguments legitimizing nanotechnology is that biological machines like ribosomes and enzymes and cells are real, and consequently there is nothing impossible about engineering such nanomachines. (Milburn, 2004: 124–5) For some social theorists (Hansen [2006] could be counted among them), this is indicative of scientific, technical and social progress helping humans to live better and more qualified lives – Ian Hacking (2007: 85), for instance, admits that he felt nothing but ‘gratitude’ when plastic corneas were implanted in his eye to save him from blindness; he experienced no feelings about ‘losing the old eyes’, no ‘trans-human anxieties’ – while for others such blending of organic and technical materials is a troubling tendency, potentially taking us to a trans-human condition where human life is no longer properly valued. Notwithstanding the ethical and emotional responses to the scientific and technical opportunities in the new era, the body is increasingly becoming what Mol (2002) calls, in her study of medical practice, multiple. The body is inscribed as layers of meanings and potentials and increasingly examined in its parts, the various ‘systems’ in terms of which Xavier Bichat defined life as early as the beginning of the nineteenth century (Haigh, 1984: 10). ‘The body multiple is not fragmented. Even if it is multiple, it also hangs together,’ Mol (2002: 55) suggests. Taken together, the view of the body as open to technical and therapeutic manipulation – as a site or topos for an active reconstruction of the body – paves the way for what has been called medicalization or biomedicalization; that is, the definition of socially perceived problems as a matter of medical treatment. Such a biomedicalization is one of the most significant social consequences of the bioeconomy, fusing economic interests and the functioning of the normal or ‘normalized’ human body.
The concept of biomedicalization One of the most interesting effects of the bioeconomy from a sociological point of view is what has been called medicalization or biomedicalization – that is, the rendering of a range of personal and social problems as requiring adequate therapies and medication. Conrad (2007) speaks of medicalization in the following terms: ‘Medicalization’ described a process by which nonmedical problems become defined and treated as medical problems, usually in terms
of illness and disorders . . . While some have simply examined the development of medicalization, most have taken a somewhat critical or skeptical view of this social transformation. (Conrad, 2007: 4) Clarke et al. (2003) provide the following definition: The concept of medicalization was framed by Zola (1972, 1991) to theorize the extension of medical jurisdiction, authority, and practices into increasingly broader areas of people’s lives. Initially, medicalization was seen to take place when particular social problems deemed morally problematic (e.g., alcoholism, homosexuality, abortion, and drug abuse) were moved from the professional jurisdiction of the law to that of medicine. (Ibid.: 164) As Conrad (2007) and Clarke et al. (2003) indicate, medicalization has often been regarded as the ‘easy way out’ when dealing with social and individual problems – that is, in terms of ‘individualizing’ concerns that are potentially social in nature (e.g., eating disorders among younger women) or of using resources at hand rather than dealing with underlying problems (as in the case of prescribing drugs like Subutex™ to help drug addicts overcome their addiction). In addition, there is a suspicion that this tendency to use what is at hand is set up by money-grabbing biopharmaceutical companies eager to reap the rewards from their investment. Busfield (2006) is representative of such a position, suggesting that: [d]rugs provide an individualized solution to problems that often have social and structural origins, which are not tackled by pharmaceutical remedies, as for instance, where pills are used to treat obesity . . . Pharmaceutical producers use their ideological, economic and political power to play on the anxieties and discontents of life in late modern society creating a market for products that extends well beyond obvious health needs.
Health services, which are supposedly based on considerations of welfare and professionalism and a commitment to patients’ interest, become the means of generating large profits for a highly commercial industry that uses scientific fact making as a tool to serve its own interests as much, if not more, than the interests of health service users. (Ibid.: 310) Blech (2006: ix) points to the same phenomenon, saying that ‘illness is becoming an industrial product’ and refers to what pharmaceutical
companies and other actors in the field are engaging in as ‘disease mongering’. Such accusations are often dismissed by both the pharmaceutical industry and commentators, who refer to the highly regulated environment they are operating in; if pharmaceutical companies are regarded as ‘greedy’, it is because that is what they are expected to be and their role is, after all, to provide drugs, not social policies. One may also pay attention, as Conrad and Potter (2000: 560) do, to the demand-side of the equation, suggesting that ‘the American public’s tolerance for mild symptoms and benign problems has decreased, which may be leading to a further medicalization of ills’. No matter what position is taken, medicalization is not without social costs. Shah (2006) points to the risks of over-consuming drugs: Today, when elderly patients are rushed to the emergency room, it is 50 percent more likely that their problem stems from taking too many drugs, rather than not taking them. Approved drugs kill over one hundred thousand Americans every year, not counting the scores whose bad reactions are unreported or wrongly attributed to the disease the drug is meant to treat, making adverse reactions to pill popping the fifth leading cause of death in the United States. (Ibid.: 61) At the same time as a more balanced view of the phenomenon of (bio)medicalization is called for, it is evident that advances in the life sciences and medicine not only treat diseases and disorders but also discover and/or invent diseases. ‘Physicians claim to have discovered almost 40,000 different epidemics, syndromes, disorders and diseases in Homo sapiens,’ Blech (2006: 3) reports, and exemplifies this with the field of psychiatry, where the number of officially recognized mental disorders has ‘[r]isen since the Second World War from 26 to 395’ (ibid.: 9).
‘Medicine is so ahead of its time that nobody is healthy anymore,’ Aldous Huxley once remarked – a perfect bon mot illustrating Blech’s argument (ibid.: 1). Clarke et al. (2003) use the term ‘biomedicalization’ to further advance the term ‘medicalization’ and to connect it to the predominant bioeconomic regime. They explain the term accordingly: ‘Biomedicalization is our term for the increasingly complex, multisite, multidirectional processes of medicalization that today are being both extended and reconstituted through the emergent social forms and practices of a highly and increasingly technoscientific biomedicine’ (ibid.: 162). The concept of biomedicalization is thus consonant with Rose’s (2007) and Sunder
Rajan’s (2006) concepts of the bioeconomy and biocapital in terms of turning the medical gaze into the interior of the human body: [T]he shift to biomedicalization is a shift from enhanced control over external nature (i.e., the world around us), to the harnessing and transformation of internal nature (i.e., biological processes of human and nonhuman life forms), often transforming ‘life itself.’ Thus, it can be argued that medicalization was co-constitutive of modernity, while biomedicalization is also co-constitutive of postmodernity. (Clarke et al., 2003: 164) In addition, biomedicalization is characterized by ‘a greater organizational and institutional reach’: Biomedicalization is characterized by its greater organizational and institutional reach through the meso-level innovations made possible by computer and information sciences in clinical and scientific settings, including computer-based research and record-keeping. The scope of biomedicalization processes is thus much broader, and includes conceptual and clinical expansions through the commodification of health, the elaborations of risk and surveillance, and innovative clinical applications of drugs, diagnostic tests, and treatment procedures. (Ibid.: 165) Whether the operational vocabulary employs the term ‘medicalization’ or ‘biomedicalization’, both terms are inextricably entangled with the procedure of defining a disorder and a target for that particular disorder, and thereafter finding a chemical substance capable of affecting the target and thereby producing desirable therapeutic effects. ‘The key to medicalization is definition,’ Conrad (2007: 5) says.
‘That is, a problem is defined in medical terms, described using medical language, understood through the adoption of a medical framework, or “treated” with a medical intervention.’ Such ‘definitions’ aim to establish what Lakoff (2008: 744) calls disease specificity – the notion that an illness can be stabilized as a coherent entity that exists outside of its embodiment in ‘particular individuals’ and that can be ‘explained in terms of specific causal mechanisms that are located within the sufferer’s body’. In order to qualify as a proper illness, eventually subject to biomedicalization processes, the illness must demonstrate some persistence over time: ‘Disease specificity is a tool of administrative management. It makes it possible to gather populations for large-scale research, and more generally, to
rationalize health practice,’ Lakoff (ibid.) says. While disease specificity is easily accomplished in many cases, there is also a range of illnesses and diseases that are more evocative and fluid in their nature. Chronic fatigue syndrome, irritable bowel syndrome, or fibromyalgia (see Collins and Pinch, 2005), to mention a few ‘contested’ illnesses, are not yet stabilized and inscribed with disease specificity, thereby rendering them (at best) as ‘somatic illnesses’ or (at worst) as the effect of mere psychological dysfunctions. Sufferers from such illnesses are increasingly mobilizing social networks and interest groups to get attention for their health conditions (Rabinow, 1992; Hacking, 2006; Novas and Rose, 2000; Rose and Novas, 2005) and, ultimately, to get a proper medical diagnosis. As Rosenberg (1992) emphasizes, a diagnosis is never totally stabilized but is subject to negotiations and modifications, helping the patient not only to explain the past but also to anticipate the future and what to expect from it: From the patient’s perspective, diagnostic events are never static. They always imply consequences for the future and often reflect upon the past. They constitute a structuring element in an ongoing narrative, an individual’s particular trajectory of health and sickness, recovery or death. We are always becoming, always managing ourselves, and the content of a physician’s diagnosis provides clues and structures expectations. Retrospectively, it makes us construe habits and incidents in terms of their possible relationship to present disease. (Ibid.: xix) Biomedicalization is, therefore, a most complex social process that produces a variety of scientific, political, ethical and practical problems and opportunities. Biomedicalization is especially salient in cases where therapies need to be formally supported by scientific claims. Just like any product or service to be bought and sold in capitalist markets, pharmaceuticals are commodities.
However, pharmaceuticals are a specific kind of commodity, whose production and marketing are largely entangled with scientific know-how: [U]nlike many other commodities, pharmaceuticals have the advantage and disadvantage of being dependent upon biomedical sciences for legitimation and approval. This works in favor of the marketing of pharmaceuticals, because drugs can be promoted through scientific claims about the medical benefit, efficacy, and necessity
supposedly revealed by objective clinical research. On the other hand, pharmaceutical companies are also subject to regulatory controls that limit their direct marketing of products to consumers. Researchers, by mediating drug development, can circumvent any of these restrictions because of their perceived autonomy, expertise, and objectivity. These mediated performances are effective for marketing drugs largely because of cultural idealizations that presume separations between scientific research and politics, economics, and commerce. (Fishman, 2004: 188–9) In other words, there is a bilateral relationship between academic researchers and pharmaceutical companies; pharmaceutical companies need academic researchers to verify the therapeutic qualities of the drugs, and academic researchers are in need of funding and support for their scientific endeavours. In the USA it is quite common, Fishman (ibid.: 188) argues, for academic researchers not only to receive financial rewards as consultants to pharmaceutical companies but also to benefit from the collaboration through gaining ‘professional recognition, funds for their research departments and laboratories, publications and, often, media attention through related public and professional activities’. Examining the case of female sexual dysfunction (FSD) and the production of a category of drugs referred to as sexuo-pharmaceuticals, Fishman (ibid.)
argues that, in order to qualify as legitimate, a drug must treat what is regarded as a ‘disease’; in the case of FSD the line of demarcation between disease and non-disease, the normal and the pathological, becomes a central issue to be sorted out (see also Marshall, 2009): So-called lifestyle drugs, such as sexuo-pharmaceuticals, have raised questions about how pharmaceutical companies can meet the approval guidelines for new drugs given their questionable status for treating ‘diseases.’ Such drugs will only be approved if they treat an established ‘disease,’ and lifestyle issues or even ‘quality of life’ issues have not traditionally fallen into this category. Hence, the biomedicalization of a condition . . . is in fact necessary for a lifestyle drug to gain approval. (Fishman, 2004: 188, 191) The first step towards producing pharmaceuticals capable of dealing with the experience of FSD is therefore to make this type of alleged ‘dysfunction’ a legitimate disease – that is, to biomedicalize the condition. The relationship between disease and therapy is, however, not linear but rather what Fishman calls ‘multidirectional’; once a specific substance or
class of drugs has been approved by the FDA, there is a tendency for a growing number of diseases of this kind to be diagnosed because there is a therapy responding to the disease. For example, Fishman (ibid.: 188, 193) suggests that access to selective serotonin reuptake inhibitors (SSRI), the active substance in psychopharmacological drugs such as Prozac™ and Paxil™, has led to a rise in depression diagnoses, indicating that the SSRI market has contributed to an expansion and subsequent commodification of the disease. Thus, one may say that new drugs not only cure diseases but also produce diseases; the establishment of lifestyle-related diseases is therefore not a harmless matter of helping suffering people but also has substantial consequences for what is regarded as a ‘normal life’: ‘The relationships established between the FDA guidelines, clinical trials researchers, and the pharmaceutical companies help to create a consumer market for a pharmaceutical product while also shaping ideas about normality, in this case, normal sexuality,’ Fishman (ibid.: 188, 194) concludes. In addition, it may also be that, as suggested by Healy (2004), a practising psychiatrist and researcher in the field of psychopharmacological drugs, it is marketing strategies and marketing objectives and not scientific evidence that determine how a specific drug is launched in the market: Regulatory bodies . . . essentially have only minimal audit functions. It is pharmaceutical companies that decide which trials should be conducted. And trials are conducted to fit the marketing requirements of the company, rather than being dictated by the effects of the drug. For example, SSRIs have a greater effect on premature ejaculation than on depression; the decision to market these drugs as antidepressants is a business rather than a scientific decision.
(Ibid.: 238) Registered drugs thus not only ipso facto establish specific illnesses as more common than previously thought, thereby helping to establish new norms for what counts as normal and pathological; they are also used in a manner that is consistent with marketing strategies and financial objectives. The market for ‘depression’ or ‘social inhibition’ is bigger and more lucrative than the (potential) market for premature ejaculation – a concern that, naturally, excludes half of potential users, namely women – and so the choice regarding the nature of the launch of a psychopharmacological drug is embedded in extra-scientific interests. While critics of biomedicalization at times tend to portray such social processes as inevitable and foresee a future where any human health concern is treated with some prescribed medication, preferably an easily
distributed medicine in the form of a pill, studies have shown that there is no such determinism in the history of biomedicalization. Shostak and Conrad (2008) speak about what they call the geneticization of medicine as the process where ‘differences between individuals are reduced to their DNA codes, with most disorders, behaviours, and physiological variations defined, at least in part, as genetic in origin’ (Lippman, 1991: 19, cited in ibid.: S288). Geneticization of illnesses is thus one step further towards the information-based paradigm dominating the contemporary life sciences, namely conceiving of illnesses not only as a personal matter but also as the outcome of the (unfortunate) organization of hereditary material. In the first decade of the new millennium, various research communities and research groups announced that they had detected gene sequences that are potentially capable of explaining everything from forms of cancer to adultery. The connection between a ‘gene sequence’ (in the form of single nucleotide polymorphisms (SNPs) or expressed sequence tags (ESTs) – i.e., strings of DNA) and a particular ‘disorder’ is almost becoming a cliché in contemporary science journalism. Shostak and Conrad (ibid.) examine the three cases of depression, homosexuality and variation in responses to chemical exposures to show that public opinion and the activism of targeted ‘risk groups’ (e.g., the gay and lesbian community) may prevent or mitigate the tendency towards geneticization and biomedicalization. In the case of depression, there is a general belief in the scientific community and in the broader public that it makes sense to think of depression as being genetically embedded. Consequently, there have been relatively few critical accounts of research projects connecting genetic sequences and psychological illnesses such as bipolar disorder.
‘That much depression is now seen as a genetically caused disease is the result of cultural definitions, institutional forces, and political and economic interests that arose decades ago. These earlier events ensure that genes associated with depression are understood to be causes of a disease condition,’ Shostak and Conrad (ibid.: S304–S305) write. The connecting of homosexuality with specific genes, in contrast, has been heavily criticized from the outset by both scientists and activist groups. When an American scientist declared that he had found specific features of the brain among gay men who had died of AIDS – quickly branded as the ‘gay brain research’ in the media – the gay community was outraged. When another scientist claimed that a gene potentially explaining homosexuality had been detected, a similar response ensued and the advancement of the research project became politically complicated. Today, such research projects are generally regarded as scientifically irrelevant (outside some
ultra-orthodox quarters, conceiving of homosexuality as a sin on the basis of theological doctrines). Shostak and Conrad thus emphasize the social embedding of biomedicalization: The case of homosexuality vividly demonstrates how social movement activism can reinforce a critical juncture, especially by shifting regimes of credibility . . . That is, in contrast to the case of depression, in which there is very little redundancy in events preceding and following the critical juncture, in the case of homosexuality, social movement organization, mobilization, and institutionalization are woven throughout the sequence in which genetics research is embedded. (Ibid.: S307) A similar response was provoked when military scientists found that specific ethnic groups (i.e., black soldiers in the US army) responded more severely to exposure to certain chemical substances. In a post-colonial mode of thinking, race and ethnicity should not serve as the basis for research programmes. In their conclusion, Shostak and Conrad reject any determinist movement from genetics, over geneticization, to biomedicalization. On the contrary, it is social communities and political and practical interests that determine whether specific research projects deserve to be pursued or not: [G]enetic information does not always lead to geneticization, nor does geneticization invariably lead to medicalization. Rather, there is a lack of consistent fit among genetics, geneticization, and medicalization. Examining this lack of consistent fit reveals that genetic information takes its meaning from its embeddedness in different moments in a sequence of events and their social structural consequences.
(Ibid.: S310) Although medicalization is a strong undercurrent in contemporary society, there is, Shostak and Conrad remind us, no clear-cut and indisputable connection between the genotype and the illnesses connected to the phenotype.4 One of the consequences of Shostak and Conrad’s study is that it is increasingly complicated to think of biomedicalization as being something that is ‘external’ or ‘additional’ to society, something that interpenetrates the social order. On the contrary, biomedicalization is one of the constitutive elements of society and strongly influences how we think of society: ‘[W]hat we mean by society itself, what we understand the social to be, is itself one of the things that is changing in
the context of new genetics,’ Sarah Franklin claims (2001: 337, emphasis in original). Elsewhere Franklin and Roberts (2006) advocate a similar view, suggesting that one should abandon the popular and evangelist belief that society drags behind the sciences and should always be infinitely thankful for all the wonders being brought to the table: Rather than depicting medicine and science as ‘ahead of,’ ‘beyond,’ or ‘outside’ society, and pessimistically representing ‘the social’ as perpetually lagging behind, science and society are depicted here as much more deeply intertwined. While it is not helpful to underestimate the radical novelty of many of the new techniques, choices, and dilemmas encountered in the context of new reproductive and genetic technologies, or the difficult issues they present, it is equally unhelpful to overprivilege technological innovation as if it were a force unto itself. (Ibid.: 13) Instead of having the technosciences on the one hand and society on the other, the technosciences are constitutive elements of society (Jasanoff, 2005). The technosciences are already part of the social order and get their resources and legitimacy through serving society. In summary, one of the central features of the bioeconomy is biomedicalization. Social malaises and shortcomings are increasingly becoming subject to individual treatment and therapies. However, such changes do not occur through deterministic processes but are, rather, the outcome of social negotiations and economic interests. In the bioeconomy, the constant attention to life per se demands a variety of resources and the biotech and pharmaceutical industries provide such resources on the basis of sound financial management and technoscientific possibilities and capacities.
The tissue economy Another consequence of the bioeconomy is that not only is hereditary material becoming subject to systematic investigation and observation, but organic materials at a higher level are also increasingly subject to commodification on an industrial basis (Parry, 2004). Human tissue has been collected, bought and sold since ancient times. In the fourth century BC, the Alexandrian scholar Herophilus acquired a reputation as the ‘father of scientific anatomy’ after collecting and examining human bodies and animals. In medieval times, professional anatomists performed public dissections of the corpses of criminals. In the seventeenth
The Bioeconomy and the New Regime of Science-based Innovation 71
century, human corpses were bought and sold like any other commodity, and at the gallows at Tyburn, outside London, grisly scenes took place as merchants of this body trade fought over dead bodies with family members hoping to give the body a decent burial (Lock, 2001: 66). Even after being buried, the body was not safe: ‘[b]urial grounds were rifled for the freshly dead’, Das (2000: 266) notes. As late as the early nineteenth century, the great physiologist Xavier Bichat, using animals in ‘abundance’ in his anatomical research when provoking injuries in specific organs, frequently visited ‘[e]xecutions by guillotine so as to be able to make observations on the severed heads and trunks of the victims’ (Haigh, 1984: 88). The trade in human tissue has always been lucrative in the field of medicine. However, since the mid-nineteenth century and the use of blood transfusions as a medical procedure, human tissue has been collected, stored and sold more systematically and in a more regulated manner. In the case of human blood, in many Western countries there has been what Richard Titmuss (1970) calls the ‘gift relationship’ (a term borrowed from the work of the French anthropologist Marcel Mauss, 1954) between patients and anonymous donors. In other countries, such as the USA and Japan, blood is bought and sold. Titmuss (1970) strongly advocates the UK system of gift relationships and points to a number of shortcomings of the commercialization of blood: ‘The first is that a private market in blood entails much greater risks to the recipient of disease, chronic disability and death. Second, a private market in blood is potentially more dangerous to the health of donors. Third, a private market in blood produces, in the long run, greater shortages of blood,’ Titmuss (ibid.: 157) argues.
Titmuss claims that the UK system is more effective than the American system on all of his four criteria for effectiveness, namely ‘(1) economic efficiency; (2) administrative efficiency; (3) price – the cost per unit to the patient; (4) purity, potency and safety – or quality per unit’. ‘On all four criteria, the commercialized blood market fails,’ Titmuss concludes (ibid.: 205). On the basis of his findings, Titmuss suggests that gift relationships should be the operative principle for blood donations. As technoscience has advanced over the decades and centuries, a growing number of human tissues have attained scientific and economic value. Waldby and Mitchell (2006) list some of the human tissue that may be reused: Solid organ transplantation has been practiced since the late 1950s and commonplace since the late 1970s, as the refinement of tissue typing, surgical techniques, and immunological suppression has
allowed organ donors to be matched with compatible recipients . . . Skin, bones, heart valves, and corneas can now be banked and used in surgery . . . Reproductive tissue – sperm, ova, and embryos – can be donated and transplanted. (Ibid.: 6–7) In the growing market for human tissue, today, ‘more than 282 million archived and identifiable pathological specimens from more than 176 million individuals are currently being stored in the United States repositories’, Andrews and Nelkin report (2001: 4–5). Every year some 20 million samples of tissue are added to the repositories; virtually every American has his or her tissue ‘on file’ somewhere. In some cases, the tissue is collected for basic or applied research aimed at producing new medicines or therapies, but there are also cases where tissue is collected on a strict commercial basis to take advantage of forthcoming technoscientific advancements. Brown and Kraft (2006) quote a web advertisement for a Californian ‘cryobank’ advising potential clients to ‘store’ their baby’s umbilical cord blood, containing much sought-after stem cells, and thereby ‘[s]afeguarding the future health of your child by providing your baby with a lifetime of insurance needed to take advantage of today’s medical breakthroughs and tomorrow’s discoveries’ (web advertising – Cryobank, cited in ibid.: 322).5 Whether such prospects are well founded is difficult to determine, but the ethical consequences certainly merit discussion. In general, the whole tissue economy (to use Waldby and Mitchell’s [2006] term) is surrounded by a sense of walking on thin ethical ice. For instance, social problems like organ theft – ‘biopiracy’ is a commonly used term here (see, e.g., Scheper-Hughes, 2000: 202) – among the poor populations in the developing world, or forms of ‘organ harvest’, are illicit activities that would leave few persons indifferent.
‘Among the most disturbing historical trends is the tendency within the medical marketplace to exploit the bodies of the poor and disenfranchised, where paupers frequently emerge as being of greater worth dead than alive,’ Sharp (2000: 296) remarks (see also Banerjee’s [2008] discussion of what he dubs ‘necrocapitalism’). In addition, the tissue economy is shaped and formed by gendered and colonialist conditions and ideologies that deserve a proper analysis. For instance, the work of the anthropologist Nancy Scheper-Hughes (2000) suggests that the standards and regulations for organ donations instituted in the West are not really adhered to in other parts of the world. She quotes, for example, a Brazilian medical doctor claiming that the US-based organ donation programme routinely sent ‘surplus corneas’ – what he referred to as ‘leftovers’ – to his centre: ‘Obviously,’ he said, ‘these are not the
best corneas. The Americans will only send us what they have already rejected for themselves’ (cited in ibid.: 199). Scheper-Hughes reports more cases of unethical international traffic in organs: In Cape Town, Mrs R., the director of her country’s largest eye bank [an independent foundation], normally keeps a dozen or more ‘postdated’ cadaver eyes in her organization’s refrigerator. These poor-quality ‘corneas’ would not be used, she said, for transplantation in South Africa, but they might be sent to less fortunate neighboring countries that requested them. (Scheper-Hughes, 2000: 199) Scheper-Hughes notes that the strict Apartheid regime of South Africa in the 1980s and early 1990s did not apply in the field of organ donation and organ harvesting, and wealthy white people frequently received organs from black donors, in most cases without the need for the ‘informed consent’ of the families. The ‘leftovers’ from the organ harvesting did, however, have a market value in certain niches of the industry, as ‘secondary goods’ exported to less financially endowed hospitals and patients. Such markets, unregulated by ethical and professional standards, are a nightmare for many in the West (see Sharp, 2003), while in other parts of the world such transactions and practices are only thinly veiled. Even though crimes in the field of organ procurement and organ transplantation are hopefully not very widespread, the field is still surrounded by substantial ethical concerns. Margaret Lock (2002), a Canadian anthropologist, discusses the concept of brain death and the differences between the West and Japan in attitude and traditions when it comes to the procurement of organs from brain-dead cadavers.
When observing herself how a variety of organs were ‘procured’ (the somewhat euphemistic term used to denote the ‘organ harvesting’ from the patient’s body) from a donor in a hospital in Montreal, Lock (ibid.: 19) noticed that, when the donor is ‘legally dead’, the ‘care of the organs, rather than of the person, become the dominant concern’ – ‘The donor is merely a container that must be handled with care,’ she commented. Lock (ibid.: 22) is especially concerned about the procurement of the eyes of the donor, an act that appears to violate the moral order more than the procurement of the internal organs – eyes being the organs of vision, the principal metaphor of reason in the Western tradition (Blumenberg, 1993), a reason that is part of the cognitive and intellectual capacities of the body – that is, the ‘soul’ in Christian theology. While Lock (2002) seems to accept the practice of organ procurement
and donations, she is interested in addressing the social and moral consequences of this rather recent practice, enabled by the advancement of the medical sciences. For instance, Lock asks if there is a relationship between the acceptance of the concept of brain-death, controversial until the end of the 1960s but more or less agreed upon by the early 1980s (ibid.: 110), and the medical possibilities for growth in organ transplantation. Das (2000: 269), on the other hand, is more straightforward in making such a causal connection: ‘The classical definitions of death even in the clinical context were based upon permanent cessation of the flow of vital fluids. But as the perceived need for more organs and tissues arose, the classical definition was sought to be redefined to meet this need.’6 In addition, how can one make sense of and align the commodification of organs with the cultural order of a given society? Clearly, in their original function, body parts are not commodities, but they may become commodified. It is important, therefore, to consider how and under what conditions body parts accrue value, at times monetary value, and what local resistance there may be to the alienation of body parts. (Lock, 2002: 47) The practice of organ transplantation is based on the separation of the donor and the organ; the organ is turned into an object, a ‘thing-in-itself’ which is ‘entirely differentiated from the individual from which it is procured’ (ibid.: 48). Different cultures mobilize various rationales for accepting organ donations. For instance, in the Christianized West, the concept of ‘the gift’ as an act of altruism strongly supports organ donations. In Japan, Western ideas about ‘the gift of life’ are not widely recognized and death is also much more ‘definite’ than in the Western tradition of thinking, beginning when the body starts to decay.
For the Japanese, the spirit of the person, its reikon, lives on, and the reikon is believed to want the family to bring the body back home. Cases where the body cannot be found or identified (in, e.g., aeroplane crashes) are especially troubling, causing a great stir among the relatives, because the reikon needs to know that the body is taken care of; there is always the risk of the reikon being disturbed by the fact that the body is missing or not properly handled. As a consequence, the body needs to be maintained as long as possible, and the Japanese are more concerned not only about the concept of brain-death, but also about the practice of organ donation more broadly. In addition to cultural responses to the commodification of the body and to the turning of the ‘living cadaver’ (as brain-dead patients were once called) into a repository of organs capable
of bringing life elsewhere, the very concept of brain-death is examined by Lock. For Lock (2002), the general acceptance of brain-death as a medical and legal condition opens up new opportunities but also new ethical concerns that need to be addressed. Lock tells a story about how a patient who had been run over by a garbage truck was brought into an intensive-care unit in a Montreal hospital: As the orderly maneuvered the patient into the assigned space, an intern picked up the patient charts. ‘Looks like this is going to be a good donor,’ he said. ‘Should we call the transplant coordinator?’ A senior intensivist, on the phone at the time, overheard this comment and immediately said, ‘Not so fast now. Slow down.’ After hanging up, the intensivist looked at the chart himself and briefly observed the busy nurses as they set to work checking the lines and tubes sustaining the patient. He turned to the intern and repeated once again, ‘Not so fast.’ When I left the unit an hour or two later, the patient was stable, but the condition of his brain remained in doubt. A week later I was told that this man was out of the ICU [Intensive Care Unit] and in an ordinary ward where he was breathing on his own and doing well. A full recovery was expected. (Ibid.: 101) Not only is there a risk of jumping to conclusions and rushing into making the patient an organ donor, but the very term brain-death is also somewhat vague and filled with ambiguities. Lock exemplifies: Brain-dead patients will, we know for certain, ‘die’ as soon as they are removed from the ventilator. We take comfort from this knowledge when proclaiming them dead even though they look alive. But patients in cerebral death are rarely on ventilators and can usually breathe without assistance. All they need is assistance with feeding – as do a great number of patients who are obviously fully alive.
(Ibid.: 120) Brain-dead persons look as if they are alive and there are a great number of reports of health care personnel observing brain-dead persons ‘crying’, ‘yawning’, or lifting their arms. For Lock (2002), such incidents need to be brought into discussion when determining when and how to draw the line between life and death. If we assume, as Bichat did at the beginning of the nineteenth century (Haigh, 1984), that life is a bundle of systems set up to resist death and that death therefore is gradual – at times almost sneaking up on the individual – then the concept of
brain-death (and other conceptualizations of death such as ‘lung death’ or ‘cardiopulmonary death’) needs to be subject to discussion. The commodification of the human body and its organs is therefore not easily accomplished but needs to be aligned with the predominant social and cultural order. The tissue economy is largely entangled with and embedded in the predominant social order.

Reproduction medicine and the ‘baby business’

One interesting domain of the tissue economy is the reproduction industry – the totality of technoscientific and economic activities engaged in safeguarding the production of new babies (Clarke, 1998; Thompson, 2005; Tober, 2001; Spar, 2006). It is estimated that about 10 per cent of the American population suffers from infertility problems and, since the age of the mother giving birth to her first baby is rising in most Western countries, there are good market prospects for companies in the industry. In addition, Spar (2006) emphasizes, there seems to be, unlike in many other industries, a ‘low price-elasticity’ on the demand side: infertile couples and individual women craving a child are willing to spend a significant amount of their economic resources, time, energy and emotional effort to get a child of their own. Spar (2006) even uses the concept of ‘the baby business’ to denote the totality of the economic activities dedicated to reproductive medicine, adoption services and other practices (e.g., surrogacy) aimed at leading to a child and much-longed-for parenthood. The market for fertility treatment alone (including in vitro fertilization, fertility drugs, diagnostic tests, donor eggs, surrogate carriers and donor sperm) accounted for a total of $2.9 billion in the USA in 2004. Almeling’s (2007) study of egg agencies and sperm banks offers some insights into how this industry is organized and what values and norms structure and guide the day-to-day work.
The staff in the egg agencies expect the donors to conform to one of two gendered stereotypes: they need to be highly educated and physically attractive or caring and motherly ‘with children of their own’, and preferably both. The donors are thus expected either to conform to aesthetic and meritocratic ideals or to have demonstrated the capacity to act in a ‘motherly’ way. Sperm donors, by contrast, were generally expected to be tall and college educated, but what mattered in this case was the ‘sperm count’, the number of lively sperm per unit (ibid.: 327). At the same time, as Tober (2001: 139) shows, semen as a ‘vehicle for the transmission of genetic material’ is the carrier of various complex meanings – ‘biological, evolutionary, historical, cultural, political, technological, sexual’. Therefore,
even though women interviewed by Tober knew that many individual qualities are not hereditary (e.g., the donors’ preference to play ‘basketball’ – a ‘masculine’ sport of choice and thus a desirable quality in a man in American culture – rather than badminton – an ‘effeminate’ and ‘nerdy’ choice and thus a less attractive marker of the underlying qualities of the sperm donor), they were still concerned about which sperm donor to select, potentially enhancing the chances of getting a healthy and successful child.7 The sperm donors were not totally neglected in the equation. Still, in general, ‘[w]omen are perceived as more closely connected to their eggs than men are to their sperm,’ Almeling (2007: 328) observes, and consequently the female donors were paid ‘regardless of how many eggs they produce’, while the male donors had to pass the quality test to get their pay-cheque. However, the egg donors were also subject to various moralist beliefs and scrutiny that the sperm donors did not have to endure. Egg donors were expected to engage in gift relationships regarding the economic transaction between the donor and the recipient. In many cases, the recipients gave donors flowers, jewellery, or an additional financial gift to uphold the jointly constructed vision of egg donation as ‘reciprocal gift-giving’ in which ‘egg donors help recipients and recipients help donors’ (ibid.: 334). As Ikemoto (2009) remarks, it is indicative and deeply ironic that terms like ‘egg donations’ and ‘egg donors’ are used even though there are clear economic and financial incentives for at least one party to the transaction. The whole ‘egg donation ideology’ is founded on the ‘gift of life’ view and thus the term ‘donor’ is favoured over the more neutral terms ‘seller’ or ‘provider’. The egg donors were constantly reminded of the ‘gift’ they were giving the recipients and the whole procedure was filled with strong sentiments.
The staff at the egg agencies thus adhered to two objectives at the same time, Almeling (2007: 334) suggests: on the one hand, they told the donors to think of the donation ‘as a job’, but, on the other, they also embedded ‘the women’s responsibility in the “amazing” task of helping others’. This is a delicate balance to maintain, and women who sought to make a ‘career’ as egg donors were violating the gift relationship ideology and were therefore looked upon with disgust by the staff at the agency (ibid.). Being an egg donor is, thus, the social practice of serving as an altruistic helper, capable of giving the ‘gift of life’, while carefully avoiding any opportunistic demonstrations involving financial concerns that would violate the carefully constructed social relationship. The first thing to learn as an egg donor is thus that one must not give too much; by demonstrating a limit to one’s generosity, the donor avoids the risk of being castigated as greedy. One
of the social implications is that while men may get ‘paid for what they do anyway’ (as the popular joke goes) when donating semen, the role of the egg donor is regulated by a variety of social norms. Men’s sperm may be paid for or wasted without any major social consequences, but women’s eggs are sacred things that demand a carefully staged institutional environment to be passed around in the bioeconomy. In general, the tissue economy may be regarded as a derivative industry of the biotechnology and pharmaceutical industries. Collecting, categorizing, storing and using human tissue are social processes that are closely bound up with the bioeconomic regime.
Genetics, genomics and pharmacogenomics: the new technoscientific regime

Biomedia

One of the most significant changes in the bioeconomy in comparison to previous regimes of biopolitics is the technological and social advancement of new media. In the following, the concept of biomedia, advocated by Thacker (2004), will be examined. The concept of media is a complicated one, not to be confused with mass media more generally (see, e.g., Luhmann, 2000; Grindstaff and Turow, 2006); it refers instead to the cultural techniques for storing and circulating data and information. Lisa Gitelman (2006) provides a useful definition of media: I define media as socially realized structures of communication, where structures include both technological forms and their associated protocols, and where communication is a cultural practice, a ritualized collocation of different people on the same mental map, sharing or engaging with popular ontologies of representation. As such, media are unique and complicated historical subjects. Their histories must be social and cultural, not the stories of how one technology leads to another, or of isolated geniuses working their magic on the world. (Ibid.: 7) In a similar vein, Krämer (2006: 93) defines media as ‘[f]irst and foremost cultural techniques that allow one to select, store, and produce data and signals’. In Gitelman’s (2006) definition, ‘communication’ is central to the concept of media, but in Krämer’s (2006) understanding, ‘data and signals’ are the elementary forms of information used in media. It is beyond the scope of this chapter to sort out all these theoretical intricacies, but, in media theory, such differences are not insignificant.
All cultures and historical periods have used their own forms of media. For instance, the medieval historian Jacques LeGoff (cited by Bowker, 2005: 26) names five distinct periods of ‘collective memory’ in the West: oral transmission, written transmissions with tables or indices, simple file cards, mechanical writing and electronic sequencing. The concept of ‘electronic sequencing’ is largely synonymous with the digital new media of the contemporary period. Seeking to bridge the literature on media and that of biopharmaceuticals, Thacker (2004) suggests the term ‘biomedia’ to describe the predominant technology in the present bioeconomic regime. In Thacker’s view, the field of biotech is not best described as a ‘biological field’ but rather as ‘an intersection between bio-sciences and computer sciences’, and more specifically an intersection that is ‘[r]eplicated specifically in the relationships between genetic “codes” and computer “codes”’ (ibid.: 2). The predominant idea in the present bioeconomic regime is that ‘organic codes’ and ‘silicon-based codes’ can be treated as similar terms – that is, as informational orders that engender significant effects. Practically, that very idea is manifested in what Thacker calls biomedia, a term that he is at pains to define as accurately as possible: Put briefly, biomedia is an instance in which biological components and processes are technically recontextualized in ways that may be biological or non-biological. Biomedia are novel configurations of biologies and technologies that take us beyond the familiar tropes of technology-as-tool or the human-machine interface. Likewise, biomedia describes an ambivalence that is not reducible to either technophilia (the rhetoric of enabling technology) or technophobia (the ideologies of the technological determinism). Biomedia are particular mediations of the body, optimizations of the biological in which ‘technology’ appears to disappear altogether.
With biomedia, the biological body is not hybridised with the machine, as in the use of mechanical prosthetics or artificial organs. Nor is it supplanted by the machine, as in the many science fictional fantasies of ‘uploading’ the mind into the disembodied space of the computer. In fact, we can say that biomedia has no body anxiety, if by this we mean the will to transcend the base contingencies of ‘the meat’ in favor of virtual spaces. (Ibid.: 5–6) In the use of biomedia, the body is not turned into codes and signified by a technical vocabulary; instead, the biological body must remain fully ‘biological’ while at the same time it is expressed in qualitatively
different terms. Thacker (2004) thus suggests that biomedia are capable of translating the biological body into new vocabularies without excluding its organic features. This delicate balance between binary oppositions (e.g., technophilia/technophobia) is repeated over and over in Thacker’s text: Biomedia is not the ‘computerization’ of biology. Biomedia is not the ‘digitalization’ of the material world. Such techno-determinist narratives have been part of the discourse of cyberculture for some time, and, despite the integration of computer technology with biotechnology, biomedia establishes more complex, more ambivalent relations than those enframed by technological-determinist views. (Ibid.: 7) What biomedia contribute, ultimately, is to position the biological body as a medium per se: ‘[T]he body as seen in biotech research generates its technicity from within; its quality of being a medium comes first and foremost from its internal organization and functioning’ (ibid.: 10). The biological ‘components and processes’ are thus examined and ‘technically recontextualized’ in the use of biomedia (ibid.: 11). Biomedia is here, then, both a concept (recontextualizing the biological domain) and the technology for doing so (e.g., bioinformatics tools) that are ‘[t]ightly interwoven into a situation, an instance, a “corporealization”’ (ibid.: 13).
What really concerns Thacker is not to position biomedia as some kind of contrived theoretical category such as the cyborg – a term introduced into the social sciences by Donna Haraway (1991) in the mid-1980s and today widely used as an analytical model, yet still too heavily based on science-fiction thinking, a few anecdotal examples, and post- and trans-human discourses (see, e.g., Hansen, 2006; Milburn, 2004; Hayles, 1999; Ansell Pearson, 1997) for which Thacker has little use – but to examine biomedia as an actual practice of establishing new technological configurations in which the biological body can ‘constantly surpass itself’ (Thacker, 2004: 14). Biomedia encode the biological body into new material substrates. Thacker’s (2004) term ‘biomedia’ thus fulfils Mulder’s (2006) demands for qualified media analysis: The point of media is not to remediate something old into something new or to make something new reappear in something old, but to allow the old and the new to hybridise into something unprecedented, and then go a step further so that one must reorganize one’s whole
internal order in order to process the information streams, thereby becoming something unique and characteristic of oneself and the generation one is part of. (Ibid.: 295, emphasis added) Speaking in more practical terms, ‘biomedia’ is the inclusive term for a range of laboratory practices deployed in the analysis of human tissue and hereditary material, including genetics, genomics, pharmacogenomics, toxicogenomics, high-throughput screening, proteomics, protein crystallography and so forth. Thacker (2004) examines in detail bioinformatics, biochips and so-called bioMEMS (biomicroelectronic mechanical systems), biocomputing, nanomedicine and systems biology. All these technoscientific practices position the biological body in new terms and thereby open up new possibilities and theoretical perspectives. Some of the features and qualities will be addressed in the following section.

Pharmacogenomics

The single most important factor behind and driver of the bioeconomy is undoubtedly the advance in genetics, genomics, pharmacogenomics and proteomics and other new forms of systematic scientific procedures in investigating the hereditary material in biological organisms. In the pharmaceutical industry, the new opportunities enabled by the technoscientific technique of genomics have been greeted as the next big step in new drug development. However, the literature on genomics is not ready to endorse genomics out of hand as something that will, of necessity, lead to a substantial output of new medicines and drugs. The term and the underlying technoscientific procedures are too diverse to promise such ready-to-use applications. Sunder Rajan (2006: 28), for instance, claims that ‘“genomics” itself . . .
is not a stable referent, and its own meaning has evolved over the last few years, from the days of the initial conception of map and sequence of the human genome at the start of the Human Genome Project (HGP) in the late 1980s to today’s postgenomic era subsequent to the completion of the working draft sequence of the human genome’. A similar critique has been articulated with regard to the concept of the gene, a term that has shifted in meaning over the course of its history. Keller points to the various meanings and uses of the term ‘gene’ in scientific vocabulary: Techniques and data from sequence analysis have led to the identification not only of split genes but also of repeated genes, overlapping genes, cryptic DNA, antisense transcription, nested genes, and
multiple promoters (allowing transcription to be initiated at alternative sites and according to variable criteria). All of these variations immeasurably confound the task of defining the gene as a structural unit. (Keller, 2000: 67) Since the gene apparently denotes a variety of entities, it is little wonder that genomics is ‘multiple’ in that ‘[i]t involves an articulation of different scientific perspectives on biological systems, of mathematics and computational biology on the one hand with molecular genetics and cell biology on the other’ (Sunder Rajan, 2006: 28). While genomics, notwithstanding its inconsistencies and poorly unified nature, opens up new scientific opportunities, the latest thrust in the field is proteomics, the analysis of how different gene sequences, so-called single nucleotide polymorphisms (SNPs, commonly pronounced as ‘snips’), are capable of producing different proteins. ‘SNPs are single base variations in the genetic code that aid in the discovery of genes variably linked to different traits. SNPs are potentially very valuable markers for diagnostic and therapeutic development, and therefore of great interest to the pharmaceutical industry,’ Sunder Rajan writes (ibid.: 50). While ‘the central dogma’ proposes that DNA is transcribed into RNA which in turn produces a specific protein, it has now been shown that a given DNA sequence may produce different proteins, a fact that makes ‘the central dogma’ problematic – it ‘can no longer be sustained’, in Rose’s view (2007: 47). Proteins are composed of various amino acids and are, therefore, the elementary forms of the hereditary material. Rose explains the concept of proteomics: Within the style of thought of contemporary genomics, it is accepted that one coding sequence can be involved in the synthesis of several different proteins, and that one protein can entail the interaction of several distinct coding sequences from different regions of the genome.
Hence the focus shifted from the gene to processes of regulation, expression, and transcription (transcriptomics), from the gene to those small variations at the level of a single nucleotide termed Single Nucleotide Polymorphisms (SNPs), and indeed from the gene to the cell and the process for the creation of proteins (proteomics). (Ibid.: 46) In addition, today we know that about 95–97 per cent of the genome is so-called ‘junk DNA’ – sequences of bases that do not comprise the triplets coding for amino acids (Kay, 2000: 2; Rose, 2007: 270). This makes the majority of DNA ineffective, and scientists do not fully know
The Bioeconomy and the New Regime of Science-based Innovation 83
why only 3–5 per cent plays a substantial role. One of the consequences of these new findings – that only a small percentage of DNA appears to play a role and that SNPs may produce different proteins – is that genomics is not easily transformed into a new drug-producing apparatus: The kinds of explanations generated in genomics, proteomics, transcriptomics, and cell biology are not simple, linear, and direct causal chains. To use a much abused term, they are ‘complex.’ While causal chains can be traced, between a coding sequence and a protein for example, the actual cellular mechanisms involved entail events at different levels, involving a nexus of activations and terminations, cascades, feedback loops, regulatory mechanisms, epigenetic processes, interactions with other pathways, and much more. The complexity of such cellular mechanisms, their operations in time (hence directionality, interaction, and feedback) and in space (hence movements, circuits, passage across membranes and between cells, activation of secondary systems) ensure that the relations here, even at the cellular level, are stochastic, open and not closed, and hence probabilistic. (Rose, 2007: 51) In her discussion about the ‘machinery’ versus ‘organicist’ view of nanotechnology, Bensaude-Vincent (2007: 229) emphasizes that, in the latter view, biological systems are not conceived of as strictly obeying some central and underlying programme or code, as in the ‘DNA as script’ (i.e., ‘the central dogma’) perspective. 
Instead, Bensaude-Vincent says, one must think of the biological system in terms of relations, passages and interactions – that is, as a self-regulating and emergent system rather than a mechanical system determined by underlying programmes that might be decoded in informational terms: The key to success in the living organism does not lie within the building blocks engineered so as to concentrate all the instructions and information needed to operate the machine. Rather, biology teaches us that success comes with improving the art of mixing heterogeneous components. Consequently the focus is less on the ultimate components of matter than on the relations between them. Interfaces and surfaces are crucial because they determine the properties of the components of composite materials and how they work together . . . Biology does not provide a model of highly concentrated
information as suggested by Feynman’s [physicist Richard Feynman] famous talk: it is a model of interaction and composition. (Ibid.) A similar view of the biological organism as a dynamic and adaptable system is advocated by Oyama (2000), who is highly critical of the gene-centric image of biological systems (see also Lewontin, 2000). Advocating the term ‘ontogeny’ and suggesting that biological systems evolve along multiple pathways, Oyama (2000: 41) hopes to ‘[d]iscard traditional unilinear conceptions of development and the expectation of relatively simple continuity that often accompanies them’. She continues: If we are interested in information and instructions, we need to look not only at the genes but also to the various states of organisms and the ways one state is transmuted into the next. Potential is probably more usefully conceived of as a property (if it can be thought of as a property at all) . . . of the phenotype, not the genotype. It is the phenotype that can be altered or not, induced to develop in certain directions or not; its potential changes as each interaction with the environment alters its sensitivities. (Ibid.: 42) For Oyama (ibid.: 44), the ‘central dogma’ of genetics represents ‘[a]n untenable doctrine of one-way flow of developmental “information” from the nucleus to the phenotype’ – that is, a linear and close to deterministic view of how the genotype regulates the phenotype. Instead, Oyama uses the terms ‘nature’ and ‘nurture’ to advance her theory of ontogeny: I propose the following reconceptualization, in which genes and environments are parts of a developmental system that produces phenotypic natures: 1. Nature is not transmitted but constructed. An organism’s nature – the characteristics that define it at a given time – is not genotypic (a genetic program or plan causing development) but phenotypic (a product of development). 
Because phenotypes change, natures are not static but transient, and because each genotype has a norm of reaction, it may give rise to multiple natures. 2. Nurture (developmental interaction at all levels) is as crucial to typical characters as to atypical ones, as formative of universal characters as of variable ones, as basic to stable characters as to labile ones.
3. Nature and nurture are therefore not alternative sources of form and causal power. Rather, nature is the product of the processes that are the developmental interactions we call nurture. At the same time, that phenotypic nature is a developmental resource for subsequent interactions. An organism’s nature is simply its form and function. Because nature is phenotypic, it depends on developmental context as profoundly and intimately as it does on the genome. To identify nature with that genome, then, is to miss the full developmental story in much the same way that preformationist explanations have always done. 4. Evolution is thus the derivational history of developmental systems. (Ibid.: 48–9) In this ontogeny framework, activity, not stasis, and relation, not autonomy, are central to the conception of both the genotype and the phenotype (ibid.: 84); biological systems are defined on the basis of their processual and relational capacities, not their underlying informational content, at least not defined in terms of stable mathematical sequences. Oyama thus defends an image of biological systems that radically breaks with the central dogma of genetics, and advocates an analytical model that takes adaptation and emergence into account. Ultimately, Oyama is critical of essentialist theories of biological systems, which assume that there are ‘universal underlying natures’ regulating all visible differences in biological systems. In Oyama’s view, exposure to various conditions strongly shapes and forms the biological system, leaving no room for determinist theories of biological systems. Notwithstanding these conceptual and theoretical debates in the field of the life sciences, the outcome of pharmacogenomics has to date been rather limited. 
While there are indications of successful personalized medicines produced on the basis of genomics research, at least in the case of Japan (Sowa, 2006), empirical studies suggest that practitioners – researchers in pharmaceutical companies – are relatively disappointed with the output. For instance, ‘I have to say that I don’t think pharmacogenetics is at the moment playing any part in, certainly, clinical practice,’ one clinical researcher (cited by Hedgecoe, 2006: 728) argues. Shostak (2005), studying the field of toxicogenomics, similarly points at the sense among practising scientists of having the philosopher’s stone within reach but being unable to make proper use of it: What does the data mean? That’s the big question. There is so much data. It’s like being given the Encyclopedia Britannica and
ten seconds to find an answer . . . You know the answer is out there somewhere, but you have to learn the rules or what volume to go to, and you have to learn the rule within that volume. Where do you look it up? And you have to learn the rules for not only reading what’s there, but understanding and interpreting. (Toxicogenomics researcher, cited in ibid.: 384) The data does not – and never has – ‘speak for itself’. Instead, researchers are amassing huge amounts of data that need to be attended to and that potentially say something important about the organisms and their responses to various chemical compounds. Calvert (2007) here makes an important distinction, advocated by Griffith (2001), between ‘information about genes’ and ‘information encoded in genes’: ‘Information about genes’ . . . is information in the very simple sense of a particular strand of DNA having a particular sequence (ATTTG, for example) . . . information ‘encoded in genes’ . . . [is] that the information possessed by the genetic material will tell us something significant, for example, about the phenotype of the organism. This more loaded notion of information is being drawn upon when we hear talk of the gene sequence providing ‘the blueprint’ for the organisms. (Calvert, 2007: 217–18) Just because we know something of information ‘about genes’, we cannot easily predict what kind of effects these sequences have on the phenotype – that is, genetic sequences and malfunctions of the body (the disposition for certain diseases) are at best loosely coupled. However, knowledge about genomics is growing quickly, and it may be that, in the near future, more detailed relations between ‘information about genes’ and ‘information encoded in genes’ will be explored. In general, genomics and pharmacogenomics can be conceived of as a grand cartographic endeavour where new pieces are added to the puzzle. 
However, to date the effects of using these scientific procedures appear smaller than their alleged importance would suggest.

New drug development in the new economic regime

New drug development is a central activity in the contemporary bioeconomy and accounts for substantial economic turnover. Today, the average time for taking a new drug to market is approximately 12–15 years, at a cost of about €1 billion (Outlook 2010). The pharmaceutical industry had its ‘golden age’ in the decades following World War
II and until the 1980s. Since then, regulations – formulated in terms of so-called ‘good clinical practice’ – have become more detailed, pharmaceutical industry representatives claim, and the targets and indications have become much more complex and complicated to handle. In the last 15 years, the pharmaceutical industry has been characterized by significant mergers and acquisitions, and major actors such as Pfizer or GlaxoSmithKline have grown by acquiring smaller biotech companies, clinical or contract research organizations (so-called CROs) and contract laboratories. Moreover, the increase in demand for solid and robust evidence of the clinical effects of tested candidate drugs is making pharmaceutical companies look for new markets for finding patients. Shah (2006: 4) gives some examples: Today, although Americans have on average more than ten prescriptions every year, less than one in twenty are willing to take part in the clinical trials that separate the dangerous drugs from the lifesaving ones. Less than 4 percent of cancer patients, who generally have the most to gain from new experimental treatments, volunteer for experimental drug trials, a rate industry insiders deride as ‘appallingly low.’ Petryna points at the growth in clinical trials in the industry: The number of people participating in and required for pharmaceutical clinical trials has grown enormously since the early 1990s. The number of clinical trial investigators conducting multinational drug research in low-income settings increased sixteenfold, and the average annual growth rate of privately funded US clinical trials recruiting subjects is projected to double in 2007. (Petryna, 2006: 33) This growth in clinical trials is shrinking the available pool of human subjects in the West suitable for clinical trials (Petryna, 2009; Fischer, 2009). This lack of ‘qualified patients’ is gradually becoming a major challenge for the industry (Drennan, 2002). 
While the industry has historically relied on volunteers such as students and inmates, the problem today is that too much medication is being consumed, potentially interfering with the drugs subject to clinical trials. In brief, the American population is using too many drugs and, therefore, ‘treatment saturation’ is increasingly making America a poor ground for clinical trials (Petryna, 2006: 37). Fortunately for the
pharmaceutical industry, there are poorer countries in Eastern Europe, Latin America and India hosting populations not endowed with the economic resources required to consume many drugs, and these populations are increasingly used when testing new drugs. Petryna (ibid.: 41) is here talking about ‘treatment naïveté’ – ‘the widespread absence of treatment for common and uncommon diseases’ – as one of the principal conditions that are attractive when testing new drugs. ‘Treatment-naive populations are considered “incredibly valuable” because they do not have any background medication (medications present in the patient’s body at the time of the trial), or any medication, for that matter, that might confuse the results of the trial,’ Petryna writes (ibid.). Major metropolitan areas such as São Paulo in Brazil or Lima in Peru are especially popular with the pharmaceutical companies because the vast population in one geographically limited area ‘reduces travel costs’. In India, CROs market the ‘human resources’ to entice the major pharmaceutical companies to locate their clinical research work in the country. The Indian CRO iGate Clinical Research lists their ‘main reasons’ for coming to India as:

40 million asthmatic patients
34 million diabetic patients
8–10 million people HIV positive
8 million epileptic patients
3 million cancer patients
>2 million cardiac-related deaths
1.5 million patients with Alzheimer’s disease. 
(Prasad, 2009: 5) ‘These characteristics of the Indian population,’ Prasad comments (ibid.: 6), ‘which were for long considered hindrance to India’s development and, not to forget, a blot in the healthcare of citizens, have become “assets.” They have come to constitute a human capital with starkly different characteristics from, say, the software engineer who has become the iconic Indian human resource.’ Petryna’s (2006, 2009) and Prasad’s (2009) accounts of growing globalization are indicative of how the pharmaceutical industry takes advantage of the lack of consumption of drugs in poorer parts of the world. Whether this is ethical or not may be up for discussion (see, e.g., Fischer, 2009), but pharmaceutical companies do comply with international directives and regulations. For more critical analysts, however, the globalization of clinical trials may be a prime example of neo-colonial practice. At the same time as financially unattractive drugs are dropped from the agenda (Angell, 2004; Lexchin,
2006; Lybecker, 2006; Brody, 2007), the world’s poor population, unable to serve the role of end-users and consumers, is still capable of contributing as a testing ground for new medicine. One of the first things to learn from Petryna’s (2006) study is that pharmaceutical companies do not, in the first place, produce new drugs to cure pressing diseases but rather develop drugs that are financially viable and have promising market prospects.8 That is why a variety of new drugs target so-called ‘lifestyle-related illnesses’, such as Type 2 diabetes and obesity-related illnesses. In many cases new drug development does not even start from the disease and its identified target; on the contrary, the drug precedes the disease – that is, an illness is defined in terms of the medicine to which it responds. In some cases, such as in the neurosciences and when identifying therapies for the central nervous system, there are few opportunities for acting differently, simply because there is no established etiology or shared and coherent analytical framework that connects illness, target and therapy. Lakoff (2006), studying the use of psychopharmacological drugs in Argentina, underlines this counterintuitive relationship: Illness comes gradually to be defined in terms of what it ‘responds’ to. The goal of linking drug directly to diagnosis draws together a variety of projects among professionals, researchers and administrators to craft new techniques of representation and intervention. These projects range from diagnostic standardization and the generalization of clinical protocols to drug development and molecular genetics. This constellation of heterogeneous elements is joined together by a strategic logic I call ‘pharmaceutical reason’. The term ‘pharmaceutical reason’ refers to the underlying rationale of drug intervention in the new biomedical psychiatry: that targeted drug treatment will restore the subject to a normal condition of cognition, affect, or volition. 
(Ibid.: 7) In addition to Lakoff’s concept of pharmaceutical reason, DeGrandpre (2006) uses the term ‘pharmacologicalism’ to denote the idea that drugs must be understood and examined on a strictly technoscientific basis – that is, that no drug is affected by, for instance, the expectations of the patient (e.g., the placebo effect) or the social lives of drug addicts surrounding the drug use: A key supposition of pharmacologicalism is that pharmacological potentialities contained within the drug’s chemical structure
determine drug outcomes in the body, the brain, and behavior. Accordingly, nonpharmacological factors play little role, whether in the realm of the mind or of the world of society and culture . . . As a result, pharmacologicalism dictates that the moral status of a drug exists as a purely scientific question that can be documented and classified once and for all, not as a social one that must be considered and reconsidered across time and places. Society, culture and history can be ignored. (Ibid.: 27) Pharmacologicalism is, thus, an ideology locating any meaningful examination of a drug in a realm devoid of social influences and concerns. As DeGrandpre suggests, referencing a great number of clinical studies in the field of psychopharmacology, uses of drugs are inherently social in nature and no drug can be understood within a strict stimulus–response framework. Instead, there is a social life of drugs, and drug use is a social practice that needs to be understood as what Taylor (2005) calls a materialized practice – a social practice involving material as well as social and cultural components. In the case of bipolar disorder – today a reasonably ‘stabilized’ disease with an enacted etiology and prescribed therapies – a joint analytical framework has been established over the course of decades (Lakoff, 2006). However, in the field of psychological disorders, there has been a substantial amount of disagreement and controversy. For instance, in a study published in 1972, one third of American psychiatrists diagnosed a sample of patients as suffering from schizophrenia while none of the British psychiatrists did so. It was suggested in the study that for Americans the term ‘schizophrenia’ was a general term for serious mental illness, while the British used the term in a more specific manner. 
In addition, it was 20 times more common to diagnose manic depression in British hospitals than in American hospitals – ‘In the United States, clinicians simply did not “see” manic depression,’ Lakoff (ibid.: 29) contends. The study concluded that there was a need for a standardization of terms and of disease definitions to make ‘disciplinary communication’ possible (ibid.: 35). In general, psychopharmacological drugs have been suspected of being related to various social changes and the anxieties such changes induce. For instance, one sales representative points at these curious contingencies, potentially undermining the ‘scientific’ nature of this category of drugs: In the seventies you had the Cold War, and a heightened sense of tension and nervousness – so Valium, an antidepressant drug, sold
well. Then in the eighties with the phenomenon of the yuppies and their emphasis on career success, the drugs of choice were anxiolytics. In the nineties antidepressants became popular, for two reasons: first, there were those who had failed to meet their expectations in the eighties and so they were depressed. But pharmaceutical marketing strategies also had to do with it. (Pharmaceutical sales representative, cited in ibid.: 153) It thus appears as if psychopharmacological drugs follow a peculiar kind of fashion cycle, which can be observed in a range of domains and fields. A third aspect of new drug development is the ubiquitous sponsoring of practising medical doctors and researchers, a routine that has been highlighted quite recently because it is, at times, claimed to pose a threat both to medical doctors’ status as autonomous professionals and to the ideology of ‘value-free’ science in the Humboldtian university tradition. In order to handle this concern, a strict separation between ‘rational pharmacology’ and ‘drug promotion’ has been suggested. The problem is that these two processes are not that easily separated from one another: ‘Pharmaceutical companies are producers not only of . . . [drug products] but also of knowledge about their safety and efficacy, and their gifts to doctors to travel to conferences and workshops provide access to the latest expertise. The fortress that is supposed to guard against the crude logic of profit – biomedical expertise – is itself ensconced in the market,’ Lakoff says (ibid.: 140). The production of the drug and its promotion thus seem to be entangled. In addition, the relationship between academic researchers and pharmaceutical companies is intertwined in other ways. 
In a study by the Tufts University professor Sheldon Krimsky, published in 1996, at least one of the authors benefited directly and commercially from the reported results in 34 per cent of 789 biomedical papers published by university scientists in Massachusetts. These authors either held a patent or served as an officer or advisor of a biotech company using the results. What is of particular interest is that none of the articles disclosed the financial interests of the authors (Andrews and Nelkin, 2001: 59). In addition, the ghostwriting of scientific articles submitted for publication in academic journals and the inclusion of ‘honorary authors’ in author lists are more widespread than one might expect. Mirowski and van Horn (2005: 528) make references to bibliometric studies of journal articles published in the field: ‘In the aggregate, 19% of the papers had evidence of honorary authors, 11% had evidence of ghost authors, and 2% seemed to possess both . . . [in another study] 39% of the reviews had evidence
of honorary authorship, while 9% had evidence of ghost authorship.’ Pharmaceutical companies commission scientific texts that are endorsed by credible researchers, thereby legitimizing a certain substance, drug or analytical research method: ‘[G]hostwriting is no longer occurring only in peripheral journals and affecting only review articles. It happens in the most prestigious journals in therapeutics, and it probably happens preferentially for papers reporting randomized trials and other data-driven papers’ (Healy, 2006: 72). In addition, there have been embarrassing examples of scientists reporting different data to the US Food and Drug Administration than that appearing in the journal articles – clear examples of the corrupted use of empirical data (Mirowski and van Horn, 2005). With the widespread use of ghostwritten scientific texts and other ‘rhetorical devices’ comes a major legitimation problem in the sciences, which traditionally grant much importance to the individual account of the scientist.

Developing new drugs

The first thing to notice about new drug development is the substantial amount of resources it demands. Gassman and Reepmeyer (2005: 235) report that in 1976 the cost of developing a drug was US$54 million; by 1987 the cost had grown to US$231 million, and in 1991 the total cost was about US$280 million. The pharmaceutical industry is, therefore, one of the industries investing the most resources in R&D. For instance, Jones (2000: 342) shows that, in the case of the UK, two pharmaceutical companies, GlaxoSmithKline and AstraZeneca, accounted for more than 25 per cent of all business expenditure on R&D in the country in 1998. In general, the pharmaceutical industry invests as much as 15 per cent of turnover in R&D (see Table 2.1 below).

Table 2.1 International R&D intensities, 1998

Sectors            UK (%)    International (%)
Pharmaceuticals     15.0         13.5
Software/IT          4.9         13.6
Chemicals            1.7          6.1
Electronics          3.2          5.3
Engineering          1.6          3.3

Source: Adapted from Jones, 2000: 348.

However, as the costs of developing new drugs soar, the market has also grown substantially; between 1970 and 2002 the market for
pharmaceuticals grew at a rate of 11.1 per cent annually, reaching more than US$400 billion in 2002 (Gassman and Reepmeyer, 2005: 237). The lion’s share of this turnover derived from so-called blockbuster drugs (ibid.). For some companies, such as Pfizer, about 80 per cent of sales derive from eight blockbuster products, while other companies (e.g., Bristol-Myers Squibb, Novartis or Aventis) are reported to have more ‘balanced’ portfolios (ibid.). At the beginning of the new millennium, pharmaceutical companies have been relatively poor performers in terms of the production of new chemical entities (NCEs)9 (ibid.: 236), and consequently few truly innovative drugs are being launched on the market. Barry (2005) has argued persuasively that what pharmaceutical companies are, in fact, doing is ‘informing’ molecules with informational content that allows for an evaluation of the therapeutic effects of the new chemical entity: Pharmaceutical companies do not produce bare molecules – structures of carbon, hydrogen, oxygen and other elements – isolated from their environment. Rather, they produce a multitude of informed molecules, including multiple informational and material forms of the same molecule. Pharmaceutical companies do not just sell information, nor do they sell material objects (drug molecules). The molecules produced by pharmaceutical companies are more or less purified, but they are also enriched by pharmaceutical companies through laboratory practice. The molecules produced by a pharmaceutical company are already part of a rich informational material environment, even before they are consumed. This environment includes, for example, data about potency, metabolism and toxicity and information regarding the intellectual property rights associated with different molecules. 
(Ibid.: 58) That very process of ‘informing the molecule’, and of providing detailed documentation of how this is accomplished, is constitutive of the new drug development process. The starting point is to identify a chemical compound that has the qualities sought. The identification of such molecules, Nightingale (1998) suggests, takes the form of ‘number reduction’: ‘[T]he job of the medicinal chemist is one of number reduction; there are: 10¹⁸⁰ possible drugs, 10¹⁸ molecules that are likely to be drug like, 10⁸ compounds that are available in libraries, 10³ drugs, only 10² profit making compounds. Drug discovery involves reducing the “molecular space” that profitable drugs will be found in, to a small enough volume
that empirical testing can take place’ (ibid.: 704). The history of synthesis chemistry is therefore the history of the molecules ‘invented’ or ‘constructed’. Bensaude-Vincent and Stengers (1996: 255) note that some 10 million different molecules have been invented since the beginning of synthesis chemistry. However, ‘for one substance used by the pharmaceutical industry, nearly ten thousand have been tested and declared without intrinsic or commercial value’. The metaphor of finding the needle in the haystack thus certainly applies and, as a consequence, only a small fraction of all compounds finally make their way to the pharmacist’s shelf: Typically, for each successful drug that made it to the market, the firm began with roughly 10,000 starting compounds. Of these, only 1,000 would make it to the more extensive in vitro trials (i.e., outside living organisms in settings such as a test tube), of which 20 would be tested even more extensively in vivo (i.e., in the body of a living organism such as a mouse) before 10 or fewer compounds made it to human trials. (Thomke and Kuemmerle, 2002: 622) In new drug development work, the informing of molecules is, then, not proper scientific work in terms of providing ‘facts’ that exist independently of ‘social processes’ (Hara, 2003: 7) but is, rather, constituted as a series of selections and choices made under the influence of strategic and tactical objectives and market conditions, and under the regime of ‘doable problems’. New drug development, in other words, does not aim to make contributions to science in the first place, but often does so unintentionally as the research work unfolds. 
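The attrition arithmetic quoted above can be made concrete with a short calculation. The following sketch is illustrative only; the stage names and counts are taken from the Thomke and Kuemmerle figures cited above, per successful marketed drug:

```python
# Illustrative attrition funnel for new drug development, using the
# per-marketed-drug stage counts quoted from Thomke and Kuemmerle (2002: 622).
stages = [
    ("starting compounds", 10_000),
    ("in vitro testing", 1_000),
    ("in vivo testing", 20),
    ("human trials", 10),
    ("marketed drug", 1),
]

# Survival rate at each successive stage of the funnel
for (name, n), (next_name, m) in zip(stages, stages[1:]):
    print(f"{name} -> {next_name}: {m / n:.2%} survive")

# Overall yield: fraction of starting compounds that reach the market
overall = stages[-1][1] / stages[0][1]
print(f"overall: {overall:.4%} of starting compounds reach the market")
```

The overall yield works out to one compound in ten thousand, which is the needle-in-a-haystack metaphor in numerical form.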
Hara points at the heterogeneous resources being mobilized in the new drug development process: [T]he process of drug discovery and development can be regarded as involving heterogeneous elements including: (1) human actors such as chemists, pharmacologists, toxicologists, different functions in the company, corporate managers, academics, doctors, patients, government officers, politicians, activists and the general public; (2) non-human entities such as drugs, materials, instruments and facilities; and (3) institutional and structural factors such as strategies, organizational linkages, human networks, organizational capabilities, funds, markets, regulations, sciences and clinical trials. (Ibid.: 32)
Seen from this point of view, the new drug development process mobilizes an assemblage of resources and activities and is constituted as what Hara calls heterogeneous engineering: [T]he shaping of drugs is the process of heterogeneous engineering. Various human actors, non-human entities and institutional and structural factors are involved in the process. In some cases, we can see interpretative flexibility about the properties of compound, a diversity of candidate drugs and different closure mechanisms . . . In addition, actors are not isolated from wider and quite stable social relationships. Institutions and structures such as organizational structures, organizational capabilities, corporate strategies, regulatory systems, clinical trials, patent systems, production economies and market structures affect the process of shaping drugs. (Ibid.: 182) While the assemblage set up in new drug development was relatively stable over a significant period of time, more recent scientific changes in the field of genomics and pharmacogenomics have strongly affected new drug development work. Drews (2000) suggests that the advent of techniques such as genomic sciences, rapid DNA sequencing, combinatorial chemistry, cell-based assays and automated high-throughput screening (HTS) has led to ‘a new concept’ of drug discovery. While the old concept relied on the conversation and critical exchange of ideas between chemists and biologists, today there is an orientation towards what Drews calls ‘the magic of large numbers’. Drews (ibid.: 1962) is rather sceptical regarding the benefit of the new regime of new drug development: ‘So far, this several hundredfold increase in the number of raw data has not yet resulted in a commensurate increase in research productivity. 
As measured by the number of new compounds entering the market place, the top 50 companies in the pharmaceutical industry collectively have not improved their productivity during the 1990s.' There are, however, several studies in the literature of how the new genomics possibilities have been received in pharmaceutical companies. Gassmann and Reepmeyer (2005: 239) suggest, rather positively, that genomic technologies will enable the identification of between 3,000 and 10,000 new drug targets and that the shift to genomics research may open up more narrowly focused medicines with 'higher therapeutic value' for the targeted population in comparison to the previous mass-customized drugs. Pharmacogenomics – the use of genomics in pharmaceutical endeavours – is here the integration of biochemistry and annotated knowledge of genes, proteins and single nucleotide
polymorphisms into the process of developing new drugs. The idea of 'personalized medicine' is one of the more broadly targeted objectives of the new technologies: the ability to construct specific therapies for specific populations sharing some structure (e.g., certain SNPs) in the hereditary material. While personalized medicine may sound like science fiction, there are today examples of drugs targeting specific ethnic groups being successfully developed and launched. Hedgecoe and Martin (2003) outline the history of pharmacogenomics research: Coming into the 1990s, a number of new technologies such as polymerase chain reaction and high-throughput screening gave scientists greater understanding of genetic variation and increased the interest in pharmacogenetic studies. In addition to these technical developments, there were also ideological changes which, in the wake of the Human Genome Project, started to restructure medicine in terms of genetics . . . Perhaps most importantly, pharmacogenetics finally aroused the interest of the new genetic technologies with a focus on drug discovery and development. Around this time a new term began to be used to describe the discipline: Pharmacogenomics. (Ibid.: 333) The very idea of pharmacogenomics is to associate 'genetic markers for drug response' and the genes that are directly involved in the development of different forms of pathology (ibid.: 337); if a strong correlation between a genetic marker and a drug response or a disease prognosis can be demonstrated, this will have therapeutic and clinical value even though the link between the two may be poorly understood. The aim of pharmacogenomics is thus to establish solid statistical associations between markers and responses to specific drugs. It is then little wonder that the new regime of new drug development – just like the bioeconomy more generally – is becoming embedded in the analysis of forms of (bio)information.
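The core analytical move described here – establishing a statistical association between a genetic marker and a drug response without necessarily understanding the mechanism linking the two – can be illustrated with a deliberately minimal sketch. All figures and names below are invented for illustration; actual pharmacogenomic analyses involve large cohorts and dedicated statistical software:

```python
# Minimal sketch of the pharmacogenomic logic discussed above:
# quantifying the association between carrying a genetic marker
# (e.g., a particular SNP) and responding to a drug.
# All figures below are hypothetical.

def odds_ratio(carrier_resp, carrier_nonresp,
               noncarrier_resp, noncarrier_nonresp):
    """Odds of drug response for marker carriers relative to non-carriers."""
    return (carrier_resp / carrier_nonresp) / (noncarrier_resp / noncarrier_nonresp)

# Hypothetical trial: 100 marker carriers, 100 non-carriers.
ratio = odds_ratio(carrier_resp=80, carrier_nonresp=20,
                   noncarrier_resp=30, noncarrier_nonresp=70)
print(f"Odds ratio: {ratio:.1f}")  # prints "Odds ratio: 9.3"
```

The point of the sketch is that such an association can carry clinical value even when the causal link between marker and response is poorly understood – which is precisely the epistemological shift the chapter describes.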
In summary, new drug development is the industrial large-scale production of informed matter. While the NCE and the biological responses in the organism used to be the object of investigation in the in vitro clinical trials, today the relationship between the NCE and the organism is interpenetrated by the bioinformational structure of the hereditary material of the organism. The structure of the molecule is therefore more closely associated with the genotype than with the phenotype of the organism. If the biological organism plays a role in today’s new drug development, it is in the form of an embodiment of
bioinformation to be carefully examined and related to the selected NCE. The pharmacogenomic shift in focus has rendered biological organisms informational structures.
Summary and conclusion

The bioeconomic regime aims to exploit know-how and expertise in the life sciences in terms of producing new therapies and research frameworks enabling new scientific practices. The bioeconomy is by no means a homogeneous framework but includes a variety of practices demonstrating their own idiosyncratic histories and trajectories of development. The enormous growth of life science know-how since World War II, and specifically in the period since the mid-1970s, has paved the way for a highly sophisticated framework of analyses of biological systems and the human body. To date, the major pharmaceutical companies have provided a series of life-saving and life-improving drugs and therapies, and the biotechnology industry has contributed a range of 'upstream' research methodologies that have been highly influential in research work (e.g., in the field of genomics and post-genomics research methodologies). In the university setting, new domains of expertise are constantly brought into the department structure and these research areas are, in many cases, making important contributions to the industry. There is, however, still a shortage of studies of how these different categories of life science professionals regard the advancement of their fields of expertise and their opportunities for introducing new analytical frameworks in their day-to-day work. In the three coming chapters, empirical studies of the pharmaceutical industry, biotechnology companies and life science university researchers will demonstrate how new analytical frameworks and new research methodologies are influencing daily work in these organizations.
Notes

1. An article published in the trade journal Sales Management in 1949 pointed out the difficulties with the new medium, demanding not only the auditory attention of the listener but also the eyesight of the viewer: 'Radio is an unqualified success during the daytime hours. To a large extent its popularity rests squarely on those factors which may be an insuperable obstacle to video. Women can cook, clean, bake and engage in all the varied mystic rites of the homemaker while keeping a sharp ear out for the last agonies of the radio dramas. Television, alas for the business side of the enterprise, will share the spotlight with no other activities' (Jules Nathan, 'Who Will Watch Daytime Television?', Sales Management, 1 April 1949, cited in Boddy, 2004: 51).
2. To avoid conceptual confusion, in the following, the term 'biopolitics' is used as an ideological term derived from the discourse on how to govern, control and monitor life in modern society. Foucault's key term here is 'governance'. The term 'bioeconomy', on the other hand, is conceived of as a predominant regime of accumulation in the contemporary economic system. Adhering to a body of work in economics (Aglietta, 1979; Boyer, 1988; Freeman and Perez, 1988) emphasizing the relationship between the 'regime of accumulation' (i.e., production, value-adding activities) and the 'mode of regulation' (the system of practices, norms, regulations, etc., controlling and regulating the regime of accumulation), the bioeconomic regime of accumulation rests on both specific supporting ideologies and systems of regulation. Expressed differently, biopolitics is the discursive formation that precedes the bioeconomic regime of accumulation, but biopolitics is by no means separated from the advancement and accomplishments of the bioeconomy; the two concepts are rather recursively interrelated without it being possible to reduce one to the other. One may say that the two concepts are operating on two individual planes, the plane of symbolism and theoretical workings (biopolitics) and the plane of material production (bioeconomy).
3. A class of compounds typically used as antidepressants (e.g., Cipramil).
4. A phenotype is any observable characteristic or trait of an organism, such as its morphology, development, biochemical or physiological properties, behaviour and products of behaviour. Phenotypes result from the expression of an organism's genes as well as the influence of environmental factors and the interactions between the two, whereas the genotype of an organism is the inherited instructions it carries within its genetic code.
Not all organisms with the same genotype look or act the same way because appearance and behaviour are modified by environmental and developmental conditions. Similarly, not all organisms that look alike necessarily have the same genotype (Churchill, 1974).
5. Anderson and Nelkin (2001: 32) suggest that the likelihood that a newborn infant will ever need his or her umbilical cord blood 'is less than one in 20,000'. Yet there are commercial opportunities to provide this kind of service in the American market, largely derived from a combination of hopes for potential scientific breakthroughs and concern for the newborn child.
6. In addition, the standards for acceptance of organ donors are constantly negotiated. Since there is an endemic shortage of organs, medical authorities seem to lower their standards, and now accept both older and less healthy donors than previously. For instance, one organ transplantation coordinator working in a Mid-Western university hospital testified to these changes: 'We've changed the criteria in the last year. [There's] no [upper] age [limit, for example] . . . as more and more people are added to the list and more and more people are dying every day, because of the lack of organs, the transplant surgeons are getting more and more liberal with the criteria they will accept . . . [for us today the] only contraindication is HIV/AIDS' (cited by Sharp, 2006: 64). The 'political economy of cadavers' in the contemporary tissue economy clearly plays a role in determining what an adequate donor is and what life histories can and cannot be tolerated.
7. These kinds of concerns, seemingly irrational given the qualities of the strictly biological entities and processes involved, are also observable in
organ donations, where even though the organ recipients are thankful for the 'organ gifts' they at times '[w]orry about gender, ethnicity, skin color, personality and social status of their donors, and many believe that their mode of being-in-the-world is radically changed after a transplant, thanks to the power diffusing from the organs they have received' (Lock, 2001: 72). For instance, even a surgeon interviewed by Lock, working in the field of medicine and intimately knowing the human physiology, was concerned about organ donations from prisoners on 'death row', not so much because of the ethics of procuring organs from convicts and the juridical and moral difficulties when treating prisoners as repositories of organs, but because, as the surgeon said, 'no one wants the heart of a killer' in their body. Even after the (official) death of the organ donors, organs have a social life that needs to be accounted for.
8. Busfield (2006: 302) emphasizes this point: 'Leading companies' R&D typically focuses on substances that could be used to treat the health problems faced by the richer countries rather than on infectious diseases in developing countries. In 2003 the best sellers globally by revenue were two cholesterol-lowering statins, an anti-psychotic and a drug to reduce blood pressure … The competitive environment of the industry also means that companies frequently concentrate on finding a similar product to a competitor's, but one that is sufficiently different that it can be patented – so-called "me-toos"'.
A study of approvals by the US Food and Drug Administration between 1989 and 2000 showed that approvals for new drugs constituted a relatively small proportion of all approvals, with only 35 of the applications relating to new chemical entities.' Bakan (2005: 49) makes a similar argument: '[T]he 80 percent of the world's population that lives in developing countries represents only 20 percent of the global market for drugs. (The entire African continent represents only 1.3 percent of the world market.) Conversely, the 20 percent of the world's population who live in North America, Europe, and Japan constitute 80 percent of the world market. Predictably, of the 1,400 new drugs developed between 1975 and 1999, only 13 were designed to treat or prevent tropical diseases and 3 to treat tuberculosis. In the year 2000, no drugs were being developed to treat tuberculosis, compared to 8 for impotence or erectile dysfunctions and 7 for baldness. Developing drugs to deal with personality disorders in family pets seems to have higher priority than controlling diseases that kill millions of human beings each year.' In a similar manner, Rose (2007: 261, n. 1) reports: 'Of 1,393 new chemical entities brought to market between 1975 and 1999, only 16 were for tropical diseases and tuberculosis. There was a 13-fold greater chance of a drug being brought to market for a central-nervous-system disorder or cancer than for a neglected disease.'
9. The American Food and Drug Administration defines new chemical entities as 'those products representing new chemical structures never previously available to treat a particular disease' (Cardinal, 2001: 20).
3 Innovation Work in a Major Pharmaceutical Company
Introduction

The principal sites for the previous bioeconomic regimes have been medical schools at research universities and pharmaceutical companies. The relationship between these two institutional settings has been intimate and complex, adhering to different institutional pressures and standards; universities have benefited from the funding of basic and applied research provided by the pharmaceutical industry, while the pharmaceutical industry has turned to universities for advice and help and as the principal site for recruitment (Swann, 1988). For some policymakers, the two spheres should preferably be kept apart, but, in practice and on a societal level, the flow back and forth of financial resources and knowledge has been beneficial for the growth of know-how in the biomedical domain. This does not make the relationship between universities and pharmaceutical companies uncomplicated or devoid of practical concerns. On the contrary, in the contemporary bioeconomy, biological know-how, tissue and other material resources accrue extensive economic value and consequently (as suggested in the last chapter) the relationship between the context of discovery and the context of application is becoming more problematic. Over the last 15 years, the pharmaceutical industry has endured a long downturn in research output, causing much concern in the industry. What has been most puzzling is that this decreasing return on investment in R&D has occurred in a period of swift advancement in the life sciences: The innovation crisis of the pharmaceutical industry is occurring in the midst of a new golden age of scientific discovery. If large
companies could organize innovation networks to harness scientific discovery of biotechnology companies and academic institutions, and combine it with their own development expertise, they might be able to reverse the forces that are undermining their research model; that is, they might be able to lower their costs and increase their outputs. (Munos, 2009: 865) This inability to fully exploit new scientific opportunities has called for an attempt at rejuvenating the industry's 'creative edge' and, as Garnier (2008) discussed in a Harvard Business Review article: [T]he leaders of major corporations in some industries, including pharmaceuticals and electronics, have incorrectly assumed that R&D was scalable, could be industrialized, and could be driven by detailed metrics (scorecards) and automation. The grand result: a loss of personal accountability, transparency, and the passion of scientists in discovery and development. (Ibid.: 72) Munos, too, identifies the ceaseless striving to 'processify' (Sundgren and Styhre, 2007) – virtually all elements of new drug development work are structured into a prescribed sequence of practices in a standardized project management model: During the past couple of decades, there has been a methodological attempt to codify every facet of the drug business into sophisticated processes, in an effort to reduce the variances and increase the predictability. This has produced a false sense of control over all aspects of the pharmaceutical enterprise, including innovation. (Munos, 2009: 867) The pharmaceutical industry, and especially 'big pharma' (the major multinational corporations), is facing a real challenge in terms of uprooting their established project management models when rebuilding their 'R&D engines'. This chapter reports empirical material from a study of a major multinational pharmaceutical company working in a wide variety of therapeutic areas.
The study suggests that the scientists operating in this setting believe the traditional blockbuster model of new drug development, based on traditional wet lab in vivo biology research, is gradually being rearticulated into a more bio-computational model in which the vision of 'personalized medicine' – drugs developed for smaller categories of patients sharing some characteristics on both the level of the genotype
and the phenotype – is the principal driver for the coming decades. Being at a crossroads, the pharmaceutical industry thus seeks to align the traditional and new technologies and practices into a new regime of new drug development. This transition is not unproblematic but induces a series of discussions and controversies – for instance, regarding the role of shared theoretical frameworks guiding and structuring the research work. The bioeconomy is leading to a number of changes in the domain of biomedical research – the increase in biotechnology companies being perhaps the most salient example – but major multinational pharmaceutical companies will arguably play a key role even in the future. Being able to reap the so-called 'first-mover' advantages and to accumulate the financial capital necessary for orchestrating the technology shift from in vivo to in silico research1 (or any other conceivable change in perspective), the pharmaceutical industry represents a player in the bioeconomy that is capable of setting the agenda and imposing standards for the development of new drugs. Therefore, the hype around biotechnology companies and the interest in university–industry collaborations need to be taken cum grano salis; the large pharmaceutical companies still account for the majority of the cash-flow and profits in the bioeconomy.
The new drug development process

The company PharmaCorp (not the company's real name) is a major international pharmaceutical company engaged in the research, development, manufacture and marketing of prescription pharmaceuticals and the supply of health care services. The company is one of the world's leading pharmaceutical companies with health care sales exceeding US$20 billion and leading sales positions in many therapeutic areas. The company operates in many countries and employs more than 50,000 workers. Each working day, PharmaCorp spends several million US dollars on discovering and developing new medicines.

The research process in discovery

For the pharmaceutical industry, the discovery of a new drug presents an enormous scientific challenge, and consists essentially of the identification of new molecules or compounds. Ideally, the latter will become drugs that act in new ways upon biological targets specific to the diseases requiring new therapeutic approaches. The drug discovery (pre-clinical) process can be divided into five stages (Sams-Dodd, 2005), separated by milestones to indicate significant progress, according to
Figure 3.1. Moving from one phase to the next depends upon meeting different criteria. It normally takes three to five years to produce a candidate drug (CD).

Target identification and validation

The identification of therapeutic targets requires knowledge of a disease's etiology (the study of the causes of a disease) and the biological systems (e.g., the nervous system, the cardiovascular system or the respiratory system) associated with it. The duration of this phase may range from several months to several years. Target identification attempts to find (normally) proteins whose modulation might inhibit or reverse disease progression. The role of target validation is to demonstrate the functional role and biological relevance of the potential target in the disease phenotype (that is, the physical manifestation of the organism, such as cells, structures, organs or reflexes and behaviours; anything that is part of the observable structure, function or behaviour of a living organism). Target validation facilitates the identification and timely progression of lead molecules to provide effective improvement of diseases and, at the same time, it helps reduce the risk of failures from incorrect biological hypotheses. In many instances, however, drug targets are newly discovered and thus their full biological role is not known. This demands constant updates of the connectivity of a target throughout the lifecycle of a drug discovery project.

Hit and lead generation

Once the therapeutic target has been identified, scientists must then find one or more leads (e.g., chemical compounds or molecules) that interact with the therapeutic target so as to induce the desired therapeutic effect. In order to discover the compounds whose pharmacological properties are likely to have the required therapeutic effects, researchers must test a large variety of them on one or more targets.
The term ‘hit’ refers to when a compound has sufficient activity to warrant it being a candidate for clinical studies, providing it meets toxicity and other peripheral requirements. Many pharmaceutical companies have large
[Figure 3.1 The drug discovery research process: Discovery research – Target identification → Hit identification → Lead identification → Lead optimization → CD nomination → Development (phases 1–4)]
libraries of synthetic or natural compounds, ready to be tested. To test the chosen compounds in large numbers, scientists use an entirely automated process known as high-throughput screening (HTS). In general, of the thousands of compounds tested, barely 1 per cent will qualify for further and more probing analysis. An important task is to ensure that the chosen compounds have the desired therapeutic effect on the target and to check relative toxicity and bioavailability in vivo in animals.

Lead optimization

Lead optimization is defined as the activity required to optimize a screening hit to a pre-clinical candidate. The purpose of this stage is to optimize the molecules or compounds that demonstrate the potential to be transformed into drugs, retaining only a small number of them for the next stages. To optimize these molecules, scientists use very advanced techniques. For example, screening data allow the medicinal chemists to modify the structure of the selected molecules or compounds where necessary, thereby creating structural analogues. The creation of hundreds, possibly thousands, of analogues is aimed at, for example, improving the effectiveness, diminishing the toxicity or increasing the organism's absorption of the drug. This phase requires close collaboration between biologists and chemists, who form a feedback loop: the biologists test the biological properties of compounds on biological systems while the chemists optimize the chemical structure of these compounds in the light of the information obtained by the biologists. This optimization stage aims to develop new substances that are more effective than known compounds. The latter are then subjected to a specific evaluation involving broader biological tests such as preliminary toxicology, computer-aided drug design, and in vitro and in vivo studies which aim to plan for testing in humans.
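The funnel logic of high-throughput screening described above – thousands of compounds assayed, with roughly 1 per cent qualifying as hits for further analysis – can be caricatured in a few lines of code. The compound names, activity scores and threshold below are all invented for illustration; in real HTS the scores come from automated assays against the chosen biological target:

```python
import random

random.seed(7)

# Hypothetical compound library with invented activity scores in [0, 1).
library = {f"compound-{i}": random.random() for i in range(10_000)}

# Threshold chosen here so that roughly 1 per cent of compounds qualify.
ACTIVITY_THRESHOLD = 0.99

hits = {name: score for name, score in library.items()
        if score >= ACTIVITY_THRESHOLD}

print(f"{len(hits)} hits out of {len(library):,} compounds screened")
```

The sketch captures only the winnowing step; the subsequent lead-optimization loop between biologists and chemists described in the text has no such simple computational analogue.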
CD nomination

The development potential of a candidate molecule depends essentially on its capacity to be administered to humans and show therapeutic effectiveness, with an acceptable level of side effects (Hara, 2003). Before testing candidate molecules on humans in clinical trials (development), scientists must show that the candidate drug (CD) does not present an unacceptable level of risk, given the expected therapeutic benefit. Regulatory authorities require pharmaceutical companies to demonstrate the safety of the drug for humans and to prove that the therapeutic advantages of the compound greatly outweigh any associated undesirable side effects (e.g., migraine or high blood pressure in the
case of cancer treatment). These studies are conducted in conformity with the rules of the regulatory bodies. During this stage scientists (e.g., biochemists, pharmacologists and toxicologists) continue to evaluate the pharmacokinetic and pharmacodynamic (i.e., how the drug is affected by the body and how the drug affects the body, respectively) and toxicological properties of the compound in vitro and in vivo (in animals).

Development and clinical trials

If the CD is found to be safe, an application (investigational new drug, IND) is filed with drug regulatory authorities and ethical committees to obtain approval for testing on humans. After the authorities approve the IND, clinical studies can begin. The required three-part clinical trials process, which judges the efficacy and safety of the potential treatment, is a major undertaking. (The clinical research programme continues after the product's launch – commonly named phase 4 – by collecting data from outcome research and epidemiology data from patients; this might lead to new indications for the product.) After completion of phase 3 studies, the final documentation can be compiled and submitted to the appropriate national regulatory agencies (e.g., the FDA) for review (new drug application, NDA; Hullman, 2000). After approval, the product can be marketed. Adverse effects are followed meticulously through all clinical phases and after approval of the drug for launch. In the entire new drug development process, discovery is the most complex and unpredictable part and involves many factors that could influence the successful outcome (Zivin, 2000). To conclude, the discovery organization is accountable for the drug development projects in the first five stages, after which accountability transfers to the development organization.
However, discovery involvement does not end at CD nomination; the discovery organization must partner the development organization into the sixth stage – the 'proof of principle' testing phase (containing pre-clinical development and initial clinical testing) – aimed at the successful delivery of each drug project. As suggested, the literature on scientific and laboratory work, pointing at the interrelationships and intersections between technology, tools, theoretical frameworks, practices, narrative skills and political savoir-faire, may be useful for understanding how science-based innovation takes place within organizations competing on open markets, relying not so much on the scientific liberties of free investigation into the ultimate matters of organisms as on the capacity to manage and organize a totality of resources providing drugs that demonstrate both adequate therapeutic effects and market viability.
Setting the stage from 2008 to 2009: great cash-flow, but many concerns

More than many other industries, the pharmaceutical industry tends to be, to use Holmberg, Salzer-Mörling and Strannegård's (2002) apt phrase, 'stuck in the future'. Since new drug development times are long and the costs massive, a substantial cash-flow today may easily be spent on failed new drug development, thus creating a complicated financial situation further down the road. In PharmaCorp, a number of blockbuster drugs, especially a series of bestselling gastrointestinal medicines, created a situation where the company was making substantial profits at the same time as there were discussions about how to handle the uncertain future and the 'pipeline' of new drugs. Having endured a few setbacks in the late phases, the company was in dire need of some success, if not yet financially then at least to build new self-confidence in the firm. Studies conducted in the period 2008–9 testified to a sense of frustration among the co-workers regarding the lack of risk-taking in the company and a general concern regarding decision-making. For instance, when interviewing researchers in the development organization, running the clinical trials, there was a certain degree of frustration in some quarters: These 'late-phase setbacks' we have endured – they have not been caused by the clinical organization, right, but they were caused by the data and information generated by the clinical organization. It may be that the discovery organization came up with the wrong idea initially but that didn't show until the clinical trials. Therefore, there has been a strong focus on the clinical organization during the last years. That is why the governance structure has become much more rigorous. So, sure, there is a certain suspicion regarding how we work in the later clinical phases.
(Pre-clinical Team Leader, Southville Site) These 'late-phase setbacks' had, the interviewees claimed, led to a certain anxiety regarding decision-making. Decisions that should have been taken at a lower level in the organization easily migrated up to the executive tiers. One of the clinical team leaders addressed this issue: As we have been given larger and more complex challenges in the pharmaceutical industry and since we have started to fail our projects, there is a certain decision-making anxiety in the organization. Now, we have learned that every single decision may have
significant consequences . . . Decisions, normally taken at a certain level, tend to migrate up the organizational hierarchy because no one is willing to make the decision. There's a fear of failing, quite simply. (Pre-clinical Team Leader, Westtown Site) In everyday work, this slowing down and obscuring of the decision-making process strongly affected the work in the clinical trials: Everything is so incredibly much more complicated. We used to talk about 'empowerment', but, in the decision-making process, I think we are not entitled to make decisions on our own. They are taken away from us. All decisions are made higher and higher up the hierarchy. Things get so slow. They only make things more and more complicated. (Clinical Team Member, Westtown Site) In addition, the demands for information to be provided to decision-making bodies in the organization were at times poorly defined. One of the medical advisors, a medical doctor with authority over and responsibility for the medical issues in the clinical trials, was critical of how he was informed regarding what information to provide, and used a geometrical metaphor when airing his frustration: They set the frames rather narrowly and they tell you that you need to pass through all the gates. If I think the gates are in the form of a 'square' and your project has the shape of a 'circle', then how can I possibly pass through the gate? 'That's your concern,' they tell you. But they should have told me in the first place that they would only accept 'square' projects. (Medical Advisor, Southville Site) In order to cope with this new situation, the clinical trial teams were planning for different scenarios. Needless to say, this added to the workload of the clinical team workers. One of the clinical team leaders at the Southville site explained how they dealt with uncertainty: We work . . .
with different scenarios, different paths to reach the goal of the clinical programme, with different degrees of speed, cost, risk and decision-making quality. If we choose the one path, there will be more risk, it costs less and it is faster. If we choose the other path, the one that we would actually favour, then we are talking about a longer clinical programme, more patients, a much more robust basis for decisions. Then we bring our scenarios to the Executive Project
108
Venturing into the Bioeconomy
Team and in most cases, they follow our recommendations. (Pre-clinical Team Leader, Southville Site) A similar approach was taken by the clinical team leader at the Westtown site: It may be that we have formulated a package of studies: ‘This is how we would like the first phase to be done’ and everyone is sharing this idea. All of a sudden, someone has been doing some thinking and then they want us to add another study or change a bit in the design and then everything needs to be done from the start and we need to ask for a new budget. (Pre-clinical Team Leader, Westtown Site) The ambition to minimize risk-taking thus led to decision-making anxieties, a migration of decisions up the hierarchy, and the undermining of the decision-making authority originally vested in the clinical teams, as new decisions emerged from the executive tiers. This caused much frustration among the clinical trial team members: first, because they could not obtain proper answers to questions, thereby inhibiting the work from proceeding as planned; second, because they believed their role and identity as experts and knowledge workers were implicitly called into question as their decisions became subject to scrutiny. This rather complicated situation was explained by some clinical team members as ‘lack of leadership skills’ and ‘risk-aversion’. For instance, one of the medical advisors claimed that it would be very complicated for managers higher up in the organization to make decisions on the basis of adequate information when monitoring large portfolios of candidate drugs in different stages of the process: I think that the decisions made higher up are based on poor information. That can be the case for us too, that is probably the case for all levels. But I think that higher up, they are supposed to make so many decisions and therefore they do not have the time to delve into the details.
(Medical Advisor, Southville Site) This kind of critique was very clearly articulated by one of the clinical team members at the Westtown site, whose team had endured a long period awaiting decisions regarding how to proceed with their work: I think they treat us quite disrespectfully because, if you submit a question to someone accountable for decisions above us, then, that
Innovation Work in a Major Pharmaceutical Company 109
person may disappear for a week and there is no proper answer whatsoever. That makes our time schedules even tighter. At the same time, the TA [Therapeutic Area] organization does not want to change our milestones where we are supposed to make the decisions regarding how to proceed . . . Then we need to hold on to the schedule and things get really squeezed. Those who have to pay the price are those doing the actual work, the study leaders and the administrators . . . In the very end, they are given a minimal amount of time to do their work because people higher up have been loafing around. (Pre-clinical Team Member, Westtown Site) For the clinical team member, the poor decision-making procedures were not only indicative of risk-aversion and an anxiety about failing to deliver new drugs, but also, ultimately, a form of disqualification of the clinical team members’ competence and commitment to the task. For team members with substantial organizational tenure and experience of all sorts of organizational changes, such a position was intolerable: They think they can make it more effective and cut costs . . . I can live with that. But I cannot accept that they show a distrust for the project because we have been decision-makers during all periods. We have been managing quite a few activities . . . They mustn’t tell us they are taking the studies away from us and put them in another model ‘to save some money’, because no one uses as few resources as we do. You cannot even compare with the USA and the UK . . . We are so committed to our work and if they tell us to do something – we do it right away. (Pre-clinical Team Member, Westtown Site) One of her colleagues, another female clinical team member, a data management specialist, expressed herself in similar terms: At times they think that we will work faster only if they measure what we do . . . There is too much time dedicated to such activities rather than dealing with what is actually helping us work faster.
It may be that it is not through measuring but through thinking one step ahead [that helps us]. It is always easier to identify the symptoms than to change what is the cause . . . The end does not always justify the means. (Pre-clinical Team Member, Westtown Site) Rather than imposing yet another management control technique, she advised top management to return some of the decision-making
authority to the clinical project teams and to further simplify the decision-making process. Today, the decision-making authority is shared between the drug development organization and the so-called ‘therapeutic’ areas, departments with authority over a specific class or family of drugs. ‘There is too much politics involved . . . between different stakeholders. There appear to be different agendas,’ a clinical team member at the Westtown site argued. Even though the interview material partially mirrors day-to-day concerns and ongoing debates and controversies, the study indicates that PharmaCorp is in a situation wherein some candidate drugs must succeed to safeguard further activities. This sense of urgency has a number of consequences – for instance, the centralization of decision-making authority. Again, the recurrent theme of ‘management versus science’ is brought into discussion. Rather than seeing management as supporting the day-to-day work in the organization, the interlocutors tend to think of it as interfering with the work, complicating it and rendering things more drawn out than necessary. In an unpublished manuscript, written by one of the retiring synthesis chemists, with the title ‘The View from Beyond: Rants from a Retiring Person’, and circulating among the synthesis chemists in the discovery organization, this view of the management cadre was salient: Following the merger between [Company 1] and [Company 2] we have seen a mind-boggling expansion of [the] managerial class. This has necessitated a whole new lexicon of titles. Interestingly, the favoured change was to director and not to the more apposite supervisor. Inevitably these people need the support of a host of associate directors; for all I know there may be assistant associate directors. What do all these people do? It is easier to say what they don’t do. They don’t contribute directly to any drug discovery programme.
Instead they populate various committees, networks and focus groups. They make unjustified pronouncements on the requirements for drug discovery and they provide the review panels with various ‘milestones’ that delimit the drug discovery process. No longer able to rest on the fruits of past accomplishments and the substantial cash flow generated, all co-workers in PharmaCorp were aware that they had to deliver new drugs to the market. Keeping in mind the general slowdown in new drug output in the industry – virtually all major pharmaceutical companies appeared to suffer from the same
inability to turn increased investment in new drug development into new drugs – the scope of the challenge seemed daunting at times. The general verdict, both in the discovery and the development organizations, was that tighter control of operations, less time for creative thinking and a stronger emphasis on processes and quantitative measures had become the predominant approach to filling the pipeline with new candidate drugs. Many of the interlocutors with substantial organizational tenure had a sense of nostalgia for the period up until the early 1990s, when the company was still reasonably small and intellectual and theoretical interests dominated the day-to-day work.
Coping with uncertainty in new drug discovery: epistemic objects and the culture of prediction

Epistemic objects and experimental systems

In this section, the literature on what Hans-Jörg Rheinberger (1997) calls epistemic things and Karin Knorr Cetina (1999) later terms epistemic objects (two terms used interchangeably in this chapter) will be discussed. The concept of epistemic things denotes the fluid and fluxing nature of the object of investigation in scientific work, but the term has been used in organization theory to examine a variety of organizational objects, entities and activities, including drawings and machines (Bechky, 2003: 729), projects (Hodgson and Cicmil, 2007: 437), visual representations in architectural work (Ewenstein and Whyte, 2007, 2009), a meteorology simulation programme (Sundberg, 2009), or ‘a molecule, a production system, a disease or a social problem’ (Miettinen and Virkkunen, 2005: 438). In this chapter, the concept is used in the more restricted meaning of the term, as part of what Rheinberger (1997) calls an ‘experimental system’. Even though it is possible and often highly productive to adopt a specific concept and locate it in a new setting (Weick, 1989; Czarniawska, 2003), it is questionable whether the concept of epistemic things can be used as broadly as suggested by, for instance, Miettinen and Virkkunen (2005). An epistemic object can be many things, but a ‘social problem’ is arguably a term too diverse and manifold to be fully consonant with Rheinberger’s (1997) definition. Notwithstanding such theoretical musings, this chapter reports a study of the so-called drug discovery phase in PharmaCorp. In the early phase of new drug development, specific molecules are synthesized and examined in terms of their ability to affect the target, a receptor such as a protein or an enzyme, without inducing undesirable and harmful
toxicological effects on the part of the individual. Before any clinical trials on humans can be organized, the molecule needs to be carefully examined and explored in terms of its metabolic, toxicological and pharmacokinetic properties. Such examinations are organized as a combination of in vitro (‘in the test tube’), in vivo (‘in the organism’) and in silico (‘in the computer’) applications, each helping to gradually reveal the image of the molecule and its properties. In these very early phases, the molecule demonstrates a range of qualities of being an epistemic thing: the molecule’s structure and characteristics are only partially known; the scientific procedures continually provide new data that may or may not be of significant value; the experimental system in which the epistemic thing is located has a recursive relationship with the epistemic thing – they are mutually constitutive. Following Rheinberger (1997, 1998), researchers in the early new drug development phases unfold the properties of the molecule as they explore it, but this knowledge of the object of enquiry is always insufficient and sketchy. In comparison to much technology-based innovation work, science-based innovation work demands a much higher degree of recognition of uncertainty and ambiguity; researchers may not know very much about the properties of a molecule, yet they have to work on the basis of the information and the techniques they have in their possession. They develop what Fine (2007) calls ‘cultures of prediction’, scientific communities legitimizing their work on the ability to predict outcomes on the basis of rather limited information.

Working with epistemic things

While the literature on innovation work (Dougherty, 1999; Dodgson, 2000; Fagerberg et al., 2005) offers taxonomies and the morphology of innovation work, it only occasionally provides more detailed insights into the day-to-day work.
The body of literature commonly addressed under the label science and technology studies (STS) is helpful in fleshing out the matter of innovation and/or scientific endeavours (Jasanoff et al., 1995; Fuller, 2007; Hackett et al., 2008). As shown in many STS works, the laboratory is the primary topos for scientific activities; it is here that nature is recreated under controlled conditions and it is here that scientists bring together a variety of tools, procedures, equipment, laboratory animals, tissue, materials, or whatever resources they need to mobilize to accomplish their work (Knorr Cetina, 1995; Pickering, 1995; Fujimura, 1996). The principal objective of the skilled laboratory scientist is to ‘make things work’ (Lynch, 1985; Nutch, 1996), to make the entire constructed apparatus produce what it is expected and
anticipated to produce, namely scientific data that could be translated into inscriptions, hypotheses, theories and, eventually, in some cases, facts making claims to truth (Latour and Woolgar, 1979). ‘[S]cientists exhibit scientific skills not only through their theoretical sophistication and experimental elegance, but also with their practical ability when handling research equipment and instruments,’ Nutch (1996: 216) emphasizes (see also Barley and Bechky, 1994). While the production of theories and other activities ‘downstream’ is a central activity for any scientist, we are here more concerned with the early phases, the capacity of ‘making things work’. ‘[T]he gap between elegant concepts and successful experimental systems was every scientist’s demon,’ Rabinow writes (1996: 93). ‘“Making it work” is . . . a kind of skilled work, but one that is never fully under control,’ Lynch notes (1985: 115). The capacity to bridge the theories and the experimental system is far from trivial, even though the common-sense image of the scientific laboratory is one wherein the very equipment is never a concern but runs all by itself, as by magic. The great French scientist and Nobel Prize laureate François Jacob (cited in Rheinberger, 2003: 315) is critical of such simplistic images of scientific practice and speaks here about ‘day science’ versus ‘night science’; while day science is the official and formal account of successful and legitimate scientific work, night science is what precedes day science and must remain hidden from the public gaze: ‘[N]ight science wanders blind. It hesitates, stumbles, recoils, sweats, wakes with a start. Doubting everything, it is forever trying to find itself, question itself, pull itself together. Night science is a sort of workshop of the possible where what will become the building material of science is worked out,’ Jacob writes (cited in Rheinberger, 2003: 315).
While ‘day science’ is neatly structured and intelligible, ‘night science’ is riddled with anxieties and practical concerns, dealing with what is ‘in-the-making’. One of the principal challenges in any scientific laboratory work is, in Rheinberger’s (1997, 1998) parlance, the setting up and running of the equipment – the construction of an ‘experimental system’. Rheinberger (1998: 285) suggests that we are witnessing a move away from Kuhn’s (1962) emphasis on ‘science-as-theory’, stressing the verification of scientific results in terms of theoretical articulations, to a ‘post-Kuhnian engagement with science as experimentation’. Among the ‘post-Kuhnian’ works, Rheinberger (1998) counts Ian Hacking’s Representing and Intervening (1983), a text emphasizing the everyday procedures and practices in scientific communities. Another important theorist in this tradition is Ludwik Fleck, whose theories of how scientific facts are produced were first published in the 1930s but essentially
forgotten until Thomas S. Kuhn rediscovered Fleck’s work in the 1950s and 1960s. Rheinberger (1998) credits Fleck for showing not only that scientific work (e.g., experimentation) leads to answers, but also that scientific output strongly shapes the questions to ask. ‘An experimental system’, Rheinberger (1998: 288) says, ‘is a device to materialize questions. It cogenerates, so to speak, the phenomena or material entities and the concepts they come to embody.’ Rheinberger (ibid.: 291) offers a metaphor to explicate his idea: ‘An experimental system can be compared to a labyrinth whose walls, in the course of being erected, simultaneously blind and guide the experimenter.’ In the following, Rheinberger’s two central concepts, those of the experimental system and the epistemic thing, will be examined. For Rheinberger (1997: 28), an experimental system is the ‘smallest integral working units of research’. The experimental system is a system of ‘[m]anipulations designed to give unknown answers to questions that the experimenters themselves are not yet able clearly to ask’. The experimental system, including both laboratory equipment and operative theories and theorems, both embodied in the equipment and ‘additional’ to the equipment (Bachelard, 1934), is not simply an ‘experimental device’ that generates answers, but is also the ‘vehicle for materializing questions’; experimental systems ‘inextricably cogenerate the phenomena of material entities and the concepts they come to embody’, Rheinberger says (ibid.). There is, thus, a recursive relationship between, on the one hand, the experimental system, and, on the other, the output; they are not wholly separated but are mutually constitutive – new or unexpected scientific output has implications for how the experimental system is organized (Bachelard, 1934). In many cases, this condition is unproblematic, but Roth (2009) emphasizes the ‘radical uncertainty’ inherent to the aggregate theory/empirical data/laboratory technology.
Roth suggests that there are cases where there is a difference between what Suchman (2007) calls ‘plans’ (intended actions) and ‘situated actions’ (actual practices) inasmuch as scientists eventually find out that ‘they have not done what they had intended, believed, and said to have done’ (Roth, 2009: 314). Roth explains: Their [scientists’] task is difficult because they have no criteria for evaluating their outcome independently of their actions. The scientists . . . therefore are in a chicken-and-egg situation – that is, one of radical uncertainty – wherein, evaluating their actions, they have to draw on the outcomes of these actions but, for evaluating the outcomes, they have to rely on their actions. (Ibid.: 315)
To cope with this ‘radical uncertainty’, both practically and emotionally, scientists tend to systematically question their own empirical results, but in many cases it is the very practices rather than the experimental system per se that are doubted: Experienced practitioners may question their observational actions, doubting what they see, but they normally take actions for granted in the sense that they take them as aligned with the goals’ intentions that had brought them forth. If an action has not realized its goals, it is reproduced often with a slight modification (researchers try again, implying it will work this time). (Ibid.: 329) In some cases, scientists may end up in a situation where the plans and situated actions are disentangled and they then have to make up their minds and decide whether they have in fact done what they ‘intended, believed, and said to have done’ or not, and such a point of decision is a critical point where entire experimental systems may be abandoned or substantially reconfigured. Roth suggests that scientists may end up in a double-bind situation where they either have to doubt the empirical data produced or to doubt the experimental system, but they cannot put both in doubt at the same time without undermining their scientific pursuits. Speaking in the terms of Barley and Bechky (1994), who studied laboratory technicians, the experimental system may tolerate a certain number of mistakes (poor or unskilled handling of the experimental system) or malfunctions (concerns regarding the functioning of the equipment and the laboratory technology and their capacity to produce the intended outcomes), but there cannot be too many enigmas (unexpected outcomes or anomalies that cannot be explained by reference to mistakes or malfunctions) without threatening the legitimacy of the experimental system. The experimental system is a fragile apparatus gradually stabilized through the successful alignment of theory/laboratory technology/empirical data.
Even though the experimental system is ‘the smallest unit’ in scientific endeavours, it is not isolated from the external world. Instead, it is a hybrid construction including local, social, technical, institutional, instrumental and epistemic elements, and it does not comply with macro-level disciplinary, academic, or national boundaries of science policy and research programmes (Rheinberger, 1997: 34). The experimental system is thus fundamentally open to external influences. The principal output from such experimental systems is not theories, theorems, scientific models, or facts, but epistemic things, the
scientific entity preceding all such formalized scientific contributions. Epistemic things are material entities – e.g., physical structures, chemical reactions, biological functions – that constitute ‘objects of enquiry’. Epistemic things have the characteristic of an ‘irreducible vagueness’. This vagueness is inevitable because epistemic objects embody what is not-yet-known, and must therefore be malleable and flexible entities, capable of accommodating new scientific evidence or experimental data. ‘Scientific objects have the precarious status of being absent in their experimental presence; they are not simply hidden things to be brought into light through sophisticated manipulations,’ Rheinberger says (ibid.: 28). As epistemic objects are further developed or stabilized – that is, as new scientific evidence is accommodated and previously made observations are forged to new ones – the epistemic thing gradually evolves over time. Knorr Cetina (2001) characterizes epistemic objects as follows: Objects of knowledge appear to have the capacity to unfold infinitely. They are more like open drawers filled with folders extending indefinitely into the depth of the dark closet. Since epistemic objects are always in the process of being materially defined, they continually acquire new properties and change the ones they have. But this also means that objects of knowledge can never be fully attained, that they are, if you wish, never quite themselves. (Ibid.: 181) Knorr Cetina (ibid.: 182) addresses the vagueness and fluidity of epistemic objects, their ‘changing, unfolding character’, as a lack of what she calls objectivity, the capacity to demonstrate a ‘completeness of being’ and a coherent ‘identity’.
She continues: ‘The lack in completeness of being is crucial: objects of knowledge in many fields have material instantiations, but they must simultaneously be conceived of as unfolding structures of absences: as things that continually “explode” and “mutate” into something else, and that are as much defined by what they are not (but will, at some point have become) than by what they are.’ For Rheinberger (1997, 1998) and Knorr Cetina (2001), the experimental system must advance its object of enquiry slowly and under the influence of uncertainty and incomplete knowledge. As a consequence, the object of enquiry produced, the epistemic thing or epistemic object, must also be located in a zone of incompleteness, paradoxically capable of embodying working hypotheses and experimental data while accommodating new such data as it is produced, both locally and globally.
In scientific communities, experimental systems are sheltered by scientific ideologies justifying such lack of completeness on the basis of the virtues and prestige of ‘basic research’. In the pharmaceutical industry, or any other industry relying on science-based innovation, there is less patience and tolerance regarding such ‘blue sky stuff’ (Ramirez and Tylecote, 2004). Instead, managerial objectives and financial performance demand that scientific work result in contributions to the innovation work process as soon as possible. Unfortunately, molecules and biological processes lack the capacity to respond to such expectations, and laboratory scientists in the pharmaceutical industry must do their best to predict and anticipate how molecules and biological systems may interact in order to produce new candidate drugs. The concept of ‘cultures of prediction’ advocated by Fine (2007) in his study of meteorologists is here applicable: what the new drug development researchers are expected to be capable of providing is adequate and credible predictions of how the molecules subject to enquiry will behave under certain conditions (e.g., pharmacokinetic and biotransformational conditions) in biological organisms, in laboratory animals and, eventually, in humans.

Making predictions

Scientists are developing, under the influence of disciplinary contingencies and idiosyncrasies (no scientific field or sub-discipline is like another), what Fine (2007) calls ‘cultures of prediction’, scientific communities legitimizing their work on the ability to predict outcomes on the basis of either limited or uncertain information through using advanced ‘technologies of prediction’. In Auguste Comte’s account of the sciences, his positivist ideal of science, prediction is at the very heart of what it means to be ‘scientific’. ‘From science comes prevision; from prevision comes action,’ Comte declared (1975: 88).
The goals of science are (1) prediction, or (2) understanding, Dubin argued more recently (1969: 9). Various studies of scientific communities testify to the strong identification between science and ‘exactness’, expressed in the capacity to make predictions. ‘[T]he idea of “exactness,”’ Sommerlund (2006: 918) writes, ‘seems to be so deeply embedded in the way the researchers regard their own work that it has become synonymous with “science,” their comments was not “that’s not very exact,” but rather “that’s not very scientific”.’ Faulkner (2007), studying engineers, claims that the educational grounding in mathematics and science provides engineers with a professional identity based on the ability to handle material and predictable phenomena ‘[g]overned by the laws of
nature, backed up by a faith in cause-and-effects reasoning’ (ibid.: 337). Exactness and the possibility of prediction are principal scientific virtues, constituting professional ideologies and identities. In the case of meteorologists, they are engaging in what Fine calls ‘future work’, the prediction of ‘what-will-come’. In their work to forecast weather, meteorologists need to control four ‘elements’: (1) empirical data; (2) a theoretical model, grounded in a knowledge discipline, allowing extrapolation; (3) the ability to ‘historicize experience’ – that is, the ability to make legitimate claims about the ‘similarity between past and present’ (ibid.: 101); and (4) institutions legitimating the prediction. Fine explains the role of empirical data: First, the predictor must acquire empirical data, using a variety of technological devices, constituting a base from which extrapolation is possible. The collection of data results from institutional policies, resource allocation, and technological choices. These data are not transparent and must be translated and managed to become useful for the forecaster. (Ibid.) Second, the predictor requires a theory that is capable of turning the data into credible predictions: ‘Theories serve as a routine basis from which current data are extrapolated. They bring scientific legitimacy to the task of forecasting, suggesting a tested and proven basis for prediction,’ Fine says (ibid.). The third element is a bit more complicated; even though all weather conditions are specific, demonstrating their own idiosyncrasies, predicting meteorologists ‘[b]ase their forecasts on the primacy of authentic experience’ (ibid.) – that is, they claim that what will eventually happen is possible to predict on the basis of the past.
Such an epistemic assumption allows the meteorologists to draw on their intuition and tacit knowledge, a set of resources that are highly valuable since the data is always ambiguous and the weather may always change, thereby undermining the value of the prediction. Finally, a prediction must be legitimized by institutions. This legitimating of the prediction does not affect the work per se, but strongly determines whether the prediction (i.e., the forecast) will be taken as valid. Fine (ibid.: 102) is here speaking about three forms of legitimating a prediction: ‘One situated within the domain of specialized knowledge (occupational legitimation), the second is tied to the institutional structure (organizational legitimation), and the third is linked to impression management (presentational legitimation).’ For instance, when a general practitioner articulates a diagnosis, he or she draws on the occupational
legitimation of the medical discipline; when a representative of, say, the White House or a national central bank makes an announcement, it is the organizational legitimation that makes the statement credible. Finally, when a politician on the campaign trail makes a statement in the form of a prediction, it is largely based on presentational legitimation, the ability to convey a message or a vision as credibly as possible. Fine argues that the predictions made by the meteorologists, the forecasts, draw on all three forms of legitimation: they represent a specific field of expertise and a scientific discipline as well as credible organizations, and their forecasts are always carefully articulated statements, conveying a sense of rigour yet remaining open to contingencies. Similar to the meteorologists, the scientists in new drug development work undertake a form of ‘future-work’, predicting how molecules interact with biological systems such as organisms. However, even though the four elements outlined by Fine are in place and part of the operational procedures, prediction is a complex matter, always conducted under the influence of uncertainty: ‘The dark heart of prediction is defining, controlling, and presenting uncertainty as confident knowledge. To forecast is to strip uncertainty, responding to the demands for surety, eschewing ambiguity,’ Fine argues (ibid., emphasis in original). He continues: ‘Observational technologies are not transparent windows to the world. Data are ambiguous’ (ibid.: 107). One procedure for maintaining the legitimacy of the ‘community of judgement’ of the meteorologists is to engage in a specific form of rhetoric which is at the same time scientific and literary; it is scientific in terms of using rather unambiguous words such as ‘cloudy’, ‘sunny’, ‘precipitation’ and so forth, in a neutral manner that makes meteorologists’ speech appear almost ‘rhetoric-free’ (ibid.: 154).
At the same time, it is literary because expressions like ‘mostly sunny’ and ‘chances of showers’ bear different connotations and have different meanings over the year. For instance, in the Chicago metropolitan area, with its typical inland climate of cold winters and hot summers, ‘cold’ means different things in the summer and the winter. The public and the various organizations taking advantage of the meteorology services mostly learn to interpret the official forecasts. The problem arises when the weather is dramatic and tornadoes and thunderstorms become subject to formal warnings. Meteorologists are concerned about issuing warnings when it is not necessary, while at the same time they are strongly criticized when failing to predict, for instance, deadly tornadoes (a weather phenomenon quite complicated to predict). What is of central importance is that meteorologists are capable of stripping their language of uncertainty: ‘Don’t know’ is not an option. As one meteorologist points
120
Venturing into the Bioeconomy
out . . . ‘You’ve got to put something out, but often we don’t have a lot of confidence in it. Maybe 20 percent. You have to put a face on’ (ibid.: 131). Therefore, in summary, ‘cultures of prediction’ engage in complex endeavours, drawing on multiple sources of knowledge and data, capable of ‘making things work’ on the basis of their capacity to bridge and bond various resources under the influence of uncertainty and ambiguity. ‘Meteorologists rely on a set of knowledge claims: part experience, part intuition, part subcultural wisdom, and part scientific claims,’ Fine suggests (ibid.: 132). Using the meteorologists’ work as an analogy, the work of the scientists in new drug development shares many conditions with that of their scientific colleagues in weather forecasting, with one great difference. While meteorologists always get a timely opportunity to evaluate whether their predictions were accurate and to adjust their operational models within a span of 12 hours to a few days, the laboratory scientists in new drug development have less access to such ‘primary data’. Only after the clinical trials can the scientists learn how accurate their predictions were. In the following, it will be shown that synthesis chemists, biotransformation analysts, computational chemists and computational toxicologists struggle to sort out and present the zillions of data points that are produced within the pharmaceutical industry, a practice that shares many characteristics with the work in what Rheinberger (1997, 1998) calls experimental systems. They work on an epistemic object that continually accommodates new calculated or experiment-based information that shifts both the content and potential of the specific molecule.
Molecules are, thus, what Barry (2005) calls informed matter, epistemic objects subject to advanced technoscientific prediction work with the intention of producing new substances that can be included in new drugs; molecules are the operative entity, the epistemic object, of the idiosyncratic cultures of prediction in the pharmaceutical industry.

Predicting the molecule and informing matter

New drug development work is, like most innovation work, structured into a series of sequential phases. A major distinction is often made between the early phases where the new chemical compound is identified and further developed, generally called the discovery phase, and the clinical trials where patients test the new drug, the so-called development phase. This study is set in the discovery phase and, more specifically, in the early phases of lead generation (LG) and lead optimization (LO). In the LG phase, new promising molecules are synthesized and tested. The LG phase delivers so-called series of molecules (including from at least 10–20
Innovation Work in a Major Pharmaceutical Company 121
to about 1,000 interrelated molecules) that are further refined and examined in the LO phase. In the LG and LO phases, the biotransformational features of the molecule must be examined – that is, to identify what metabolites are produced in the biological system when the drug is broken down in the metabolism, and what the distributive qualities of the molecule are. For instance, is the molecule lipophilic (or more inclined to interact with fat in the environment, which will, for example, have implications for metabolic pathways in the liver) or hydrophilic (i.e., water soluble), therefore being excreted through the renal system? In addition, the toxicological qualities of the molecule need to be calculated or tested in vitro before in vivo tests can be arranged. While new drug development has long been subject to trial-and-error procedures, including a reliance on in vivo experiments on laboratory animals, during the last 15 years computer-based media have increasingly been used to construct virtual libraries and to predict the ‘behaviour’ (i.e., the pharmaceutical qualities) of the molecule under different conditions. As a consequence, the fields of computational chemistry and computational toxicology offer new methods for solving chemical problems, and for predicting and improving the understanding of the linkages in the continuum between the source of a chemical in the environment and the adverse outcomes of the molecule. While such new scientific disciplines can take advantage of the massive growth in computer power and the speed of the calculations, the scope of what is called the chemical space (including about 10^180 possible molecules) is too large to fully enable an understanding of all molecules identified and synthesized (Nightingale, 1998). For instance, every single molecule has to be examined in terms of its shape and how it moves when interacting with a receptor. In order to deal with such analysis practically, a range of assumptions needs to be made.
For instance, it is assumed that only the molecule examined may change its form while the receptor remains fixed. In addition, water molecules, which are widely known to play an important role, are eliminated from the analysis. Consequently, the examination of the interaction between molecule and receptor is based on a simplified, idealized and highly theoretical model, a form of ‘thought experiment’ (Koyré, 1992: 45). Unless staged this way, there would be too much information to take into account. One of the consequences of this need for simplification in the analytical models is that there is, at times, a certain sense among computational chemists and toxicologists of being overwhelmed by the sheer size of the chemical space and the scope of the assignment. Scientists working in the LG and LO phases thus have to work with the tools and techniques and available data at hand to construct epistemic objects that serve their
purpose as shared ground for further collaboration between the domains of expertise.

Sifting through the data

One of the major challenges for the pharmaceutical industry is how to handle, practically and analytically, what Thacker (2006) calls a ‘tsunami of data’ being produced in all the analytical procedures. ‘We generate more data than we can handle,’ one of the biotransformation analysts working in the LO phase argued. One of the principal challenges for the industry is to find relevant and reliable methods to help to examine and understand the data. The vice president of the Bioscience department outlined the scope of the challenge for the industry: Everybody knows what the challenges are: the challenges are, for instance, the cost of developing a new compound. It has skyrocketed: $1.8 billion. At times, it takes as long as 14–15 years. And finally, the attrition rate,2 it is very high. We end up many times with compounds which we have spent a lot of time and money to get to phase 2 or phase 3 [full-scale clinical studies] and they actually disintegrate. That means that, perhaps, the time for reductionist approaches, which were operating well in simple diseases, is no longer as we go into more complicated diseases, in particular metabolic diseases. It is very difficult to develop approaches that show the efficacy with appropriate safety you need in the clinic. So one of the key things that we have here, in our part of the world of the value chain, is to actually make some good decisions based on what it is that we want to invest in as new project opportunities. So any approach, like bioinformatics, any approach that can help us make the right choices with the targets with regard to particular validations in the human context, you can make a difference in that attrition rate that we have.
(Vice President, Head Bioscience, Discovery) The pharmaceutical industry is thus facing the challenge of grappling with chronic diseases such as metabolic disorders, arguably derived from more complex biological metabolic pathways. At the same time, the vice president argued, the existing medicines on the market could be improved and further refined: The medical needs are not filled. If you look at statins that are highly successful compounds, if you look at the actual data, 75 per cent of the people that take statins to normalize their level . . . still go ahead,
and develop cardiovascular disease . . . So there is more need in that area. The same thing can be said for diabetes and kidney disease. But these are complex disorders. They are not simple diseases. (Vice President, Head Bioscience, Discovery) In the operative work, the question was not only what data to use but also how to examine it – ‘piece by piece’ or as being integrated into one single model giving a better prediction of how the organism would respond to the drug: One thing that has been discussed is when we are doing our measurements. In discovery where we work with one molecule at the time – to achieve the adequate degree of sensitivity – we tend to throw away too much information. If we would like to construct a clearer image of how the drug works in the body; that is a pity. It may be that we should not only examine the metabolites from the drug but also the broader picture. (Analytical Chemist, Discovery) The analytical chemist continued: ‘The measurement generates thousands or tens of thousands of data points and to examine this one-by-one; that doesn’t work. That is why we . . . have to look at the whole picture.’ Another factor complicating the work was that not only were substantial amounts of data produced, in many cases the scientists did not fully know what they were looking for: ‘We do not know what we are looking for; we know that we have a parent [English in the original] that is eventually transformed into various things, and then we need to be able to sort out these unknown things fast and safely,’ a biotransformation analyst working in the LO phase explained. The vice president of Bioscience also addresses this lack of comprehensive theories about the functioning of biological systems: These are very heterogeneous disorders. One compound may hit one particular pathway that is relevant for 2 per cent of the patients; it may have absolutely no effect on the other 98 per cent. So stratifying the disease . . . 
individualized or personalized medicine; that is where the future is. (Vice President, Head Bioscience, Discovery) He continued: There are millions of bits of data and information, but we don’t know what they mean; how are they really interlinked to each
other? . . . We don’t have enough knowledge to build that ‘in silico human’. It will come, sometime in the future, but I wouldn’t make any decisions using such systems. Today I would rather go with the established [system], an integrated biology system; that is a mouse or a rat. (Vice President, Head Bioscience, Discovery) In order to deal with this problem, the researchers were in need of more adequate analysis methods and more theoretical models enabling hypothesis-testing. ‘Looking at the whole picture’ was accomplished through the use of sophisticated software, helping structure the data points into ‘tendencies’ that were more easily grappled with than individual observations. The analytical chemist stressed the importance of the new tools: Such a matrix [a form to present experimental data] could be examined for, like, a week by one single person if we were working in the old manner. With the new methods, we may, with some effort, match the timeline in the chromatogram so we could examine it like a ‘pancake’. If you have a ‘pancake’ from each experimental subject you could create a stack; then we have methods to examine the, say, ten largest differences between these persons’ ‘pancakes’. Then you can observe that ‘Alright, here we have something showing that the liver enzyme is affected but that may be caused by consumption of too much alcohol, but here’s something different that we have not observed before. That may have to do with the drug.’ Then you have to continue working. It is not that we push a button and get the full truth, but it is an analytical method scaling down thousands of data points to a few tendencies that can be examined. (Analytical Chemist, Discovery) In the LG and LO phases, the scientists have a number of tools and techniques, such as simulations and modelling, to manage the extensive body of data – that is, to predict the qualities of the molecule.
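The kind of data reduction the analytical chemist describes – aligning one chromatogram ‘pancake’ per experimental subject, stacking them, and ranking the largest between-subject differences – can be illustrated with a minimal sketch. The array shapes, the variance-based ranking and all names below are illustrative assumptions for exposition, not the company’s actual method:

```python
import numpy as np

def rank_differences(pancakes, top_n=10):
    """Rank measurement positions by how much they differ across subjects.

    pancakes: array of shape (n_subjects, n_retention_times, n_masses),
    one aligned chromatogram 'pancake' per experimental subject.
    Returns the top_n positions with the largest between-subject
    variation - the 'ten largest differences' in the quote above.
    """
    stack = np.asarray(pancakes, dtype=float)
    # Variance across subjects at every (retention time, mass) position:
    # positions where subjects differ most are candidate drug effects.
    variation = stack.var(axis=0)
    flat_order = np.argsort(variation, axis=None)[::-1][:top_n]
    rows, cols = np.unravel_index(flat_order, variation.shape)
    return [(int(r), int(c)) for r, c in zip(rows, cols)]

# Toy data: 4 subjects, 5 retention times, 6 mass channels, with one
# position (2, 3) where the subjects genuinely differ.
rng = np.random.default_rng(0)
data = rng.normal(0.0, 0.01, size=(4, 5, 6))
data[:, 2, 3] += np.array([0.0, 1.0, 2.0, 3.0])  # strong between-subject difference

hits = rank_differences(data, top_n=3)
print(hits[0])  # prints (2, 3), the seeded difference position
```

As in the interview, the method does not deliver ‘the full truth’; it only scales thousands of data points down to a few tendencies that a scientist must then interpret.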
However, none of these methods were devoid of limitations and assumptions, largely because of the ambiguities regarding the quality of the input data. Today, rather than using one single method, a combination of methods is juxtaposed to create a more integrated image of the studied molecule. One of the computational chemists pointed at this change in perspective: ‘When I started here, then we were all settled to find The Method, the one best method. Now, we have learned that it is very complicated to say what method is best when predicting.’ No matter what method is used, the key to understanding the molecule – ‘to get a feeling for the molecule’, as one synthesis chemist put it – is comparing
the predictions made against experimental data. The analytical chemist emphasized that ‘in order to simulate you always need to test against reality’. The simulation model had to be verified to play any decisive role in the work. Even in the case where the model is verified, it could be that it had been verified against what is called ‘single-crystal X-ray crystallography’, which is not a fully accurate model of how the substance is actually functioning in the cell. In addition, it may be possible to simulate how flexible molecules interact with fixed receptors, while the interaction between flexible molecules and flexible receptors is too complicated to simulate. The analytical chemist concluded that there are still too many unknowns: ‘You need to understand the mechanisms you’re simulating. I don’t think we know all these mechanisms . . . When simulating humans, for instance, I believe there are quite a few mechanisms that we still don’t have a clue about’ (Analytical Chemist, Discovery). In some cases, simulation models could be used rather effectively to sort out promising compounds among a great variety of alternatives. A researcher in the field of arrhythmia had one example of successful use of a simulation model. In the field of arrhythmia, it is, the researcher argued, complicated to predict side effects: Arrhythmia is a target-poor domain while, at the same time – but that goes for all disease areas more or less – rather complex. When you work in ion channels in the heart, it is very, very difficult to predict what kind of end effect you will get when interacting with a specific ion channel. They affect one another in most complex [patterns]. (Senior Principal Scientist, Pharmacology) To cut down the in vivo testing effort, the arrhythmia research team developed an in silico model that could be used in the early phases of the screening: The first thing we did was to construct a rather simple model . . . We procured an in silico model . . .
and then he [a colleague] modified it by adding typical channels to the heart, transporters, calcium homeostasis . . . to construct the model. (Senior Principal Scientist, Pharmacology) He continued: We used it as a pre-filter [sic]. All that was proved as ‘bad’ in the in silico model was put aside. What worked in the in silico model was validated
in the animal model . . . That was successful. We had never been able to handle the screening in a different manner because the in vivo model is quite time consuming. (Senior Principal Scientist, Pharmacology) However, for most of the research, in silico models could be used only in the earliest phases to identify very crude compounds that were unsuitable for further testing. At the end of the day, it is the combination of in vivo research and clinical data that determines the efficacy of a compound. In general, operative theories guiding the day-to-day work were called for (to be addressed below). A similar view of the possibilities of the new techniques was presented by one of the computational toxicologists, emphasizing the amount of data generated: ‘We generate a hundred times as much data as ten years ago. But assume that we are examining data and using data in an adequate manner to make decisions, and that data is generated in a manner that is really useful: those are two completely different things. Information does not of necessity lead to better or faster decisions’ (Principal Scientist, Computational Toxicology). This sceptical view of the relations between data, information and decision-making was further complicated by the problems in determining the validity and quality of data. The principal scientist in computational toxicology claimed that the most widely discussed scientific challenge was how to ‘evaluate the quality of the data’. He explained the importance of this matter: To make simulations, you need to know a bit about the quality of the data, how reliable the data you are working on is. In most cases, we don’t know that. At times, we don’t know the variation in a certain assay . . . Scientifically speaking, data quality and data scope is debated . . . The basic mathematics and the calculations and such things, underlying everything we do, are more or less solved. We know most of that.
(Principal Scientist, Computational Toxicology) The underlying toxicological models were treated as being unproblematic, but filling these models with data enabling predictions without knowing the quality of the data was something quite different. Despite these ambiguities and uncertainties, the work had to continue under the influence of such limitations: At times, we notice that we have to do something with the information [i.e., experimental data], and then we are modelling and simulating notwithstanding the limitations we know are there,
but cannot do very much about. We continue anyway. Otherwise we need to return to those running the experiment and ask them to do more tests to be able to do some estimation. That can work if they have the capacity. In other cases, we know we need to separate the ‘good data’ from the ‘so-so data’ and treat them differently . . . A significant part of our work is to provide good stuff that is qualified scientifically and that reflects what the data really says. It is not easily accomplished. (Principal Scientist, Computational Toxicology) Being able to use the data provided thus demanded a certain ability to live with a number of methodological limitations. Continuing the work to produce credible and valid predictions was considered of greater value for the activities than grappling with the concerns regarding the model.

The challenges of prediction

The main objective of the work conducted in the LG and LO phases was to identify molecules with promising therapeutic qualities and to predict how these molecules interacted with the organism in terms of distribution, its uptake in the body and toxicology. The ‘downside risk’ (i.e., the toxicology of the molecule) was especially important to predict as accurately as possible. Here, the scientists face a number of challenges that have to be dealt with. The principal scientist in computational toxicology outlined the scope of the challenge: Most toxicological endpoints cannot be modelled very easily. Those that we have a firm mechanical grasp around, there we have enough data to do a proper modelling, I think . . . Liver toxicology could appear through maybe ten to 12 mechanisms and, regarding these individual mechanisms, we do not have enough data. Yet we need to present the data in a credible manner.
(Principal Scientist, Computational Toxicology) The problems associated with prediction were, however, not only scientific in character but also organizational or managerial in terms of the parameters measured in in vivo tests, which were rather limited to reduce cost and time. These procedures were by no means implemented to ‘cut corners’ but were enacted as good clinical practice by international bodies: If you examine what we are looking for in animal studies, the list is quite limited, actually. The endpoints we really explore are very
few . . . We should get more information from the animal studies, quite simply. But it may be that it is not enough. Being permitted to study dosage in humans you need to conduct [successful] safety studies [i.e., toxicology studies] in two species. We usually use rat and dog. But is this enough to fully reflect what happens in humans? Not a chance! How can we deal with that? That is yet another thing to handle. I don’t have the answers . . . What we really measure in the animal studies and sorting out everything regarding these studies is the key to an improvement, anyway. (Principal Scientist, Computational Toxicology) One of the consequences of the lack of adequate data, and the suspicion that data is not valid or otherwise not of sufficient quality to allow for predictions, is that there is a need for more clearly articulated theoretical models guiding the research work. One of the principal challenges in new drug discovery is to make predictions of how the molecule will interact with the biological organism on the basis of in vitro studies, in ‘reduced biological systems’ such as cell lines. The problem is that it is complicated to make such predictions: ‘Unfortunately, it is not always the case that in vitro corresponds to in vivo,’ one of the computational chemists said. ‘In two of the projects, I have seen no substance [studies in vitro] predicting in vivo,’ a biotransformation analyst in the LO phase admitted. She continued: We have this discussion about what’s called in vitro/in vivo. In vitro are these reduced systems selected where we try to identify data. It shows in many cases that you cannot predict your in vivo situation on the basis of in vitro data . . . We have quite extensive discussions in our section . . . What does in vitro give us? If we compare these projects where we have both in vitro and in vivo data, how well are they corresponding? What are the differences and how do we identify the differences? . . .
When it comes to tox [toxicology studies] and reactive metabolites, then we need to understand the broader picture because there are so many different things influencing . . . The safety issue is much more about understanding the broad picture. (Biotransformation Analyst, LO) Another problem when seeking to predict the efficacy of the drug was the movement from animal models in the in vivo studies (normally using species like rat, mouse, or dog) to humans. Even though the animal models provided very valuable information, many of the
researchers emphasized that it is complicated to predict the efficacy of the drug and its potential side effects on the basis of animal studies: ‘After all, rats and dogs are not the same thing as humans,’ a researcher said. He continued: Of course, it is a major difference between a test tube and a living, functioning rat or dog or primate or whatever it may be . . . [but] side effects related to the pharmacological effect could be totally impossible to predict before you test in humans. We’ll continue to see drugs failing in the early [clinical] phases. (Senior Principal Scientist, Drug Metabolism and Pharmaco-kinetics) For instance, the arrhythmia researcher provided an illustration of this problem. The animal model, based on the inducing of arrhythmia through electro-chemical manipulations of the heart activity, was capable of providing substantial amounts of useful data, but it eventually did fail because of unpredicted side effects: We have had the opportunity to bring a few substances from the animal model into humans . . . We induce arrhythmia over a period of six weeks, and then eight out of ten dogs develop arrhythmia. Even if we switch off the pacemaker, the arrhythmia remains and there we have an outstanding efficacy [on the substance] . . . We also documented the mechanism behind it . . . and we were able to bring the substance into humans. We were able to demonstrate an excellent efficacy for converting arrhythmia and it also occurred in the same plasma and areas of concentration. (Senior Principal Scientist, Pharmacology) These early stages of advancement did, however, end when an insurmountable obstacle occurred; many of the human patients developed ‘flu-like symptoms’ that the research team were never able to explain and that, to date, remain a mystery. The drug was abandoned and the researchers learned that humans may produce responses not observed in the animal models.
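The screening funnel running through this section – a cheap in silico pre-filter, then the expensive in vivo animal model, as in the arrhythmia team’s workflow – can be sketched as a sequence of increasingly costly filters. The compound representation, the property names and the thresholds below are purely illustrative assumptions, not figures from the study:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Compound:
    name: str
    # Hypothetical properties; real pipelines rest on far richer data.
    predicted_toxicity: float  # in silico estimate, 0..1
    animal_efficacy: float     # measured later in the animal model, 0..1

def in_silico_prefilter(c: Compound) -> bool:
    # Cheap computational screen: everything proved 'bad' in the
    # in silico model is put aside, as in the arrhythmia project.
    return c.predicted_toxicity < 0.5

def in_vivo_filter(c: Compound) -> bool:
    # Time-consuming animal model, run only on in silico survivors.
    return c.animal_efficacy > 0.6

def screen(compounds: List[Compound],
           filters: List[Callable[[Compound], bool]]) -> List[Compound]:
    survivors = compounds
    for f in filters:  # each stage narrows the funnel further
        survivors = [c for c in survivors if f(c)]
    return survivors

candidates = [
    Compound("A", predicted_toxicity=0.2, animal_efficacy=0.8),
    Compound("B", predicted_toxicity=0.9, animal_efficacy=0.9),  # fails in silico
    Compound("C", predicted_toxicity=0.3, animal_efficacy=0.4),  # fails in vivo
]
passed = screen(candidates, [in_silico_prefilter, in_vivo_filter])
print([c.name for c in passed])  # prints ['A']: only A survives both stages
```

The arrhythmia case shows what the sketch deliberately leaves out: a compound can survive every such stage and still fail in humans, since some responses (the unexplained ‘flu-like symptoms’) appear only in the clinic.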
In summary, not only is the translation from in vitro or in silico testing to in vivo animal models uncertain, but the transfer from in vivo animal models to studies in humans is also at times complicated. The inability to predict in vivo outcomes on the basis of in vitro studies further underlines the importance of solid theoretical models guiding the experiments and the modelling and simulations. The
computational toxicologist stressed the need to understand how biological processes work prior to empirical investigations: Drug discovery is very much dependent on how well the generic disease models work, animal models or models in man . . . These models are the foundation for much progress. The question is if we believe we will be able to identify these models more effectively. That is a tough job. (Principal Scientist, Computational Toxicology) He continued by making a comparison to physics, where theoretical models, unlike in the case of biology or medicine, preceded the experimental situation: If you look at physics, for instance, you notice that they [physicists] often have a reasonably well-formulated theory underlying their experiments. If you look at biology and medicine, there are experiments done and then comes the theory. Then you are in the awkward position of generating substantial amounts of data that cannot be examined at all because you don’t know why this [data] is generated. (Principal Scientist, Computational Toxicology) In comparison, conducting research in physics is ‘easy’, the computational toxicologist claimed, because physicists examine ‘quite simple systems’, while in biology the systems are irreducible and demonstrate what complexity theorists call emergent properties, the capacity to change en route as new conditions emerge. The lack of widely shared theoretical frameworks and of computer power inhibits the simulation of such non-linear, emergent properties. The recent advancements in the life sciences in the field of genomics offer some new tools and techniques for examining how molecules interact with the biological organism. However, in contrast to popular belief and the general media hype regarding these new technologies, the various techniques ending with ‘-omics’ (pharmaco-genomics, toxicogenomics, metabonomics, etc.), and thus addressed as ‘the omics’ in interviews, played a rather peripheral role in the problem-solving.
‘The omics’ did not serve the role of exact and accurate methods for screening large spaces of molecules, but were tools of practical value in problem-solving. According to the computational toxicologist: We can use metabonomics and toxicogenomics in the problem-solving work. When we get stuck with some problem in the animal models or
the clinical studies, then we need to explain the mechanisms behind it. In such cases, we can use these broader methods like metabonomics and toxico-genomics . . . Our view on this is that they are tools for problem-solving rather than for screening everything because that [method] has not been capable of offering what was promised ten to 15 years ago. (Principal Scientist, Computational Toxicology) The analytical chemist was also rather unimpressed by the new techniques developed in the life sciences and claimed that the methods used today were more or less the same as in the early 1980s, even though the technology had improved. It was, again, the absence of new theoretical frameworks, rather than the new opportunities for connecting genotype, phenotype and specific substances suggested in the pharmaco-genomics framework, that played a central role: Mathematics is not developed as fast as nuts and bolts, or electronics for that matter. We use the same methods as in 1980 . . . The mathematics have been developed, so we rotate the solutions in a new manner and we get a clearer image . . . The most important thing is that we have more computer power so we can use the same mathematics on significantly larger data sets . . . the volume [of data] we are examining is much, much larger . . . You may say that we have better technologies but the same tools. (Analytical Chemist, Discovery) One of the synthesis chemists suggested that, in comparison to the biological sciences, medicinal chemistry was ‘less sensitive’ to scientific advancement: ‘The technology used in biology is much more sensitive. “We no longer do like that,” they say. We wouldn’t say that in chemistry . . . Therefore, we are not that vulnerable’ (Synthesis Chemist, Medicinal Chemistry). The mild scepticism regarding the value of the various omics technologies, however, did not suggest that there was no improvement in output.
All interviewees emphasized strongly that the use of data and the subsequent accuracy of the estimations and predictions were substantially higher today than, say, ten years ago. An indication of the usefulness of the new methods was the output of new candidate drugs and their qualities, ultimately tested in the clinical trials in the development phase. The computational toxicologist stressed the quality of the output: If you take a look at our pipeline, I think the quality of the substances has been much better. That means that we have more substances in the development phase, awaiting resources, rather than having a large
group in clinical research, sitting there, waiting for new things to be delivered from discovery. Now, there are too many substances and too few resources in development to process all these substances. That tendency is positive. The clinical teams have a portfolio [of substances] to choose from . . . Whether that will lead to new registrations [of new drugs], no one knows. (Principal Scientist, Computational Toxicology) The experimental system of the LG and LO phases apparently manages to construct and stabilize epistemic objects, molecules being the active component in the substances of the new candidate drugs, that could be fed into the clinical trials. At the same time as many of the scientists emphasized the more detailed use of data and the output of new candidate drugs, they were aware of the general decline in the output of newly registered drugs and the soaring new drug development costs in the pharmaceutical industry. The synthesis chemist was frustrated about this poor pay-off: ‘Who’s best in class [of the major pharmaceutical companies]? There is no one delivering any new substances’ (Synthesis Chemist, Medicinal Chemistry). He concluded with a somewhat dystopic analogy: ‘It is a bit like Rome [the Roman Empire]. First, things were fantastic, and then it all went to hell.’

New technologies and scientific approaches

As suggested above, new drug development is a sophisticated application of state-of-the-art technoscience to produce new medicines under the influence of financial markets and market demands. While basic science may be justified in terms of being based on what economists call a market failure, potentially providing know-how that is socially useful but not yet possible to finance by market activities, pharmaceutical companies are closely tied to the world’s stock markets and financial markets.
One of the major concerns for pharmaceutical company executives is how to align the rather shortsighted financial markets with the more long-term time perspectives demanded in new drug development. Since science by definition cannot promise any output ex ante, the dominant strategy among major pharmaceutical companies has been to point at the content of the project portfolio, the number of candidate drugs in the various stages of the new drug development process. This has led to what, at times, is addressed as ‘the numbers game’ in PharmaCorp – the strong emphasis on the very number of molecules synthesized, candidate drugs to select from and drugs in various stages in the clinical trials. However, underlying these accumulated and rather dry figures, diagrams and prospects is the fluid and
fluxing world of the laboratory scientists and the tools, equipment and procedures employed in the work to identify, synthesize and verify the qualities of molecules, eventually serving as the active compound in new candidate drugs. This technoscientific setting is characterized by restless movement and change as new tools and technologies are constantly invented and brought into the laboratory practices. This adoption of new technologies is, however, never devoid of inertia and resistance; time, energy and prestige already invested in pre-existing scientific methods serve to limit the acceptance of new methods. In addition, the costs of evaluating and verifying new methods are substantial. New methods are not simply dropped into existing technoscientific frameworks; they need to be carefully located within established procedures. However, the interviewees emphasized the radical changes in procedures and output during the last ten years. ‘Above all, we have at least doubled the capacity in terms of number of substances passing through an assay, maybe tripled since 2006 . . . A lot has happened in throughput [English in the original],’ a biotransformation analyst working in the LG phase argued. He estimated that the use of information had increased by a factor of 50 since the late 1990s and that, during the last three years alone, the number of assays had increased by 20 per cent. ‘We have been under pressure to increase the capacity,’ the biotransformation analyst admitted, making a reference to the strategic objective to ‘front-load the DMPK studies’ (drug metabolism and pharmaco-kinetics) in the company to be able to predict the metabolism of the drug better, and prior to the extremely costly clinical trials. In addition to such organizational changes, improvements in the technologies used and in the equipment had made a worthwhile contribution to new drug development work.
For instance, mass spectrometry, a standard technology in medicinal chemistry and the DMPK work, was today deemed to be ‘better and faster’ and ‘more detailed’, scientists working with the technology argued. In the case of liquid chromatography, another well-established technology for the identification of metabolites – the substances produced in the biological system as the drug is being absorbed by the body – the pumps pushing the substance through a so-called column, a ‘pipe’ densely packed with small pellets, were capable of working at higher pressure, thereby enabling more detailed analyses. In addition, the columns had been improved and had a higher density than previously: I work with mass spectrometry and liquid chromatography a lot and we have two things limiting us capacity-wise. We use something
called UPLC [ultra-performance liquid chromatography] . . . That has managed to increase the density of the particles in the column. Chromatography comprises small balls, and the smaller they are the higher the efficiency in the separation we are aiming at. They [the equipment company] were the first to manage to reduce the particles’ structure and pack them. In addition, they have built a system capable of handling these pressures, because the pressures are very high. All this has helped reduce the time for analysis by 100 per cent, from 13 minutes, which was fast for the conventional way in biotransformation, to six minutes. (Biotransformation Analyst, LO) ‘Today, we have an adequate degree of sensitivity in the instruments. That is not a major concern any more,’ a biotransformation analyst working in the LG phase claimed. ‘Technology-wise, we are at the front end of biotransformation [science],’ she concluded. In addition to the improvement in the ‘hardware’, new software enabling a computer-aided analysis of the chromatograms produced was developed: An important and serious thing is the data processing, but here we have new software helping us . . . We do not know what we are looking for; we know that we have a parent [emphasis in the original] that is eventually transformed into various things and then we need to be able to sort out these unknown things fast and safely. When we do in vitro studies, which are quite limited, we can still guess what will happen but it is very time-consuming to identify all these things. We have very low concentrations and there is a lot of background noise intervening, so it is not very evident what is happening. But we use this software, which helps us do all these things very, very fast. It accompanies the test with a ‘blank’ [to calibrate the machine], and everything it might find in the test that is not present in the blank is sorted out as a possible metabolite.
Then we need to intervene to determine whether this is the case or not. If we did not have access to that software, we wouldn’t be able to run that many tests. (Biotransformation Analyst, LO) ‘It may take five to ten minutes to do what previously took you 30–60 minutes, so that is quite a difference,’ the biotransformation analyst argued. Another technology that had been developed and greatly increased the capacity of the work was the use of omics technologies. For instance,
the production of enzymes has been improved with the advancement of the new technologies: There are methods developed for the production of enzymes so you don’t need to isolate them. You no longer need to take a number of livers and mash them and isolate the enzymes but you actually produce them and use them in in vitro systems to do tests. You can make the preparations so pure that you know for sure that there is just this one single enzyme and nothing more. That has changed the perspective quite substantially. (Senior Principal Scientist, Drug Metabolism and Pharmaco-kinetics) However, some of the other researchers claimed that omics technologies had influenced the new drug development output only marginally: ‘Not that large influence . . . When it comes to new drugs on the market then it does not matter. But it has led to an increased understanding of the genetic components in cardiac arrhythmia’ (Senior Principal Scientist, Pharmacology). What was regarded as being particularly unsuccessful was the genomics research conducted in collaboration with universities, aimed at identifying ‘association genes’ (genes correlating with certain disorders): If you examine such whole gene association studies, they have not been capable of delivering what has been expected. These expectations were possibly somewhat exaggerated, but now they are cutting down in this field of expertise, in the omics. Personally, I believe that is totally wrong because we need to use the technologies in a better manner . . . I think it will play a most important role in the future. (Senior Principal Scientist, Pharmacology) By and large, the screening of molecules and their testing against in vitro and in vivo models is a procedure that is becoming increasingly automated. At the same time, there are few possibilities for creating ‘machineries’ for new drug development that feed out new candidates on an industrial basis.
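One small piece of this automation, the blank-subtraction screening described by the biotransformation analyst above, can be illustrated as a simple comparison of detected signals: anything found in the sample run that is absent from the calibration ‘blank’ is flagged as a possible metabolite. The sketch below is a hypothetical illustration only – the vendor software is far more sophisticated, and the function name, the peak values and the matching tolerance are all invented for the example:

```python
def candidate_metabolites(test_peaks, blank_peaks, tol=0.01):
    """Return signals present in the test run but not in the blank run.

    test_peaks, blank_peaks: lists of detected m/z values (floats).
    A test peak counts as 'present in the blank' if some blank peak
    lies within the tolerance `tol` of it.
    """
    candidates = []
    for peak in test_peaks:
        if not any(abs(peak - b) <= tol for b in blank_peaks):
            # Not explained by background: a possible metabolite,
            # which the analyst must then review manually.
            candidates.append(peak)
    return candidates

# Invented numbers, for illustration only.
blank = [100.05, 151.10, 203.22]           # background signals
test = [100.05, 151.10, 203.22, 317.18]    # same background plus one new peak
print(candidate_metabolites(test, blank))  # -> [317.18]
```

As the analyst notes, the software only narrows the search; the human step of deciding whether a flagged peak really is a metabolite remains.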
What the scientists called for was more time to think carefully about what the data actually mean and to construct credible analytical models guiding future research. Some of the interviewees also pointed at the inertia in adopting new techniques and tools in new drug development. Individuals as well as scientific communities invest time, energy and prestige in certain procedures of working, and they show a great deal of resistance when forced to abandon such
favoured procedures. One of the analytical chemists addressed this issue in detail: Mathematics takes a long time to develop but it also takes a long time to teach people mathematics; maybe not one single person, but for an entire population it takes a long time. Once you’ve learned linear regression you are happy with that and use it for the rest of your life – unless you’re interested [in learning new methods], right. (Analytical Chemist, Discovery) He continued: ‘A certain group of people may get this idea of how they work and then they work with that method until they realize that “Here we hit the roof and we can move no further, or we need to work faster.”’ One of the approaches to overcoming such incumbent scientific procedures is to hire new people, especially those newly graduated from universities, capable of adopting new working methods and perhaps formally trained in newly developed techniques: One negative factor is that we are not hiring new people. The best combination is to have new people suggesting ‘let’s do it like this’, but you must also have the experience of older workers saying that ‘it doesn’t work’ or ‘this works very well’ and then there’s a discussion between the two. (Analytical Chemist, Discovery) One of the risks of not hiring new people is, besides the conservation of established ways of working, the loss of what the analytical chemist referred to as the ‘change affinity’, the willingness to change the predominant modus operandi. On the other hand, some of the interviewees argued that, with today’s advanced technologies, there was less demand for knowing all the advanced mathematics underlying the scientific calculations and estimations. ‘You don’t even need to be skilled in mathematics any more . . . There is a lot of black box thinking: “Push here and you get your model”,’ one of the computational chemists argued.
Fashionable technologies

Another factor to take into account when explaining the inertia in new technology adoption is that many of the scientists with long organization tenure had experienced a significant degree of ‘techno-hype’ in the 1990s. For instance, high-throughput screening technology (HTS), used to screen large libraries of molecules and detect interesting prospects for the LG and LO phases, was put forth as a major new technology
in the 1990s. The experience from using HTS was, unfortunately, disappointing and the new drug development scientists learned the hard way not to be led astray by the promises made by proponents of new scientific technologies. One of the synthesis chemists addressed this topic: All throughout the 1990s, it was solely a matter of pushing through substances . . . We felt that this was all ridiculous. When we received the results, in the 1990s . . . there were thousands being positive, and there was no chance of handling them all. (Synthesis Chemist, Medicinal Chemistry) After using HTS for a long period of time, a new situation has developed: Today, the pendulum has swung back to chemistry or medicinal chemistry and now I work on a really mature project – too mature, I think at times . . . with thrombocytes, that is anti-coagulants. It is like a holy cow here, I think at times. On the other hand, being persistent is a virtue. We have noticed that both here and in Southville. We mustn’t do this but we still do because we believe in it. (Synthesis Chemist, Medicinal Chemistry) Just like most other domains of contemporary society, there is a certain ‘fashion effect’ that pervades everyday life. Scientists – professional sceptics, if you like – are no less exposed to the institutional pressure to adopt what is regarded as the ‘latest thing’ in their field of expertise. One of the recent buzzwords in the field (discussed in more detail in Chapters 4 and 5) is systems biology, the use of biocomputation methods to examine large data sets in order to understand how data are interrelated. Since the omics technologies are capable of producing massive amounts of data, systems biology holds a promise for the industry in terms of being a tool for structuring and sorting out all the data. The researchers were interested but still mildly sceptical about the promises made regarding the value and use of systems biology. ‘If you look back at genomics . . .
one cannot say for sure that systems biology is not a hype of the same kind’ [as HTS], one of the researchers claimed. Another researcher was more positive but underlined that it may always be possible to find something of interest when examining large data sets: If you add everything [to the analysis], it would be strange if it could not make some impression on new drug development . . . Combining
not only genomics but also transcriptomics and proteomics in a large complex system would give you a much higher precision than if you only look at the genomics . . . But to say that a particular gene is connected to a certain disease [is more complicated]. (Senior Principal Scientist, Pharmacology) The vice president of the Bioscience department was, just like many medical researchers in the field of physiology, convinced that systems biology is, by and large, a rehabilitation of a longstanding tradition in medicine, marginalized for a number of years when the Human Genome Project, the Human Genome Diversity Project and other ‘Big Science’ projects in the field of genomics dominated the field at the expense of a more integrative view of the biological organism: Systems biology is a fancy term for something we have been doing forever. What you are doing in in vivo studies, that’s systems biology . . . That is the most reliable way we currently use to be able to make decisions as we move forward a compound from one stage to another . . . What the difference is here, at least in my mind . . . it has been able to look at processes in a sort of a neural network-like system where you can have an ‘in silico animal’. This does not exist today, right. So systems biology is a very poor, in my mind, approximation of an in vivo model. (Vice President, Head Bioscience, Discovery) He continued: What they are doing, they are using current knowledge to create networks in a sophisticated computer system and try to understand what pathways are turned on in one or another direction and reach conclusions – that is, in my mind, very naive, because there are many other types of interactions that we don’t know of; if we don’t know it is not in there . . . We’re far from actually using systems biology, that is evident from the last ten years. There has been a lot of profiling, a lot of omics, proteomics, genomics . . . totally worthless! Nothing came out of that stuff!
(Vice President, Head Bioscience, Discovery) Other concepts and terms that have taken hold in the industry are personalized or individualized medicine and the promise of emerging stem cell research. Personalized medicine was generally regarded as something to be developed in a distant future, but there was a shared belief among the interviewees that drug responses could differ between ethnic
groups. For instance, some drugs have been proven to work for Asian population groups while Caucasians do not demonstrate an adequate response. For example, the lung cancer medicine Iressa™, developed by AstraZeneca, was not approved in the USA and Europe but was approved in Japan. The researchers thought that, in the future, medicines may be developed for specific ethnic groups if the industry manages to establish new drug development models that can handle such targeted drugs effectively. ‘We need to be able to focus on smaller products that are still valuable for a smaller population and still pay off. It would be a good thing if we could develop a new drug without the costs being $2 billion,’ one researcher contended. Another tendency in the industry is to move ‘upstream’ to prevention therapies. For instance, in the field of heart arrhythmia, there is a sense that most targets have been explored and that the best way to accomplish better therapies is to move into preventive medicine. The arrhythmia researcher accounted for the rationale of this strategic change: There are many components. One principal driver is the research on arrhythmia over the last five to ten years, which can be summarized as ‘AF begets AF’. Arrhythmia per se affects the heart both electrically and structurally and the disease becomes chronic. If you could intervene in the process and prevent this remodelling process, both structurally and electrically in the heart, then there are possibilities for both primary and secondary prevention . . . A lot of research suggests that it is a most attractive way to treat patients. Another driver is that we have this feeling that we have emptied all the targets regarding treatments. (Senior Principal Scientist, Pharmacology) Both pharmaceutical industry representatives and external commentators remark that, these days, drugs as well as medicine more generally target healthy persons to prevent future illnesses.
For instance, a person suffering from obesity lives a normal life and may be, by definition, healthy, but since research shows that there is a strong causal link between obesity and a range of metabolic disorders and cardiovascular diseases, obese persons may become subject to preventive care. The reason for pharmaceutical companies moving in this direction is not primarily, as some critics would argue, financial, but that some fields of research have proven to identify only so-called ‘non-druggable targets’, and contributions to preventive medicine may therefore be more effective for both the pharmaceutical company and the patients.
Regarding stem cell research, ‘the area that provides the biggest hopes for the future’, as one of the interviewees put it, there is some expectation regarding what may come out of that field of research. In general, the pharmaceutical industry representatives did not expect to see any therapies where stem cells were brought into the human body to repair, for instance, a damaged liver, but thought the entire field may provide important basic research in terms of enhancing the understanding of biological systems, which in turn may enable new target identification or a better insight into the functioning of the biological pathways. However, stem cell research remains rather peripheral to much new drug development work, which essentially operates within the small-molecule drug model used for quite some time in the industry. Still, adhering to some general ‘trickle-down theory’, breakthroughs in stem cell research would eventually have implications for new drug development.

Managing and organizing new drug development work

Given the concerns regarding the possibilities for predicting and selecting molecules for new candidate drugs, the very organization of new drug development is of great importance for the outcome. Staged as a series of interrelated but sequential phases, the organization of new drug development demanded that the loss of information between the various phases be minimized. Among other things, the new drug development process had been subject to various total quality management analyses, most recently a so-called Lean Sigma project aimed at streamlining the activities. Reducing lead times in the new drug development process was, in general, emphasized as a key to competitive advantage: ‘We work with these shorter cycle times that are fashionable nowadays. Important data to the projects are to be delivered within ten days so that the project can get their data back much faster,’ a biotransformation analyst working in the LO phase said.
The so-called ‘ten-days rule’ had affected the work in the department ‘very much’, she claimed, giving her a sense of being closely controlled. In general, the laboratory scientists believed that they were not only struggling with sorting out and understanding the chemical space, but also had to endure an endless flow of managerial policies and directives. For some, this was largely an indispensable ‘part of the game’, while for others it was a more annoying factor interfering with what was regarded as ‘value-adding work’. The dominant doctrine in the industry has been what was derogatorily addressed as ‘the numbers game’ among scientists, the idea that, in order to bring one successful drug to the market, thousands
of molecules and candidate drugs need to be screened. ‘We work in a situation where quantity is more important than quality in new drug development,’ the senior principal scientist in pharmacology remarked. The emphasis on quantitative output and fast delivery did, however, somewhat paradoxically make the decision-making at the executive level even more complicated because of the increased number of opportunities. One of the biotransformation analysts emphasized this point as a partial explanation for the failures to deliver new innovative drugs to the market: ‘It is more and more complicated to make decisions. Everything we measure may prove to be negative for the substance. Seven or eight years ago, when we did not measure as much, things looked much better. It was easier to deliver chemical series a while into the LO phase’ (Biotransformation Analyst, LG). Another issue that was addressed was the size of the company, making the distances between different departments too great for them to collaborate closely: ‘This is one of the great dilemmas for this kind of company: The distances are vast. There are no natural networks,’ a biotransformation analyst working in the LO phase argued. She continued: ‘If there is one single thing I think the company should work on, it is to improve the collaborations between the departments . . . It is not the case that it works very badly – it works reasonably well – but it is very much about the individuals.’ One of the synthesis chemists addressed the same issue in somewhat more critical terms: I am concerned about these large units . . . I think we are seeing little more than policies. It is quite rare that our morale is boosted, making us enjoy work more. That is what you observe, these organizations – they die! They are totally preoccupied with policies. When did we have a meeting about the enjoyment of work or creativity? When did that happen? . . .
What you hear from top management is either ‘How can we possibly save money?’ or ‘How can we possibly make x more molecules?’ . . . It is this hype about ‘making six CDs [candidate drugs]’ . . . and if we fail to do so, it is a catastrophe . . . There is no discussion about the projects from a scientific point of view, but only this numbers game, because that is easily handled – ‘they should do six, and we do two, and they did only one’. (Synthesis Chemist, Medicinal Chemistry) A close collaboration between the departments was critical for effective identification of promising new molecules. For instance, when the synthesis chemists in the medicinal chemistry department identified a
molecule they believed could have the properties sought, they wanted to have certain data verified by the DMPK department as soon as possible: ‘Medicinal chemistry, we are so dependent on others. We have the molecules and we would like to get answers from DMPK as soon as possible . . . They [the departments] are so big they start to live a life of their own’ (Synthesis Chemist, Medicinal Chemistry). Besides the concerns regarding the sheer size of the firm and the functionally organized departments, there were organizational cultures and formal directives preventing an open and direct communication, the synthesis chemist argued: ‘It is a tragedy it has to be like this. That is a signal that we are too big. We can no longer speak to one another. We cannot. I cannot drop by your office, if you worked in DMPK, to ask for a favour. You may say “hey” perhaps, but I cannot ask “could you, please . . .” [help me with this and that].’ For the synthesis chemist, large units paired with the emphasis on quantitative output created a situation where creativity and motivation gradually evaporate. At the same time as the size of the firm was addressed as a major concern for both PharmaCorp and the industry in general, having all the resources and competencies in-house was regarded as a prerequisite for long-term competitive advantage in the industry. ‘I think we need to have a group of people that have a rather broad but also deep competence to get all these parts together,’ the biotransformation analyst argued. The researchers also complained that they had to administer and take care of an increasing number of activities outside of what they thought of as value-adding work: ‘In general, you could say that the time available for the core activities is constantly shrinking every single year for various reasons: supporting activities, reporting. That is bothering me very much but I think it is hopeless to do anything about it,’ the senior principal scientist in pharmacology said.
In order to recover the ‘creative edge’ of the site, a stronger emphasis on ‘creative thinking’ was called for: We need to reinforce the creativity and terminate this ‘numbers game’ and the matrix system we are using. If you look at the situation in arrhythmia, in principle everyone engaging in laboratory work is fully occupied to 110–120 per cent of their time to deliver screen data to feed all these assignments we are given. This model derives from some Andersen Consulting model introduced many years ago . . . I lose all my patience when I think about it! We need to get rid of that. We also need to get rid of the process model for developing new drugs because that kills all creativity. They have skilled experts running
assays and then they deliver the data and then they start with the next campaign. They don’t seem to look at the data and note ‘This looks odd!’ . . . We need to create small, cross-functional teams working closely together. We need to focus on quality and not on quantity. (Senior Principal Scientist, Pharmacology) In other words, the issue of leadership was identified as one of the major challenges for both the industry and the specific site.

Leadership

Several of the interviewees thought that the leadership had, over time, changed in focus from scientific to managerial objectives and interests, and that the performances of the leaders were rather unimpressive at times. Having some experience of setbacks in the late development phases, the company and its managers demonstrated a risk-averse attitude that at times was criticized: money rather than science was the number one priority for managers, some of the interviewees argued. The increased emphasis on managerial control and various ‘quality models’ also intervened in the scientific work, they thought. One of the analytical chemists addressed this issue: I think they [managers] want us to work on what we have been assigned to do. ‘We have this task and then we have Lean Sigma and one mustn’t digress from this path’ . . . I believe that is effective in the short-term perspective, but I also think we need a certain share of that other thing, to do new things. Otherwise, we become a conservative company and someone else, smaller and not as well organized in terms of controlling the activities, will make the new advancements.
(Analytical Chemist, Discovery) The analytical chemist gave as an example the biostatisticians in the development organization and their unwillingness to make things more ‘messy’ than necessary when ‘billions of dollars’ are at stake, an attitude leading to much potentially interesting data being poorly explored: To use the data that is already available, I believe no one is against that. The conflict is more with development. There is a substantial fear . . . Among their statisticians, there are some actively preventing the use of such methods [use of more data sets] . . . At the bottom line, it is a financial matter. When registering a drug, there are
billions at stake; if you believe that it will be a big drug, then you lose billions every year the drug is not on the market and that generates a tremendous conservatism among these people handling all this. They take on the role of the fiscal authority; they become the police. (Analytical Chemist, Discovery) A researcher in the pharmacology department also addressed these topics: We try all the time to make people more effective and make them accomplish more with less. What I have experienced quite strongly the last few years is that the time for sitting down, to think and reflect – you may take that time, but it gives you a bad conscience even though that is what we’re living off. I think that is very serious. (Senior Principal Scientist, Drug Metabolism and Pharmaco-kinetics) He continued: It feels like we are using this ‘conveyor belt principle’, that people are just engaging in what they are hired to do and do not care too much about thinking on their own, because the people on the next level are the ones taking care of that . . . I am very concerned about that. (Senior Principal Scientist, Drug Metabolism and Pharmaco-kinetics) The general sense of losing the ‘creative edge’ was a persistent theme in the interviews: someone smaller and more creative may easily undermine the position of the major pharmaceutical companies, this story suggested. One of the consequences of ‘initiatives’ such as Lean Sigma (i.e., improvement projects aimed at enhancing efficiency and effectiveness) was that ‘you get a certain control over the organization but you offer less space for individuals to move in the direction they believe in’, an analytical chemist argued. One of the factors that need to be taken into account in all new drug development, the analytical chemist argued, is the residual factor of luck.
Unfortunately, he added, ‘Constructing process diagrams [a Lean Sigma technique], and such things, does not really promote luck.’ He continued: ‘We do more of that than a few years ago . . . But I also know our leaders say they want us to “spend more time in the laboratory” and that is what matters, so we have these two opposing forces.’ However, what mattered at the end
of the day was not the ability to excel in Lean Sigma activities but to think in new and creative terms: What is important is time for creativity. They [managers] are afraid that people are not doing what they are supposed to do . . . That they don’t get enough ‘bang for the buck’, if you want to put it that way. People are not efficient enough and they have too high salaries for doing other things than they are hired to do. (Senior Principal Scientist, Drug Metabolism and Pharmaco-kinetics) Rather than nourishing creative thinking in a scientific setting, top management engaged in a variety of cost-cutting pursuits, further increasing the burden on the researchers to deliver more with less: There is no end to this ‘efficiency talk’. It is not a case of ‘Once we’re at this level, everything will be okay’ – it is always this talk about cutting down this many per cent every year, and continuing like that. (Senior Principal Scientist, Drug Metabolism and Pharmaco-kinetics) A biotransformation analyst working in the LO phase also addressed leadership practices as a major issue for the long-term competitiveness of the firm, suggesting that leadership should be ‘evaluated better than it is to accomplish an improvement’. Based on her experience of working in a number of projects led by different project leaders, she thought the differences in leadership practice were a concern, partly because of the substantial differences in what she was expected to contribute: ‘Some project leaders withhold information, make decisions on their own, run things on their own, and are more individualist than involving the rest of the team,’ the biotransformation analyst argued. She thought that there was ‘a significant degree of prestige, or fear’ among the project leaders, preventing them from leading the project effectively.
Another theme addressed by some of the interviewees was the clash between the Swedish culture at their site and British or American management traditions, arguably emphasizing more hierarchical relations than the Swedish setting does. ‘There are more and more foreign managers and they love hierarchies and control and power and beautiful business cards,’ the synthesis chemist claimed. This ‘new managerial class’ was not necessarily trained in the life sciences but could have an engineering or business school background, thereby further reinforcing the divergent views between laboratory scientists and the managerial quarters. During
146
Venturing into the Bioeconomy
a significant period of time, there has been an emphasis on implementing a variety of managerial tools and routines that would enable transparency of the operations. Brought into the company under the auspices of ‘rationalization’, these methods were regarded as being easily understood but nevertheless poorly capturing the underlying complexity of the operations: ‘Quite often, people want to simplify because that is rational and you can get it into an Excel table [a spreadsheet in the Microsoft program Excel]. It is “Excel-ified” in a curious way; all of a sudden there are ones and zeros. People [e.g., managers] like that: “Green light, red light”’ (Synthesis Chemist, Medicinal Chemistry). Other interviewees thought that the very project organization, divided into stages demarcated by milestones and toll-gates, was leading to a loss of information: ‘It is too much divided into parts: one team works until MS 1 [Milestone 1] and delivers one package and then you deliver it to the next team, working to MS 2 [Milestone 2], and they do exactly what they are expected to do and nothing more. In all these handovers [English in the original], we lose a lot of information,’ a computational chemist argued. In addition, the frantic desire to move ahead with the project excluded the analysis of what potentially interesting data might mean for the molecule explored or the biological organisms: ‘It is not too often we examine what goes wrong’ (Computational Chemist, PharmaCorp). Much of the critique surfacing regarding the working conditions and the policies enacted in the company is perhaps derived from the very tough economic conditions at present in the industry, summarized by the vice president of the Bioscience department: ‘We’re under pressure. This place is a . . . business. You have to feed the pipeline and make some decisions to move things forward.’ Operating under uncertainty increases the propensity to use managerial models that promise to increase transparency.
Hence, the ‘Excel-ification’ of research and the use of Lean Sigma practices.

Professional ideologies and identities: where science meets the market

Professional identities emerge at the crossroads of many different social, economic and cultural fields; they are, like all identities, a composite including many different components, a blend of collectively enacted beliefs and personal convictions. In the pharmaceutical industry, one of the domains where the intersection between science and the capitalist economy is most salient, professional identities are in most cases characterized by a care for the scientific work combined with
a pragmatic understanding of the trade-offs, choices and selections needed when bringing a new drug to the market. However, at the same time as the interlocutors claimed they were aware of and understood how things worked in the executive quarters, they also tended to deplore the increased emphasis on managerial issues, on hierarchical organization forms, on policies and guidelines, on management control initiatives such as Lean Sigma, and other things sorted under the label ‘management’. For many of the interlocutors, the management of the organization was some kind of additional and artificial supplement to the core of the activities, the scientific endeavours to produce new therapies, that had over time moved from the periphery to the centre and, today, plays a more important role than ever. ‘It is the money that rules and not always the research,’ a biotransformation analyst argued, suggesting that, no matter how detailed the scientific evidence she and her colleagues were capable of producing, at the end of the day it may amount to little unless financial objectives are met. At the same time, management not only resided in some remote place in the organization, happening behind closed doors, with little insight from the lower levels; management also appeared at the level of everyday work, in the leadership of the research projects. Here, management was a less mysterious practice and it was also regarded as something that could make a major difference. ‘The soft things [matter]. We cannot blame the machines,’ a synthesis chemist argued, pointing at the everyday work procedures when explaining successes and failures. One of the ‘soft things’ called for was the ability to raise motivation and to create a situation where people could develop intellectually and not only serve as producers of data points and aggregated and neatly packaged information.
The analytical chemist addressed the issue of motivation as being of central importance for long-term competitive advantage:

You need to have a few fun things going on and some of them won’t necessarily ‘succeed’, if you want to put it like that. You do not get the same creativity if everyone is thinking Lean Sigma in an organization. You’ll get too little creativity and too much goal-orientation, and then these new things won’t happen in the same manner. (Analytical Chemist, Discovery)

The expression ‘a few fun things’ implies the development of some new analytical models, some new research project, or any other scientific work that the co-workers regarded as intellectually challenging. Running these kinds of projects while sheltered from demands for
immediate pay-back was a viable recipe for enhanced motivation, the interlocutors thought. On a more practical level, the analytical chemist called for more intellectually stimulating face-to-face discussions with colleagues: ‘These people you meet, they need to have a discussion at times, to sit down and speak for an hour or so’ (Analytical Chemist, Discovery). Motivation is imperative here for sustainable competitive advantage: ‘I believe that unless it is fun to do this [develop new drugs], it won’t happen’ (Analytical Chemist, Discovery). He continued:

What advances science and development is the desire to do so, and if you lack that desire then much won’t happen . . . if you enjoy what you are doing, then you may do it so much better . . . It also matters if people in close proximity have fun because we are social animals. (Analytical Chemist, Discovery)

The computational chemist, in turn, called for someone to take the lead, a ‘champion’ with the capacity to motivate colleagues and managers: ‘We need more champions, people who are passionate about what they are doing – they may be specialists or generalists.’ Besides being scientists rather than managers, a group strongly dependent on motivation to accomplish their work, the interlocutors identified with their specific skills and previous experiences. For instance, the synthesis chemists argued that one of the principal qualities for the practising synthesis chemist was to appreciate what they called the ‘cooking’, the mixing and blending of substances and solutions in the pursuit of developing a ‘feeling for the molecule’: ‘You need to enjoy and appreciate “the cooking”, or whatever you like to call it, to dare mixing and blending, to do something curious you have not done before. [But] many times, we are using finished recipes: “do this molecule like this” . . . Some kind of innovative spirit, perhaps’ (Synthesis Chemist, Medicinal Chemistry).
The scientists favoured the metaphor of a puzzle when addressing their work; small pieces were identified and brought together into a more coherent picture that was important to understand. Expressions like ‘seeing the broader picture’ or ‘understanding the relationships’ were used to capture this sense of being part of a major investigation of the elementary forms of life. As specialists in their fields of expertise, the scientists were at times frustrated over the inability to show the connection between their work and the outcomes. The relationship between their entrenched expertise and output in terms of registered drugs is far from direct or linear, and living with the idea that they might never experience a successful registration
of a new drug was part of their work experience. ‘The hard thing is that it takes such a long time before you notice the effects. In ten years’ time, you may see the effect, but then people have forgotten what you did,’ a computational chemist argued. Also, under the new regime of management, anxious to establish transparency in all activities, skills and expertise were to be formalized and put in writing. For some of the interlocutors, such a project is futile because expertise cannot be reduced to a set of propositions, and much time and energy had been invested in rather blunt ‘knowledge management’ projects. An analytical chemist referred to one such experience and testified to the limitations of such an approach:

An expert cannot cough up all his or her knowledge at once . . . but it is derived from the context. If one thing happens, then I may say ‘that was interesting’ but if I had babbled about everything that led up to such an observation, it would have been too boring. (Analytical Chemist, Discovery)

Largely consonant with more theoretical accounts of expert knowledge (e.g., Dreyfus and Dreyfus, 2005), the analytical chemist thought that the ambition to translate expert knowledge into databases, instructive manuals, checklists and so forth was indicative of the relatively poor understanding among some decision-makers of how expertise works in its actual setting. In summary, the professional identities, couched in professional ideologies emphasizing the value of scientific procedures and the will to make a contribution to society, underlined the complicated relationship between management and science, the need for motivation and ‘having a bit of fun’, and the irreducible nature of expertise (again in conflict with ‘management initiatives’).
The professional identity of the pharmaceutical scientists is that of Homo faber, the creating man, very much in conflict with Homo oeconomicus, the opportunity-seeking man of the management doctrine they tended to see as a necessary evil in their life world. Professional identities help individuals both to cope with the practical assignments at hand and to orient and direct themselves in the external world. Having a firm belief in what one is capable of doing, and inscribing at least a minimal amount of meaning in that work, is of necessity central to long-term survival as a professional in a domain of expertise. While the professional identity of scientists may be strong and constitutive of individual identities, new drug development work is too fragile and fuzzy to fully allow for such identification.
As pointed out numerous times, it is very complicated to predict which new molecules or even candidate drugs may end up on the shelves of pharmacies. At the same time, it is important that the scientists do not conceive of themselves just as scientists, but also as contributors to the long and uncertain new drug development work. Without such identifications, there are few chances of integrating the various fields of expertise into what François Jacob spoke of as ‘a machine for making the future’ (cited in Rheinberger, 1998: 288).
Summary and conclusion

The study of PharmaCorp suggests that technology drives innovation in the contemporary bioeconomy, while theory lags behind. New advanced genomic and post-genomic approaches provide scientists with a significant amount of data that needs to be sorted out and structured into models and theories of biological systems. The research procedure is also subject to automatization and a higher degree of throughput; more data is produced in shorter periods of time, further accentuating the demand for more sophisticated theories that may help narrow down the research efforts. Torgersen (2009), addressing systems biology, for instance talks about an ‘almost taylorist procedure of knowledge accumulation’. Leaving the traditional wet-lab in vivo biology tradition behind, both the production and the analysis of biological data are a matter of automatization and an increased reliance on bioinformatics approaches. This whole situation, where new approaches are introduced without being accompanied by fully articulated theoretical models and frameworks, is worrying for the scientists with experience of the high-throughput screening hype, when a ‘throw things at the wall and see what sticks’ approach was widely regarded as the future of new drug development. For some of the sceptics, the new bioinformatics and biocomputational approaches are little more than advanced data-mining procedures, arguably too immature to lead to any new innovative therapies. The general frustration over failed late clinical trials and the inability to produce new therapies has also further reinforced the financial focus of the pharmaceutical industry. What some of the scientists referred to as the ‘numbers game’, the breaking down of performance measures to individual or departmental levels, was widely regarded as an act of desperation having little significance for the final output and the overall performance of the company.
As a consequence, some of the scientists expressed nostalgia for a successful past when there was more focus on the scientific questions and the day-to-day value-adding
practices in the firm. However, the unprecedented speed of introduction of new medical technologies and scientific frameworks, such as genomics and systems biology, inevitably leads to changes in the industry, and most of the scientists recognized the need for change in order to remain competitive. A new regime of new drug development is in the making, but some quite substantial issues are yet to be resolved.
Notes

1. ‘In silico’ is an expression used to mean ‘performed on computer or via computer simulation’. The phrase was coined in the late 1980s as an analogy to the Latin phrases ‘in vivo’ and ‘in vitro’, which are commonly used in biology and refer to experiments done in living organisms and outside of living organisms, respectively.
2. Attrition rate, or failure rate, refers to the percentage of drug projects that are terminated as they pass through the new drug development process, largely due to safety and efficacy criteria.
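As a minimal illustration of the definition in note 2, the attrition rate can be computed as the share of projects terminated between entry into the pipeline and market launch. The project counts below are invented for the sake of the example and are not taken from this study:

```python
# Hypothetical example: attrition (failure) rate across a drug pipeline.
# The figures are illustrative assumptions, not data from the source.
started = 100   # projects entering the new drug development process
launched = 8    # projects that eventually reach the market

attrition_rate = (started - launched) / started
print(f"{attrition_rate:.0%}")  # prints "92%"
```

With these assumed figures, 92 per cent of projects are terminated somewhere along the pipeline, which is the order of magnitude the chapter's discussion of failed candidate drugs presupposes.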
4 The Craft of Research in Biotech Companies
Introduction

In contrast to the major multinational pharmaceutical companies, with their roots in the medieval pharmacies where skilled pharmacists could mix their own potions, the biotechnology industry is a more direct outgrowth of university research, more specifically of the disciplines of microbiology and biomedicine. Being an industry more or less founded in the 1970s in the US, in the Boston and San Francisco Bay regions, the biotechnology industry has been subject to extensive coverage in both the financial press and more scholarly settings. Serving the role of knowledge-intensive industry par préférence, biotechnology companies have been treated with great patience as they have failed to deliver either desirable bottom-line results or adequate therapies. Like perhaps no other industry, the biotechnology industry has been able to operate for substantial periods of time without being expected to make any major breakthroughs. Today, as the bioeconomy is becoming increasingly fragmented and more research endeavours are taking place in network organization forms, including universities, pharmaceutical companies, biotechnology companies and other industry organizations, the biotechnology companies are increasingly playing the role of sites of expertise that can be tapped and exploited by the major pharmaceutical companies. Biotechnology companies are often smaller, more dynamic and more flexible than the large-scale companies in the pharmaceutical arena, and are thus able to appropriate and further develop new technologies and techniques. When Kary B. Mullis, working at the Californian biotechnology company Cetus, was awarded the Nobel prize in chemistry in 1993 for the invention of the polymerase chain reaction (PCR), now a staple method in genomics
research, this was perhaps the single most important moment for the biotechnology industry, placing the industry once and for all alongside the more prestigious research universities as a principal producer of scientific knowledge and methods. Today, there is a complex and manifold exchange and collaboration between biotechnology companies, pharmaceutical companies and research universities, all contributing in their own ways to the bioeconomy. In this chapter, empirical material regarding the biotechnology industry’s venturing into the bioeconomy will be reported. As in the previous chapter, the research findings are introduced thematically, emphasizing the challenges and potential of various research technologies and analytical approaches.
The emergence and growth of the biotech sector

Like perhaps no other sector of the economy, the biotech industry has been portrayed as being indicative of the future to come. Unlike domains like nanotechnology, essentially propelled by visions and hopes for the outcomes of venturing on the micro and nano levels, the biotech industry is already making contributions to contemporary society. Book titles such as Jeremy Rifkin’s The Biotech Century: Harnessing the Gene and Remaking the World (1998) and Richard W. Oliver’s The Coming Biotech Age (2000) are indicative of the great hopes for a future strongly shaped by biotech. Enriques and Goldberg (2000: 97) declared in Harvard Business Review that ‘advances in genetic engineering will not only have dramatic implications for people and society, they will reshape vast sectors of the world economy’. While Enriques and Goldberg scarcely veil their enthusiasm for the potentiality in all things biotech, only a few years later Gary Pisano (2006: 5) did not hesitate to say that ‘[t]he economic performance of the sector overall has been disappointing by any objective standard’. Besides a few (about ten) commercially successful companies, including Amgen (accounting for the majority of the operative profits in the sector) and Genentech, the majority of the companies reported red performance figures. Being equally sceptical about the biotech hype, Hopkins et al. (2007: 578) claim that ‘biotechnology has had little impact on primary care medicine’. Both the financial performance and the effects on health care and therapies have thus been called into question. On the other hand, the contributions from biotech to basic and applied science remain undisputed. As Mirowski and van Horn (2005) emphasize, the main contribution of biotech companies has been to develop and refine upstream
technologies and methodologies for the life sciences; consumer markets were never targeted in the first place:

What is beyond dispute is that some of the earliest breakthroughs in genetic research were processes or entities that enabled genetic manipulation: The Cohen-Boyer recombinant DNA technologies of Genentech; the polymerase chain reaction (PCR) controlled by Hoffman–La Roche; and the Harvard Oncomouse – none of which were downstream products aimed at a consumer market. Therefore, some of the earliest money made from biotechnology was in the area of ‘research tools,’ rather than fully-fledged therapies. (Ibid.: 524)

Nevertheless, the biotechnology industry is haunted by the stigma of being ‘hyped’. Before examining the biotech industry in greater detail, some operative definitions need to be discussed. The OECD offers a formal definition of biotech: ‘The application of S&T [science and technology] to living organisms as well as parts, products, and models thereof, to alter living or non-living materials for the production of knowledge, goods, and services’ (cited in Dahlander and McKelvey, 2005: 410). Zucker and Darby (1997) speak of biotechnology in the following terms:

[B]iotechnology . . . mean[s] the revolutionary breakthroughs in life sciences over the last two decades including especially, the use of recombinant DNA to create living organisms and their cellular, subcellular, and molecular components as a basis for producing both therapeutics and targets of testing and developing therapeutics. Recent developments focus on structural biology, combinatorial chemistry, and gene therapy.
(Ibid.: 432)

In Thacker’s (2004: 2) view, the biotech industry is a hybrid between the bio-sciences and the computer sciences: ‘[W]e can describe biotech not as an exclusively “biological” field, but as an intersection between bio-sciences and computer sciences, an intersection that is replicated specifically in the relationships between genetic “codes” and computer “codes.”’ Thacker thus suggests that biotech is not solely venturing into the elementary processes of biological organisms and reduced biological systems, but that such biological systems are also treated as informational entities, entities whose elementary properties and processes can be described in informational terms. Elsewhere, Thacker
(2006) says that, in the biotech industry, biology plays the role of a ‘source material’:

• [In the biotech industry] Biology is the motive force, the method, the medium. Biology is what drives production. Biology is the source material.
• Biology is the process of production. Biology is not replaced by machinery, but it replaces machines. Biology is the technology.
• Biology is the product, the endpoint, and the aim. Biology does not aim to produce a material good or a service, but, above all, more biology. (Ibid.: 201)

Sunder Rajan (2006) emphasizes the connections between the traditional pharmaceutical industry, derived from the mining and dyeing industries and the use of chemistry as an applied science, and the biotech industry, a more recent development largely produced as a by-product of university research, initially in the San Francisco Bay area, at the University of California, San Francisco, and Stanford University in the mid-1970s:

Biotech and pharmaceutical companies represent two quite distinct arms of the drug development enterprise. They have evolved at different historical moments, have engaged for the most part in quite distinct science, and tend to occupy different locations in the drug development market terrain. (Ibid.: 21)

Just like the pharmaceutical industry, what Sunder Rajan (ibid.: 42) speaks of as ‘corporate biotech’ is a form of ‘high-tech capitalism’ based on innovation, production and the centrality of information. The capacity to wed know-how in the various fields of the life sciences and biology with traditional capitalist or organizational and managerial procedures produces what Waldby (2002: 310) refers to as biovalue, ‘[t]he yield of vitality produced by the biotechnical reformulation of living processes’. Biovalue is thereafter transformed into economic value as scientific findings are turned into marketable drugs or therapies. However, as, for instance, Pisano (2006) and Hopkins et al.
(2007) have suggested, the transformation of knowledge in the field of life sciences into commodities is by no means a linear or trivial process. In 1990, American authorities announced that they would sponsor the human genome-mapping programme, eventually named HUGO, expected to be finished by 2005. Significant hopes that the mapping
of the human genome would lead to radical innovations and a better understanding of many biological processes were articulated, but as the HUGO project was terminated and a variety of scientific procedures have been developed, the field of genomics has to date only modestly contributed to the output of the major pharmaceutical industry. ‘The genome sequence has a far greater capacity to mislead than it has to illuminate,’ Higgs argued (2004, cited in Hopkins et al., 2007: 583). Rather than being the ‘book of life’ (Rabinow, 1996), the human genome revealed yet another important component in the elementary biological processes, that of the production of proteins. While the human genome includes between 25,000 and 30,000 genes, the number of proteins encoded by these genes is somewhere between 1 and 20 million, and many genes encode more than one protein (Pisano, 2006: 34). The reductionist methods implied in the genome-mapping programme have consequently been criticized, and today more holistic and integrative concepts such as systems biology are discussed as alternative analytical models. In addition, major pharmaceutical companies have come to realize that the number of so-called ‘druggable targets’ – that is, the proteins that small-molecule drugs bind to in order to moderate disease processes – may be rather modest, around 600–1,500 targets (Hopkins et al., 2007: 572). One of the principal challenges for the biotech industry is that the possibilities to engage in organizational learning, to accumulate know-how and expertise over time, are weaker than in technology-based firms and other high-tech sectors:

[T]he conditions that allow it to work well in those sectors [other high-tech sectors] – codified technology, modular designs and standard platforms, and well-delineated intellectual properties – are often lacking in biotechnology.
As a result of the system of innovation, the biotechnology sector has evolved an anatomy – small, specialized firms, integrated by means of alliances, etc. – that, while doing certain things well (e.g., generating many experiments, encouraging risk-taking, learning through imitation), falls short in other areas (integration, learning from experience). (Pisano, 2006: 156)

For instance, Pisano asks, did Amgen, by far the most financially successful biotech company, pursue the right strategies? For sure, they did something right, but would that be a recipe for a successful future? That is harder to tell. Hopkins et al. (2007: 584) say that ‘[i]t is hard not to conclude that many of the widely held expectations about
the impact of biotechnology are over-optimistic’. At the same time, they admit that the poor financial performance and the relatively modest output in terms of new therapies should not veil the qualitative contributions that are actually made:

Quantitative declines in productivity may hide very real qualitative improvements, as the pharmaceutical industry tackles increasingly difficult diseases . . . This is intuitive when we consider the nature of the industry’s shift from infectious to chronic diseases. Many of the successes of the golden age (such as the sulphonamides, penicillin, and other antibiotics) were drugs that targeted invading (exogenous) organisms. The restoration of balance to a biological system composed of endogenous components or subsystems is an entirely different operational principle. (Ibid., emphasis in the original)

Nightingale and Mahdi (2006) address the same concern:

It does not follow that radical improvements in scientific research will lead to revolutionary improvements in productivity, because bottlenecks remain within the drug discovery process, particularly at the target validation stage. As a result, qualitative improvements in research capability do not necessarily translate into quantitative improvements in output. (Ibid.: 75)

Still, the impressions are mixed: on the one hand, biotechnologies have been brought into the new drug development activities of major pharmaceutical companies and have ‘[b]roadened the scope of the technological options available to drug developers at a time when the industry is addressing quantitatively more complex medical problems’ (Hopkins et al., 2007: 584); on the other hand, Hopkins et al. (ibid.: 582) say that ‘in traditional small-molecule drug development there is little evidence to date that platform technologies such as pharmacogenetics and toxicogenomics have had a significant impact’.
However, given that the biotech industry has grown from a few dozen pioneering firms in the US in the early 1980s into a very large and well-financed global industry in less than 30 years (ibid.: 580), one should perhaps not be too ready to write off biotech as a major industry of the future, even though expectations should accommodate the difficulties of scientific endeavours. Rothaermel and Thursby (2007) demonstrate, for instance, that in the period 1980–90, the average number of biotech patents generated per year was 3.97,
and in the following decade (1991–2000) this figure had risen to 10.97, accounting for a ‘statistically significant increase of about 275% (p<0.001)’ (ibid.: 842). Speaking from a managerial and organization theory perspective, what has been of particular interest for researchers is the fact that biotech is largely a network-based industry. Some researchers suggest that biotechnology represents an entirely new business model (Casper, 2000) or a new science-based innovation regime (Coriat et al., 2003), while others have sought to explore the biotech industry in terms of being based on knowledge networks (Powell, 1998; Owen-Smith and Powell, 2004), alliances (Rothaermel and Deeds, 2004) and, more particularly, collaborations with pharmaceutical companies (Schweizer, 2005) or research universities (Zucker et al., 2002; Jong, 2006). Yet another corpus of literature examines biotech as grounded in social networks (Liebeskind et al., 1996; Oliver, 2004) and the ability to transform social capital into financial capital (Maurer and Ebers, 2007; Gopalakrishnan et al., 2008; Durand et al., 2008). Other studies emphasize the central role of patent law and regulations (Smith, 2001) and underline the rather weak relationship between intellectual resources and market value (Nesta and Savotti, 2006). Thomas and Acuña-Narvaez (2006) and Rothaermel and Thursby (2007) discuss the relationship between the biotech industry and advances in adjacent scientific fields such as nanotechnology. By and large, what emerges from this rather diverse literature is a complex image of the biotech industry, riddled by inconsistencies and difficulties in reaping the benefits of accumulated intellectual capital. Two dominant themes are, however, discussed in the literature.
First, the biotech industry is a fundamentally network-based industry, serving as an eminent example of what is today called the network organization, consisting of geographically co-located companies constituting clusters, networks of firms, in, for instance, the Boston, San Francisco Bay and San Diego regions in the US and around Cambridge in the UK and Munich in Bavaria, Germany. In these biotech regions or clusters, major pharmaceutical companies and research universities serve as hubs connecting the nodes – the biotech companies and other firms and organizations connected to the network (e.g., venture capitalists). Powell et al. (2005), a research group studying the emergence and growth of the biopharmaceutical field longitudinally since the early 1990s, emphasize the changing nature of the industry where new entrants and incumbent firms constantly change positions and jointly collaborate. Using the metaphor of the dance hall, where the music, the dancers and the crowd on the dance floor may shift over time, at times
The Craft of Research in Biotech Companies 159
swiftly and in other cases more slowly, the life sciences demonstrate such continuous reconfigurations: [B]oth the music and the dancers shift over time. The early dances are dominated by large multinationals and first-generation biotech firms, collaborating to the tune of commercialisation of the lead products of the younger firms, with the bigger pharmaceutical firms garnering the lion’s share of the revenues. Research progress, strongly supported by the stable presence of the NIH [the US National Institutes of Health], attract new participants to the dance and also enables incumbent biotech firms to deepen their product development pipelines and become less tethered to the giant pharmaceutical companies. (Ibid.: 1188) Since knowledge of the elementary processes of life, at the molecular and cellular levels, is still relatively cursory, it is difficult to predict what scientific breakthroughs will come in the future and how well such potential breakthroughs will translate into therapies and financial revenues; consequently, neither money, market power nor innovative ideas dominates over the others. Therefore, balancing various actors, interests and forms of know-how and expertise appears to be a viable strategy for long-term competitiveness in the field: Neither money nor market power – not even the sheer force of novel ideas – dominates the field. Rather those organizations with diverse portfolios of well-connected collaborations are found in the most cohesive, central positions and have the largest hand in shaping the evolution of the field. This is a field in which the shadow of the future is long, as much remains to be learned about the functional aspects of molecular biology and genomics. (Ibid.: 1187) Thus, the biopharmaceutical industry and the field of the life sciences are characterized by a continuous exchange of know-how and capital in a network form. 
Second, the individual biotech company is portrayed as differing substantially from the regular major pharmaceutical company; biotech firms are smaller, less risk-averse and characterized by an enterprising and entrepreneurial culture. For instance, Schweizer (2005: 1954) says: ‘By contrast [to biotechnology firms], pharmaceutical companies are characterized by formal structures, high levels of hierarchy, and long and slow decision-making processes. Overall, they do not have the same strong
(ownership) incentives in place and are more risk-averse than biotech companies.’ However, the major pharmaceutical companies play a very important role in the ‘ecology’ (see Hannan and Freeman, 1989) in terms of being knowledge-brokers and having the capacity and the financial resources in-house to bring the research findings from biotech companies to the market. According to Gassmann and Reepmeyer (2005): Today’s pharmaceutical R&D is no longer a stand-alone activity by single companies, but can rather be defined by a complex web of inter-firm agreements and alliances that link the complementary assets of one firm to another. Pharmaceutical companies form the nodes in large-scale scientific networks that include biotech firms as well as universities. (Ibid.: 235) In summary, then, biotechnology companies have been consistently portrayed as representing the latest movement towards a ‘post-industrial’ and ‘knowledge-based’ economy, relying on science as a principal organizational resource capable of producing sustainable competitive advantage. Biotechnology companies are thus seen as being capable of producing new and pioneering science-based innovations that can be acquired by major pharmaceutical companies and thereafter brought to the market. As suggested, for instance, by Pisano (2006), neither the scientific performance (in terms of new therapies in health care) nor the financial performance has been overly impressive, with a few ‘star companies’ such as Amgen and Genentech as notable exceptions, and the biotech industry has substantial expectations to live up to. However, the know-how, techniques and research methods developed in the biotechnology industry need to be taken into account when evaluating its performance. The industry is less than 30 years old and, as suggested by analysts of the economics of innovation, it may take 40 to 60 years before a new technology is fully adopted and used to its full potential. 
From that time perspective, biotech is still in its infancy, and one should not be too quick to conclude that the industry has made only meagre contributions.
Biotechnology entrepreneurs and the potentiality of the life sciences Introducing the biotechnology entrepreneurs The study includes five biotechnology companies operating in different domains of the industry. The first is a small company in which two persons work to develop bio-computation methods for analyzing
larger data sets, services that are sold to major pharmaceutical companies and smaller biotech companies to be used in the development of new therapies. The second company consists of a single entrepreneur working to develop a therapy for migraine in collaboration with academic researchers. The third company is a larger company, started in the first years of the new millennium as a spin-off from a successful research group in the field of the central nervous system (CNS) at the local medical school. The company employs about 30 people and is fully owned by a Scandinavian biotechnology company; it works on developing therapies for CNS-related diseases. The fourth company works in the field of stem cell research and employs about 50 people. It was established in 2002 as a spin-off from research activities at the local medical school. The fifth company was founded in 2006, based on previous experiences of working with university-generated research findings that led to the licensing of a molecule to a major pharmaceutical company. This fifth company works in the field of autoimmune diseases, such as multiple sclerosis and rheumatism, within a small-molecules research framework. One of the entrepreneurs developed a business concept centred on bio-computation analysis, serving to provide ‘concrete calculation methods to create knowledge within new drug development and research’ (Computational Chemist, Consultant). The calculation methods were then used to enable an ‘understanding of causality’, in turn increasing predictability in biological systems. Another entrepreneur ran a small company working in the field of the central nervous system, seeking to produce new medication for migraine. Spotting a significant market for this kind of therapy, this entrepreneur did not start by seeking applications in the life sciences, but in the market, focusing on demand: ‘I’d rather start in the other end: Where is the market? Where is the customer? 
Because that is where I am going’ (Biotech Entrepreneur 2). Operating in a field with only a limited number of therapies, the entrepreneur managed to mobilize financiers and partners primarily on the basis of his enthusiasm: ‘When I tell people about my little company, they can only judge me on my enthusiasm. They look into my eyes and go ‘He’ll be able to make it’ . . . they have little else to base their judgment on . . . The next step is market and clients and then, of course, patent and intellectual property rights’ (Biotech Entrepreneur 2). Another biotechnology entrepreneur represented a rather well-established, medium-sized biotechnology company with two sites, one in Sweden and one in Denmark, operating in the field of CNS and neurodegenerative diseases. The company was hoping to be able to produce
relevant therapies in their targeted domain within a few years. All the biotechnology entrepreneurs had experience of working in major pharmaceutical companies for long periods during their careers and still had quite extensive collaborations with the pharmaceutical industry. They also shared a series of major concerns regarding the future of the industry and entertained ideas on how to handle these challenges. The need for theory and validated models One of the standing concerns for the entrepreneurs was how to construct and validate models of biological systems, enabling an understanding of the diseases being targeted. A major historical event was the introduction of the various molecular sciences, whose proponents claimed that the traditional in vivo pharmacology models were outmoded, leading to a substantial revision of work procedures in large pharmaceutical companies. Biotech entrepreneur 2 emphasized the substantial influence of genomics research in the 1990s: When molecular science emerged, the receptor pharmacology disappeared somewhat. All became ‘science’. You should examine genes and gene expressions . . . There was no one with the expertise in receptor pharmacology any more . . . There is a tendency to simplify . . . methods are developed that mirror the physiology so incredibly little that the [risk] is 90 per cent that you end up wrong. (Biotech Entrepreneur 2) The idea that genomics research would speed up the process and render traditional models obsolete was widely endorsed in the 1990s. However, the idea that one could make immediate connections between genes and diseases without taking into account the larger biological systems, certainly appealing from an economic and financial point of view, was a gross simplification. 
Biotech entrepreneur 1 recounted a situation where he was part of a review board selecting promising molecules for further refinement and eventually became aware that there was no clear idea of what the target was for the particular disease; without a very detailed and sophisticated model of the biological system, including clear targets and an understanding of the biological pathways involved, there are few opportunities for developing adequate therapies. ‘Here we have a bunch of very talented people, selecting a substance, and they do not even know where the bull’s eye is,’ Biotech entrepreneur 1 recalled. The new omics technologies, examining various processes at the cellular level, are producing substantial amounts of data. But such data certainly
does not speak for itself but must be interpreted and understood within the framework of a biological system. Biotech entrepreneur 1 pointed at the scope of this challenge: In technical and analytical terms, we are introducing a substantial amount of noise in the system when omics is used. The classic omics experiment is to use a urine sample or a plasma sample and then you use spectrometric methods to examine the sample. In spectrometry, you get a signal in every shift or wavelength and then you create variables. I have seen omics sets with 45,000 variables, tables with 45,000 columns. Of course, not all of this data in the columns is correlated to the phenomenon you are studying, and even less is useful information. There you have a substantial problem . . . If you do 20 tests on the same material [i.e., sample] one is significantly separated from the others out of pure randomness. If you do 45,000 tests; I think that I calculated that about 2,500 variables or something like that, correlate against the phenomenon you are studying – that is, ‘Does the drug have an effect or not?’ If you have all these, positively or negatively but nevertheless significantly correlated variables . . . which of the 2,500 should be eliminated? Omics gives us a full coverage and brings in all the complexity, but noise is a technical problem. (Biotech Entrepreneur 1) In addition, Biotech entrepreneur 1 thought that major scientific projects like HUGO, the Human Genome Project, had been a distraction for the industry, leading to investment in training and technologies that led nowhere – ‘a lot was promised and the methods were adopted much too early, before it was understood how they could possibly make a contribution’, he contended. 
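The multiple-testing arithmetic behind the entrepreneur’s estimate can be illustrated with a short simulation (a hypothetical sketch; the sample size, threshold and variable names are our assumptions, not data from the interview): at a conventional 5 per cent significance threshold, roughly 0.05 × 45,000 ≈ 2,250 completely unrelated variables will still appear ‘significant’ by chance.

```python
import numpy as np

rng = np.random.default_rng(0)

n_vars = 45_000   # spectrometry variables, as in the quotation
alpha = 0.05      # conventional significance threshold

# Under the null hypothesis - no variable is truly related to the drug
# effect - p-values are uniformly distributed on [0, 1], so each of the
# 45,000 unrelated variables still passes the test with probability alpha.
p_values = rng.uniform(size=n_vars)
false_positives = int((p_values < alpha).sum())

print(alpha * n_vars)   # expected count of spurious 'hits': 2250.0
print(false_positives)  # simulated count, close to the expected 2,250
```

The point of the sketch is the one the entrepreneur makes: without a disease model that indicates which correlations are biologically meaningful, a few thousand ‘significant’ variables are indistinguishable from noise.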
‘As I see it,’ Biotech entrepreneur 1 continued, ‘the gene is farther from the physiology than the RNA, which is farther from the physiology than the protein is,’ thereby suggesting that the most promising omics technology was proteomics, potentially playing a role in the future. The ‘gene-hunting’ projects initiated as a consequence of HUGO did not really lead to the development of any new drugs, often because the empirical data was inconsistent and complicated to use when making decisions: ‘In many cases, the [results] are like “among the sick, 27 per cent had this mutation and only 10 per cent in the control group”, and then there’s an association. But it is clear that not everyone who was sick had this mutation, and even the healthy persons may have it. It is like that quite often’ (Biotech Entrepreneur 1). ‘There is faith in the methods,’ the CEO of Biotech company 5 remarked, ‘[but] the biology
is much more complicated than you tend to believe.’ The key question for the life sciences concerned with developing new drugs is then how to ‘translate the body of work that omics by definition is generating to information and preferably also knowledge’ (Biotech Entrepreneur 1). That is, omics should merely be seen as a sophisticated means for the production of data; without advanced analytical approaches, all this data remains more or less unintelligible. Biotech entrepreneur 2, working in the field of CNS, had a clear idea of how to align omics technologies with the more conventional methods for developing drugs: Step Number One: develop a functional screening model the old-fashioned way . . . a cell-based screen model measuring the function . . . Step Number Two: this is an in vivo model. Since we work with a disorder in the brain [migraine], we need to be able to give the substance to an animal and observe its behaviour and take out the brain to measure the concentration in the exact place where the substance is operating, and measure an effect . . . What they do not do [in major pharmaceutical companies] is measure in the brain where the substance is operating; they mash the entire brain. If you work with receptor interaction, that makes things more complicated. (Biotech Entrepreneur 2) The very key to this work was to construct models of biological systems that could help predict how new chemical entities interact with targets and biological pathways, and that had the capacity to produce credible data. All the entrepreneurs strongly emphasized the empirical nature of new drug development. One of the interviewees addressed the need for validating systems biology models before they can play any active role in predicting outcomes: You cannot just adopt these methods [systems biology models] because they are not validated. 
They are very interesting methods, capable of generating much knowledge and speeding up the new drug development process in the future, when they have matured and the results have been interpreted, helping us eliminate the white spots on the map and giving us a better framework for understanding things. But this does not help you in the desperate situation to sift out molecules for new drugs. You need to measure relevant systems based on as much relevant data as possible, relating it to what is already known and find something to indicate you are doing the
right thing; not in a single test tube or two test tubes, but in the total [system]. (Senior Scientist, Biotech Company 3) What the interviewee points out is that one must not believe that models of biological systems can be constructed solely on the basis of large data sets and mathematical models and algorithms, producing models ‘bottom-up’, so to speak. Instead, such models are constructed on the basis of theories, operational hypotheses, the data generated and the gradual construction of disease models. No model of a biological system can be constructed ex nihilo; each has some starting point, some theoretical model of how the biological system is constituted. In Biotech company 4, a company providing stem cell therapies and experimental systems, the interviewees also strongly underlined the importance of validation and the need to get the company’s stem cells validated by the predominant systems: ‘Our own core activity is developing cell types that may be used as in vitro tools for pharmaceutical development in the first place,’ the research director argued (Biotech Company 4). ‘We do have a product line: we sell cardiac muscle cells, we sell liver cells,’ he continued. It was of great importance to have these cells validated by others: ‘Others must validate what we have developed. We make sure that our cells are part of the testing systems being used when developing the next generation of tox models,’ the research director concluded (Biotech Company 4). A synthesis chemist at Biotech company 3 underlined the importance of the in vivo approach for understanding the area of interest. This emphasis on in vivo models demanded, however, a bit of explanation and justification in the present period, preoccupied with genomic and post-genomic approaches. If we hadn’t been doing in vivo [studies], then we would not have been able to catch these molecules, I’d say. So the base is in vivo. We have struggled quite persistently to maintain this idea. 
In the interactions with the major companies, we have faced tremendous difficulties when presenting these substances. Then, the first questions are naturally ‘What are the mechanisms?’, ‘What actions are going on?’, ‘How do the molecules bind?’ and so forth. We have struggled to maintain these old concepts to use live animals and then use screening models. (Synthesis Chemist, Biotech Company 3) When asked if there is a ‘rather strong focus on in vivo models’, the synthesis chemist replied, ‘Yes, totally. Only that.’ The in vivo studies
were operationalized as rather sophisticated animal models, based on the production of extensive data sets: We have been using a number of different animal models that we really trust. The first is, of course, a regular rat – quite simple – where you can do dose–response studies and get a feeling for how the rats experience the drug. That is the first toxicology study. We can observe quite fast if the animals are not very well or if they are even dying. (Synthesis Chemist, Biotech Company 3) He continued: We measure virtually everything there is to measure. Like the rat’s behaviour – and that is like 100 variables – during a one-hour period of time, in a fine net of light rays within which they constantly move. We have quite sophisticated computer programs analyzing how they behave . . . So we can get a quantitative measurement. Then we do, of course, dissect the brain to measure a few parameters. [You examine] different regions of the brain and different variables to understand how they are affected. And here we also have this dose–response giving us additional support for the hypothesis that there is something going on. (Synthesis Chemist, Biotech Company 3) The main reason for relying on in vivo models in the field of CNS was that the brain, human as well as animal, contains many compensatory mechanisms. A major concern for the life sciences is, then, that biological systems are emergent systems, systems that may respond to external changes or disequilibrium through very advanced mechanisms ensuring homeostasis: Then there is this whole thing with emergent phenomena: there are quite a few things that you really do not know if you look at an isolated system. Not until you examine the relationships might you see the effects. They have taken away the classic pharmacology, the in vivo pharmacology, where you examine how a disease operates and construct disease models. This takes its time and must be given the time needed. 
You should focus on those therapeutic areas you are interested in until the disease model is validated, a functioning disease model. Then you may use the arsenal, measuring many molecules, sifting them out and testing them, because you have a properly validated system that in the end has a predictive capacity regarding
Phase 2 results. As it is now, there is no time for that; you have a genome hypothesis and you try to test it. You don’t have the time to develop any in vivo disease model, but just race on . . . the attrition rate is gigantic because the relevant systems have not been examined to check that this is not going to produce any effects or will give too many side effects; there is no view of the totality of the system. (Senior Scientist, Biotech Company 3) In addition to the very complexity of the biological systems, indicated by the recent contributions of, for example, omics technologies, the molecules that are sought today are more complicated than previously, leading to further challenges in new drug development activities: ‘The structures we examine today, they are more and more complicated, have more and more hydrophobic characteristics. The receptor mechanisms we study become more complicated,’ Biotech entrepreneur 2 said. The CEO of Biotech company 3 also stressed the need for thinking in new terms in new drug development. Traditionally, small molecules have been the principal domain of the major pharmaceutical companies. These small molecules have been developed to aim at one target (e.g., a receptor in a cell), which, in turn, affects a disease or medical condition. Since small molecules tend to demonstrate higher affinity than, say, biologics (larger molecules with higher weight), they are favoured by the research methods in wide use. The difficulty of this model is that small molecules are more pharmacologically complicated to use in drugs because of, for instance, high hydrophilicity – the tendency of the molecule to bind to water molecules. In other words, the enacted research model favours clear-cut results, but those results favour substances that may eventually demonstrate lower efficacy in clinical trials. According to the CEO, there is something fundamentally wrong with this research model. 
The concern is also that few biomarkers have been identified that connect to specific targets and, without such clear, unified and linear models of the molecule–target–disease sequence, it is complicated to know how to proceed in the research work. At Biotech company 3, they developed a research method where the molecule connected not to one target only but to a series of targets, which affected the entire CNS from different angles. The CEO used the metaphor of a stone thrown into still water, leaving waves on the surface of the pond long after it has sunk to the bottom. A molecule may not have to find the one single target – ‘the key to the keyholes’ – but may affect the CNS in manifold and indirect ways. The CEO claimed that Phase 3 studies in clinical research in the field of CNS (e.g., Parkinson’s disease
or Huntington’s disease) ‘are a graveyard’ because these studies often fail to demonstrate adequate efficacy. Rather than seeking to work harder to overcome these difficulties – that is, seeking to find ‘new and better’ molecules – it was important, the CEO argued, to reformulate the entire operative research framework. In the model developed at Biotech company 3, traditional in vivo animal models, multivariate analyses and systems biology approaches were combined into a research model that has proved to operate as anticipated. In the ten years the company has operated, about 800 molecules have been developed and investigated, and five have been brought into the clinical research phase. Of these five molecules, none has yet failed on the basis of safety, toxicity or low efficacy standards. That is, the precision of the research model is higher than in the regular and more conventional pharmaceutical industry. ‘Unless you have adequate analysis methods, the information is lost in a sea of data,’ the CEO said. The CEO suggested that the general claim that ‘it takes a million molecules to produce a drug’ needs to be disputed. Instead, one can reach that one single drug on the basis of a much smaller number of molecules if an adequate research model is constructed. The main challenge for the company was to convince both the regulatory bodies and investors that their model, which is not ‘predictive’ in the conventional sense of the term, is the right way to go. However, in the field of CNS, an extremely complex neural network characterized by properties of emergence, there is a need for new approaches. At the beginning of February 2010, Biotech company 3 announced that its clinical research on a therapy for Huntington’s disease had demonstrated adequate results, indicating that their model, producing 800 rather than a million molecules to choose from, enabled good precision when selecting candidate drugs. 
In summary, there is a great deal of scepticism regarding the abandonment of the traditional in vivo pharmacology model of new drug development and the widespread embrace of genomics and post-genomics technologies (e.g., omics technologies). The incubator director of the university hospital science park emphasized the importance of the traditional approach to developing new therapies: ‘I would like to say that the pharmaceutical projects and the companies we’ve seen passing here, they are all based on a rather conventional way of working. That is, you work in a specific field and you discover a peptide or a small-molecule substance that you bring further.’ Rather than abandoning the traditional models and their emphasis on full disease models based on verified data, the omics technologies should be treated as a means of providing more data points and more qualified data as input to the
construction of disease models. According to the biotechnology entrepreneurs, the hype around these new technologies made decision-makers confuse the ends and the means; omics are the means, and the ends are, still, models of how diseases interact with the biological system. New scientific approaches Bio-computation and bio-informatics The period from the mid-1990s has been one of unprecedented scientific advancement, but these new opportunities have not yet translated into radically new therapies. The new approaches in the life sciences are still highly promising, and the entrepreneurs were themselves proponents of new methods as long as they complemented traditional, empirically validated models for new drug development. One of the interviewees was critical of the standard operating procedures for filing and storing data in the pharmaceutical industry – in short, its data management routines: A long-term project could last for several years. The objective is to create a rather limited set of images and texts that gathers all the data collected. Two years after the project is abandoned or some key person has moved, then it shouldn’t be an insurmountable task to review that information, but no one does that . . . My experience is that it won’t happen; you never return to old data presented in Excel spreadsheets or in a database. The knowledge that has been generated becomes anecdotal. (Biotech Entrepreneur 1) Data management is a field where routines could be improved. Of the totality of the data generated, only a small subset is actually used. As a consequence, the pharmaceutical industry has become a data generation machine with only limited capacity for filtering the data. The rather recent idea of bio-informatics and bio-computation is, therefore, one of the most promising approaches in the industry, potentially helping co-workers and executives to see structures and relationships in the massive data sets generated. 
However, again, there is a need for some shared, enacted model for the analysis of the data. One of the interviewees addressed this idea: Bio-computation and modelling and such things are great for idea generation and also for the understanding, but what you are talking about is this kind of simulation of complex systems which are very interesting and really effective if you know all of the system and all
its parameters and where you can measure all the parameters in an actual condition. You need to be able to calibrate the simulation, but when operating on the basis of biological systems, you may know something like 50 per cent and there are quite a few things that are not yet discovered, a lot of biological pathways that you don’t know exist and, therefore, those parameters are missing, and you try to adapt it to some sub-system. Quite often, when you construct these systems, you have a problem calibrating the parameters because they have very poor measures. Where you should invest the resources is finding out how to measure, how things really work, because if you can measure [a biological system] then you may get a fair estimate of the entry parameters in a model and that may help you interpret the data much better. That would be like an analysis tool, a visualization of what you measure. (Senior Scientist, Biotech Company 3) The CEO of Biotech company 5, working in the field of autoimmune diseases such as rheumatism and multiple sclerosis, reminded us that all bio-computation models are constructed on available and verified clinical data and, since there are many biological pathways and mechanisms that are not fully known, a reliance on these bio-computation and bio-informatics models would imply a form of lock-in effect: We have these discussions quite frequently . . . All kinds of systems biology tools, they are actually built on previous know-how, and then it is complicated to find something new, something different, because you won’t find it there . . . You’ll only find what is already known. I believe the niche lies in making new inventions [Swedish, uppfinningar] – that is, discovering new things, what is unexpected. That is the small company’s niche. (CEO, Biotech Company 5) This does not imply that these bio-computation models are not useful in the search for new molecules. 
On the contrary, they are helpful in reducing the chemical space, but they need to be complemented by professional expertise. The CEO of Biotech company 5 provided an example of how the synthesis chemist’s experience was combined with bio-computation approaches: We learned something interesting when we designed our new molecules. We were using a computer program giving us suggestions on what molecules to synthesize to make use of the most information. We talked to [a consultant] and he said, ‘Do as the computer program
tells you to do, but add a reasonable number of substances that you select on the basis of synthesis chemistry intuition. Then you take into account both your own know-how and chance, you may say’ . . . it is pretty much like that in all kinds of research: when you work with a system for a long period of time, you get a certain feel for it. At the same time, you need to add a certain factor of chance. (CEO, Biotech Company 5) The research director at Biotech company 4, a stem cell company, had a firm belief in the future of in silico models, even though he emphasized their limitations and that their principal domain of use was making qualified predictions: ‘I believe a lot in in silico models, but they are like economic research – they can only prove what is right and wrong in hindsight. We cannot produce the all-predictive model, but we can make the model better and better’ (Research Director, Biotech Company 4). His colleague explained the procedures used: We use quite a bit of bio-informatics . . . To understand how a stem cell becomes a liver cell there are remarkably complex processes that we need to learn to understand, so we use bio-informatics tools. Then we have the next level where we use the stem cells . . . we use carcinogenic chemicals on stem cell-derived hepatocytes and try to study what happens to be able to predict what will happen in the next phase. We build these models where we can test unknown substances. Then we examine the method with global gene expression, with global metabolomics analysis and that [generates] terrifying amounts of data. (Researcher, Biotech Company 4) At least in the field of stem cell research and stem cell therapies, there were some uses of in silico models. When there are quite a few things the researcher does not fully know or has only vague ideas about, it is complicated to construct full-fledged models of biological systems on the basis of bio-computation approaches. 
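The selection heuristic the consultant recommends – follow the program’s ranking, but reserve room for intuition and for chance – can be sketched as a simple batching procedure (the molecule names, batch sizes and the ranking itself are illustrative assumptions, not data from the study):

```python
import random

random.seed(1)

# Hypothetical candidate pool: molecule IDs ranked by a computational
# model, best score first. Everything here is illustrative.
model_ranked = [f"mol_{i:03d}" for i in range(200)]
chemist_picks = ["mol_150", "mol_042", "mol_187"]  # intuition-based choices

def select_batch(ranked, intuition, n_model=10, n_random=3):
    """Take the model's top suggestions, then add intuition picks and a
    few random molecules, so that know-how and chance both enter."""
    batch = list(ranked[:n_model])                # 'do as the program tells you'
    batch += [m for m in intuition if m not in batch]
    remaining = [m for m in ranked if m not in batch]
    batch += random.sample(remaining, n_random)   # the 'factor of chance'
    return batch

batch = select_batch(model_ranked, chemist_picks)
print(len(batch))  # 10 model picks + 3 intuition picks + 3 random = 16
```

The design choice mirrors the quotation: the computed ranking exploits what the model already knows, while the intuition and random slots keep the batch open to what the model cannot yet represent.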
The computational chemist claimed to provide methods for constructing a model of how the biological systems developed over time, as a trajectory: ‘If you conduct a series of experiments . . . [you acquire] an image of complexity. What I add is that we need to understand a trajectory over time. If we do the experiment again, how does the [biological] system develop over time?’ (Biotech Entrepreneur 1). The concern about the practical value of the bio-computation models beyond ‘idea-generation’, however, did not suggest that the entrepreneurs did
not see any value in the genomics and post-genomics methods being produced. The concern was that there is not a direct, straightforward and unambiguous path between the data, no matter how seemingly exciting or intriguing, and specific therapies with adequate efficacy. As the computational chemist put it, ‘covering everything’ is not enough because having data about ‘everything’ occurring in a biological system under certain conditions and over a certain period of time is not sufficient for understanding the relationships between components and mechanisms in the biological system: There’s no doubt the assembly of the cell matters. We have cells that have DNA, RNA, proteins. The components are the same and yet some are neuron cells and some are support cells and some are stem cells and so forth . . . In order to understand the level of complexity demanded, omics are good techniques for covering everything [Swedish, helgardera]. But then you need to use that full coverage in the analysis, or you need to use it to identify biomarkers or the most important parameters for understanding the phenomenon you are studying at an adequate level of complexity. So there is no self-justification in using omics. If you have a good biomarker explaining your field of research, solving your problem, then less is more [English in the original] unless there is a need for more [data]. (Biotech Entrepreneur 1) The bottom line, the computational chemist claimed, is a concern regarding the understanding of the ‘theory of science’ in the industry, the basic and generic understanding of how facts and knowledge could possibly be produced and rendered legitimate in the contemporary period and under the regime of the FDA and other regulatory bodies: I think there is a lack of understanding for a theory of science in the organization. More should be known about the elementary forms of scientific thinking and how to organize such matters, and what the strategy of the firm actually consists of.
(Biotech Entrepreneur 1) This criticism also extended into the field of academic research, where researchers were more concerned about publishing data to maintain their careers than making more substantial contributions: My critique against the research published in this field during the last few years is based on a philosophy of science: how do you conduct an experiment? Are data handled in an adequate manner? I think
much is what I sometimes call ‘p-optimal’ – publication-optimal [research]. And that is a pity . . . Computers, technology and databases, that is probably not what is setting the boundaries today. The experiments are quite expensive. We need to learn how to handle large data sets in a good manner. (Biotech Entrepreneur 1) Taken together, there is a need not to rush into the next technology or analytical approach, but to actively maintain a scholarly attitude, including a healthy scepticism towards new approaches, without rejecting them prematurely. Combining in vivo, in vitro and in silico models In Biotech company 3, the synthesis chemist did not think that the advancement of the life sciences and new scientific approaches had affected the company’s research policy very much: I would say that the development has not affected us at all, other than giving us additional credit that we are doing the right thing. We’re at the front line, I’d say. So we have resisted this in vitro tide that rolled in, when targets were what mattered; we continued to stick our heads in the sand and to run in vivo models here. But then we have developed further inasmuch as we try to collect as much information as possible from every experiment and work with multivariate analysis methods. It is not the case that we select a few parameters among thousands of parameters, saying ‘this is what matters and this is what needs to be affected in a certain direction’, but we look at the patterns as a totality of the effects that the drug induces. (Synthesis Chemist, Biotech Company 3) The advancement lies in the use of multivariate models in the systems biology vein, but the basis for the entire research approach is still the in vivo animal models. The synthesis chemist was highly critical of what he referred to as the ‘target-centric approach’ in major pharmaceutical companies.
This approach assumes that there is a quite unilinear connection between illness, target (often a receptor in a cell or a protein) and molecule. The synthesis chemist did not believe in what was, in his view, such a simplistic model by which to understand biological systems: ‘We do not use a univariate strategy. The target-centric [approach] is univariate all the way! It examines only one thing at a time. But it can measure the affinity in a variety of receptors’ (Synthesis Chemist, Biotech Company 3). Instead, he advocated multivariate analysis
methods, taking into account most seemingly relevant variables in the early stages: ‘If you do a multivariate analysis of 10,000 parameters, you quite quickly reveal which of these are just junk and which really do have the predictive quality you are looking for’ (Synthesis Chemist, Biotech Company 3). The difficulty with the target-centric approach was especially accentuated in the field of CNS research. The synthesis chemist provided a detailed account of the difficulties regarding reductionist approaches in this field of research, exemplifying with research on therapies for Huntington’s disease, Parkinson’s disease and schizophrenia operating on the so-called D2-receptor: We work with dopamine and the D2-receptor that has been involved for quite a number of years in both Parkinson’s and schizophrenia. [It is treated with] D2 antagonists. The concern is that it binds so hard that it cuts off all signals and then you get Parkinsonism [a medical condition], its opposite, closing down or inducing too little activity in the dopamine system and then there is a rigidity – you get that among patients as well – and they cannot move . . . it is a condition called catalepsy.
That’s why the knock-out studies [the elimination of certain genes in the genome] have failed when it comes to the CNS field . . . If you take the classic example of the D2 receptor and a D2 antagonist and close it down completely, then they get rigid and you get Parkinsonism. Everybody in the field knows that. It doesn’t matter what antagonist you use – the result is the same. Then, a knock-out of the D2 receptor would lead to the same thing, that the animal becomes immovable. Out of the experiment comes a perfectly normal rat but with the difference that now the ‘D2s’ [the D2 antagonists] no longer have effect. The brain has compensated
for the absence of D2 and finds other pathways. (Synthesis Chemist, Biotech Company 3) The compensatory mechanisms in the brain here undermine the genomics approach, and, ipso facto, disqualify the illness–target–molecule approach that serves as the ideal typical research model in much new drug development work. Contrary to this approach, Biotech company 3 started not with targets but with constructing a model of the illness and in vivo animal models, potentially enabling an understanding of the biological pathways involved in certain neurodegenerative diseases such as Parkinson’s and Huntington’s disease. The synthesis chemist explained: What I think is important is the great difference that we do not start with a target or an illness problem like ‘the D2 receptor does not function properly’, but in our approach we have an in vivo model that we use. Then we try to figure out what does not work no matter what mechanisms precede the illness. When you find your substances, then you can identify targets and try to explain what is, in fact, happening. And thereafter you may be in a position to advance further hypotheses. (Synthesis Chemist, Biotech Company 3) The new analytical approaches could potentially lead to significant insights, serving as the foundation for the development of new therapies, but they cannot be treated as if they were – similar to the rhetoric regarding the genome – keys to the secret of life. The key to life, the romantic and highly elusive idea that one single entity (‘the key’) could open up the box or the door leading to the brave new world of a full understanding of biological systems, is thus a misleading metaphor in the life science setting. 
While the genome largely rested on such fairytale poetics (portraying the genome as ‘the book of life’ or the ‘key of life’), succumbing to what Alfred North Whitehead (1925) once called ‘the fallacy of misplaced concreteness’, there is always a risk that this highly evocative image of the key is reproduced in new settings and mobilized when selling new analytical approaches. Stem cell technologies and therapies Perhaps the most conspicuous case of a research field being brought to the forefront of the life sciences in the beginning of the first decade of the new millennium is stem cell research. Being widely positioned as a field potentially capable of producing new methodologies as well
as new therapies, stem cell researchers and stem cell biotech companies have been given an almost heroic position in the popular literature on the life sciences. ‘It needs to be pointed out that there is a remarkable interest [around stem cells],’ one of the researchers in Biotech company 4 said. At the same time, the research director in the company recalled that even though there has been ‘hype’, there have also been periods of relatively modest interest: ‘There’s been this hype, but also a bottom, you should know that! . . . it goes up and it goes down.’ In Biotech company 4, working with stem cells, the interest in the field was encouraging and, after working for some ten years in the field, the co-workers finally felt they were given some credit from the larger pharmaceutical companies, which had by and large stuck to their ‘small-molecules therapies’ policy until quite recently. One of the interviewees, a researcher at the company, told a story about the inertia of big pharma: When I worked at PharmaCorp, one of the last projects I worked with, a small project, we aimed at developing a molecule to speed up the so-called fibromyosis. We were a project team trying real hard to produce an antibody therapy. This was around 2001. We more or less became a joke among the decision-makers in the company . . . ‘PharmaCorp will never get antibodies to work because we produce small molecules, period! That is our domain of expertise and that is the future.’ Now, a few years later, after I moved here [to Biotech Company 4], it is rather amusing to see that PharmaCorp bought a company for hundreds of millions [Swedish crowns] that works exclusively with antibodies. (Researcher, Biotech Company 4) Presently, major pharmaceutical companies no longer treat stem cell technologies merely as a source of new methodologies but increasingly as a potential domain for therapies. The researcher continued: Stem cells may serve as an in vitro tool but offer no help in producing therapies.
We do not work with molecules as large as cells [major pharmaceutical companies claimed] . . . A few years later came the next stage. There are three stages as I see it; first, you use stem cells as tools . . . then you use this old concept, you develop a small molecule that affects the endogenous stem cells . . . it is a relatively simple concept for the pharmaceutical industry to buy into because then they can continue to produce their small molecule and patent it . . . it is a new kind of therapy but they may still use their old-fashioned molecule . . . ‘Cells? No chance! We’re not doing that. It is all too hard. How can you patent a cell? How can you produce a cell?’ [the pharmaceutical
companies asked], but some of us said ‘Let’s wait, things will soon happen.’ And right now, as well as over the last year or one and a half years or so, there are many things happening . . . Now, we’re on the threshold of accepting cells as future therapies and that is so great . . . they think cell therapies . . . What’s great is that this goes for all large pharmaceutical companies today. (Researcher, Biotech Company 4) The trajectory outlined by the researcher thus starts with the view that stem cells are methodological frameworks for in vitro studies, followed by a stage where cells could be used in combination with small-molecule therapies and, finally, the stage where the stem cells per se are perceived as potential therapies. The researcher believed that the third stage was soon to come: Four to five years ago, all said: ‘Stem cells, that is a fine tool for us in our research. You can make a liver cell and then you can make the DMPK [drug metabolism and pharmaco-kinetics] and a little pharmacology metabolism and a few tox studies, but that’s about it.’ And then the next step came and then it was all about making small molecules affecting the stem cells and now almost all of the big pharma companies have taken active steps towards working with cell therapies. (Researcher, Biotech Company 4)
For instance, the concept of a ‘target’, traditionally understood as a receptor in, for example, a cell, to which a small molecule could bind, is being translated into new theoretical frameworks. According to the researcher at Biotech company 4: ‘Today, we make the claim that even an organ may serve as a target . . . if the cell is the molecule, then the heart is the target. So today we speak of making a therapy with cells rather than these small molecules.’ One of the challenges for the major pharmaceutical companies, as well as the smaller biotech companies, is how to apply patenting procedures to the new therapies. Patenting
a series of small molecules is a well-established procedure but moving up the ladder to whole cells may be a different issue from a legal perspective: ‘That is the apparent challenge here [patenting]: previously, when the small molecule was developed de novo, then it was easy to patent it and all its cousins and so forth. If you have a cell or an antibody, it is so much harder’ (Research Director, Biotech Company 4). The CEO of Biotech company 5 was also concerned about the entire procedure of patenting because he saw early successful attempts at patenting research findings as blocking future contributions: The way patents have been registered has been skewed . . . They have granted very ‘rough patents’, destroying the future. That makes it impossible for you to patent your invention because it is already someplace in all that [patent] text and the old patents that have never made any contribution to anyone block the future . . . It is a really difficult issue because the damage is already done. (CEO, Biotech Company 5) In these early patents, a certain biological entity is associated with ‘an entire medical book of potential diseases’, the CEO said. That is, there is no scientific proof of these connections, just a general conjecture that there ‘may be’ some connections (between, e.g., an overproduction of a certain protein and some physical disorder), but since these conjectures are protected by patent rights, it is complicated to patent more detailed research findings. The lack of competence in granting patents in the field of the life sciences in the late 1990s thus causes much concern in the present period, the CEO argued. In addition to the patenting procedures themselves, the research director at Biotech company 4 was concerned about the relatively short patent terms in comparison to the development times.
In brief, the period of financial harvesting following years of hard work and investment was seen as being too short to justify all the work done, leading to the not-too-innovative production of the ‘me-too drugs’ in the pharmaceutical industry: ‘In the long-term perspective, 20 years is very little. What kind of incentives should we use? I think that issue matters as well. Now, you have to tinker with the materials and you make a new drug that’s 5 per cent better or 10 per cent better. It is an unreasonable amount of money being used for such things because the patent only lasts for a relatively short period of time’ (Research Director, Biotech Company 4). During the first decades of stem cell research, the field has been criticized for applying ethically questionable research methods and
among, for instance, right-wing Christian groups, stem cell research has been seen as a violation of human dignity, as stem cells have been collected from umbilical cord blood and human fetuses. In the USA and in many European countries, including Austria, stem cell research has been either highly regulated or banned. Today, both the attitude and the technological possibilities have changed dramatically and there are now genomics technologies that enable a ‘reprogramming’ of differentiated cells to become stem cells. The research director of Biotech company 4 explains: There is this alternative now where you can make use of these [regular, differentiated] cells and you ‘push them back’ so they become pluripotent . . . Then we can overcome all these ethical problems . . . This has been done primarily on fibroblasts . . . you activate four genes and they de-differentiate and become, it is believed, like a fertilized egg cell, they become embryonic stem cells . . . We can get liver cells, cardiac muscle cells, etc., from what has been a skin cell. (Research Director, Biotech Company 4) That is, rather than being collected from various sources, the stem cells become fabrications, the outcome of the manipulation of a few genes. This once and for all eliminates the criticism that stem cell research is based on some kind of bio-piracy. In addition, as the research director points out, emphasizing the new acceptance of the research field, ‘one must not forget that all these fertilized eggs would have been thrown into the waste bin otherwise. That is the case. I think most people have accepted that … If you look globally, there is not that much ethical discussion and, in the USA, there has been a large change. The last step was when [President] Bush left office.’ As a consequence, stem cell researchers are no longer operating under the burden of having to clarify that what they are doing is ethically sound and legitimate research.
Stem cell research is, thus, gradually becoming institutionalized, not only scientifically and therapeutically but also morally and culturally. The future of new drug development The influence of omics technologies Regarding the future of new drug development, whether therapies are developed in traditional big pharma, in biotechnology firms, in alliances or in network organizations, the interviewees had a series of ideas. One of the key issues discussed in the industry is the long-term strategy of the large pharmaceutical companies to impose streamlined
standard operating procedures and routines in all activities. In addition, the monitoring of process variables and output has been widespread in the industry. The entrepreneurs were concerned about this reductionist approach to scientific endeavours and called for a more sophisticated strategy on how to accomplish the objective of producing new therapies: [HTS] is a technology and it can be used in good ways and bad ways. I may have ideas regarding the technical use of HTS in PharmaCorp more generally . . . [but] in order to understand how PharmaCorp conducts new drug development research, then you need to examine management at the level of strategy: what kind of strategy have they selected? I suggest that the concept of process has played a central role in that strategy. Organization forms and predefined rules determine what to do. If you want to understand how the individual researcher feels . . . then you need to look for explanations in the organizing and the scientific strategy. (Computational Chemist, Consultant) What the computational chemist claims is that the long-term strategies enacted have substantial explanatory value in terms of the disappointing output in PharmaCorp and many other major pharmaceutical companies. There is a tendency to think of new drug development as something inherently structured and determined by technology, but there is a significant social element in the work that could be influenced by strategies and policies. Thus, to understand the relatively poor performance of major pharmaceutical companies one should look at the organization of the activities and the accompanying strategies, the computational chemist suggested. ‘The consequences from the strategic process-based research are not that encouraging for the individual researcher,’ he remarked. One of the factors to take into account when examining the day-to-day work in drug discovery activities is the investment in genomics technologies.
The senior scientist at Biotech company 3 spoke of genomics simultaneously as a ‘disaster’ and as something that may enable a more detailed understanding of biological systems in the future: As it has been sold to the pharmaceutical industry, it’s been a real disaster, but the knowledge generated totally, when all genomes have been mapped and you can really make a connection to function, will increasingly enhance the capacity for predictions. But you see all these other tendencies making it harder and harder to get a drug approved because of regulatory demands. More and more evidence is
required and it takes a much longer time because of that. In addition, it is more and more complicated to identify these small molecules that can deliver this ‘golden compromise’ . . . So there are other things making it more complicated to develop new drugs. (Senior Scientist, Biotech Company 3) The computational chemist was also highly critical of the effects of the HUGO project, pointing to the distance between the genome and the ‘physiology’ of the human being: My experience is that [HUGO] has been more of a burden for the industry than a success factor. It cost a lot of money to be on top of it [English in the original]. The effects of HUGO, at the very least, generated an enormous waste of time, work and money because it didn’t deliver anything . . . if we subscribe to the idea that big pharma has had a hard time over the last ten years or so, I think these techniques and new methods for conducting research have contributed negatively. A lot was promised and the methods were adopted much too early before it was understood how they could possibly make a contribution. And here HUGO was the worst. As I see it, the gene is farther from the physiology than the RNA, which is farther from the physiology than the protein is. And when you move on to metabonomics, you are digging in the scrap heap. (Biotech Entrepreneur 1) Again, the inability to live up to the great expectations of the human genome mapping project, in hindsight, reveals some investments that led nowhere in terms of the output of therapies. The computational chemist continued: When HUGO was introduced, we were, all of a sudden, able to understand all of the etiology of illnesses and all problems were practically solved. We had heard such things before, and it did not happen . . . All these ‘gene-scan collaborations’ with the universities, where a gene was to be identified and connected to a certain illness, that was supposed to lead to the ultimate target for a new drug.
PharmaCorp had zero hits . . . It cost a substantial amount of money. (Biotech Entrepreneur 1) One of the most spectacular discoveries in this ‘gene-hunting’ race was when PharmaCorp found an association gene for Crohn’s disease, a
discovery that was reported in Nature. However, this gene was probably a ‘non-druggable target’, the computational chemist suggested, and, after a significant period of discussions on what to do regarding this discovery, the entire project ‘died down’. The CEO of Biotech company 5 argued that the diseases being dealt with are still not understood in detail. Metabolic diseases such as type 2 diabetes or autoimmune diseases such as rheumatism may, in fact, be broad terms including a variety of sub-categories: Many illnesses are so big . . . We don’t fully understand diabetes or multiple sclerosis and there are targets that have never been examined because we do not understand how they are regulated . . . Possibly, these illnesses are divided into a variety of sub-groups. We might find a mechanism that is relevant for 1 per cent or 0.1 per cent of the patient category. (CEO, Biotech Company 5) The new technoscientific approaches such as gene expression analysis and other ‘systemic investigations’ may play a key role in making more detailed clinical analyses and diagnoses of individual patients, the CEO claimed: There will be better diagnosis methods. That is a major concern: we cannot really diagnose properly. What I hear when I participate in medical congresses and such events is that the GPs bring in their patients and make the diagnosis and then they have this list of drugs that they test, one after the other, to sort out what works. (CEO, Biotech Company 5) These new diagnostic opportunities, then, lead the way forward to personalized medicine as more is learned about both individual patients and patient categories. For instance, in the field of autoimmune diseases and rheumatism, there are antibody-based therapies available and these therapies produce good clinical results in about 50 per cent of patients.
Apparently, the other half of patients represent a genotype that responds better to another therapy, and being able to map the differences between the two categories of patients naturally leads the way forward to personalized therapies. Therefore, the CEO was hopeful regarding the possibilities for providing new therapies in the future: ‘I believe we’ll be able to provide better therapies for the patients needing them, that we’ll find more adequate therapies. That is a major contribution to society’ (CEO, Biotech Company 5).
The decline of the target-centric small-molecules therapy model? The synthesis chemist in Biotech company 3 was quite sure that the pharmaceutical industry had ‘reached the end of the road’, being on the verge of realizing that things no longer work as anticipated and that there was a need for a change. In addition to the general problem of being too large and increasingly populated by ‘super-experts’ in narrow fields, preventing an overview of the activities, big pharma had been seduced too much by the target-centric approach to new drug development. This target-centric approach was based on a belief in sifting out qualified molecules from large initial numbers: You usually invoke this kind of ‘funnel-scenario’, where you start with a million substances that you screen and then you have the screening criteria and eventually in the end you have like one or two substances. When you are in the in vivo stage you may have, like, ten to 15 left. All the rest have been sorted out. We do not work with these quantities at all but we may work with 50 to 100 substances within a programme, but all of them are characterized in vivo. We bring all of the information until we make the final selection. But that selection is made on a very firm foundation. (Synthesis Chemist, Biotech Company 3) The contradictory fact that the swift advancement of the life sciences emerged at the same time as big pharma struggled to maintain the output of new therapies did not imply, however, that the pharmaceutical industry had lost its competence – quite the contrary:
They have made up their minds from the very beginning what it looks like, without having the slightest clue. They don’t know anything about what is going on in the brain or the stomach or wherever they work . . . This reductionist thinking [is] like cherry-picking – you select something you believe is the cause of the illness . . . But who says there’s a rationale for that decision? (Synthesis Chemist, Biotech Company 3) What is missing in the major pharmaceutical companies is, then, a proper and sound understanding of pharmacology, avoiding reductionist and simplistic target-centric models. The synthesis chemist thought
this was a matter of flawed managerial influence, both emphasizing metrics and large-scale volumes and, perhaps more alarmingly, failing to recognize the influence of serendipities in scientific thinking: In the first place, I think it is management that have goofed up here . . . They have sailed the ship in the wrong direction. The second thing is, where is the innovation capacity? Innovation capacities lie in the ability to make discoveries. And then you have this whole thing about serendipities; that is not what you work towards, but the fact is, in the end, that is what remains. All great innovative drugs are outcomes from serendipities. There is no human who contrives to work this way, but they have made discoveries in good animal models and then they have tested in all sorts of ways and then they have discovered something no one had thought of. (Synthesis Chemist, Biotech Company 3) This lack of understanding of, or patience with, the scientific work leads to a gross underestimation of the time needed for accomplishing something worthwhile in the life sciences: You get, like, six months to evaluate a project. C’mon! It may take like ten years to learn how the relationships are constructed in the small area you are working in. Signal substances back and forth . . . to get a feeling for the molecules, what they do and do not do. But all that is eliminated and the target is selected and they say ‘this is the problem: screen and look for molecules with good potentiality and selectivity’ and then you use animal models and get a few results and move on. (Synthesis Chemist, Biotech Company 3) This short-term perspective and impatience regarding the time needed to develop an understanding of both the biological system and the illness have, the synthesis chemist argued, undermined the possibilities for producing new and innovative therapies. The stem cell researchers in Biotech company 4 also anticipated some changes coming in the pharmaceutical industry.
One of the researchers spoke of a ‘plateau phase’ preceding the full exploitation of all the new technoscientific advancement in the life sciences, potentially leading to new therapies in coming decades: What we have seen here is a kind of dip, or a plateau, in the development of the pharmaceutical companies. I think it all depends on
The Craft of Research in Biotech Companies 185
the fact that they were, for too long, too anxious to maintain the belief in the small molecules. If you look back 15–20 years, what PharmaCorp and all the other companies were producing on a large-scale basis were still this kind of ‘easy target’; all the pharmaceutical companies ‘had a beta blocker’, all ‘had a statin’. All these easy targets are now gone and they stuck to them all too long. What we can observe now, it’s a dip or a plateau. All the large pharmaceutical companies are developing these kinds of things: antibody therapies, stem cell therapies, regenerative medicine . . . it does not pay off right now, but perhaps in ten years’ time. (Research Director, Biotech Company 4) At the same time as the target-centric small-molecules product development model was facing some challenges due to fewer ‘easy targets’ and stricter regulations, the researcher at Biotech company 4 emphasized that he and his colleagues ‘don’t make the claim that one should abandon traditional pharmaceutical drug development’. Instead, they wanted the major pharmaceutical companies to recognize their efforts to turn stem cells into therapies and not just conceive of them as in vitro tools in small-molecules research: ‘We would also like to establish the view that you can think of stem cells as being more than mere tools; you can think of them as targets for these small molecules or as a therapy in their own right. Then the organs – the heart, the brain – [become] targets.’ In addition, even though there were a lot of discussions in the industry about ‘personalized medicine’ and other more differentiated therapies, the research director at Biotech company 4 did not think the blockbuster model was yet outmoded. In the field of antibodies and biologics, there will still be blockbuster therapies, the research director argued. In contrast to Biotech company 4, working with stem cell therapies, Biotech company 5 was working in the field of small-molecules therapies.
First, the CEO of Biotech company 5 said the tendency to work with antibodies and biologics – that is, larger molecules – was a bit of a trend for the time being that might change in the course of time. In addition, this kind of research work is capital intensive and trying to compete with the major pharmaceutical companies in the field of antibodies and biologics was deemed to ‘totally lack future prospects’ (CEO, Biotech Company 5): ‘We don’t work with . . . antibodies and proteins. That is because of the competition. The big corporations do that. It demands another kind of competence and very expensive technologies . . . A small company like us, we don’t feel we’re able to compete there’
(CEO, Biotech Company 5). Being a small and innovative company, having a detailed understanding of one disease area (autoimmune diseases in the case of Biotech Company 5), the company was competing on viable research ideas that could be outlicensed to the major firms. That is, rather than operating on the basis of sheer force of major capital investment, Biotech Company 5 competed by other means: A large corporation may work with targets that are known because they have another weight in their processes. Ask someone working in Sanofi or Roche what targets they are working on and they will tell you that ‘we work on the same targets as AstraZeneca or Pfizer are working on’. There is a flock mentality here. (CEO, Biotech Company 5) The most recent trend in the industry is that the major pharmaceutical firms are cutting down on pre-clinical research and licensing new chemical entities that are then further developed and brought into clinical research. Biotech company 5 thus hopes to serve as a provider of such new chemical entities: The tendency today is for the large corporations to abandon their pre-clinical research, and that is because they fail to generate new targets and new inventions. That is how we think about it . . . the large corporations are not the right milieu for creating new inventions but the smaller companies are . . . They control enormous resources but they have killed the innovation culture to some extent. (CEO, Biotech Company 5) He continued: We’re developing a new candidate drug, that is we work pre-clinically and we will never study the drug clinically in man . . . Our intention is to produce something new, that is patented and demonstrates a clinical relevance in animal models . . . making a clinical study in man, in patients, justifiable.
(CEO, Biotech Company 5) Regarding the ‘end of easy targets’ narrative being given by some of the biotech company representatives, the CEO of Biotech company 5 thought it was quite natural that the major pharmaceutical companies ‘explored the easy targets first,’ but also added that what are regarded as proper target candidates is largely a matter of what disease model
has been enacted in the scientific community, and, since some diseases or disorders are not very well understood, there may be a variety of targets that are not yet known, or considered interesting targets. Using the example of their own scientific findings regarding the connections between the number of free oxygen radicals in the immune system and autoimmune diseases, the CEO of Biotech company 5 underlined the need for clinical data and integrated and approved disease models when guiding the research: We discovered that free oxygen radicals have an effect on this kind of illness but what was curious and unique with our discovery was that it was an advantage if one could increase the number of free oxygen radicals in the immune defence system. This was intuitively against common belief [a few years ago] . . . if you think about this hype about antioxidants . . . We’re [now] pleased to learn that there is really no scientific evidence supporting the thesis that antioxidants have a positive health effect; on the contrary, there are data indicating a negative effect. (CEO, Biotech Company 5) He continued: The degree of autoimmune illnesses increased quite substantially [in the laboratory study], both among animals and humans who have a lower capacity for producing free oxygen radicals. If we could improve that capacity then we might be able to provide therapies for autoimmune illnesses and that is our operative hypothesis. (CEO, Biotech Company 5) As a consequence, the small molecule–target model developed at Biotech company 5 was based on a research finding that was somewhat at odds with widespread belief at the time. This kind of new contribution, the CEO claimed, would be able to identify a new set of potentially ‘druggable’ targets. In other words, the small molecule–target model may still have a future in some disease areas and some new drug development projects.
Renewal of big pharma In general, the pharmaceutical and biotechnology industry has to grapple with a series of challenges for the future. The senior scientist at Biotech company 3 expected to see a rediscovery of more traditional approaches to new drug development, relying on in vivo wet lab research and the
construction of integrated and empirically validated disease models: ‘They will be forced to take a step back because they have eliminated very fundamental functions. But they will have learned quite a bit from all these basic research methods being developed but they need to build this basic model [of diseases] . . . Think more in the classic manner, to get relevant disease models.’ Another important lesson for the future was to maintain a clear idea of how to work and to avoid falling prey to the predominant fashions in the industry. This called for a more long-term commitment to the development of new therapies that could resist the seduction of quick fixes and the cutting of corners: The pharmaceutical industry has been duped by academics, who have been all too eager to sell research findings and acquire funding for their work, and the touring management consultants, who say ‘We’re in a crisis situation, we need to do something radical right away’ – and then there is a 180 degree turn. You have this pendulum effect. For a while, it was a general belief that the genome would be the solution because it was promoted in such terms; then they abandoned the entire fundamental, the important [work]. Now, you notice that it is too complex to be reduced to one single target, isolated, because you miss too much information, and therefore they promote the opposite, that we need to do it so much because it is more complicated. Then there are quite a few things not related to the elementary factors, like measuring [relevant parameters] . . . that are promoted, like the modelling of complex systems, and such things. (Senior Scientist, Biotech Company 3) ‘You really need to follow your own path; there is too much following what competitors in the industry are doing,’ the senior scientist concluded.
The computational chemist pointed to the broader socioeconomic context of the industry and emphasized the ‘quarter economy’, the ‘financialization’ of the economy embedded in neo-liberal economic doctrines that increasingly set the boundaries for what kinds of commitments and long-term strategies could be made: An important factor is the quarter economy. Politicians have got rather ‘hung up’ on it too. Those who actually are in charge, the capital, that is, are driven by the short-term quarter economy, with bonuses and all that analysis. That goes for politicians too. They are in the hands of capital and they want to be re-elected. (Computational Chemist, Consultant)
As has been suggested quite straightforwardly, new drug development is very time-consuming and detailed work, unravelling intricate and emergent properties of biological systems, seeking both to understand the etiology of diseases and how NCEs interact with targets and biological pathways; this research relates poorly to economic doctrines demanding results almost instantly. While the profits of the pharmaceutical industry are consistently above average, profits and cash flow tend to be substantial only after equally vast economic resources have been invested. The ebb and flow of income thus operates at a higher magnitude than in other industries. Herein lies a pedagogical challenge for executives in the pharmaceutical industry – to educate and inform financial actors regarding the progress of the work in order to shelter the core activities in drug discovery from undesirable external influences. There were also some promising insights into what could be achieved that the interviewees addressed. For instance, the concept of ‘personalized medicine’, the idea that therapies could be developed for a specific group of patients sharing a genotype or phenotype (e.g., Asians, women), was regarded as one of the opportunities for revising the industry: ‘Personalized medicine is something I am interested in . . . Here a whole new set of challenges for the industry is emerging . . . What industry is capable of producing these [individualized therapies] on the basis of commercial interests? None has even been thinking about that,’ the computational chemist said. The CEO of Biotech company 3 thought that biologics (molecules with higher weight, such as vaccines) would play a much more important role in the future as the ‘small-molecules doctrine’ dominating in major pharmaceutical companies increasingly becomes a subject for debate.
In addition, new multivariate analysis models, capable of apprehending many but individually weak influences on a biological system, would increasingly be used in new drug development activities. Finally, the very organization of the life science industry, today characterized by a number of large multinational pharmaceutical companies and a more fragmented and diverse biotechnology industry consisting of highly specialized companies, was an issue to address. Many of the biotech company representatives were impressed by the scope and depth of the know-how of major pharmaceutical companies but also thought of these organizations as being too large and too bureaucratic to fully accommodate and exploit the creative potential of their intellectual capital. That is, the sheer size of these companies was addressed as an issue. ‘If they [pharmaceutical companies] are capable of organizing the company in a clever manner [large organizations may
survive]. What makes the difference is not whether the company per se is large or small but if the work can be organized in a smart way,’ the research director at Biotech company 4 argued. Biotech company 4 was itself highly entangled in a web of national and international organizations and funding bodies in order to get access to both scientific and financial resources to help the company to advance its research: ‘We are really performing well in terms of collaborations. We have participated and participate in nine EU projects, giving us access to about 150 academic partners [universities],’ the research director argued. The incubator director at the university hospital science park also spoke about a new attitude towards commercializing scientific results among the younger faculty at the medical school: ‘There’s been a remarkable change during the last ten years, especially regarding the acceptance of these issues [commercialization of scientific results],’ he said. ‘There are more people coming here [to the science park] . . . it is easier to get invitations from departments and boards of directors . . . The university and the hospital are allocating more resources to inspire their staff to commercialize [scientific discoveries].’ Emphasizing especially the growth of the biomaterials sector, the incubator director claimed that ‘the sense of doing pioneering work is substantial in the biomaterials sector’. Contrary to the large-scale pharmaceutical industry, relying on its conventional models, the biomaterials sector was, the incubator director argued, quickly developing a variety of new products and new scientific procedures: ‘I believe more in a technological shift [towards biomaterials and cell therapies] than the scenario where the pharmaceutical industry, with their small molecules, learns to act differently.
That scenario is not very likely.’ This biomaterials sector of the bioeconomy was based on both small, emerging companies and the more traditional biomaterials companies. The incubator director recommended a closer analysis of this category of biotechnology companies. In summary, for the biotech company representatives, the future of the life sciences was very likely to evolve into this network-based organization form, where know-how flows across organizational boundaries and where collaborations are widely used. That is, the future of the life sciences lies in ‘the markets’ rather than ‘the hierarchies’.
Summary and conclusion In this chapter, the possibilities and limitations of new drug development activities have been examined from the perspective of biotechnology firm representatives. The interviewees were in many ways
critical of the major pharmaceutical companies’ strong emphasis on new drug development models centred on small molecules, single targets and a high degree of affinity to accomplish an adequate degree of efficacy. Rather than thinking in such linear and target-centred terms, the interviewees argued that one must think in new and innovative terms. In some therapeutic areas – for instance, hypertension – the ‘one molecule–one target–one disease’ model has proved its merit, and today cardiovascular medicine is hugely successful in terms of enhancing longevity and quality of life among patient groups with hypertension. In other fields, such as the central nervous system, where the complexity of the biological system is much greater and where properties of emergence are more significant, medical conditions such as neurodegenerative diseases – for example, Parkinson’s disease or Huntington’s disease – are not easily explored or understood on the basis of such models. Instead, the respondents claimed, there is a need to shift the focus from the traditional models to more sophisticated multivariate models. However, making this move does not imply proceeding from a traditional in vivo-based research framework to one based on biocomputation; rather, the research models must effectively combine in vivo research with analytical procedures such as systems biology, potentially being able to construct viable operative models of how biological systems and pathways function.
5 Exploring Life in the University Setting
Introduction Universities, the outgrowth from the monasteries in towns such as Padua, Bologna, Oxford and Cambridge, were the original sites where systematic knowledge was accumulated and where the ideology of treating knowledge as having a value per se, notwithstanding its practical utility, was first enacted. Starting as theological pursuits accompanied by agricultural interests (to this day, monasteries in, e.g., Belgium brew some of the most sought-after beers in the world), monasteries and the emerging universities were domains protected from everyday work, what the Greeks referred to as skholē, ‘the absence of work’, being the root of a variety of concepts including schools and scholars (see below). Practical utility was not the starting point for the universities, but scholarly debates regarding theological matters were held in esteem. Even until the end of the eighteenth century, ‘natural philosophers’ in the sciences, by now increasingly technical and engaging with a variety of non-theological matters, entertained the upper classes with physical or chemical experiments. This may sound like a frivolous view of the sciences but it was not until the last decades of the eighteenth century, Porter (2009: 299–300) argues, that the sciences were conceived of as a social resource in the organization of society. Early proponents of the sciences, such as the French encyclopaedists and Auguste Comte in the first half of the nineteenth century, pointed to the sciences as a means to transcend traditional modes of thinking and ancient beliefs. The sciences thus became a tool in modernization and, to use Max Weber’s term, the ‘rationalization’ of society. This was also the period when the social sciences such as economics (referred to as political economy in the period) and sociology were instituted as scientific disciplines. Medicine and biology were two more recent scientific disciplines, developed in the
nineteenth century, both propelled by the advancement of chemistry as the leading scientific discipline of the century. In the modern period, beginning in the late eighteenth century (the French Revolution is a common denominator for such periodization), the university has served the role of being the principal site for knowledge production, in the ideal case detached from political, religious and social dogmas and beliefs. The university as social institution has produced a great number of research findings that eventually have been commercialized or served as the basis for further advancement. While the university has more or less maintained its status as a credible producer of systematic knowledge in a period of secularization and a re-evaluation of old institutions, there is a lingering sense among policy-makers, industry representatives and scientists themselves that there are intellectual resources at universities not fully exploited in terms of their commercial potential. Therefore, over the last 20 years, a period of political liberalism, university researchers have increasingly been asked to justify their work by connecting research to practical interests and commercial concerns. While many good things have come out of such endeavours, there is still a mismatch between the shorter time perspective of the world of commerce and the long-term perspective of scientific work. As a consequence, university researchers are constantly asked what their research findings may lead to in terms of practical utility despite the fact that it is relatively complicated to predict what basic research may lead to. One may guess, for example, that it would have been almost impossible for the Swedish chemist Jöns Jakob Berzelius to predict what consequences his discovery of silicon, in the early nineteenth century, would have for society by the end of the twentieth century. 
This emphasis on short-term utility arguably helps university researchers communicate the perceived significance of their research, but there is always the risk that today’s perceived problems blinker researchers and inhibit them from seeing other research interests. This chapter presents a study of researchers working in two research universities engaging in systems biology research. The study seeks to examine both the opportunities and the challenges for the emerging research paradigm. Similar to the two previous chapters, the study aims to present the research findings in the categories discussed in the first three chapters of the book.
Innovation work in the university setting One tendency in the innovation literature is to examine university–industry collaborations in great detail. Universities have traditionally been
focused on basic research and educating and training students for a working career in industry and only occasionally have leading researchers been used systematically in industry ventures. However, since at least the 1970s and the emergence of a biotechnology industry – having its starting point in the San Francisco Bay area at Stanford and UCSF – there has been interest from policy-makers, academic researchers and industry representatives in ‘tapping the resources’ of university departments and exploiting the opportunities from a closer collaboration between university and industry. Seen from an historical perspective, the post-World War II growth of tertiary education is unprecedented. In 1900, there were about three tertiary education students per 10,000 worldwide. ‘By 1950, this number had increased eight-fold to 25. By 2000, it had increased another sixfold to 166,’ Frank and Meyer (2007: 289) report. Today, the distinction value of a university education is relatively lower than in 1900 or even 1950. Living in the ‘knowledge economy’, one must hold formal education credentials if one is to compete effectively for attractive job offers. The university is, Frank and Meyer (ibid.: 294) argue, ‘poorly equipped’ to teach people how to do their future work in detail, but it is ‘well poised to teach people how specific features of nature and society relate to ultimately encompassing truths’. More specifically, what university education can provide is an understanding of what Frank and Meyer call ‘meta-principles’ and ‘general abstractions’ that are helpful when structuring the social reality into intelligible components, mechanisms and processes. Passing through a university education is thus both a form of socialization and a preparation for a future career in society, providing a set of analytical capabilities and some awareness of factual conditions on specific matters.
Students of collaborations across the university–industry boundary report a substantial growth in joint activities. Croissant and Smith-Doerr (2008: 692) show that ‘industry funding as a percentage of the total support for university research was 2.6 percent in 1970, 3.9 percent in 1980, and 7.1 percent in 1994’. During the period 1976–98, there was an eightfold increase in university patents (Powell and Snellman, 2004: 204). Universities are, in other words, starting to compete with industry in the field of innovation. Some commentators have referred to this growth in commercialization activities as ‘academic capitalism’: ‘[A]cademic capitalism refers both to direct market activity, which seeks for profit (patents, licenses, spin-off firms, etc.) and to market-like behavior, which entails competition of external funding without the intention to make profit (grants, research contracts, donations, etc.),’ Ylijoki (2005: 557)
writes. However, the establishment of a regime of ‘academic capitalism’ does not come free or without effort. A growing body of research is examining and exploring the mechanisms and practices underlying the growth in collaborations between university and industry and the growth in patents, especially after the passing of the Bayh–Dole Act in the USA, enabling universities to patent their scientific output (Rafferty, 2008; Mowery and Ziedonis, 2002), a change in legislation having significant implications for the life sciences and the biopharmaceutical industry (Zucker et al., 2002; Smith Hughes, 2001). In their substantial review of the literature (in a journal paper stretching over more than 100 pages), Rothaermel et al. (2007) suggest that the field of ‘university entrepreneurship’ has, somewhat surprisingly, been relatively ignored in the management literature: ‘[U]niversity entrepreneurship has to date been more the domain of public policy researchers rather than management scholars. It is only fairly recently that the phenomenon of university entrepreneurship has gained attention among more traditional entrepreneurship and strategy researchers’ (ibid.: 700). In general, Rothaermel et al. think that research on university entrepreneurship is a burgeoning field but that it remains ‘fragmented’. For instance, there are a number of rather heterogeneous tendencies and practices including ‘technology transfer, university licencing, science parks, incubators, university spin-offs, TTOs [Technology Transfer Offices], etc.’ that are examined as if they were basically the same thing. In summary, Rothaermel et al.’s review of the literature portrays a field of research in progress, not yet capable of sorting out a number of events and occurrences into a coherent theory of university–industry collaboration.
Etzkowitz (1998) has, in a number of publications, advocated the concept of ‘the entrepreneurial university’ – a term also used by Vestergaard (2007) – to denote a variety of activities and practices that aim to exploit the stock of knowledge developed at the university in industry collaborations. Etzkowitz explains the term: The entrepreneurial university integrates economic development into the university as an academic function along with teaching and research. It is the ‘capitalisation of knowledge’ that is the heart of the new mission for the university, linking universities to users of knowledge more tightly and establishing the university as an economic actor in its own right. (Etzkowitz, 1998: 833) While most people would agree with the overarching objective to ‘capitalize knowledge’ and get a better leverage on money invested in
basic and applied research (a conventional dichotomization making less and less sense), there are, in fact, a number of academic virtues and traditions that are at stake when changing the centuries-long tradition of the role of the universities. A significant amount of research has addressed these practical, ideological and – not to forget – juridical changes. Research has advanced terms such as ‘academic entrepreneurs’ (Bercovitz and Feldman, 2008; Louis et al., 1989), ‘ambidextrous professors’ (Markides, 2007), or ‘academic inventors’ (Murray, 2004) to denote the enterprising qualities in what has been called ‘academic entrepreneurship’ (Nerkar and Shane, 2007; Siegel et al., 2007). Academic entrepreneurship has been studied in the form of ‘university research centres’, collaborations between university and industry (Boardman and Corley, 2008; Boardman and Ponomariov, 2007; Youtie et al., 2006; Bozeman and Boardman, 2004; Styhre and Lind, 2010), or on the level of ‘research groups’ (Etzkowitz, 2003). Lam (2007) also draws on the notion of entrepreneurialism when suggesting the ‘entrepreneurial professor’ as a key term when transforming the university into ‘an economic actor in its own right’. Lam suggests that a network organization form is what potentially gears up the innovation process in specific industries. The rationale for this argument is the idea that large organizations are comparatively less well equipped for innovation work than smaller organizations: Many organizations have to balance the need for retaining a strong in-house competence in their core business area, while at the same time, building external networks in order to remain open to new knowledge and scientific developments. In the field of innovation, and research and development (R&D), large organizations offering well-established firm-based careers embedded in strong labor markets find themselves at disadvantage.
(Ibid.: 994) Being able to create innovation networks with research universities as central nodes thus has great potential when it comes to innovation. Lam (ibid.) still sees significant hurdles that need to be overcome when establishing the entrepreneurial university – a theme also addressed by Anderson (2008) and Bartunek (2007): ‘Industry–university collaboration has long been shown to be problematic because of the difficulties in reconciling the divergent work norms and reward structures governing the two different knowledge production systems,’ Lam (2007: 997) argues. In what Gibbons et al. (1994) call ‘Mode 2 research’ (Hessels and van Lente, 2008; Harvey et al., 2002; MacLean et al., 2002), a regime
of knowledge production that transcends the traditional distinction between basic and applied research and that takes place in the intersection between universities, industry and governmental organizations such as research institutes, the entrepreneurial professor serves a central role as the relay between different research traditions (Lam, 2007: 1006). Lam explains: [B]oth cognitively and organizationally, these entrepreneurial professors play a critical role in bridging the interface between science and business. They contribute not only their deep scientific expertise to industrial project, but more critically, their brokering role enables the firms to embed themselves within the wider scientific networks, including their local laboratory networks of researchers and doctoral students. (Ibid.: 1007) Even though Lam declares great hopes for the entrepreneurial professor to enter the historical stage and sort out all the inconsistencies between different traditions of research and modes of knowledge production – a sort of deus ex machina figure similar to that of the entrepreneurs in what Armstrong (2001: 534) calls the ‘enterprise ideology’ – one must neither underrate the strength of historical traditions, nor dismiss the accomplishments of the present system regarding knowledge production. First, several students of university–industry collaborations underline the inherent conflict of interests between the traditional academic reward system, focusing on and privileging peer-reviewed publications, and what Siegel et al. (2007: 497) call ‘the technology transfer reward system’, emphasizing ‘revenue generation from applied research’. To be capable of producing entrepreneurial professors on a large scale, there is a need for a reformulation of both university objectives and the reward system, putting less emphasis on academic publications and more on industrial contributions.
Murray (2002) here emphasizes the ‘co-evolution’ between university and industry and the need to move beyond mere ‘co-publications’ and ‘cross-citations’ and to establish more substantial collaborations: [W]e ought to conceptualize the overlap of the networks as complex, multi-faceted and active – we might think of co-publication and cross-citation as the tip of the iceberg. Co-evolution most likely arises through a rich set of mechanisms that have only just started to be uncovered. In particular, spillovers seem to arise through active participation of academic scientists in co-evolution and technical
198
Venturing into the Bioeconomy
progress rather than positive externalities from a passive scientific community to the commercial setting. (Murray, 2002: 1401) In a study of the American academic life sciences, Stuart and Ding (2006) found that scientists whose colleagues – especially highly regarded colleagues – engaged in commercialization activities (operationalized as starting a company or serving on the board of one or more biotech or pharmaceutical companies) were more likely to engage in such activities themselves. Stuart and Ding emphasize that this change in attitude emerged only slowly and gradually over time, and that today there is a broader scope of legitimate actions permitted within the scientific community: Although academic scientists’ early efforts to commercialize university discoveries were met with consternation in the scientific community, the eventual diffusion of commercial activity in academe has, in the minds of many, broadened the acceptable role of the university scientist to incorporate taking part in for-profit science. (Ibid.: 98) The key to these changes is the relationships within the scientific community, directing ‘[t]he flow of everything from task-relevant information to advice, gossip, opinions, and referrals’ (ibid.: 99). Therefore, given the strong normative institutional framework of the sciences (Merton, 1973), the leading role of prestigious scientists taking the first step towards becoming entrepreneurs must not be underrated: Conceptions of appropriate role behavior appear to have been endogenously related to the decisions of highly distinguished scientists to become entrepreneurs . . . Scientists were more likely to become entrepreneurs when they worked in departments where colleagues had previously made the transition, particularly when the individuals who had become commercialists were prestigious scientists.
(Stuart and Ding, 2006: 99–100) Academic entrepreneurship has ‘always been the province of distinguished scientists’, Stuart and Ding (ibid.: 100) argue, but the gap between those who participate in entrepreneurial activities and those who do not has diminished over time and academic entrepreneurs have gained increased social acceptance. Since it is primarily the leading scientists in their fields who engage in entrepreneurial activities, it is the
usual top-tier universities such as Harvard, Yale, MIT and Stanford that produce the most academic entrepreneurs. These universities are, thus, role models for other universities around the world. Following what complexity theorists call ‘positive feedback’ – initial, successful entrepreneurial initiatives produce further successful action – the study of Stuart and Ding (ibid.) is consonant with the findings of Sørensen (2007) in emphasizing the role of local culture and the influence of colleagues. Sørensen (ibid.) found that individuals working in small and innovative firms were more likely to become entrepreneurs than individuals working in large bureaucratic organizations. Behaviours are therefore institutionalized on the level of the organization (Leicht and Fennell, 2008). Renewal of the university system In the general ambition to further exploit the know-how and competence residing in the university system, there is a tendency – as usual, the cynic may say – to regard the university system as an antiquated remnant of medieval times, where scholars contemplated otherworldly matters – the concept of scholar is derived from the Greek skholē, ‘leisure, distance from economic activity, and practical urgency’ (Bourdieu and Wacquant, 1992: 89) – very much sheltered from everyday life concerns. Instead, the modern university, with roots in the scientific revolution of the seventeenth century and developed from at least the middle of the nineteenth century, is a rather effective solution for balancing personal incentives and making know-how available to the broader public in an efficient and reasonably controlled manner. Washburn (2005), for instance, is one writer taking such a stance in favour of the university system: Today it is fashionable to criticize academic culture for its inefficiency and failure to move ideas more rapidly from the laboratory to the marketplace.
What’s forgotten is how effective this same culture has been in furnishing society with valuable public goods that markets do a poor job of producing on their own: a reliable and ever-expanding body of scientific and technological knowledge; a well-trained cadre of students and workers; a richly endowed public information commons; and an educated citizenry. Historically, the vitality of this academic research culture has always stemmed from its non-market reward structure, a system predicated not on money, but on ‘priority of discovery,’ where professors are continuously racing against one another to be the first to unearth and publish
new inventions and theories that advance the state of knowledge in their field of expertise . . . Academic investigators have traditionally enjoyed a high degree of intellectual freedom and autonomy – and also collegiality, because academic publication requires them to disclose what they are working on publicly, including their raw data and methodologies. In industry, by contrast, results are often closely guarded and judged by narrow, short-term commercial and production criteria. Industry scientists also tend to conduct their work in a far more regulated, hierarchical environment. (Ibid.: 195) Washburn (ibid.: 196) concludes her argument: ‘The emergence of a utilitarian, market-model university, combined with a loud drumbeat calling on schools to spur national and regional economic growth, now threatens to obliterate the distinctiveness of this academic research culture.’ In addition, studies of national university systems such as Sweden’s suggest that the idea that university professors engage in overly theoretical endeavours devoid of practical implications or utility is, by and large, a myth or a misconception (Granberg and Jacobsson, 2006). Instead, in the case of Sweden, university professors serve a very clear role as producers of academic research in domains outlined by democratic institutions and organizations. Only a relatively marginal amount of research is grounded in personal theoretical interests and concerns. The student of university–industry collaborations must, therefore, both understand the strengths of the present system and examine in more detail the potential for change. So what is at stake if we leave the centuries-long tradition of university-based knowledge production and verification – that is, the disclosure of knowledge-claims to peers, fellow scientists?
Fishman (2004) suggests on the basis of her study of the biomedicalization of ‘dysfunctional’ female sexual desire that, unless there are clear routines and standards for producing knowledge-claims, scientific claims may easily be compromised by commercial interests; that is, rather than serving the role of advancing collectively enacted forms of knowledge, grounded in scientific routines, university research may be mixed up with individual researchers’ entrepreneurial ambitions that are not, in the first place, concerned with scientific truth but with revenues and market opportunities. Fishman sketches the new situation: [T]he decline of medical exceptionalism, the privatization of medical care and scientific research, and the importance of consumerism in US culture have led to a greater acceptance of researchers’ entrepreneurial pursuits. Because of the continued reverence and idealization
of medical ethics as existing outside of capitalist commodification, a relic of the golden age of medicine, the changing practices of biomedical researchers and their arrangements with industry have created spaces that remain unregulated and without oversight. (Ibid.: 188) Such lack of ‘regulation and oversight’ may not threaten academic virtues in the short term, but the force of commodification in a capitalist society unrestrained by ethical standards and regulation is very likely to hollow out scientific research. However, notwithstanding the strengths and weaknesses of the present university system and whatever role it may play in leveraging innovation work in industry, the tendency for innovations to be increasingly produced in collaborative efforts – in joint ventures, research institutes, research projects and so forth – is interesting. In the emerging bioeconomy, such collaborations are likely to be further accentuated. The biotechnology and pharmaceutical industries have a long-standing tradition of university–industry collaborations and little suggests that this tradition will be overturned in the coming decades. On the contrary, an even more intimate relationship between academic researchers and industry representatives may be expected.
The field of research: systems biology All the researchers appearing in this chapter are loosely coupled to a specific technoscientific field of research referred to as systems biology or, in the case of medicine, systems medicine. For the sake of simplicity, the term ‘systems biology’ will be used throughout the chapter. Systems biology is a broad term, including many practices and technologies, but basically representing a change in perspective from the individual components of a biological system to a more integrated and holistic view of the system as a whole. For Fujimura (2005), systems biology is a buzzword, just like the previous terms ‘biotechnology’ and ‘genomics’. This rather decisive change in perspective is widely accounted for in the literature. For instance, Nightingale and Mahdi (2006) speak of the transition from ‘wet biology’ to ‘in silico biology’: While biologists in the late 1980s may have focused primarily on empirical ‘wet’ biology, today they may spend much of their time in front of computers engaging in more theoretical in silico science, experimental design, quality control, or trawling through large data sets. The process has made pharmaceutical R&D more theoretical,
more interdisciplinary, and has meant that medicinal biologists may now find themselves working with clinicians in hospitals. (Ibid.: 74) Systems biology is an approach to biology in which the interrelationships and interactions between biological components and processes are emphasized: Systems biology is an emerging field of biological research that aims at a systems-level understanding of genetic or metabolic pathways by investigating interrelationships (organization or structure) and interactions (dynamics or behaviours) of genes, proteins, and metabolites. (Wolkenhauer, 2001: 258, emphasis in original) Wolkenhauer explicates this position: Crossing several scale-layers from molecules to organisms, we find that organisms, cells, genes, and proteins are defined as complex structures of interdependent and subordinate components whose relationships and properties are largely determined by their function in the whole. This definition coincides with the more general definition of systems as a set of components or objects and relations among them. (Ibid., emphasis in original) In the Science 2020 report, published by the Microsoft Research Institute in Cambridge, UK, systems biology is defined accordingly: A multi-disciplinary field that seeks to integrate different levels of information to understand how biological systems function. Systems biology investigates the relationships and the interactions between various parts of a biological system (e.g., metabolic pathways, cells, organs).
Much of current systems biology is concerned with the modelling of the ‘omics’ to generate understandable and testable models of whole biological systems, especially cells. (Science 2020, 2006: 82) It is, thus, important to keep in mind that there is no conflict between using the various omics technologies and systems biology; systems biology is essentially built on the advancement of omics technologies but, in Pisano’s (2006: 36) formulation, ‘[s]hifts the focus of analysis from the level of component (the individual gene or specific protein) to the level of the biological system (e.g., the cell, the signal pathway across cells) and is deeply concerned with the dynamics of such systems’.
Fujimura (2005) introduces systems biology as a post-genomic, non-reductionist and systemic approach to understanding life: In contrast to reductionist genetics, one could argue that systems biology is attempting to model biological complexities as organized systems in order to understand them. Systems biology seeks to explain how organisms function by using information on DNA, RNA, and proteins to develop systematic models of biological activities. It wants to connect networks, pathways, parts, and environments into functional processes and systems. The focus is on functioning organisms and less on environments, but systems biology does attempt to incorporate environments into its models. (Ibid.: 198) Fujimura points out four differences between systems biology and previous analytical approaches. First, today, large data sets are produced that need to be examined in a systematic manner. Second, the new approach focuses on ‘the region between the individual components and the system’ (ibid.: 205) – that is, not the system per se but how the system influences individual biological mechanisms. Third, new computational tools are used in the analysis. Fourth, there is a ‘potential to simulate the impacts of different environments on organizational systems’ (ibid.: 206) – that is, the study of how certain factors may affect the biological organism on a systemic level. Fujimura (ibid.: 219–20) claims that systems biology is influenced by a diverse set of fields, including artificial intelligence, robotics, computer science, mathematics, control theory and chemical engineering, and raises the questions ‘What is lost in the translation?’ between the mechanical system and the biological system, ‘What biological processes and mechanisms are not effectively modelled on and understood on the basis of mechanical systems?’ and ‘Are mechanical systems simply biological systems writ large?’ For Fujimura (ibid.), there are substantial challenges for the systems biology framework.
In vivo and in silico biology: concerns and shifts in perspective Torgersen’s (2009) study of the relationship between the old regime of biology and systems biology in Austria shows that there are some perceived major differences between the two perspectives. Torgersen is inclined to portray systems biology as a disruptive technology or analytical framework, in many ways breaking with the past: Systems biology intruded into a field previously dominated by qualitative experiments and introduced a new understanding not only
pertaining to the way knowledge is acquired but also to how uncertainty is treated. Although systems biology is no longer new, it still seems to be underestimated with respect to its potential disruptive power with regard to the conceptual understanding of modern biology. (Ibid.: 78) Analytically and conceptually, the new in silico analysis methods are built on a ‘more context-based and process-oriented view’ in comparison to the more traditional experimental molecular biology, which studies the functioning of ‘one or a few proteins, RNAs or DNA sequences through cleverly designed experiments’ (ibid.). In systems biology, the cell is turned into a cybernetic informational system that is examined not in its bits and pieces but en bloc and through the use of mathematical algorithms: [T]he research object – the biological cell – is no longer placed on the bench to be dissected, so to speak; instead, it becomes a more abstract entity, a ‘system’ to be approached by abstract mathematical means. This approach towards the cell as a biological system demands a different way of doing research. No longer is it a question of individual experimental and intellectual scientific endeavour and genius; what is needed is the application of standardized methods to generate hypotheses on the basis of computational skills in a non-personalized way. (Ibid.: 79) As a consequence, Torgersen (ibid.) reports, systems biologists tend to think of traditional wet lab biology as being preoccupied with ‘anecdotal research’, made up of ‘fractional and sporadic analyses of tiny parts of a problem determined by the incidental availability of certain tests and reagents’. For systems biologists, finding the right assay to ‘test a particular mechanism is considered a matter of sheer luck’, Torgersen contends.
In this view, systems biology and other in silico-based analytical approaches are not the continuation of traditional in vivo studies by other means but, on the contrary, a radical rearticulation of the research programme. Understanding the functioning of the cell on a ‘global scale’ is not a matter of examining its constitutive elements but of approaching the cell on the aggregated level. In the systems biology view, the cell is a ‘factory’ (Vemuri and Nielsen, 2008) or a ‘complex biochemical machinery’ (Nightingale and Mahdi, 2006) whose functioning needs to be understood through the means of computation and mathematical modelling. In Burbeck and Jordan’s (2006: 530) account, systems biology is based on the four interrelated processes of automated analysis, modelling, simulation and integration of
computational biology and experimental biology. In all these practices, systems biologists use advanced computational procedures and large data sets to sort out what the data provided by clinical studies or in vivo studies mean. While the period from the early 1990s until the middle of the first decade of the new millennium was dominated by interest in the human genome and other biological entities on the molecular level, systems biology to some extent complements this biological paradigm. In the perspective of molecular genetics, the genome plays the role of a kind of code-script or ‘master plan’ regulating the entire biological system; in systems biology, such a view is not taken: In systems biology, the body is not ‘genetic’ in the sense that it grants no priority to DNA or genes, preferring to see them as one component among many other components and processes. In molecular genetics, the ‘code’ of DNA comes to play a wide range of roles, from master plan, to instruction manual, to dictionary, to source code or software. (Thacker, 2004: 160) Contrary to this perspective, which grants much explanatory value to the genome per se, in the systems biology perspective the genome plays an important role but is one among many biological processes that determine the phenotype:
What is relevant are the relations between components and processes, and their potential perturbations. (Ibid.) While omics technologies are capable of producing detailed and extensive data sets, potentially revealing very much about the biological
system examined, systems biology is an approach to integrating and making sense of these disparate data. For the proponents of systems biology, it is an approach that is potentially capable of making the omics technologies truly useful, not only in terms of providing biological data but also in making these data useful for enhancing the understanding of biological systems and, as a consequence, leading to new therapies. For the sceptics, systems biology is a new and fashionable phrase denoting exactly what medicine has been doing for centuries: clinical physiology, in which the human organism is treated as an integrated and surprisingly dynamic system, capable of regulating itself. However, neither proponents nor sceptics deny the scope of the challenge lying ahead of the systems biology programme. The pre-clinical and clinical data sets are substantial; being able to sort out the data and produce operative and explanatory models would be a true breakthrough for both the life sciences and the biopharmaceutical industry. Nevertheless, systems biology is part of a cluster of new and innovative approaches that are expected to play a role in the life sciences in the future.
Academic researchers in the field of systems biology Introducing University A and University B The two universities in the study are located in the same Scandinavian metropolitan area and collaborate closely in some fields of research (for instance, in the discipline of chemistry). The city is one of the largest university cities in the Nordic countries and attracts students from all over the country and the world. University A is a technical university, established in the first decades of the nineteenth century, and hosts a number of departments in the engineering sciences, an architecture school and a technology management school. Being one of the leading Scandinavian universities and listed by the Times Higher Education Supplement as one of the 200 best universities in the world, University A takes pride in being an internationally well-recognized research institution and in its tradition of collaborating closely with industry. University B is younger than University A; established in the late nineteenth century and granted university status in the mid-twentieth century, it includes all faculties as well as a business school, a medical school and an academy of music, opera and drama. A recently awarded Nobel Prize helped to boost confidence in University B and it is today highly ranked in a number of disciplines and fields of expertise. Although it is mostly ranked lower than University A (in the Times Higher Education Supplement), University B is still among the relatively young universities competing with the oldest
universities in Scandinavia in terms of prestige and the capacity to host national and international research centres and institutes. Among the more prestigious faculties are the business school and the medical school, two relatively autonomous units in the university structure with their own chancellors. At times, especially in fields of research conducted at both universities, there is competition between University A and University B. For instance, both universities have systems biology departments collaborating in many research projects. Working with systems biology: challenges and opportunities Model organisms and clinical data At University A, the research team studied a number of biological systems, but most of the team used a yeast cell that shares many metabolic processes with human cells; this overlap, combined with its ease of practical use, makes yeast an interesting ‘model system’. As the professor and head of the research team explained, the cell overlapped satisfactorily with the human cell: It is a very great leap, but about 60 per cent of the genes in the yeast cell are also present in human cells. If you examine the metabolism, there is an incredible degree of similarity, so I venture to say that human cells do have yeast metabolism and a few more things. That is characteristic for the [human] cell. Yeast as a model system can allow for a range of experiments, because we can do gene-splicing and so forth without encountering any ethical problems and understand how it works. It is a good scaffold for conducting studies in more complex systems. (Professor, Systems Biology, University A) He continued: There are many human diseases that we cannot understand on the basis of yeast, but we can understand the general mechanisms and [enable] method development that we are trying to transfer [to humans] when studying more complex systems. That is the philosophy.
(Professor, Systems Biology, University A) This belief in the ‘overlap’ between the model and the human biological system was further emphasized by one of the assistant professors: The overlap definitely exists. Both cells are eukaryote cells. Both cells are organized in a similar way and function according to the
same principles. Apoptosis [cell death] was, for instance, for many years thought not to exist in yeast, but now we are sure that it does. The mechanisms are similar overall. We might not know all the proteins and all the functions present, but the beginning, the signals and the end-results are the same and then in-between, that is where my research starts. (Assistant Professor, Systems Biology, University A) At the same time, she admitted that ‘we know that there are some differences, but we do not know how many they are’. The main reason for using the yeast cell model was, of course, to avoid the additional work that the use of human cells would demand. Therefore, the scientists had to convince themselves that they were working with a model system that overlapped with the human biological system. One of the assistant professors explained: It is very difficult to do research on human cells, and especially the human neurons . . . it is complicated, expensive, sometimes impossible . . . basically, since yeast and human cells have enough similarities – at least I am trying to convince myself that [giggles] – I can conclude something about how the yeast cell is doing this, it will help me infer how the human cell, the human neuron could work. (Assistant Professor, Systems Biology, University A) One of the associate professors at the medical school at University B emphasized that, at the end of the day, understanding the human biological system is what matters: When studying mice, we understand how things work but we don’t know if it is relevant. Similarly, in studying a sick human being, we do not know why they are sick. What are the underlying molecular mechanisms?
(Associate Professor, University B) One of the two assistant professors in the research team of the very successful lab studying apoptosis – what has been called ‘programmed cell death’ (see, e.g., Landecker, 2001) – outlined the research process: First of all, you have to identify, ‘What are the players?’, ‘What do they do?’ and ‘How does it happen?’ When it goes wrong, why does the cell die, like in a degenerative disease? You want to know why the neurons are dying. On the other hand, when it goes wrong
in the other direction, when they are not dying, like cancer cells which refuse to die and continue dividing all the time . . . There is an interplay of cells dividing and dying when they are supposed to do either thing. When this does not happen, basically it is a challenge to understand why it is happening. The challenge is technical. (Assistant Professor, Systems Biology, University A) The assistant professor also thought that her work was, to some extent, reminiscent of a crime investigator’s work, carefully putting together pieces of evidence and gradually producing a larger picture of the event: You get this feeling of being Sherlock Holmes, digging out little pieces and putting them together, that is, on a daily basis, what is very exciting . . . There are a few clues scattered around and, if I have a feeling that this is the right way to go, then I collect all the pieces and put them into the big picture. (Assistant Professor, Systems Biology, University A) The other assistant professor explained in detail how the work proceeded through sequential steps in the laboratory: If you find a gene-protein interaction, to test this hypothesis we go into the lab and delete a gene, to make sure that the protein cannot interact with the gene. So we delete the gene, and this is where the molecular biology comes in, not as a subject in itself but more as a tool. (Assistant Professor 2, Systems Biology, University A) He continued: We delete a gene and put this mutant yeast in a fermentor we grow under very specific conditions; we would like to identify what changes happen in the yeast cell on a global scale. By ‘on the global scale’, I am referring to the list of all the genes; for example: how does this gene deletion affect the rest of the genes? How does this gene deletion affect the rest of the proteins? Are there carbon-flow patterns inside the yeast, for example?
(Assistant Professor 2, Systems Biology, University A) Finally, the laboratory results are brought into the computation phase where the large data sets are examined. This is the very core of the
systems biology approach, generating an integrated model of the data provided: Data integration, that is really how I would summarize the third point. Generate the data that is considered a parts-list. If you have a parts-list you would not have any idea of how the internal system would work. Integrating the data into a biological context [is the objective]. (Assistant Professor 2, Systems Biology, University A) The first assistant professor admitted that, even though there were strong practical implications from this research – for instance, an understanding of how insulin was produced in the yeast cell – her interests were primarily theoretical: I am not that interested in the end-product, the insulin, I am interested in the part where we engineers can study how proteins are synthesized and degraded . . . many steps in apoptosis are actually related to protein aggregation and folding and so on. (Assistant Professor, Systems Biology, University A) The research team had many close collaborations with industry in the field of bioengineering, but what appeared to be the dominating interest was a more detailed understanding of the elementary biological processes in the cells. The other assistant professor, a male biochemist, explained how they were proceeding to understand the biological system: We have information about the cellular components, the genes, the metabolites, the proteins and so on, and we can quantify them very accurately, using techniques that are developed in our lab but also in other labs. Using this quantitative information, what we are trying to do is put together a map of the entire cell and understand how these interactions in the map change with nutritional environments, change with genetic modifications, change with evolution and so on, to correlate the cause and effect. 
(Assistant Professor 2, Systems Biology, University A) In the analysis of the biological system, the operating hypothesis was that the metabolic pathways change when the cell is in a state of disorder: We map metabolism onto a network, as a graph . . . Our fundamental hypothesis is that the difference in metabolism in between a healthy
Exploring Life in the University Setting 211
cell and a diabetic cell is change in these interactions. That is what we firmly believe. That is why we map metabolism as a set of interactions. (Assistant Professor 2, Systems Biology, University A)

The other assistant professor pointed to the potential contribution of the work:

If we manage to figure out why a certain process is happening and what the regulation hub is, is it going to be a molecule [that is the basis for the therapy]? Is it going to be a set of molecules? . . . If you can define what the switch or switches are in that process, you can, of course, go back to chemistry and try to devise compounds for that, or if we ever get to the stage where people do gene-therapy, we could modify a gene that [produces] the protein for that specific switch . . . the moment you do have a system that is quite well described, you can try to modify it. You cannot plan any sort of modification or intervention in that system if you do not really know exactly what you are playing with. (Assistant Professor, Systems Biology, University A)

The key to understanding the metabolism was thus to identify patterns in the metabolic processes and correlate these changes with the external conditions of the cell; for instance, the abundance of glucose or a starved milieu, both stressing the cells. ‘Pattern is a keyword here: we identify patterns and the interactions and correlate these patterns to an external environment, or the cause’ (Assistant Professor 2, Systems Biology, University A). Systems biology is, thus, characterized by the capacity to identify patterns in the data not previously seen. It is a form of cartography of the biological system ‘in action’, in the process of becoming, in a state of change and constant modification.
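The idea expressed in these interviews, that metabolism can be mapped as a graph of interactions and that disorder shows up as changes in those interactions, can be rendered as a minimal sketch. The metabolite names, interaction weights and threshold below are hypothetical illustrations, not data from the study:

```python
# Minimal sketch of the interviewees' graph view of metabolism:
# represent each condition as a set of weighted interactions (edges)
# and locate "disorder" in the edges whose strength has changed.
# All names and numbers are hypothetical.

def changed_interactions(healthy, diseased, threshold=0.5):
    """Return the edges whose interaction strength differs between
    the two conditions by more than the threshold."""
    edges = set(healthy) | set(diseased)
    changed = {}
    for edge in edges:
        delta = diseased.get(edge, 0.0) - healthy.get(edge, 0.0)
        if abs(delta) > threshold:
            changed[edge] = delta
    return changed

# Hypothetical interaction strengths between metabolite pairs.
healthy = {("glucose", "pyruvate"): 1.0, ("pyruvate", "acetyl-CoA"): 0.9}
diseased = {("glucose", "pyruvate"): 0.2, ("pyruvate", "acetyl-CoA"): 0.85}

# Only the strongly weakened glucose-to-pyruvate edge is flagged.
print(changed_interactions(healthy, diseased))
```

On this toy model, only one interaction changes beyond the threshold; the point the interviewees make is precisely that the object of comparison is the whole map of interactions rather than any single component.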
The value and contribution of systems biology

The interviewees in University A were quite confident that their expertise in the yeast model systems could be used to inform other domains of research – for instance, the clinical data provided by the internists at University B with whom the research team collaborated. Even though the interlocutors did not see systems biology as a very well-defined concept, they still saw it as a reasonably bounded term denoting a set of practices and approaches. The professor at University A claimed that there were quite a few complementary fields in systems biology, making it a quite broad discipline: ‘It is to some extent a broad and rather loosely defined domain without a fixed curriculum . . . I have
212
Venturing into the Bioeconomy
written quite a few textbooks in systems biology, but cannot take down a textbook from my shelf and say “This is a good book about systems biology” because you might need ten textbooks to cover the entire field’ (Professor, Systems Biology, University A).

The female assistant professor emphasized the non-reductionist approach of the study:

People in systems biology don’t agree necessarily on a strong definition, so I think that nobody has written it down very precisely. What we think, it is a view that you start studying biological systems [not from] a reductionist point of view, from single models, or single molecules and assemble the system you infer from that; rather you study it as a big complex immediately and conclude that the system has some elements that are not the additions of the individual properties. The sum is different from the collection of the individual parts. (Assistant Professor, Systems Biology, University A)

The other assistant professor also emphasized the vagueness of the term:

There are a lot of definitions for systems biology . . . But one thing I noticed is that these definitions typically involve keywords such as data integration, mathematical modelling and interactions between cellular components . . . These three keywords kind of translate or convey the meaning of systems biology as we use it today. (Assistant Professor 2, Systems Biology, University A)

A key to systems biology is to be able to quantify and model the different cellular components and to construct models of these quantifications that ultimately could predict how the metabolism would emerge under different conditions. While this may intuitively sound uncomplicated, recent developments (over the last ten to fifteen years) did, in fact, enable such new approaches. One of the two PhD candidates explained how things have changed during her period of time as an undergraduate and graduate student: ‘Seven years ago we used knock-out genes, the classical way to improve strain . . .
recombinant technologies, but now, it is like a new era coming. We don’t study “one-by-one”; we study the whole metabolism and try to use integration analysis instead of reductionist approaches’ (PhD Candidate, Systems Biology, University A). The movement into what has been called the ‘post-genomic era’ thus altered biology from being, in the words of one of the assistant professors, a ‘descriptive science’ to a ‘quantitative science’ – ‘I see that as a very good thing,’ he said.
The other PhD candidate thought of the models developed in systems biology as a form of ‘compressed thinking’:

You can almost think of these models as compressed thinking. Such a model is just a lot of information in a very compact form. In the same manner, the methods are compressed thinking applied to data . . . A condensed work method, almost. (PhD Candidate 2, Systems Biology, University A)

Capturing many processes in the biological system without reducing them to their individual components was thus the principal contribution of systems biology. One professor in the medical school at University B, operating in the field of metabolism research and obesity studies, pointed to the value of a systems biology approach to the data generated:

We have been working for quite some time with 40,000 genes; how they are used in a specific tissue, and have compared sick and healthy [patients] and what happens during weight loss and so forth . . . We and everyone else have known that this procedure of examining one gene at a time . . . is a quite naive way of examining because it is likely that many genes contribute. This is where systems biology may help us see the connections and relations and what different pathways are switched on and off at different points in time and what differs between individuals. There may be rather small changes in many genes even though they are part of the same pathways. (Professor, Clinical Metabolic Research, University B)

The professor said she thought ‘this will produce interesting things’, but also admitted that there is a challenge in bridging the ‘statistical and mathematical know-how’ of the bioengineer and the ‘biological and medicinal expertise’ among the clinical researchers.
An assistant professor, having a background in bioengineering but now working in the medical school at University B, shared the enthusiasm for systems biology: ‘It opens up opportunities for new therapy strategies, new types of molecules, not only the small-molecule substances . . . We learn quite a bit about the functioning of the biological systems thanks to these large-scale methods. Taken together, I cannot help thinking that these improvements will lead to large advancement regarding the treatment of a range of diseases’ (Assistant Professor, Systems Biology, University B).
The professor in University A pointed to the value of the systems biology approach in, for instance, the pharmaceutical industry, where he claimed the omics technologies were not used very thoughtfully, screening different substances but without operating on the basis of an integrated theoretical model of the metabolic pathways:

The pharmaceutical industry has experienced many great successes. Molecular focus, drug identification, developing the new drug and launching it on the market. When these new omics models became available on the market, and were developed, they were used merely for screening. More of the same, that is. You only get additional data. The strength lies in the fact that you have data not only about expressions of one or five or ten genes, but also actually have the expressions of 30,000 genes or however many there are in a human cell – that is, you are capable of looking for correlations between groups of genes . . . This kind of more advanced analysis, they [the pharmaceutical industry] are not very good at. The problem is also that such data sets are not very easily handled using conventional statistical methods . . . the biological systems are too complex. The kind of correlations you really should look for are not direct correlations . . . Then you need to start with a hypothesis and say, ‘This component is interacting with this component and that is affecting this response.’ If you formulate such a hypothesis and connect it to your data, then you may verify your hypothesis. That is the philosophy of systems biology, to examine structural information in association with data. (Professor, Systems Biology, University A)

This critique of the use of omics technologies unaccompanied by theoretical models was articulated by other interviewees as well.
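The professor's point about looking for correlations between groups of genes, rather than examining one gene at a time, can be illustrated with a small sketch. The gene names, expression values and cutoff are hypothetical, and a real analysis would of course involve thousands of genes and more robust statistics:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical expression of each gene across four conditions.
expression = {
    "gene_a": [1.0, 2.0, 3.0, 4.0],
    "gene_b": [2.1, 3.9, 6.2, 8.0],   # rises with gene_a
    "gene_c": [5.0, 1.0, 4.0, 2.0],   # unrelated
}

def correlated_pairs(expr, cutoff=0.95):
    """List the gene pairs whose expression profiles are strongly correlated."""
    genes = sorted(expr)
    return [(g1, g2)
            for i, g1 in enumerate(genes)
            for g2 in genes[i + 1:]
            if pearson(expr[g1], expr[g2]) > cutoff]

print(correlated_pairs(expression))  # [('gene_a', 'gene_b')]
```

Even this toy example shows the shift in question: the output is a relation between genes, not a property of any single gene, which is what the hypothesis-driven analysis the professor describes would then be tested against.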
One professor in cardiology emphasized the pharmaceutical industry’s lack of understanding of what the pre-clinically or clinically generated data mean:

Personally, I believe that one of the reasons for this omics-generated data not having the impact expected in the pharmaceutical industry is that they have problems seeing how all this information hangs together. You don’t see the wood but only the trees. You focus on the details and are perhaps failing to see what are the most interesting parts. (Professor, Cardiology, University B)

Genomics conventionally identifies connections between series of genes and the phenotype or a specific disorder, but, as the professor
emphasized, ‘You need to keep in mind that these genes identified may explain perhaps a few per cent of all diseases. That suggests, if anything, that one mustn’t think that everything could be solved by identifying only one factor’ (Professor, Cardiology, University B). This opened up the systems biology perspective in medicine:

The [human] body was not developed as a prototype overnight. It has been developed over many years. If something in the system breaks down, it is relatively common that some other mechanism compensates quite well. If you look at insulin-resistant individuals, initially the pancreas will start producing more insulin. It compensates as long as possible . . . Quite often you see compensating mechanisms at different levels and that makes it complicated to predict: ‘If I knock out this [gene], what will happen then?’ (Professor, Cardiology, University B)

He continued:

We are dealing with extremely complex mechanisms; we talk about complicated metabolic pathways that we, in many cases, only have a limited understanding of. If you look at all this research conducted over the last ten to twelve years, you can say that cardiovascular research has been extremely successful. We have increased the understanding in a manner that widely exceeds that of many other fields of research . . . At the same time, the majority of this new research has not yet led to any real breakthrough in drug development. (Professor, Cardiology, University B)

Another interviewee, a professor in clinical physiology, thought of the recent interest in systems biology as a rehabilitation of a longstanding tradition in medicine, emphasizing the whole biological organism: ‘I see it as a renaissance, because I have always lived with a systems biology perspective . . . From the perspective of the brain and how it regulates the body’ (Professor, Clinical Physiology, University B).
In his view, the techniques (i.e., omics techniques) were only tools enabling an understanding of parts of the system but never the system as a functional whole, in many cases capable of regulating itself and demonstrating emergent properties: ‘The phenotype has been marginalized over many years, but there is a renewed interest because it is realized that something was lost on the way,’ he concluded.
At the same time that the lack of an integrated perspective was deplored, the professor in cardiology also recognized that the challenge lying ahead of both the scientific and the pharmaceutical communities is substantial in terms of making sense out of vast amounts of data:

If you look at the metabolic pathways and these links, they are extraordinarily complicated. The omics revolution has resulted in an immense number of measuring points and much data. To be honest, at times I feel like a Neanderthal man. After conducting the most complex analyses with real state-of-the-art technology, and when you get all this vast data, you sit there like some Neanderthal man, browsing the Excel-files . . . And then people are starting to do elementary statistical analyses. There, I believe, we have much to learn from other disciplines, like systems biology. (Professor, Cardiology, University B)

This, the professor argued, called for a self-critical view of the medical sciences: ‘In order to understand what we thought was quite unproblematic all of a sudden demands team work between the clinics, bioinformation analysts – experts in measuring and analyzing [biological] data – and mathematicians.’

In University A, a technical university, the interviewees saw two rather different areas of application for the systems biology approach. First, in the field of bioengineering, where, for instance, the yeast cells’ ability to metabolize glucose was seen as one biological process that played a key role in many domains of industrial production:

This has a lot of applications in the industrial biotech area. Almost all the processes that are currently used in industry use glucose as their raw material, the starting point. From that substrate, glucose, many products are made by the microorganisms, so it is very important how the glucose is metabolized and how this metabolism is regulated.
Also there is nitrogen because the cell cannot survive only on glucose, it also needs nitrogen among many other nutrients. (Assistant Professor 2, Systems Biology, University A)

In addition, some of the researchers, including the professor, had worked on what are called nutraceuticals, the active components in functional food, and saw that as a large and growing field of application. The research team had, for instance, collaborated with a North American company to develop antioxidants of the same type present in red wine
that could be used, for instance, in breakfast cereals. The professor saw a great deal of potential in that substance:

Nutraceuticals is an interesting concept bearing in mind the 3Ps [prevention, participation, personalized medicine, discussed in medicine]. If you look at Nestlé or Unilever, they have an increased focus on health and healthy food and so forth. Then we are talking about nutraceuticals as substances that have pharmaceutical qualities and that can be used in groceries. (Professor, Systems Biology, University A)

Furthermore, the systems biology approach could be used in the life sciences to develop new therapies, especially in what has been called personalized medicine or individualized medicine:

What is widely discussed, if we look at the health sector . . . the pharmaceutical industry has traditionally focused on the so-called blockbuster model . . . It is getting harder and harder to identify blockbusters and the clinical studies become more and more expensive and [the costs] are accelerating. The wisdom is that in some of the latest drugs and clinical studies this model is failing, and they [pharmaceutical companies] have to adopt what is called individualized medicine or personalized medicine. That does not imply that drugs will be developed for every single person but that you are capable of classifying patients . . . One of the major issues in the field is to identify clear biomarkers that can give efficient groupings of patients. You can see that clearly in the field we are studying with the medical school [at University B]. There is a group of diseases called the metabolic syndrome occurring in association with obesity: high blood pressure, type 2 diabetes, cardiovascular diseases and so forth. Traditionally, the whole group of patients has been examined, but it is a large and broad group – you could have 20 or 30 different backgrounds leading to the same disease.
With the help of biomarkers, this group of patients can be segmented and each type of patient could be given one type of drug and so forth. Such research, we will see more of in the next coming years. (Professor, Systems Biology, University A)

In University B, at the medical school, the researchers recognized the contribution from systems biology. By combining the analytical techniques and methods with the clinical data collected in the research, a more detailed understanding of, for instance, metabolic disorders could be attained: ‘Systems biology has a lot to gain from collaborating with medicine.
We have complex research questions, we have massive amounts of data, and we need to be able to break them down in order to see what they all mean’ (Professor, Cardiology, University B). This view was shared by one of his colleagues, the associate professor: ‘You need a really good material to study; good biological and medicinal research questions. One problem, as I see it, is that qualified systems biologists are stuck with too elementary [research] questions . . . In the end you need to give them good biological or medicinal questions’ (Associate Professor, University B). He continued:

So far, I do not think that systems biology has led to more questions being posed and it is just recently established . . . Today, much systems biology has shown precisely what was already known. Rather elementary models have been created and they have been verified . . . Hopefully, in the next coming years . . . we’ll be able to see how much use we have for all this [systems biology] . . . In order to accomplish something, there is a need to verify quite simple systems. (Associate Professor, University B)

In addition to the use of the clinical data, the assistant professors in University A emphasized the long-term consequences of their individual research projects. For instance, the male assistant professor studying metabolic processes pointed to quite immediate consequences for disorders such as obesity and type 2 diabetes:

Glucose metabolism is directly related to obesity and diabetes. Excess glucose consumption leads to people getting fat. On the other hand, if glucose is not metabolized, or if it is metabolized slowly, it leads to accumulation in the blood which is characteristic of diabetes. So there is a very small window of what is considered a normal mode of metabolism. Any deviation from this normal mode of metabolism causes metabolic disorders like diabetes and obesity and so on.
(Assistant Professor 2, Systems Biology, University A) One of the key contributions of systems biology to medical practice and pharmaceutical therapies was, thus, to identify biomarkers for the metabolic syndrome – that is, biologically relevant measures indicating how the patient is doing in terms of his or her health: One of the more practical objectives with my project is to find biomarkers for the metabolic syndrome, so one can predict that ‘if this [indicator] is up and this is down at the same time as this is
down, then there is an increased risk of getting diabetes’, for instance. (PhD Candidate 2, Systems Biology, University A)

Easily administered biomarkers are often sought by pharmaceutical and biotech companies, and the basic research conducted by the systems biologists could potentially make a contribution. The female assistant professor, studying neurodegenerative diseases such as Parkinson’s disease and Alzheimer’s disease, possibly caused by apoptosis, thought her use of systems biology could potentially make a contribution to the field, but that these diseases were very complex and that much needed to be known prior to any specific therapies being developed:

Neurodegenerative diseases are way more complicated than presumed. It is not that dopamine neurons are dying out and then you have Parkinson’s and then something happens, [but] . . . Somehow you realize that there is not just one cause and one way of treating it . . . [There are] many different interactions. (Assistant Professor, Systems Biology, University A)

In summary, systems biology was regarded among the interviewees as an important factor in the near future of the life sciences, and it was thought that systems biology is capable of helping the omics technologies accomplish their real breakthrough and start to inform the new drug development practices. At the same time, some of the interviewees were a bit concerned about the hype around the entire field of the life sciences and emphasized that it was, in fact, not that easy to transform an interesting scientific finding into a full-scale therapy. For instance, one of the assistant professors thought the field of systems biology had not yet delivered what had been promised:

All the money that we put in . . . we have not seen the results that we promised when we started with the field [systems biology]. So this is an outcome from that.
I think it is a little premature and a lot of things were promised and then, as we got into it, we realized how messy, how dirty things were, and how ignorant we were about this fact. (Assistant Professor 2, Systems Biology, University A) One of the PhD candidates shared this sceptical view: To be perfectly honest, I think it is a bit overrated [expectations on biotechnology], at least in a long-term perspective. I believe
biotech . . . will play a substantial role in 100 years’ time. But a lot of what you read [in popular media] is overly optimistic. And then it is fun to work in a field that is hot. People are interested in what I do and so forth, but at the same time it is a bit ridiculous when I read about it in popular media. It is really a hype [hausse]. I think it will take a long time before biotechnology plays a major role in the Swedish economy. (PhD Candidate 2, Systems Biology, University A)

The issue was that, while it may not be too complicated to find correlations in the data sets, making predictions on the basis of these correlations is quite another matter:

It is quite easy, after all, to examine major data material and identify some causal connections and do something about it. Doing predictions [English in the original] with some practical value is quite another thing. Systems biology is a bit of a buzzword [English in the original]. (PhD Candidate 2, Systems Biology, University A)

In summary, there were great expectations for the new approaches but at the same time it was hard to make claims regarding the short-term effects of the techniques and approaches.

Collaborating with industry

The interviewees at all levels, ranging from full professors to doctoral students, claimed they had collaborated with industry and generally thought this collaboration was both rewarding and helpful for advancing the research. The full professor at University A, just like his colleagues at University B, collaborated on a regular basis with industry and had started no fewer than four companies and registered close to 30 patents based on his research. At the same time as industry collaborations were welcome and appreciated, the researchers expressed their principal domains of interest as being theoretical and not entrepreneurial. The younger researchers claimed they were ‘happy’ doing what they were doing and anticipated careers in the university system.
At times, they even claimed they had always wanted to do what they were doing. Seen in this light, the industry collaborations played an important role but did not encourage or entice the junior researchers to engage in more entrepreneurial activities. Even the venturesome professor at University A saw himself primarily as an academic researcher – indeed, still as a ‘chemical engineer’, regardless of all his success in his field of expertise.
The researchers saw a number of challenges for the field of systems biology. First, there was a need for creating and preferably co-locating transdisciplinary research teams, including the various domains of expertise, to run successful systems biology programmes. Here research centres, jointly funded by the national ministry of education, the universities and industry, played an important role in terms of providing funding for the research lab infrastructure. Second, the younger researchers, and especially the assistant professors, expected to start up individual research programmes, calling for a research funding policy that earmarked research money for recently graduated PhDs. In the present Swedish system, there are no such funding opportunities and the younger researchers thought it was unfair that they had to compete with established researchers with perhaps 30 years of experience in the field. However, notwithstanding such concerns, there was a great deal of self-confidence and expectation for the future among the researchers. The full professors advocated their field of expertise as one of the most promising domains in the life sciences and the younger researchers admitted that they were given job offers on a regular basis. The hope that the life sciences would make substantial economic and therapeutic contributions built confidence that they were at the front line of technoscience, operating in a most promising field. In summary, then, the interlocutors did not see industry as a threat to scientific rigour but as a partner, and they had their identities firmly grounded in the university setting. The researchers in both universities were well aware they were working in a field with strong industry focus and commercial interests.
When asked if the collaborations with industry, which were quite extensive in the department, in any way compromised the research agenda or took away focus from the basic research, all researchers denied such consequences of industry collaborations. The professor in University A did not see any conflict of interest: ‘I wouldn’t say that is something affecting the research work I want to conduct. The best way to put it is to say that I never have a drive based on “this is exciting because it may be commercialized”’ (Professor, Systems Biology, University A). This view was shared by some of his colleagues: ‘The fact that there is a lot of collaboration with industry does not really change our focus or interests. We are really doing what we want and it’s just lucky that many of the projects that we have end up being something interesting for society,’ one of the assistant professors argued. The other assistant professor claimed that he was actively looking for companies to collaborate with and emphasized that just as much as the academy needed the companies,
so did the companies rely on university contacts. In day-to-day work, most conflicts or concerns could be avoided by sorting out things and making decisions before the work started. For instance, younger researchers, being on the tenure track and in need of publications – the ‘only tangible outcome’ from the research for large groups of academic researchers – need to be able to publish their research work in order to acquire the credentials to advance their careers. The rate of publications is ‘one very, very important metric by which researchers’ progress can be assessed’, the assistant professor stressed. So, in the vocabulary of the juridical discourse, the intellectual property rights need to be handled with care in the early phases of the work. While industry collaborations were generally regarded as rewarding and as helping to establish a focus in the research work, an overly narrow and short-term focus of the research activities could serve as an impediment to joint work. One of the key challenges for the researchers was, thus, to underline that the nature of science is to venture into domains where the output can never be assured or taken for granted:

Sometimes it can be very difficult to collaborate with industry if they have very strong focus on something particular. If your research leads you in a new direction or things don’t work out as they have planned – because it is research and you can never really know what the next step is going to be – then it might, of course, be tougher because you have to negotiate with them and convince them to follow that lead instead of what they think is the right thing to do . . . In that respect, the freedom is maybe a little restricted if you collaborate with industry but it also depends very much on the partner you have on the other side of the table . . . Most of them do actually understand how it works . . . the negative parts are not that serious, at least that is how we see it here.
(Assistant Professor, Systems Biology, University A)

Normally, these divergences in time-perspective could be handled by establishing a joint research agenda in the early phases of the work. However, one of the more experienced researchers claimed that relationships with industry were deteriorating and that the added administrative burden of accomplishing such joint work inhibited fruitful collaboration:

I have been part of this for a period of time and it gets worse and worse . . . It is complicated to collaborate. There are 15 kilos of paper
being sent either way. It is really complicated compared to before . . . That leads to all parts staying on their turf. In the academy, they think that ‘Industry should be able to help us’ and that is a terrible [attitude] . . . and in industry, they say ‘Damn academics, they can stay where they are. They just want money.’ It is a bit like that. (Professor, Clinical Physiology, University B)

This rather sceptical attitude was shared by one of his colleagues:

I’ve been collaborating quite a bit with [a local major pharmaceutical company] . . . They are not easy to work with. They are very self-centred; they do not have the time, they do not have the time, and then all of a sudden there is a major rush and everything is highly acute and they arrange meetings, and we are supposed to drop everything to get there as soon as possible. I’m a bit tired of it, to be quite honest . . . They hire my colleagues, so it is my friends working out there. (Professor, Clinical Metabolic Research, University B)

This suggests that, even though industry collaborations were welcome in theory, in practice they may be more cumbersome to maintain over time. The senior researchers called for a mutual understanding of the two worlds, that of the academy and that of the major pharmaceutical industry, to make collaboration run more smoothly.

Identity-making in emerging professional fields

Living with expectations

The field of systems biology is a transdisciplinary domain of expertise where bioengineers, molecular biologists, computer scientists and mathematicians collaborate to establish analytical models helping to structure and sift out relevant information from large-scale data sets. Being at the intersection of various scientific disciplines implies that professional identities are to be forged on the basis of highly heterogeneous elements.
At the same time, the researchers were aware of the market value of their competence and that the life sciences were portrayed both in the media and in the economic literature as the new dominating industry of the twenty-first century, in many ways comparable with the automotive industry in the preceding century. However, there were rather modest expectations about what could be accomplished on a short-term basis; there was a discrepancy between the public image of a research field and the intramural beliefs of what
224
Venturing into the Bioeconomy
could be expected to be accomplished. As one of the assistant professors put it: The general public is . . . interested in things that [appear] very exciting . . . What the general public finds interesting is not always what academia finds interesting. There can be a discrepancy . . . It is sometimes exaggerated what is to be expected from research immediately. The public should be aware that there is a lag in time between what we are doing now and when it can be expected to [play a role] in everyday life. (Assistant Professor, Systems Biology, University A) She also emphasized that the pharmaceutical industry was primarily concerned with making money and, therefore, it was not always the case that the most scientifically interesting results led to the development of new therapies. Healing people and treating disease, it is not always what is financially the best thing to do . . . Technologically and scientifically, it is the right way to do it, but it does not suit the market. (Assistant Professor, Systems Biology, University A) At the same time, the assistant professor claimed that she did not feel ‘any pressure’ to deliver any major contributions at this early stage of her career. Academic work provides a clear career path where things take time but where there is still a movement forward: I don’t feel any pressure . . . I just think you should be in science because that’s something you really care about and that is what your personal or professional driver is. I don’t want to see myself really as doing something that necessarily leads to something relevant for human society because then, of course, I will put pressure on myself and my research . . . I just want to do, you know, interesting things and then, if it is possible to apply to something good [that is rewarding].
(Assistant Professor, Systems Biology, University A) Another assistant professor in the research team shared this feeling of having no obligations to society besides doing good research work: I don’t have any expectation or sense of responsibility in giving something to society . . . I do my research, I get my results, and publish the data. That, in a sense, is what I consider my service. If I can
Exploring Life in the University Setting 225
see my results somehow translating into or contributing to the identifying of a drug target or maybe screening a drug compound [that is fine]. (Assistant Professor 2, Systems Biology, University A) At the same time as the younger researchers were balancing the demands of making a contribution to society and nourishing their research interests, they were convinced they had ended up in the right profession and were happy about the work they were doing. Some interview excerpts testify to this belief: I like to work on this. I am happy . . . I am never bored. Maybe, if something happens and I have to repeat [the practice] I am a little bit bored. (PhD Candidate, Systems Biology, University A) I never had a problem with such motivation. It is something I enjoy doing. I love it. (Assistant Professor 2, Systems Biology, University A) I have known, since a long time back, that this was what I wanted to do, modelling biological systems. In my case, it was quite clear that I would work in the academic area or in some small research-based firm. (PhD Candidate 2, Systems Biology, University A) I always wanted to be in science . . . Socially and professionally, this is where I feel comfortable . . . I know I like academia and I know that I like research at the molecular level. This is where I think I should be. (Assistant Professor, Systems Biology, University A) After all, a life in the laboratory or in front of the computer is rewarding in its own right.
The entrepreneurial spirit
In the economic literature and in popular media, the emerging bioeconomy is regularly portrayed as the next field that will propel the growth of the capitalist economy in the new millennium. Biotech companies have attracted substantial interest among economic analysts, academic researchers and media pundits, and the general fascination with the human body and its health and well-being as the ‘final frontier’ has made the life sciences one of the most lucrative fields of research.
Notwithstanding all this media attention (and to some extent media hype), there were only relatively weak entrepreneurial interests among the researchers. According to the professor in University A, such entrepreneurial spirit was rather
uncommon: ‘Relatively few of the younger people I have encountered have these entrepreneurial aspirations,’ he said. ‘Universities are more generally focused on the problem of handling innovation processes,’ he added. Having extensive experience of entrepreneurial activities in the field of the life sciences, the professor was well aware of the efforts demanded: ‘It is so much work to start an enterprise . . . It is an enormously demanding process’ (Professor, Systems Biology, University A). At the same time, he admitted that he had always been fascinated by ‘the intersection between research and [its] application’. He also repeated that, in order to successfully establish a new company, there is a need to be part of a field, to have collaborating and partnering companies with which one can co-evolve – ‘it is like a crystallization process: you start with a small core and then it grows outwards’, the professor explained. This co-evolutionary development was especially accentuated in the life sciences, which demanded substantial know-how, infrastructure and equipment: If you look at relatively successful start-ups in the USA, most of them are spin-outs from large companies. The philosophy that the universities are [the foundation for] successful ventures, I am not sure that is an adequate image, at least not in the life sciences. It may be different in computer technology, where two young persons could sit down à la Bill Gates and accomplish something grand. That is not as capital-intensive and not as far from the market. (Professor, Systems Biology, University A) Notwithstanding these challenges, one of the associate professors at the medical school at University B had a clear idea that a company would be the best vehicle for advancing the research and potentially producing therapies: Well.
It’s fun to find things in the research but, at the end of the day, one wants to produce something out of it, a product, and then you quickly realize that if it is a drug you need the protection from the patent to make any pharmaceutical company take an interest. (Associate Professor, University B) When asked about it, some of the senior researchers admitted that they occasionally received job offers. ‘It is quite clear that there is a lack of people with expertise in practical bioinformatics,’ an assistant professor at the medical school in University B said. Especially in the USA, venture capitalists want to see lists of who has been recruited to the firm
they are financing and consequently there was a great deal of money circulating in the industry aimed at hiring qualified staff. We cannot be sure, however, whether access to capital will be as abundant in the future, one of the assistant professors claimed: Q: Do you get a lot of job offers, for instance? A: Yes I do. Since I moved to University A [eight months ago] I have received two job offers. This is a hot field and people are very interested in hiring good candidates and there are a lot of start-up companies . . . The start-up companies have a lot of money to hire good people. It may be that they do not have the running costs to manage their potential, but they have a lot of resources . . . So there is an exposure to that side. (Assistant Professor 2, Systems Biology, University A) Even though he had, in most cases, turned down such job offers in order to pursue an academic career and rise through the academic ranks, he was still involved in a ‘very, very small software company’ in the USA, currently ‘not making any money’. The company was producing software helping biotech companies analyze their clinical results in a query-based structure. This engagement took relatively little time from the research efforts and brought some sense of using the individually acquired know-how in a meaningful way. Neither of the two doctoral students expressed any major interest in starting their own companies. One of the doctoral students, at the very end of her PhD programme, dreamed about moving back to Thailand, her home country, and accomplishing something good on the basis of her expertise in bioengineering. The other doctoral student thought that he lacked the ‘entrepreneurial drive’ demanded – the ‘desire to create something’, he said – and was rather content with being in an academic milieu where he could develop his skills in biocomputing and mathematical modelling.
A similar attitude dominated in the medical school at University B: Q: Is that something you discuss, companies and entrepreneurship? A: No, not as much as we should because the interest is relatively low. Especially when it is hypothetical. (Associate Professor, University B) One of the full professors in the medical school admitted that she had recently terminated her own small company to be able to dedicate
herself full time to the research group and emphasized the lack of time to deal with all matters simultaneously: You have a limited amount of time. My responsibility is to make the activities here continue. I need to invest quite a bit of time to acquire resources and I need to prioritize what is absolutely necessary . . . Unless I prioritize the activities here, then, in the end, I will have no platform for working with companies. (Professor, Clinical Metabolic Research, University B) When asked whether she saw any entrepreneurial spirit among the PhD candidates or among the younger colleagues, she said that ‘a few persons but not too many’ entertained entrepreneurial ambitions. The assistant professor shared this view: ‘There is little of that [discussions about entrepreneurial opportunities] here. There is one person that is very ambitious in that respect but otherwise I cannot say that too many here are nurturing such ideas’ (Assistant Professor, Systems Biology, University B). He also admitted that he didn’t think he was ‘the entrepreneurial type’. ‘Not everyone has that kind of drive. That’s how things are,’ he contended. Part of the explanation for the weak interest in entrepreneurial activities was the absence of role models: Among the senior researchers, there is no major tradition to commercialize or to think in such terms . . . Even among the senior research group leaders I cannot see that entrepreneurial spirit. (Assistant Professor, Systems Biology, University B) The associate professor added: ‘It depends very much on who you are and who your boss is.’ The principal challenge for the technology transfer offices and other university-based initiatives seeking to reinforce the entrepreneurial spirit among academic scholars was to help scientists combine academic careers with entrepreneurial work: ‘One should preferably create the conditions for moving ideas forward without consuming half of your career as a scientist,’ the assistant professor argued.
This group of busy scholars would benefit from some help in making ends meet: I think there is this ambition someplace . . . It should be promoted better or perhaps the systems should be made less complex . . . The concern is that everyone conducting research is very busy and, in addition, they combine this work with family life and other
things. I wrote my patent application myself. If I had a family, that would have been impossible. (Assistant Professor, Systems Biology, University B) In summary, there is a strong emphasis on basic research in the research team; industry collaboration may be helpful and further extend the basis of the state-of-the-art expertise developed in the department, but needs to be carefully managed in order to avoid disappointment on both sides; the entrepreneurial ethos is not strongly articulated and, again, what entrepreneurial interest exists is largely subsumed under the research interests and the institutionalized academic career path. Taken together, the systems biology researchers represent a quite conventional perspective on basic academic research and the hopes for what such endeavours could lead to.
Summary and conclusion
When thinking in terms of professional identities, there was little extravagant behaviour or larger-than-life expectations on the part of the researchers. Instead, they rather closely identified themselves with the tradition of basic research, based on conventional procedures of laboratory research and analytical work conducted in the computer setting. Collaboration with industry was largely appreciated but best handled with care; it was rewarding for the work, but the focus of attention was always to seek to understand the generic biochemical and microbiological processes in the biological system. Hopefully, research efforts could contribute to new therapies and better health care practices further down the road, but the principal objective of the researchers was to produce qualified research that was published in peer-reviewed journals. It is, thus, easy to draw the conclusion that the systems biology researchers closely followed the professional credo of the practising scientists outlined by Max Weber in ‘Science as a Vocation’ (1948): scientific work is essentially a matter of applying pre-defined methodologies; scientists must be equipped with an extraordinary capacity for patience; any scientific contribution to the field of expertise is of necessity small and clearly bounded (Rabinow, 2003: 99). There were few venturesome activities or extraordinary aspirations among the researchers. They stuck to their ‘bread and butter’ and were perfectly happy doing so. Speaking of theories about the role of identities as both an individual resource and a managerial tool enabling unobtrusive control, or what Du Gay (1996) calls autosurveillance, identity-making is always a matter of using what is at hand; constructing identities is always a collective
accomplishment. In the case of the systems biologists, there are a number of competing or potential identities that could be emphasized: the scientific identity, stressing the basic research efforts; the enterprising identity, emphasizing the ability to further capitalize on one’s highly useful know-how; the industrialization identity, underlining the capacity of systems biology to operate as one important input factor in the production of various commercial products and services. Even though the systems biologists drew, to some extent, on all these resources for identity-making, they had a firm identity as traditional laboratory researchers integrating the latest know-how in the fields of computer science and mathematics in their work. It was the desire to understand and intimately know the biological organism of study – in most cases a specific yeast model, but also fungi, in University A, and the human body in the medical school in University B – that guided and directed the research efforts of the systems biologists. In her biography of the Nobel Prize laureate cytologist Barbara McClintock, Keller (1983) talks about acquiring a ‘feeling for the organism’ as the ultimate goal of the scientist in the life sciences. This ‘will to know the organism’ was perhaps the most articulated factor in the identity work of the systems biologists.1 The quite substantial hype in both the media and the more scholarly literature around the potentials of the life sciences has by no means led to any unjustifiable expectations on the part of the systems biologists. On the contrary, they point out that there is much to learn and significant difficulties ahead for researchers seeking to fully reveal the metabolism of biological systems. On the other hand, the researchers were fully aware of the applications of their research.
Having already close-knit collaborations with industry in the field of industrial bioengineering (complementing the medical research agenda wherein the researchers collaborated with medical school research teams), the researchers had extensive experience of using, for instance, yeast cells to produce a range of substances on an industrial basis. Taken together, the systems biologists were able to forge a collective and individual professional identity on the basis of a rather heterogeneous field of research by strongly emphasizing the run-of-the-mill work in the laboratory and in front of the computer. Even though they recognized the transdisciplinary nature of the field, they did not regard themselves as representing an extraordinary domain of the life sciences but were eager to locate themselves in the longstanding tradition of the life sciences and the clinical sciences. The study of the field of systems biology suggests that the academic researchers saw great potential in examining large data sets, derived
either from laboratory settings or from clinical studies in the fields of medicine, through the bio-computational models developed in systems biology. The bioengineers and bioinformatics specialists in University A thought they were capable of helping to sort out data produced in, for instance, medical research. In University B, there was an awareness of the shortage of advanced analytical methods to handle the massive data sets produced by the omics technologies in use. At the same time as all interviewees emphasized the potential of the systems biology approach, they were also aware of the challenges lying ahead of the research field; to date, few new phenomena have been demonstrated, while at the same time rather elementary studies using, for instance, already published data (as in the case of the dissertation of the assistant professor at University B) could produce new knowledge and understanding of a biological pathway or the identification of a new receptor. The academic researchers identified themselves with the field of academic research while at the same time welcoming industry collaborations as long as they were thoughtfully managed and took into account the needs and demands of both partners. On the other hand, the researchers did not, on the whole, have the time or the dedication to commit themselves to entrepreneurial activities. Many of the researchers did not see the benefit of starting their own companies and were in general concerned about the extensive administrative work demanded to register and run a new enterprise. In line with the findings of Stuart and Ding (2006), the associate professor at University B claimed that what mattered was who you were and who your bosses were (see also Roach and Sauermann, 2010). The entrepreneurial spirit is contagious, especially when carried by leading researchers of the field.
By and large, the systems biologists did not demonstrate any specific entrepreneurial activities or desires, notwithstanding the excess demand for their expertise in the life sciences. The instituted academic norms and virtues, focused on publishing and research funding, tended to blend poorly with entrepreneurial ambitions.
Note
1. This kind of empathic expression, a form of anthropomorphism where lower-level biological systems such as cell lines are treated as human actors, seems to be common among scientists. For instance, in Barley and Bechky’s (1994: 98) study of laboratory technicians, technicians spoke about ‘keeping the cells happy’, a social practice including a variety of rather sophisticated and intimate understandings of the biological specimen and the technologies used: ‘To ensure healthy cells, cell culture specialists continually monitored differences in the cells’ shape and color as well as changes in the visible
properties of the media in which they grew,’ Barley and Bechky note (ibid.). This monitoring also included the skill of ‘having a feel’ for the instruments and technologies used. Unfortunately, there are numerous ways the neophyte or unskilled technician may fail to keep the cells happy, thereby ruining parts of the experimental system through a variety of acts constituting ‘touching the cells wrong’: ‘Pushing a mouse spleen too vigorously through a screen designed to separate normal cells could lower a fusion’s yield by damaging an unnecessarily large number of cells. Pipetting too forcefully into a test tube could destroy a cell colony at the bottom of the tube’ (ibid.: 100). The skilled laboratory technician is, thus, invaluable for ‘making things work’ in the laboratory, and their ability to be what the technicians referred to as organized is central to any successful laboratory.
6 Managing and Organizing the Bioeconomy
Introduction
The bioeconomy is a rather recent phenomenon in the globalized economy. Even though agriculture and cattle breeding have been human endeavours for centuries, it was not until the emergence of modern science that human beings could intervene more effectively in nature. Shaping and forming nature at the cellular or molecular level is perhaps the single most promising scenario for mankind, struggling, for the time being, to cope with severe and highly worrying climate change and a steady growth of the global population, leading, for example, to what may easily become an endemic shortage of freshwater. However, being at the forefront of the technosciences, this venturing into the inner matter of life as such is not a trivial matter; rather, it represents the most advanced and sophisticated procedure invented by humans, accompanied by a series of concerns regarding the nature of humanity and the potential effects of actively forming life and nature. More recently – stretching back to the 1970s and the inception of the first biotechnology companies – this venturing into life and nature has become an economic issue, an issue of governance, regulations and management. Organizing these activities demands significant resources, in terms of both financial and human capital. In this book, some of the implications of the bioeconomy have been addressed, in theoretical and empirical terms. As has been suggested, the bioeconomy is still in its infancy, gradually stabilizing a set of procedures, routines and institutions enabling its further growth. By examining the three principal domains of research in the contemporary economy – the major multinational pharmaceutical companies, the smaller biotechnology companies and the academic research community – it has been suggested that there are a variety of practical, theoretical,
ethical and political issues to be addressed. Living in the post-genomic era, characterized by a loss of faith in the human genome as the ‘book of life’ and a search for alternative, non-reductionist methodologies and theoretical frameworks, researchers in these three domains appear to remain convinced that the life sciences are capable of providing a new understanding of biological systems and innovative therapies with good efficacy, but that this will take time. What is haunting these researchers is what Paul Rabinow has called the ‘purgatorial dimension’ of the life sciences: that ‘[m]ost of these technologies promise much more in terms of material interventions than they currently deliver’ (Marks, 2006: 333–4). Interrogated by journalists and expected to point to short-term results in research grant applications, scientists easily fall prey to making connections between, for instance, certain genes or SNPs and dysfunctions at the level of the phenotype. Therefore, the popular press is filled with stories and anecdotes about ‘gay genes’ or ‘left-hand genes’, stories that accrue little scientific credibility and have limited practical significance. The public image of science easily portrays contemporary technosciences as some kind of theatre of curiosities rather than a more systematic intervention into biological or natural systems in the pursuit of understanding life per se. Such tendencies are widely deplored and are negative for the sciences, as researchers are always expected to bring out the news no matter how premature it is. In an age where the sciences are legitimized on the basis of their performativity, their capacity to accomplish socially valued output, there is always the threat of trivializing research findings or making announcements that overrate the practical implications.
The purgatorial dimension of the contemporary technosciences is, thus, one of the aspects of the sciences we have to live with today. In this final chapter, some concluding remarks will be made regarding the theoretical and practical implications of the studies. Rather than constructing some integrated model or theory of the bioeconomy and the professional practices taking place in the domain, the chapter will address some of the lingering concerns regarding scientific intervention into biological systems, indeed into life.
Technoscience and its ‘impurity’ The French philosopher Michel Serres (1995, 1997) claims in a number of publications that chaos, not order, is what prevails in the world. Humans tend to think of noise, turmoil and turbulence as being peripheral or marginal problems appearing at the fringe of an ordered world. For
Serres this is a fundamental mistake: the world is messier, more impure, more enfolded and complex than we are prone to think. The human mind seeks stability, predictability, order and linear causalities. There is perhaps no place where this ideal of the ordered world is accentuated more strongly than in the sciences, the domains where natural and social processes alike are described in terms of axioms, theorems and laws. The sciences are the human pursuit of taming chance, of making the world predictable and hence controllable, subject to mastery and governance. Humans see a great beauty in natural laws, both because they are outcomes of extraordinary human accomplishments demanding great efforts, cognitive capacities and creative thinking (Johannes Kepler and Isaac Newton are great heroes in this tradition) and because they provide a sense of making the world more predictable. As has been pointed out by numerous scholars, faith in God (or other divinities) has been replaced by secular faith in the sciences. We may no longer believe that Jesus is the son of God, but we do believe that certain proteins may play a key role in curing cancer. The key to the sciences is, then, perhaps not their actual performativity but their credibility. As has been argued by, for instance, Colin Milburn (2008) in the case of nanotechnology, the interest of nanotechnology lies not precisely in what has been accomplished but in what may be produced within this scientific programme. Thus, the connections between popular culture (e.g., science-fiction writing), science and politics are tighter in the field of nanotechnology than is commonly admitted. Highly speculative thinkers such as K. Eric Drexler, writing a series of bestselling and highly evocative books about the future of nanotechnology, have helped establish the very idea of nanotechnology on the research agenda. Scientific credibility here plays a key role.
According to Epstein (1996): By ‘scientific credibility,’ I mean to refer to the believability of claims and claim-makers. More specifically, credibility describes the capacity of claim-makers to enrol supporters behind their arguments, legitimate those arguments as authoritative knowledge, and present themselves as the sort of people who can voice the truth. (Ibid.: 3) The key to ‘being believed’ is formal credentials and previous accomplishments. The sciences have ample access to these resources. Being established in the university system, institutions such as Stanford University or the Max Planck Institute are highly credible and prestigious sites from which scientists can articulate their knowledge claims. In addition, the history of the sciences is one long, more or less
uninterrupted success story. Some elements of failure or setbacks are, more or less, small digressions from the great narrative of advancement. Martin Heidegger’s claim that ‘Only God can save us’ is now translated into ‘Only the sciences can save us.’ Global warming and overpopulation are two human challenges that only scientific advancement and technological innovation can handle. That scientific contributions and technological innovation played a key role in bringing humanity to the present precarious situation is rarely mentioned; that the sciences are just as much part of the problem as the solution is not always recognized. By and large the sciences have maintained or even increased their credibility in an historical period characterized by the loss of authority: ‘Growing distrust of established experts is magnified by our culture’s ambivalent attitude towards the institutions of science and their technological products. To be sure, science remains in relatively high esteem, especially considering the overall decline in confidence in many social institutions in the United States in recent decades,’ Epstein (ibid.: 6) says. Scientific credibility does not fall from the sky and is not delivered at the bottom of cereal boxes; it is actively produced by the day-to-day work of scientists (as shown in Latour and Woolgar’s [1979] seminal study). The work of scientists is, then, largely focused on turning relatively fluid and even ambiguous empirical data into facts – that is, to ‘black-box’ them in order to render them safe from criticism and help the scientists continue their work without any discussions regarding the nature of their contribution. 
‘Masked beneath their [scientific facts’] hard exterior is’, Epstein (1996: 28) writes, ‘an entire social history of actions and decisions, experiments and arguments, claims and counterclaims – often enough, a disorderly history of contingencies, controversy, and uncertainty.’ Disorder is here secreted under the veil of certainty. In the sciences, as in the human picture of the world more broadly (Serres, 1995), disorder and ambiguity cannot be tolerated. Therefore the sciences represent a systematic endeavour to render what is contingent, controversial and uncertain into neatly packaged axioms, theorems and laws, or, in the absolute majority of the cases, as facts. Producing facts is the best we can hope for, especially in the social sciences, examining emergent systems where agents respond to social interventions, and facts are what build credibility both in the community of scientists and in the relationship with the broader public. Epstein accounts for the entire procedure preceding the establishment of the fact: Scientists strive to close black boxes: they take observations . . . present them as discoveries . . . and turn them into claims . . . which
are accepted by others . . . and may eventually become facts . . . and finally, common knowledge, too obvious to even merit a footnote. Fact-making – the process of closing a black box – is successful when contingency is forgotten, controversy is smoothed over, and uncertainty is bracketed. Before a black box has been closed, it remains possible to glimpse human actors performing various kinds of work – examining and interpreting, inventing and guessing, persuading and debating. Once the fact-making process is complete and the relevant controversies closed, human agency fades from the view; and the farther one is from the research front, the harder it is to catch glimpses of underlying uncertainties. (Epstein, 1996: 28) The entire procedure of fact-making is a struggle against noise and turbulence. The world is reduced to the level where it can fit into the collectively enacted definition of the fact and – once qualified as fact, presented in the form of inscriptions – the fact may be passed around, travelling between scientific communities or groups. However, what Epstein (ibid.) and other students of the sciences have suggested is that there is no ‘innate purity’ in the sciences; purity is a fabrication, an ideal, an ideology, an objective. Instances of nature (e.g., molecules or proteins) do not speak for themselves but are spoken for. Scientists examine and interpret highly complex technoscientific inscriptions produced in the operational experimental systems and do their best to sort out and structure what they are capable of observing. At times, this procedure takes place in confined laboratories, sheltered from the immediate influence of politics and controversies, while in other cases the laboratory is pervaded by such forces. In Epstein’s (ibid.)
study of AIDS research, marked by political activism, political initiatives, lasting intra-scientific controversies and debates, and a sense of urgency as thousands of people were dying of AIDS during the research efforts, the scientific search for both credible explanations of the disease and adequate therapies was far from isolated from society. Instead, the AIDS research was what Epstein called ‘impure’, never capable of separating itself (or perhaps never willing to do so) from society and social concerns. In the case of the life sciences in the bioeconomic regime, there is perhaps not as strong a sense of urgency in terms of producing therapies and explanations for a variety of illnesses, but the various fields of research that have been addressed in this book (e.g., post-genomic research, reproductive medicine, organ transplantation, neurodegenerative diseases) deal with health care issues that are of great interest for
society. In addition to the actual interest in the life sciences’ ability to handle a series of diseases and conditions, there are a number of recurrent anxieties regarding the role and purpose of the technosciences that need to be addressed. Marks (2006) lists three concerns that immediately affect the domain of the life sciences. First, there is a fear that the advancement of the life sciences will lead to a ‘gradual reappearance of eugenic practices’ to handle ‘social deviancy’. Second, there is a more abstract concern regarding the nature of humanity and what it means to be human in the age of biotechnological advances. Third, there is the fear that the ‘[c]ollective human genetic inheritance – the germ line – might be irreparably damaged by genetic modification or manipulation’ (ibid.: 334). Uncontrolled life sciences, operating without regulations or political agreements, thus potentially threaten social norms by imposing new eugenic practices, reformulating the concept of humanity, or even polluting the human genome itself. Even though the life sciences may insist on regulating themselves, and the neo-liberal economic regime supporting their growth in economic, political and cultural importance may be opposed to overly minute political control (as in, e.g., the case of stem cell research), there are still social beliefs and norms in the general public that cannot be ignored. As we learn from Shostak and Conrad (2008), the gay and lesbian community did not tolerate research on the ‘gay gene’, which potentially legitimized their sexual orientation as some kind of quasi-medical condition and thereby perpetuated the longstanding (orthodox) Christian tradition of condemning homosexuality as an intolerable deviancy; the community thus managed to undermine the legitimacy of this entire research programme. It may be that there is no purity in the sciences, but purity may be managed to varying degrees by the spokesmen of the technosciences. 
Taken together, the life sciences, venturing into the bioeconomy, are characterized by a high degree of diversity and, indeed, impurity. In the first years of the post-genomic era, there is a struggle to define what paths to travel in the coming decades. Concepts such as systems biology point at a bio-computational and bioinformatics programme that seeks to understand biological systems on an aggregated level, on the level of exchanges and flows of information, sharing much with Norbert Wiener’s (1950) image of the human as an aggregate of information. On the other hand, the field of reproductive medicine has less use for such a view of biological systems but rather concentrates on the elementary components of human reproduction (e.g., oocyte, sperm, hormones). Yet these two technoscientific programmes are part of the bioeconomy and the enterprise of inscribing economic value into
biological systems. Rather than being one single analytical framework or representing one or a few entrenched technoscientific procedures or techniques, the bioeconomy is characterized by its diversity and the criss-crossing of various human demands and interests. Couples unable to have a child of their own have different expectations of the life sciences than the elderly man diagnosed with a neurodegenerative disease such as Alzheimer’s disease, but they are unified in their hope that the life sciences will provide some therapy or solution to their respective predicament. What these two different social concerns share is reliance on the bioeconomy to provide therapies on the basis of enterprising and economic rationalities. In the bioeconomy, the life sciences serve mankind not only on the basis of intellectual curiosity or the will to do good, but also on the basis of joint economic interests. Here, the processes of life, potentially amenable to human manipulation, are economic resources in their own right. Today, even sick populations in India may be regarded as economic resources (Prasad, 2009). Life per se, the elementary matter of life (molecules, proteins), and the economy and financial interests co-exist in close proximity, an historically unprecedented entanglement of life and money that, for some, is a precarious bundling of interests that should preferably be kept apart. Only the future will show whether venturing into the life sciences on the basis of financial interests was a good idea or not.
Professionalism in the bioeconomy

The concepts of professionals, professionalism, professionalization, etc., are central in the social sciences and the study of work. Professionals are occupational groups that manage to enact a domain of jurisdiction and erect entry barriers in the form of credentials to the domain. This sounds like a relatively simple act but it is, in fact, a complex social procedure in which forms of know-how are negotiated and structured into fields and hierarchies and formal rules and regulations are established. Professionalization includes a great variety of social processes that, each in their own way, contribute to the boundary-making between forms of expertise and specialism. In the second chapter, professionalism was discussed in terms of relying on both professional ideologies and professional identities, two key terms in the social sciences that cannot be kept apart as they presuppose one another. Ideologies are aggregates of doctrines, beliefs and materializations that precede and pervade the everyday practices of the agent (Žižek, 1994: 9). Identities are ideologies located and materialized in the very body of the agent,
the self-reflexive and conscious choices made on the basis of association with specific ideological stances. Identities are the operationalization of ideologies on the level of the individual agent and groups of agents. In addition, the identities of agents, leading to specific practices, further reinforce ideologies, rendering them solid and immutable. In terms of professionalism, Strauss et al. (1964) emphasize the connections between ideologies, identities and practices, showing that, in the field of psychiatry, a number of different therapies for the treatment of patients, based on specific ideologies, are capable of producing strong professional identities, separating the professional community of psychiatrists into three relatively distinct categories. Ideology and identity are thus always indispensable elements of professionalism. In the three case studies of researchers active in a pharmaceutical company, in biotechnology companies and in the research university setting, professional ideology emphasizes the need to adhere to scientific ideologies. In the pharmaceutical company, the main interest was the production of new drugs, and all things managerial intervening in this work were regarded as non-value-adding processes. In the biotechnology companies, arguably by definition based on entrepreneurial interests since they occupied the terrain between the large bureaucratic pharmaceutical companies and the network-based university systems, the main concern was either to develop new therapies or to make new methodological contributions to the life sciences. In the university setting, both doctoral students and full professors argued that their principal interest was in the field of research and that they did not entertain any far-reaching entrepreneurial ambitions. 
Being able to conduct qualified research, including a variety of practices such as lecturing, applying for research grants, supervising PhD candidates (in the case of the professors), conducting and overseeing actual studies, writing research papers and so forth, took up the full working time of both the professors and the doctoral students. Of the interviewees, very few claimed any interest in running their own companies. By and large, the interviewees, all with their background in the life sciences, pledged allegiance to scientific ideologies. In the literature on the bioeconomy, there is a strong emphasis on the economic potential derived from the new technological advancements. The early pioneers of the biotechnology industry have been portrayed as role models for a whole new generation of ‘bioentrepreneurs’, capable of exploiting the vast economic and financial interests in the life sciences. Such cadres of bioentrepreneurs may be observable in some
regions of the world, say in the San Francisco Bay area, where universities such as Stanford, UCSF and the University of California at Berkeley provide the critical mass of talent, capital and infrastructure to generate its own impetus for further growth. In other parts of the world, where the expertise in the life sciences is still state-of-the-art but less concentrated, there is less interest in enterprising activities. The study of the three domains of the bioeconomy thus suggests that the professional ideology and identity of scientists – the scientific ideology – still serves as the foundation for these researchers. They are dedicated to research; if their research findings lead to practical implications and eventually economic enterprises, this is a welcome event but it is not something that is expected. In this respect, the interviewees of the study adhere to the view of the scientist presented by Max Weber (1948) in his essay ‘Science as a Vocation’ – that is, as involved in work that demands minute attention to detail, a great amount of patience and an understanding that what may be produced is of necessity small in scope and is bound to become obsolete in the progress of the sciences. The scientists operate in confined domains to accomplish highly specialized contributions. This activity and, indeed, credo demand a specific mindset that may be in conflict with the more market-oriented and opportunity-exploiting activities of entrepreneurial practice. Thus, the growth of a truly innovative and flourishing bioeconomy may demand the combination of enterprising and rhetorically skilled entrepreneurs and scientists with expertise in the research work. One of the explanations for the great success of industry clusters such as the Silicon Valley computer industry has been the combination of forms of know-how that mutually support the growth of industrial activities. 
In Silicon Valley, there is an abundance of know-how in computer science, systems engineering and related expertise, but also venture capital and law firms that understand the nature of the computer industry. In order to make the bioeconomy expand further, the scientific expertise in the life sciences needs to be accompanied by an infrastructure that can support the growth of new enterprises seeking to exploit the know-how of life science researchers. Operating on their own, the scientists are so concerned about maintaining their position in the university system that they have neither the time nor the incentive to invest in starting their own companies. Given the evangelical or sceptical tone of some of the texts about the future of the bioeconomy, the actual preferences and objectives of the actors in the field are relatively conventional, showing little evidence of a disruptive break with the past. Researchers in the life sciences appear to
be more concerned about the lack of progress or the inability to deliver what has been promised than about the dangers of intervening in life per se through scientific means. Much of the literature highlights specific cases that may be further accentuated (e.g., the case of organ theft and similar violations of human rights), but such cases are quite far from the everyday work of scientists in the life sciences. On the other hand, some revolutions occur silently, and the extraordinary advancement of the life sciences in the last 15 years must be carefully examined and understood in a wider social and economic context. However, the sciences are also embedded in visions and narratives, stories of what may be or become (Squire, 2004), and today there is an intriguing combination of actual accomplishments (e.g., the cloning of Dolly the sheep at the Scottish Roslin Institute) and enthusiasm and worries about what may come out of an unrestrained and freewheeling bioeconomy devoid of regulations and monitoring bodies. The overall impression is, then, that the bioeconomy stands with one foot in the actual world of practical concerns, including that of how to handle the massive amounts of data produced by the new technoscientific apparatuses, and with one foot in a virtual world that is in a state of becoming, a world that is fictional but still not unreal. Henri Bergson’s (1988) conceptualization of the virtual, as expressed by Deleuze (1988: 96), emphasizes that the virtual is ‘real without being actual, ideal without being abstract’. The bioeconomy is, thus, partly actual, grounded in socio-material practices, and partly virtual, not yet materialized but in the making, anchored in beliefs, hopes and fiction about what may eventually be accomplished. Professional ideologies and identities are thus grounded in both practical possibilities and potentials, things that may be accomplished today, and hopes for what may be accomplished in the (near) future. 
This latter, virtual element presupposes visionary thinking, ideas of how mankind would benefit from, for instance, new reproductive technologies (‘Women and men make their careers first and then get their kids when they think they have the time, around the age of 50 or so’), xenotransplantation (‘If you get cancer in the liver we can always grow you a new one’), or pharmaceutical therapies (‘If your weight is threatening your health we can give you a pill to drop some weight’). However, in the present regime of the bioeconomy, scientists seem to be more preoccupied with practical concerns derived from the development of quite recent genomic and post-genomic research methodologies than with such visionary thinking. Taken together, the bioeconomy is not yet the scientific-economic complex, capable of generating both financial resources and new life science wonders, that at times appears in
texts addressing the bioeconomy. Instead, we find increasingly stressed researchers, operating on the basis of highly sophisticated technologies, dealing with growing data sets that certainly do not ‘speak for themselves’, and being told by others that they, if anyone, are in a position to produce a new future for mankind. This is a far less glamorous and less threatening image of scientific work than that of the ‘mad scientists’, cloning first sheep and then, potentially, even humans, presented in some of the texts criticizing the so-called transhuman or post-human frameworks that conceive of man as something entirely different in the future. Again, there is quite a substantial leap between what happens today in laboratories and scientific circles and the more visionary texts addressing the future of the bioeconomy.
Studying the bioeconomy on the basis of organization theory

The study of the bioeconomy from an organization theory and management studies perspective should take into account the great diversity and the many contradictions in the very endeavour of understanding biological processes on the basis of organizational rationalities. Of particular interest here is the process of what, in the literature on new economic sociology, is called ‘economization’ (Çalışkan and Callon, 2009; Fligstein, 2001), ‘monetarization’ (Carruthers and Espeland, 1998) and ‘financialization’ (Dore, 2008): the inscribing of economic value into new entities or processes. In the emerging bioeconomy, new human tissues or fluids (e.g., sperm) are accruing an economic value. More specifically, the economic value of a tissue or fluid is not strictly a matter of its technoscientific value but is also the outcome of social norms and agreements. For instance, as Almeling (2007) shows, women choosing a sperm donor prefer physically fit men over less physically fit men with higher education, even though the physical constitution of the future baby is only marginally influenced by the hereditary material. The physically attractive man is a social norm that is translated into the preferences for the sperm donor, notwithstanding the scientific relevance of this choice. Women also prefer egg donors who are physically attractive and have a college education. These women hold that the physical attractiveness and social accomplishment of these donors will be present in the sperm or the egg. The biomaterials circulating in the bioeconomy are thus inscribed with social values and norms, notwithstanding their scientific relevance. As a consequence, there are opportunities for ‘economizing’ certain biomaterials (‘We only deal in eggs
from attractive and accomplished women’), a process that is of interest for organization theorists. Organizations, and more specifically organizations in the field of the bioeconomy or at its fringe, are the domains where the sciences, social interests and concerns, and economic rationalities intersect. Organizations like the egg agencies studied by Almeling (ibid.) are the principal sites where biomaterials and tissues are economized and monetarized, inscribed with economic value, both use value and exchange value. In terms of organization structure and managerial practice, empirical research in the bioeconomy may provide insight into how these activities are organized and managed. A first hint is that these organizations are structured either as traditional multinational divisionalized pharmaceutical companies or as small or medium-sized biotechnology companies interacting in organizational networks. The bioeconomy is inextricably entangled with the life sciences, but as the research findings are translated into commodities, therapies and services, organization theory and management studies may contribute to an understanding of how advanced technoscientific research is translated into commodities and social and organizational practice. This organization and management of the bioeconomy is a domain of research that has been the subject of only marginal interest in the field of organization theory and management studies. In the future, it is likely that the Western world will primarily produce goods and commodities based on highly specialized scientific and technological know-how, while other forms of production are moved to other parts of the world (Kollmeyer, 2009). As the population in the West ages and the supply of health care services increases the (perceived) quality of life, there may be a substantial market for bioeconomic products and services. Organization theory and management studies may make worthwhile contributions to this domain of the economy. 
The bioeconomy is also interesting from an organization theory perspective because the products offered represent a highly novel mix of products and services. For instance, in the field of reproductive medicine, in infertility clinics, same-sex female couples unable to have a child on their own are treated by a variety of both technological and biopharmaceutical means with the intention of producing a foetus – and, eventually, a baby. This ‘end-product’ naturally transcends all product/service dichotomies, as what is de facto produced is actual life. The bioeconomy is an eminent example of how old forms of binary thinking and binary categories are crumbling as new ways of thinking emerge; the distinction between ‘born’ and ‘made’ is becoming, as Franklin and Roberts (2006) suggest, more fuzzy and complex. Also, in
the field of consumer behaviour in marketing and the sociology of consumption (see, e.g., Douglas and Isherwood, 1979; Firat and Venkatesh, 1995; Miller, 1995; Ritzer, 2005), the emerging bioeconomy will open up new research agendas and public concerns. If the consumption of alcohol, cigarettes and other goods is broadly proscribed by the predominant health ideology, there will be room for new sources of escapism and symbolic means of consumption (Thanem, 2009). In addition, the perceived quality of life, health and fitness is a more or less inexhaustible domain of consumption; there is really no limit to how ‘good you can feel’, and it is little wonder that the first decade of the new millennium was the period when spas and other escapist forms of health-centred consumption increased significantly. In addition, there are signs of companies and other employers taking a more active role in regulating employees’ health and well-being, instituting new forms of health control and policies prohibiting, for instance, smoking during work time (Holmqvist, 2009). There is, in other words, a closer proximity between management practice and the perceived health of employees. This tendency is likely to produce new forms of disciplinary and subject-forming practices in the workplace. The ‘war on smoking’ has been a success story in the West (though not without the loss of some individual integrity rights), arguably reducing the incidence of diseases such as lung cancer in generations to come; the next battle in public health may concern overweight and obesity, a condition talked about in terms of epidemic proportions, with unforeseeable consequences (Throsby, 2009; Critser, 2003). In the USA and the UK, obesity is becoming one of the most worrying public health care problems, expected to lead to an explosion in health care costs (Thanem, 2009). 
Enterprising actors in the emerging bioeconomy are here providing products that supposedly help obese patients grapple with their health condition. The concern is, as shown by Jonvallen (2006) in her study of the clinical trial of an obesity medicine, that operative definitions are produced on the basis of the desired outcomes en route, not on the basis of objective standards set prior to the clinical trials. Definitions of obesity are, thus, enacted as an innate procedure of the clinical trials, based on what Hogle (2009) calls ‘pragmatic objectivity’ and Cambrosio et al. (2006) refer to as ‘regulatory objectivity’: an enacted standard for what counts as an objective measure, capable of propelling further action rather than being based on solid scientific evidence. In this regime, obesity is defined performatively on the basis of the opportunities for medical treatment of obesity. Pragmatic objectivity is, thus, an unobtrusive and inconspicuous compromising of scientific norms for
the benefit of the opportunities for producing a marketable drug for obesity patients – a drug that may prove to be highly profitable given the alleged ‘epidemic’ levels of overweight in populations in parts of the world. These intimate and at times problematic connections between perceived health care conditions, medical treatment and scientific practice may be fruitfully studied by organization theorists.
Managerial implications

It is complicated to provide any straightforward recommendations to practitioners in the bioeconomy and the life sciences. Being based on scientific know-how and advanced technologies, the life sciences are disruptive and emergent in nature. Seemingly peripheral research initiatives may have substantial consequences over time, while large-scale and highly prestigious joint efforts may prove to have less practical value than postulated (as in the case of the Human Genome Project in terms of new therapies developed). For the pharmaceutical industry, it seems as if there is a loss of faith in the traditional blockbuster model based on wet lab biology and in vivo studies. The pharmaceutical industry has been overwhelmed by a tidal wave of new medical and technoscientific technologies and analytical frameworks developed over the last two decades, and actors in the industry are still struggling to sort out and master all these different approaches and opportunities. New drug development is gradually becoming a computer and information science, operating on the basis of large-scale samples produced in clinical trials. For the time being, there is quite a bit that is not yet known; living in the post-genomic era, it is perhaps the proteins that will play the most significant role in leading humanity forward towards a more substantial and detailed understanding of biological systems on the level of elementary matter. The genomics research programme is commonly portrayed as some kind of failure, but it would be unfair to make such claims just a decade or so after the finalization of the HUGO project. Rather than being a major disappointment, HUGO has opened up a new domain of research in proteomics and transcriptomics, technoscientific studies of how the genome is capable of translating into what today seem to be the key entities in biological systems, the proteins and the amino acids they are composed of. 
No matter where the recent advancement in the field of microbiology and related disciplines (bio-computation and bio-engineering included) may take us in the future, the pharmaceutical industry seems to be at a crossroads, where the ‘one-drug-fits-all’ paradigm is perhaps gradually
being displaced by a more ‘personalized medicine’-oriented research agenda. The good news is that the demand for drugs is not very likely to decline, as certain health conditions are medicalized (the case of ‘erectile dysfunction’, more recently redefined as ‘erection quality’, and the blockbuster drug Viagra developed and marketed by Pfizer being one prominent example; see Åsberg and Johnson, 2009) and the population is ageing. The bad news is that the stock of patients capable of participating in clinical trials is shrinking in the West; however, moving to Eastern Europe and South America to take advantage of ‘drug-naive’ populations may be a handy solution to this concern. A guess would be that pharmaceutical companies will play a similar role in the future as they do today, primarily preoccupied with ‘lifestyle diseases’ and quality-of-life drugs (again, Viagra™ being the paradigmatic example), but that the procedures used when developing these drugs will be much more diverse, blending traditional wet lab in vivo biology research with more bio-computational approaches. Regarding the practical implications for the biotechnology sector, it is even more complicated to articulate any qualified advice. Given the exponential growth in technoscientific methods and technologies in the life sciences, biotech companies may play a key role in providing specialized know-how to the major pharmaceutical companies. The biotechnology companies would then serve the role of being responsible for certain outsourced activities. Such a scenario presupposes an increased modularization of the entire new drug development process. In the field of clinical trials, even today there is quite a substantial number of so-called clinical research organizations (CROs) that take on substantial parts of clinical trial work. Much of the development work, however, remains an in-house activity in the major pharmaceutical companies. 
In the future, it may be that new drug development will undergo a ‘Toyotaization’ process, where the major pharmaceutical companies serve as end-producers, assembling all the input from a wide variety of biotech companies and CROs into a final drug. Such a more accentuated, networked new drug development model is already observable in the great biotechnology clusters in the Boston region in Massachusetts and in the San Francisco Bay area (Powell et al., 2005; Whittington et al., 2009). The university disciplines operating on various post-genomic technoscientific programmes are likely to see a very bright future. The clinical sciences in the medical schools are capable of acquiring and aggregating massive amounts of data that demand proper analysis to reveal potential connections between entities and mechanisms in the biological
system, and here the bio-computational and bio-engineering sciences offer some hope for the future. The literature on, for example, systems biology warns against a hype similar to that of the human genome mapping project – for good reason – but the proponents of these approaches are in a good position to help sort out and structure what the data mean. In the present regime of the life sciences, biological systems are translated into numerical data through the means of what Thacker (2004) calls biomedia, but these data certainly do not speak for themselves; they need to be examined on an aggregated level. At the same time, all technoscientific pursuits are based on what Hanson (1958) calls theory-laden observation – enquiries based on operational hypotheses and theoretical frameworks. Some of the challenges pertaining to systems biology appear to stem from the lack of a shared and collectively enacted theoretical framework; no theory of biological systems can be constructed entirely on biological data, and consequently there needs to be some kind of shared framework of analysis. By all accounts, the recent advancement of bio-computation and bio-engineering is based on the cybernetic and informational paradigm that conceives of the human organism as essentially constituted by informational exchanges and flows (Hayles, 1999). The biological system may be made up of flesh and bones, tissues and cells, but the underlying structure can be decoded in the form of numerical data and other informational inscriptions. Only the future can tell whether this return to a strictly formalistic view of the biological system, largely ignoring the influence of the environment on the organism (Oyama, 2000), will lead to any breakthrough in either the understanding of biological systems or the production of new therapies. 
In summary, the practical implications of the emerging bioeconomy are substantial, even revolutionary in scope, but it is very complicated to predict what the outcome will be. The demand for health care services and therapies that improve the perceived quality of life is likely to grow, but how and by what means this demand will be met is quite another issue. The supply side of the bioeconomy is, however, a field of economic activity that should be of great interest for empirical studies in the coming decades. The life sciences are not only based on the accomplishments of clever and dedicated researchers but are equally shaped and formed by institutional isomorphisms, norms and standards, regulating both what may be accomplished in terms of scientific progress and what goods and services can be brought to market. Understanding the bioeconomy thus involves the analysis of both agency and structures in this emerging field.
Managing and Organizing the Bioeconomy 249
The future of the life sciences Today the research-based drug industry is working overtime to transform itself into a leaner, faster-paced version of its traditional self, as patents continue to expire for dozens of prescription products, ranging from mass-market blockbuster drugs to medicines targeted at highly defined populations.1 The golden age of the life science industry is gone. Between 1950 and the 1980s, R&D opportunities were plentiful and unmet medical needs played a key role in driving innovation, in no small measure facilitated by lighter regulatory demands and smaller clinical studies. But, for the last 20 years, the industry has undergone radical transformation and consolidation. The number of mergers in the pharmaceutical industry over the last ten years reflects the difficulties of maintaining a sustainable flow of science-based innovation. Some recurrent issues today relate to managing increased R&D costs and complexity in very large R&D organizations (of more than, say, 10,000 employees) – competing operating metrics clash over innovation priorities – all with the primary focus of securing NCEs, which still take 12–15 years to reach the market. To conclude, the industry faces a difficult situation where patents on many high-revenue products are expiring, the marketplace is much more competitive, the reimbursement environment much more restrictive and regulatory requirements more demanding. At the same time, over the last ten to 15 years, life science research has been characterized by breathtaking progress in understanding the processes of life. Genomics, including the related ‘omics’ technologies (e.g., proteomics, metabonomics), has begun to define huge new scientific areas. We have also seen rapid progress in knowledge, generated by increasingly efficient experimental methods. 
Huge amounts of data and information result from biomedical research, and their management and exploitation require permanent efforts to develop innovative computational methods and tools that can handle challenges such as heterogeneous data integration and complex (multi-scale) modelling. In addition, advances in robotics and artificial intelligence will, very likely, soon play a dominant role in life science research. Today, the life science industry is facing a silent revolution, shaped by unprecedented advancement in computer science, an explosion in information and data availability and a shift in the scientific method towards systems biology, which focuses on new advanced integrative models of how disease works. This new landscape offers
new opportunities, not only for pharmaceuticals and biotechnology, but also to bridge and connect with health care, in dealing with cost pressure and consolidation, the break-up of value chains, disruptive transitions of core underlying technologies, growing influence from non-professional owners, new expectations from funders of health care and a change in the governing logics of science-based innovation. To exemplify this, some important trends will be discussed to illustrate the growing need for an increased understanding of the driving forces that will change limiting assumptions and governing logics for the pharmaceutical and biotechnology industries, but also for health care providers and governments; the boundary-spanning nature of future activities and challenges will necessitate innovations in management models, systems and approaches. Pursuing a holistic approach to the discovery of new drugs The notion of systems biology, which aims to take a holistic approach, is not new; since drug discovery began, scientists have been looking at the effect of drugs on systems. Centuries ago, the system was the patient; nowadays, the system is generally an in vitro or in vivo model system mimicking a critical disease process. Future systems biology and translational systems biology approaches aim to develop a dynamic and global understanding of the human biological system to predict and model disease throughout the drug discovery process, from the bench to the clinic. However, pharmaceutical research during the past two to three decades has used a primarily reductionist approach, enabling the introduction of the high-throughput screening procedures that actually drive the drug discovery and development pipeline. These methods contributed to filling the early stages of the pipeline with an incredible number of new targets, reducing, in the meantime, the predictive power relating to the clinical efficacy of drugs interacting with these targets. 
The main paradigm has been to develop mostly small molecules directed at one target. The failure of this approach is now evident: despite the dramatic increase in R&D costs, the number of marketed new drugs is decreasing. This failure is due to the very low efficiency of the drug development process, mainly owing to a failure to translate pre-clinical findings to the clinic. Lack of therapeutic efficacy of new candidate drugs is usually not discovered until the late phases of drug development (phase II to phase III) (Kant et al., 2010). The reasons for systems biology not gaining success in the industry can be seen in different ways. One can argue that the scientific methodological notion of measuring one variable at a time using large
observation materials (i.e., patients/samples) to get as strong a statistical significance as possible has prevailed for too long. This approach may produce a misleading understanding of how disease works (e.g., one drug involves many targets). This philosophy is also based upon a classic image of a predefined hypothesis to be tested. Moreover, over the last decades knowledge of biology has exploded but the methods have not changed. Instead of locking the empirical space into ‘one target – one substance’, today’s techno-scientific progress can provide vast amounts of data using sophisticated multivariate methods, taking into account a large number of variables at the same time to provide a much more integrative picture of the organism (system). This approach is now relatively common in pre-clinical methods but, from a regulatory acceptance and industry practice perspective, it is still unproven as it has not yet been sufficiently tested in clinical trials. The implications of moving into an integrative and holistic scientific model, using a total or partial biological system rather than a broken-down system to understand disease interactions, have also prompted the industry, traditionally focused on small molecules (drugs with a molecular weight of less than 500 Daltons, typically orally available), to add biologics (e.g., monoclonal antibodies and vaccines) to the research portfolio. The advent of personalized medicine The approach of evidence-based medicine has revolutionized medicine during the past 50 years. However, results from large population-based studies are not always applicable to a specific individual, and physicians generally take into account specific characteristics – such as age, gender, height, weight, diet and environment – when evaluating an individual patient. 
Recent developments in a number of molecular profiling technologies, including proteomic profiling analysis and genomic/genetic testing, allow the development of personalized medicine and predictive medicine, which is the combination of comprehensive molecular testing with proactive, personalized preventive medicine. It is hoped that personalized medicine will allow health care providers to focus their attention on factors specific to an individual patient to provide individualized care. However, some question whether personalized medicine represents a true departure from traditional medical practice or is an evolutionary transition based on the latest technology. Today many pharma companies, if not all, have placed translational science (or predictive medicine), personalized health care and health information technology (HIT) at the top of their strategic research agendas, which indicates a new approach to making use of much more data in understanding the right
patients for a certain drug, but also the abandonment of the blockbuster paradigm, which essentially aims for a one-size-fits-all-patients product (PricewaterhouseCoopers, 2009). The imperative role of information management Computer science concepts, tools and theorems are now moving into the very fabric of science. Scientific computing platforms and infrastructures allow new kinds of experiments that would have been impossible only 20 years ago, changing the way scientists do science (Science 2020, 2006). The mature concept of information and communication technologies (ICT), an umbrella term that includes all technologies for the manipulation and communication of information, demonstrates fast-moving technological advances in areas such as pervasive monitoring, using miniature devices, chips and wireless networks to monitor patients (e.g., glucose, lipids, etc.) and provide measurements on animals or humans in real time. We face an explosion in the generation of high-quality scientific and health information, supported by almost infinite storage in clouds of processors, with the result that data become more shared and available. This opportunity demands new skills which can support, or at best combine, traditional scientific disciplines with computer science – multivariate modelling and informatics (i.e., information science, information technology, algorithms and social science) – to actually manage the deluge of data. There is an increasing focus in the industry on connecting and making data, and systems, interchangeable – that is, interoperable. Some external voices, like The Economist (2009), say: ‘Interoperability is king.’ This suggests that the key to success is how the life science industry connects internal information with the external world, including health care and patients, to share and make use of different information types or systems. 
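What ‘interoperability’ amounts to in practice can be sketched in a few lines. The example below is entirely hypothetical (the record formats, field names and hospitals are invented for illustration): two institutions export the same clinical measurement in incompatible shapes and units, and pooling them for research first requires mapping both into one shared schema.

```python
# Hypothetical sketch: two hospitals export glucose readings in different
# ad hoc formats; interoperability means mapping both into one shared
# schema (and one unit) before the data can be pooled for research.

def from_hospital_a(record):
    # Hospital A already reports glucose in mmol/L under its own field names.
    return {"patient_id": str(record["pid"]),
            "glucose_mmol_l": float(record["glucose_mmol_l"])}

def from_hospital_b(record):
    # Hospital B reports glucose in mg/dL; convert (1 mmol/L ~ 18.016 mg/dL).
    return {"patient_id": str(record["id"]),
            "glucose_mmol_l": float(record["glucose_mg_dl"]) / 18.016}

pooled = ([from_hospital_a(r) for r in [{"pid": 17, "glucose_mmol_l": 5.4}]] +
          [from_hospital_b(r) for r in [{"id": "B-3", "glucose_mg_dl": 180.0}]])
# Both records now share one schema and one unit, so they can be analysed
# together -- the toy version of connecting electronic health records.
```

Real interoperability efforts (shared terminologies, standard record formats, consent and legal frameworks) are vastly more involved, but the core move – translation into a common schema – is the same.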
The trend is clear: to succeed in these areas requires a new level of information sharing and exploitation capability, connecting to external health information in a much more dynamic and seamless way than before. The implications for the life science industry and health care are huge. An important need is to ensure that the right computer science skills are integrated into the life sciences, and that adequate scientific computing platforms and infrastructures are in place at all stages of new drug development, to manage the deluge of data that is produced. A salient example is the potential to connect and reuse health information repositories (e.g., electronic health records) from many hospitals to support medical research (e.g., patient recruitment and drug safety monitoring), which demands a new set of multidisciplinary skills (e.g., technical,
organizational, informatics and legal) to provide viable interoperability of data between organizations in an ethical and safe way. New cross-organizational collaborations Today many biopharmaceutical companies building their business on complex and interdependent technologies will face increasing problems in carrying the necessary discovery and development efforts on their own. Health care providers, as well as funders, will need to collaborate in developing the most effective treatment strategies. This situation leads to the need to find new collaborative approaches, connecting to major regional innovation systems and enabling more cost-effective introductions of new treatments. The trend, from a cost-efficiency perspective, is that many life science companies are outsourcing traditional in-house capabilities, such as clinical trials management and data management, to contract research organizations. More interesting, however, is the willingness in the industry to also collaborate in shared research projects. On an even broader scale is the trend of engaging in large-scale collaborative research programmes to support pre-competitive pharmaceutical R&D, accelerating the development of safer and more effective medicines for patients. Collaboration, such as private–public partnerships (PPPs), is likely to take several forms and has also been linked to an ‘open innovation’ model of R&D (Chesbrough, 2003). One salient example is the Innovative Medicines Initiative (IMI) (see http://imi.europa.eu), a private–public partnership between the EU and the European Federation of Pharmaceutical Industries and Associations (EFPIA) – a long-term partnership in collaborative research at the European level, running from 2008 to 2017, with an equally shared budget of €2 billion. 
The objective of IMI is to remove major bottlenecks in drug development, and the expectation is that this collaboration will reinvigorate the European bio-pharmaceutical sector and foster Europe as the most attractive place for pharmaceutical R&D, thereby, in the long term, enhancing access to innovative medicines. The interesting feature of IMI is that the pharmaceutical industry partners actively engage in research projects in close collaboration with all stakeholders – for example, industry, public authorities (including regulators), patient organizations, academia, SMEs and clinical centres. Thus, the output is not new medicines; instead, the focus is on the delivery of new approaches, methods and technologies, improved knowledge management of research results and data, and support for the training of professionals. IMI is based on doing pre-competitive research in areas that are of great importance for the pharmaceutical industry’s four research
areas: prediction for safety evaluation, prediction for efficacy evaluation, knowledge management, and education and training. Towards a new logic for science-based innovation and the bioeconomy The recent socio-economic drivers of technoscientific advances will create an unprecedented transitional environment for life science, health care and biotechnology over the years to come. A number of interdependencies will come into play and create synergies that have the potential to bring about large changes – part of what can be called the ‘new bioeconomy’. One such synergy is that connecting and reusing large amounts of health data will be necessary to succeed in personalized medicine. The progress in ICT, already visible in the diagnostics industry, will become more evident in the new drug development process; in turn, it may reshape the industry from one performing large, costly clinical trials to one that is smaller and more data-intensive, capturing real-life data. Success in personalized medicine will bring health care and the pharma industry together, introducing new business models. The PricewaterhouseCoopers report, Pharma 2020 (2006: 1), states: ‘It is no longer the speed at which scientific knowledge is advancing so much as it is the healthcare agenda that is dictating how Pharma evolves.’ In addition, current trends promise the possibility that modern discovery research may begin to find the right balance between in silico research and the application of complex biological models in order to maximize innovative output. Given current trends, we believe that the new bioeconomy will demand a new governing logic for science-based innovation. This logic will, for example, involve much more diversity among players, who will need to collaborate in new ways, involving the pharmaceutical industry, health care, academia and new actors like the IT and telecom industries. 
Critical competitive advantages will lie in collaboration, especially in the form of PPPs, to really accelerate science and science-based innovation, with the combined ability to share and exploit scientific information, together with new models of intellectual property rights. The business model of the pharma industry is already changing to become more virtual and externally focused. For its part, the industry needs to devise new models of R&D to remain competitive, allowing for the decline of blockbuster drugs, moving away from large, centralized organizations and reshaping its core capabilities.2 One can speculate as to whether the new bioeconomy can pull together all of the synergies that can achieve, in a true Kuhnian sense, a paradigm
shift of science to fulfil the promises of translational systems biology throughout the drug discovery process from the bench to the clinic.
Summary and conclusion This book has been an attempt to critically examine the quite diverse literature on what Rose (2007) calls the ‘bioeconomy’ and to point at the implications for organization theory and management studies. In addition, empirical studies of a major pharmaceutical company, a number of small biotechnology firms and university researchers have been reported. The image of the bioeconomy presented is that of a diverse and highly heterogeneous field, loosely connected by a joint interest in intervening in the biological system, both to enable an understanding of the constitution of life and the processes of life and to produce various technoscientific tools, technologies and drugs that enhance the perceived quality of life or cure specific diseases and illnesses. This venturing into the bioeconomy, appearing at many sites and under manifold institutional conditions, is not based on altruism but is embedded in a complex economy, including the circulation of both financial resources and scientific credibility (that is, a combination of ‘economic’ and ‘symbolic’ capital, in Pierre Bourdieu’s use of the terms), in many ways co-constitutive – the one kind of capital translating into the other. On the other hand, it would be unfair to say that the actors of the bioeconomy behave opportunistically; many of the interviewees appearing in this book are certainly dedicated both to their scientific endeavours and to their practical implications – making lives better for thousands of individuals – and are not overtly concerned with accumulating financial resources for themselves. At the same time, in late modern society and in an economy regulated on the basis of a financial regime of control (Fligstein, 1990; Davis, 2009), access to financial resources, such as venture capital, is one of the key explanatory parameters for the emergence and viability of industry clusters (Ferrary and Granovetter, 2009). 
The bioeconomy does not run on the sheer brilliance of some of the best brains in the world, but is fuelled by financial resources and institutionalized and transparent regimes of regulations and control. One single scientist, no matter how talented or dedicated to the cause, can accomplish very little unless supported by and embedded in a wider social and economic setting. In addition, there is strong evidence of path dependencies in the bioeconomy; success thus breeds further success. Organization theory and management studies should pay more detailed attention to the emergence of this
field of production – the technoscientific production of therapies and health care services – in order to understand how life per se is turned into a site where economic, financial and managerial interests intersect with other human demands and expectations. Like, perhaps, no other emerging industry, the bioeconomy, operating inter alia in the fields of drug development, organ procurement, reproductive medicine and increased quality of life more broadly, is the field where, in Hirschman’s phrase (1970), the passions and the (economic) interests intersect in various and often unpredictable ways.
Notes 1. Outlook 2010, Tufts Center for the Study of Drug Development, Tufts University. 2. One example is Sanofi-Aventis. The company announced in August 2010 that it was dividing its vast resources into decentralized, disease-based units, each with its own departments for R&D, regulatory affairs, marketing and sales – a plan designed to identify promising drugs more quickly and weed out failures before spending billions on unsuccessful clinical trials.
Appendix: On Methodology and Data Collection and Analysis
Methodological framework Epistemological concerns The overarching epistemological perspective enacted in the study is what may be referred to as a constructionist perspective and, more specifically, what Karen Barad, in a number of publications, calls ‘agential realism’. For Barad (2003), there is no metaphysical, external reality which scientists and others investigate through the use of various tools and technologies. Drawing on the scientific thinking and philosophical reasoning of the Danish physicist Niels Bohr, Barad suggests that matter is what is constituted through the scientific apparatus used, based on what Barad calls ‘intra-action’, the active engagement bridging technology, theory and the entities examined: ‘[M]atter is instances in its intra-active becomings – not a thing, but a doing, a congealing of agency . . . matter refers to the materiality/materialization of phenomena, not to an inherent fixed property of abstract independently existing objects of Newtonian physics’ (ibid.: 822, emphasis in original). In other words, matter, ‘reality’, Barad (ibid.: 817) says, does not exist per se, is not composed of either ‘things-in-themselves’ or ‘things-behind-phenomena’, but rather of ‘“things”-in-phenomena’. What Barad suggests is an idea addressed by Roth (2009): that one must not speak of reality as some kind of independent factor but rather think of reality as what is produced in the intra-action engaging scientific apparatuses and other tools or techniques. Matter is thus produced, and reality is the totality of such intra-actively enacted specimens of the real; ‘apparatuses have a physical presence or an ontological there-ness as phenomena in the process of becoming; there is no fixed metaphysical outside,’ Barad (1998: 168) contends. Furthermore, in the agential realist epistemology, agency is
‘[a] matter of intra-acting; it is an enactment, not something that someone or something has’. Agency is thus expressed in post-humanist terms, not attributable to subjects or objects but ‘a “doing”/“being” in its intra-activity’ (Barad, 2003: 827–8). While a close reading of Barad’s (1998, 2003) epistemology suggests a post-humanist stance (see also Schatzki, 2002), where agency is not located in human actors but emerges in the intra-activity of apparatuses and other resources, what is appealing in this view is the idea that matter, reality, what is commonly treated as an unproblematic external material substratum, is here intimately bound up with the technoscientific apparatus enrolled in the work. Such an apparatus is capable of producing what Bohr called ‘phenomena’, but to assume that such phenomena exist as brute facts outside the technoscientific apparatus is to make an epistemological leap that is not justified by inferences from data. In Barad’s view, intra-action, agency, apparatus and matter are produced simultaneously, and it is not possible to isolate the one process from the other. Using this epistemological framework in practical research, it may be suggested that tools or scientific models do not mimic or map reality as much as they produce images that we tend to take as legitimate instances of reality. In Johnson’s (2008) study of a simulation tool for the female urology system used in gynaecology training and education, the simulation tool was, in the first place, an embodiment of what was regarded as good clinical gynaecology practice rather than an adequate model of the female body: [T]he simulator mimics practice not anatomy, and simulates participation with the body. 
It simulates specific practices carried out through time, not ontologically pre-existing bodies independent of experience, because the urological body being simulated is that of the intra-action between the patient’s urological system (and all the technological, cultural and economic structures that contribute to its very situated practices . . . ) and the medical practices of the surgery. (Ibid.: 111) In other words, what we tend to think of as ‘matter’ or ‘reality’ is not so much based on our capacity to inspect what is external to or isolated from the technoscientific apparatus as it is bound up with the use of a series of technologies, techniques and tools that we have enacted as legitimate sources of truth claims (Kruse, 2006). In addition, agential realism, emphasizing the mutual constitution of the real through the combination of agency, apparatuses and so forth, does not only apply to advanced
technoscientific research at the front line of what is known to humans. More theoretical models may end up being what Georges Canguilhem called ‘in the truth’ (dans le vrai); ‘The truth of an idea is not a stagnant property inherent in it. Truth happens to an idea,’ William James claimed (1975: 97, emphasis in original). This was, by and large, the case for the Black–Scholes–Merton option pricing model examined by MacKenzie and Millo (2003). The Black and Scholes option pricing model is, for many proponents of neo-classical economic theory, the jewel in the crown of the discipline, in many ways accruing status to all the fields of economics, at least those of microeconomic and macroeconomic theory and econometrics (Fourcade, 2006). However, the pricing model, today widely used by option traders in all the world’s financial markets, did not initially predict option prices very accurately: ‘Black, Scholes, and Merton’s work was . . . theoretical rather than empirical. In 1972, Black and Scholes tested their formula against prices in the pre-CBOE [Chicago Board Options Exchange] ad hoc option market and found only approximate agreement’ (MacKenzie and Millo, 2003: 121). Only after a number of significant changes in the policy and the practice of options trading did the actual prices start to converge towards the predicted theoretical prices: [The] empirical success was not due to the model describing a preexisting reality; as noted, the initial fit between reality and model was fairly poor. Instead, two interrelated processes took place. First, the market gradually altered so that many of the model’s assumptions, wildly unrealistic when published in 1973, became more accurate. (Ibid.: 122) One may, thus, say that the territory was remodelled to suit the map, or that truth did, in fact, adapt to the idea of an option pricing model as traders started to use the model in their work, regardless of its accuracy. 
Using Barad’s (1998, 2003) agential realism, one may suggest that there were no ‘true’ option prices prior to the use of the Black and Scholes model, but that such prices – now in truth, supported by the institution of prestigious economic theory and two leading economists1 – were produced as an effect of the use of the model. The intra-activity of traders, policy-makers, the Black and Scholes model itself (in the form of a not-too-complex formula and eventually software programs) and historical option prices gradually stabilized a domain of economic reality, that of derivatives trading and options, previously nonexistent – at least in the same terms as from the early 1980s onwards.
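The ‘not-too-complex formula’ at the centre of MacKenzie and Millo’s account can indeed be stated in a few lines. The sketch below is a standard textbook implementation of the Black–Scholes price of a European call option (not taken from the book itself); the parameter values in the example call are arbitrary.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option.
    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: volatility (both annualized, continuous)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Arbitrary illustrative inputs: at-the-money call, one year to expiry,
# 5% risk-free rate, 20% volatility.
price = black_scholes_call(S=100, K=100, T=1.0, r=0.05, sigma=0.2)
```

That such a compact formula could, over time, come to coordinate the prices of an entire market is precisely what makes it so apt an example of ‘truth happening to an idea’: the model’s simplicity made it usable by traders, and its use made its predictions come true.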
Using the agential realist epistemology in the field of the life sciences and the bioeconomy suggests that it is the aggregate of technoscientific procedures, theories, practices and agency that is capable of producing effects that are, after close scrutiny and negotiations in epistemic cultures, in the communities of scientists and researchers, taken for real. Matter, either in the form of genes, SNPs or cell lines, or in the form of higher-level biological systems (e.g., the OncoMouse™), is inextricably entangled with the resources used in exploring such ‘“things”-in-phenomena’. Matter is, thus, what is produced rather than explored in technoscientific work. This does not suggest that the biological systems explored in the life sciences operate on the basis of relativist epistemologies; as suggested by Barad (1998, 2003), we are here dealing with an ‘agential realism’, albeit a production of reality within a non-foundationalist ontology. The real is not ‘in-themselves’, nor ‘behind phenomena’, but is precisely produced in the very act of using the various technoscientific resources at hand. Methodological choices In order to put Barad’s (1998, 2003) agential realism into actual research practice – to ‘operationalize’ it, in the positivist vocabulary – the agent and the actual practice must be brought to attention. The world explored in the book is a world-in-the-making in the disciplinary regimes of the technosciences. The study is an attempt to gain some insight into, some understanding of, how scientists are capable of producing realities on the basis of the interaction of enacted theories, collectively established and standardized research methodologies, technological apparatuses and biological specimens such as laboratory animals, samples from patient groups, or cell lines. All these resources are integrated, yet kept apart, in the everyday work of the practising scientist. 
In order to find the most adequate role models, one may turn to the laboratory studies reported in the science and technology studies literature (e.g., Knorr Cetina, 1981; Fujimura, 1996). Another body of research that is just as meticulously attentive to the work conducted by the agent is the industrial sociology and labour process theory literature. For instance, in Kusterer’s (1978) seminal but somewhat ignored work on the role of tacit knowledge in all forms of work, work is portrayed as what constitutes a meaningful life world for the worker: [T]he meaning statements and reality definitions that people develop out of their work experience form the most basic, most central level of knowledge, upon which all other consciousness is built. This is because work is the most basic level of praxis, the one sphere of
human activity in which the worker is constantly seeing his reality definitions of the work situation tested and confirmed at first hand in the concrete results of his work activity. (Ibid.: 2) For Kusterer, work is, thus, not something ‘additional to’ or ‘complementary’ to everyday life but is instead the domain where ‘reality’ is produced as the outcome of practices. Kusterer exemplifies this with the capacity of the operator of a machine to ‘keep the machine running’: The machinery, the materials, and the organizational relations of production are all complex subject areas that the operators must learn in detail in order to do their jobs. Knowledge of all these areas is a technical necessity of production, because the operators do not merely passively tend their machines. They actively keep their machines running, meeting and successfully overcoming all the material and social obstacles which periodically arise to interfere with continued quality production. (Ibid.: 62) The ability to keep the machine running demands an intimate sensitivity to and understanding of the machine and its parts, its functioning under various conditions, and how to handle malfunctions or breakdowns (Styhre, 2009). By analogy, practising laboratory researchers, both technicians and scientists, need to have the same understanding of the assemblage of heterogeneous resources mobilized in their work (Nutch, 1996; Barley and Bechky, 1994). The counterpart of ‘keeping the machine running’ in factory work is what Lynch (1985) calls ‘making it work’ in laboratory work. In terms of data collection, there are two paths to choose: either you observe people in their day-to-day work situation, or you talk to them about their daily endeavours. In many cases, these two principal data collection methods are combined into ethnographic approaches. 
Barley and Bechky’s (1994) and Lock’s (2002) studies of laboratory technicians and organ donation, respectively, are two examples of ethnographic studies in the Chicago school of sociology tradition (Deegan, 2001) that provide intriguing details about how actors perceive their own work and its broader life world. Sunder Rajan (2006), on the other hand, uses a more narratological approach and manages to outline a fascinating overview of both the drivers and the implications of the bioeconomy. Most studies of the life sciences combine talk and observation. In the present study, there is a strong emphasis on the actors’ accounts of their work. It is, in many cases, problematic to get access to the major
262
Appendix
pharmaceutical and biotech companies and, in many cases, a meaningful ethnographic approach based on observation is a highly time-consuming endeavour, demanding substantial investment from both the researcher and the participating company or workplace. As a consequence, the present study is based on interviews with representatives of three fields: the pharmaceutical industry, the biotech industry and academic research in the life sciences.
The three study sites: the pharmaceutical industry, the biotech industry and academic research PharmaCorp study Data collection The present study is part of a research project aimed at examining how scientific technologies being developed in the life sciences affect professional work and identities in the biopharmaceutical industry. Based on a case study methodology (Siggelkow, 2007; Eisenhardt, 1989) and an interview-based data collection method (Gubrium and Holstein, 2003; Holstein and Gubrium, 2003; Kvale, 1996; Fontana and Frey, 1994), ten interviews were conducted at a major pharmaceutical company, here referred to by the pseudonym PharmaCorp. The research team has developed a collaborative research relationship (see Shani et al., 2008) with PharmaCorp lasting almost ten years. Staged essentially as what Bartunek and Louis (1996) call ‘insider/outsider team research’, one of the members of the research team has worked in the company since the mid-1980s. The insider/outsider research approach is rewarding, Bartunek and Louis claim, because it is capable of combining analytical distance with detailed insight into the ongoing activities in an organization. In the present study, the detailed know-how of one of the research team members – that is, the insider – helped clarify some of the more scientific and technical intricacies of new drug development work. In the study, a number of scientists representing relevant domains of research were selected. Interviews were conducted in the interlocutors’ offices at one of PharmaCorp’s sites in Sweden. Each interview lasted approximately one hour. A semi-structured interview guide was used in all interviews. Interviews were recorded and transcribed verbatim by one of the researchers. All interviews were conducted in Swedish. Most of the interlocutors had Swedish as their mother tongue; the others had learned Swedish after living in Sweden for an extended period.
Interview transcription and data analysis The first transcription was translated into English, following the original utterances as closely as possible. In some cases, idiomatic expressions were adjusted to comparable expressions in English. The translated transcriptions were sorted into seven to eight different categories, as suggested by the qualitative data analysis literature (e.g., Strauss and Corbin, 1998; Miles and Huberman, 1984), carefully separating what van Maanen (1979: 540) calls ‘first-order concepts’ – the ‘“facts” of an ethnographic investigation’ – from ‘second-order concepts’ – the ‘“theories” an analyst uses to organize and explain these facts’ – the latter being the categories used to structure the data. That is, the interlocutors did not speak of, for instance, ‘experimental systems’ or ‘epistemic objects’ but used their own operative vocabulary. Biotech companies study Data collection The first company was in practice a one-man firm in which a computational chemist collaborated with his father, a medical doctor; there, one person was interviewed. The interview was conducted in the interviewee’s home. In the second company, which was developing new migraine therapies, two co-workers – the founder and a professor at the medical school – were interviewed. The interview was conducted at the office of the medical school professor. In the third company, the CNS research company, three persons were interviewed: two researchers and the CEO. The interviews took place at the company’s facilities in a major biotechnology building located next to the medical school and the university hospital. In the fourth company, two persons were interviewed: the research director and one of the senior researchers. Again, the interviews were conducted on site, in the same building that hosts the company. Finally, in the fifth company, one person, the CEO, was interviewed.
In addition to the biotechnology entrepreneurs, who represented a wide scope of research interests and specialisms, an incubator director at the medical school science park was interviewed. This interview was conducted in his office at the science park, located on the medical school campus next to the university hospital. In total, ten biotechnology sector representatives were interviewed. Getting access to biotechnology firms was by no means a trivial matter. One of the authors, affiliated with the pharmaceutical industry, used personal contacts leading to further recommendations. This approach of selecting interviewees on the basis of recommendations has, at times, been
referred to as ‘snowballing’, where one interview leads to the next. A requirement was that the firms included in the study should develop some kind of therapy, or a methodology or technique used in the development of therapies. That is, the work conducted in the biotechnology companies should in one way or another be related to the scientific frameworks and procedures developed in the life sciences. All five firms fulfilled these criteria, and at least two of the firms were, after close to ten years of work, on the verge of providing actual therapies. All of the interviewees in the sample were men. This highly gendered selection of interviewees is unfortunate but is, to some extent, indicative of the male dominance in the field. The research team could not influence this selection of interviewees. During the interviews, the interviewees were asked to talk about their research work, how new technoscientific procedures influenced their work and how they perceived the recent changes in the life sciences. Interview transcription and data analysis All interviews lasted about one hour. All but one were tape-recorded. Biotechnology company three was just about to report a major clinical study and the CEO did not want to take any risks; he therefore did not want the research team to tape-record his interview. Instead, he invited the research team to submit their field notes to him so that any misunderstandings could be clarified. The tape recordings were transcribed verbatim by one of the authors. During the transcription work, the passages central to the research question were transcribed. After being transcribed, each document was coded using relevant coding categories. After the individual documents were coded, they were merged into one single document in which shared codes were used to structure the data.
Bringing all the interesting interview excerpts into one single document enabled the identification of a storyline, a theme (White, 1978), in the data. The interview excerpts were, thus, structured into a narrative that would make sense for the reader. The universities study In this study, ten researchers at two major research universities were interviewed. At University A, a technical university located in a Scandinavian city, five researchers were interviewed, including the full professor (an internationally highly renowned researcher in his field), two assistant professors and two doctoral students, of whom one, a female student, was in the very last phase of her PhD programme, while the other, a male bioengineer, had started his PhD programme a few months
earlier. At University B, interviews were conducted at the medical school, one of the leading research institutions in Scandinavia. The interlocutors included two full professors in the field of clinical physiology and cardiovascular medicine. The selection of interviewees was based on the reputation of the full professor at University A, who holds a chair in bioengineering and systems biology and is very well known in the field. The professors interviewed at University B were recommended by the University A professor because they collaborated with one another. In the same manner, the interviewees in the research group were recommended by the professor. This is referred to as ‘snowball selection’ (see above), where the first interviewee recommends the next and so forth. The research group at University A, otherwise primarily working with a yeast model system in their research, collaborated with University B to examine medical clinical data collected in a number of large-scale projects in the field of metabolic disorder research. In the interviews, the interlocutors were asked to talk about their background and working experiences, their ongoing research projects and research interests, the principal challenges in their work at the time, their view of the role and purpose of systems biology – a research approach which they all formally endorsed and worked with – and their thoughts about what a full-scale systems biology programme might mean for the future of the life sciences. All interviews were conducted in the researchers’ offices or in the department’s conference room. Interviews were conducted by a single senior researcher and were transcribed verbatim. Interview transcription and data analysis The transcribed interview excerpts were coded into seven categories on the basis of shared and recurring themes in the interviews.
The seven categories were then organized into a storyline where the interlocutors’ various opinions, experiences, examples and beliefs were made into a ‘plot’ that could be more easily followed by the reader.
Note 1. One option trader, cited in MacKenzie and Millo (2003: 121), emphasized the legitimizing value of the Black–Scholes model for the entire field of financial derivative instruments trading: ‘Black–Scholes was really what enabled the exchange to thrive . . . It had a lot of legitimacy to the whole notion of hedging and efficient pricing, whereas we were faced, in the late 60s–early 70s with the issue of gambling . . . It wasn’t speculation or gambling, it was efficient pricing . . . I never heard the word “gambling” again in relation to options.’
Bibliography
Abbott, Andrew (1988), The System of Professions: An Essay on the Division of Expert Labor, Chicago and London: Chicago University Press. Abraham, John (1995), Science, Politics, and the Pharmaceutical Industry, London: UCL Press. Achilladelis, Basil and Antonakis, Nicholas (2001), ‘The Dynamics of Technological Innovation: The Case of the Pharmaceutical Industry’, Research Policy, 30: 535–88. Aglietta, M. (1979), A Theory of Capitalist Regulation, London: NLB. Almeling, Renee (2007), ‘Selling Genes, Selling Gender: Egg Agencies, Sperm Banks, and the Medical Market in Genetic Material’, American Sociological Review, 73(3): 319–40. Althusser, Louis (1984), ‘Ideology and Ideological State Apparatuses’, in L. Althusser, Essays on Ideology, London and New York: Verso. Anderson, Benedict (1983/1991), Imagined Communities: Reflections on the Origin and Spread of Nationalism, London and New York: Verso. Anderson, G. (2008), ‘Mapping Academic Resistance in the Managerial University’, Organization, 15(2): 251–70. Andrews, Lori and Nelkin, Dorothy (2001), Body Bazaar: The Market for Human Tissue in the Biotechnology Age, New York: Crown. Angell, Marcia (2004), The Truth about the Drug Companies, New York: Random House. Ansell Pearson, Keith (1997), Viroid Life: Perspectives on Nietzsche and the Transhuman Condition, London and New York: Routledge. Ansell Pearson, Keith (2002), Philosophy and the Adventures of the Virtual: Bergson and the Time of Life, London and New York: Routledge. Armstrong, Peter (2001), ‘Science, Enterprise and Profit: Ideology in the Knowledge-driven Economy’, Economy and Society, 30(4): 524–52. Åsberg, Cecilia and Johnson, Ericka (2009), ‘Viagra Selfhood: Pharmaceutical Advertising and the Visual Formation of Swedish Masculinity’, Health Care Analysis, 17(2): 144–59. Bachelard, Gaston (1934/1984), The New Scientific Spirit, Boston: Beacon Press.
Bakan, Joel (2005), The Corporation: The Pathological Pursuit of Profit and Power, London: Constable. Bakhtin, Mikhail (1968), Rabelais and His World, Bloomington: Indiana University Press. Banerjee, Subhabrata Bobby (2008), ‘Necrocapitalism’, Organization Studies, 29(12): 1541–63. Barad, Karen (1998), ‘Getting Real: Technoscientific Practices and the Materialization of Reality’, Differences, 10(2): 87–128. Barad, Karen (2003), ‘Posthumanist Performativity: Towards an Understanding of How Matter Comes to Matter’, Signs: Journal of Women in Culture and Society, 28(3): 801–31.
Barley, Steven R. and Bechky, Beth (1994), ‘In the Backroom of Science: The Work of Technicians in Science Labs’, Work and Occupations, 21(1): 85–126. Barry, Andrew (2005), ‘Pharmaceutical Matters: The Invention of Informed Materials’, Theory, Culture and Society, 22(1): 51–69. Bartunek, J. M. (2007), ‘Academic–Practitioner Collaboration need not Require Joint or Relevant Research: Toward a Relational Scholarship of Integration’, Academy of Management Journal, 50(6): 1323–33. Bartunek, Jean M. and Louis, Meryl Reis (1996), Insider/Outsider Team Research, Thousand Oaks: Sage. Bataille, G. (1988), The Accursed Share: An Essay on General Economy, New York: Zone. Bateson, Gregory (1972), Steps to an Ecology of Mind, Chicago: Chicago University Press. Bauman, Zygmunt (2000), Liquid Modernity, Cambridge and Malden: Polity Press. Bauman, Zygmunt (2005), Liquid Life, Cambridge: Polity Press. Bechky, Beth A. (2003), ‘Object Lessons: Workplace Artifacts as Representations of Occupational Jurisdiction’, American Journal of Sociology, 109(3): 720–52. Beck, Ulrich (2000), The Brave New World of Work, trans. by Patrick Camiller, Cambridge: Polity Press. Bell, Geoffrey G. (2005), ‘Clusters, Networks, and Firm Innovativeness’, Strategic Management Journal, 26: 287–95. Benhabib, Seyla (2002), The Claims of Culture: Equality and Diversity in the Global Era, Princeton and Oxford: Princeton University Press. Bensaude-Vincent, Bernadette (2007), ‘Nanobots and Nanotubes: Two Alternative Biomimetic Paradigms of Nanotechnology’, in J. Riskin (ed.), Genesis Redux: Essays in the History and Philosophy of Artificial Life, Chicago and London: University of Chicago Press: 211–36. Bensaude-Vincent, Bernadette and Stengers, Isabelle (1993/1996), A History of Chemistry, trans. by Deborah van Dam, Cambridge and London: Harvard University Press. Bercovitz, J. and Feldman, M.
(2008), ‘Academic Entrepreneurs: Organizational Change at the Individual Level’, Organization Science, 19(1): 69–89. Bergson, Henri (1910/1988), Matter and Memory, New York: Zone. Bijker, Wiebe E. (1995), Of Bicycles, Bakelites, and Bulbs: Toward a Theory of Sociotechnical Change, Cambridge and London: MIT Press. Bijker, Wiebe E., Hughes, Thomas P. and Pinch, Trevor J. (eds) (1987), The Social Construction of Technological Systems: New Directions in the Sociology and History of Technology, Cambridge and London: MIT Press. Blech, Jörg (2006), Inventing Disease and Pushing Pills: Pharmaceutical Companies and the Medicalization of Normal Life, trans. by Gisela Wallor Hajjar, London and New York: Routledge. Blumenberg, Hans (1993), ‘Light as a Metaphor for Truth: At the Preliminary Stage of Philosophical Concept Formation’, in D. M. Levin (ed.), Modernity and the Hegemony of Vision, Berkeley, Los Angeles and London: University of California Press: 30–62. Boardman, P. C. and Corley, E. A. (2008), ‘University Research Centers and the Composition of Research Collaborations’, Research Policy, 37: 900–13.
Boardman, C. and Ponomariov, B. L. (2007), ‘Reward Systems and NSF University Research Centers: The Impact of Tenure on University Scientists’ Valuation of Applied and Commercially Relevant Research’, Journal of Higher Education, 78(1): 51–70. Boddy, William (2004), New Media and Popular Imagination: Radio, Television, and Digital Media in the United States, Oxford and New York: Oxford University Press. Bogner, William C. and Bansal, Pratima (2007), ‘Knowledge Management as the Basis of Sustained High Performance’, Journal of Management Studies, 44(1): 165–87. Borgerson, Janet and Rehn, Alf (2004), ‘General Economy and Productive Dualisms’, Gender, Work and Organization, 11(4): 455–74. Boyer, Robert (1988), ‘Technical Change and the Theory of “Régulation”’, in G. Dosi et al. (eds), Technical Change and Economic Theory, London: Pinter. Bowker, Geoffrey (2005), Memory Practices of the Sciences, Cambridge and London: MIT Press. Bourdieu, Pierre (2005), The Social Structures of the Economy, Cambridge: Polity Press. Bourdieu, Pierre and Eagleton, Terry (1994), ‘Doxa and Common Life: An Interview’, in S. Žižek (ed.), Mapping Ideology, New York: Verso: 265–75. Bourdieu, Pierre and Wacquant, Loïc J. D. (1992), An Invitation to Reflexive Sociology, Chicago and London: University of Chicago Press. Bozeman, B. and Boardman, C. (2004), ‘The NSF Engineering Research Centers and the University–Industry Research Revolution: A Brief History featuring an Interview with Erich Bloch’, Journal of Technology Transfer, 29: 365–75. Braidotti, Rosi (1994), Nomadic Subjects: Embodiment and Sexual Difference in Contemporary Feminist Theory, New York: Columbia University Press. Braidotti, Rosi (2002), Metamorphosis: Toward a Materialist Theory of Becoming, Cambridge: Polity Press. Braithwaite, J. (1984), Corporate Crime in the Pharmaceutical Industry, London: Routledge.
Brody, Howard (2007), Hooked: Ethics, the Medical Profession, and the Pharmaceutical Industry, Lanham: Rowman and Littlefield. Brown, Nik and Kraft, Alison (2006), ‘Blood Ties: Banking the Stem Cell Promise’, Technology Analysis and Strategic Management, 18(3): 313–27. Bud, Robert (1993), The Uses of Life: A History of Biotechnology, Cambridge: Cambridge University Press. Burbeck, S. and Jordan, K. E. (2006), ‘An Assessment of the Role of Computing in Systems Biology’, IBM Journal of Research and Development, 50(6): 529–43. Burri, Regula Valérie (2008), ‘Doing Distinctions: Boundary Work and Symbolic Capital in Radiology’, Social Studies of Science, 38: 35–62. Busfield, Joan (2006), ‘Pills, Power, People: Sociological Understandings of the Pharmaceutical Industry’, Sociology, 40(2): 297–314. Butler, Judith (1993), Bodies that Matter: On the Discursive Limits of ‘Sex’, London and New York: Routledge. Çalışkan, Koray and Callon, Michel (2009), ‘Economization, Part 1: Shifting Attention from the Economy Towards Processes of Economization’, Economy and Society, 38(3): 369–98. Calvert, Jane (2007), ‘Patenting Genomic Objects: Genes, Genomes, Function and Information’, Science as Culture, 16(2): 207–23.
Cambrosio, Alberto, Keating, Peter, Schlich, Thomas and Weisz, George (2006), ‘Regulatory Objectivity and the Generation and Management of Evidence in Medicine’, Social Science and Medicine, 63(1): 189–99. Cardinal, Laura B. (2001), ‘Technological Innovation in the Pharmaceutical Industry: The Use of Organizational Control in Managing Research and Development’, Organization Science, 12(1): 19–36. Carruthers, Bruce and Espeland, Wendy (1998), ‘Money, Meaning, and Morality’, American Behavioral Scientist, 41(1): 1384–408. Casper, Steven (2000), ‘Institutional Adaptiveness, Technological Policy, and the Diffusion of New Business Models: The Case of German Biotechnology’, Organization Studies, 21(5): 887–914. Chesbrough, Henry W. (2003), Open Innovation: The New Imperative for Creating and Profiting from Technology, Boston: Harvard Business School Press. Chia, Robert and Holt, Robin (2006), ‘Strategy as Practical Coping: A Heideggerian Perspective’, Organization Studies, 27(5): 635–55. Churchill, F. B. (1974), ‘William Johannsen and the Genotype Concept’, Journal of the History of Biology, 7: 5–30. Clarke, Adele E. (1998), Disciplining Reproduction: Modernity, American Life Sciences, and the Problem of Sex, Berkeley: University of California Press. Clarke, Adele E., Mamo, Laura, Fishman, Jennifer R., Shim, Janet K. and Fosket, Jennifer Ruth (2003), ‘Biomedicalization: Technoscientific Transformations of Health, Illness, and US Biomedicine’, American Sociological Review, 68: 161–94. Clegg, Stewart R., Rhodes, Carl and Kornberger, Martin (2007), ‘Desperately Seeking Legitimacy: Organizational Identity and Emerging Industries’, Organization Studies, 28(4): 495–513. Clifford, James (1988), The Predicament of Culture: Twentieth-century Ethnography, Literature, and Art, Cambridge: Harvard University Press. Collins, Harry and Pinch, Trevor (2005), Dr Golem: How to Think about Medicine, Chicago and London: University of Chicago Press.
Collins, Randall (1979), The Credential Society, New York: Academic Press. Comte, Auguste (1975), Auguste Comte and Positivism: Essential Writings, ed. by Gertrud Lenzer, New York: Harper Torchbooks. Conrad, Peter (2007), The Medicalization of Society, Baltimore: Johns Hopkins University Press. Conrad, P. and Potter, D. (2000), ‘From Hyperactive Children to ADHD Adults: Observations on the Expansion of Medical Categories’, Social Problems, 47: 559–82. Cooper, Melinda (2008), Life As Surplus: Biotechnology and Capitalism in the Neoliberal Era, Seattle and London: University of Washington Press. Coriat, Benjamin, Orsi, Fabienne and Weinstein, Olivier (2003), ‘Does Biotech Reflect a New Science-based Innovation Regime?’, Industry and Innovation, 10(3): 231–53. Critser, Greg (2003), Fat Land: How Americans became the Fattest People in the World, London: Penguin. Croissant, Jennifer L. and Smith-Doerr, Laurel (2008), ‘Organizational Contexts of Science: Boundaries and Relationships between University and Industry’, in E. J. Hackett, O. Amsterdamska, M. Lynch and J. Wajcman (eds), Handbook of Science and Technology Studies, 3rd edn, Cambridge and London: MIT Press: 691–718.
Crossley, Nick (2001), The Social Body: Habit, Identity and Desire, London, Thousand Oaks and New Delhi: Sage. Czarniawska, Barbara (2003), ‘This Way to Paradise: On Creole Researchers, Hybrid Disciplines, and Pidgin Writing’, Organization, 10(3): 430–4. Dahlander, Linus and McKelvey, Maureen (2005), ‘The Occurrence and Spatial Distribution of Collaboration: Biotech Firms in Gothenburg, Sweden’, Technology Analysis and Strategic Management, 17(4): 409–31. Dallery, Aileen B. (1989), ‘The Politics of Writing (the) Body: Écriture féminine’, in A. M. Jaggar and S. R. Bordo (eds), Gender/Body/Knowledge: Feminist Reconstructions of Being and Knowing, New Brunswick and London: Rutgers University Press: 52–67. Das, Veena (2000), ‘The Practice of Organ Transplants: Networks, Documents, Translations’, in M. Lock, A. Young and A. Cambrosio (eds), Living and Working with New Medical Technologies: Intersections of Inquiry, Cambridge: Cambridge University Press: 263–87. Davenport, Thomas and Beck, John (2001), The Attention Economy, Boston: Harvard Business School Press. Davis, Gerald F. (2009), ‘The Rise and Fall of Finance and the End of the Society of Organizations’, Academy of Management Perspectives, 23(3): 27–44. Deegan, Mary Jo (2001), ‘The Chicago School of Ethnography’, in P. A. Atkinson, S. Delamont, A. J. Coffey and J. Lofland (eds), Handbook of Ethnography, London, Thousand Oaks and New Delhi: Sage: 9–25. DeGrandpre, Richard (2006), The Cult of Pharmacology: How America Became the World’s Most Troubled Drug Culture, Durham: Duke University Press. DeLanda, Manuel (2002), Intensive Science and Virtual Philosophy, London and New York: Continuum. DeLanda, Manuel (2006), A New Philosophy of Society: Assemblage Theory and Social Complexity, London and New York: Continuum. Deleuze, Gilles (1966/1988), Bergsonism, New York: Zone. Deleuze, G. (1992), ‘Postscript on the Societies of Control’, October, 59(Winter): 3–7.
Dodgson, Mark (2000), The Management of Technological Innovation, Oxford and New York: Oxford University Press. Dodgson, Mark, Gann, David and Salter, Ammon (2005), Think, Play, Do: Technology, Innovation, and Organization, Oxford and New York: Oxford University Press. Donzelot, Jacques (2008), ‘Michel Foucault and Liberal Intelligence’, Economy and Society, 37(1): 115–34. Dore, Ronald (2008), ‘Financialization of the Global Economy’, Industrial and Corporate Change, 17(6): 1097–112. Dougherty, D. (1999), ‘Organizing for Innovation’, in S. R. Clegg, C. Hardy and W. R. Nord (eds), Managing Organizations, London: Sage. Dougherty, Deborah (2007), ‘Trapped in the 20th Century? Why Models of Organizational Learning, Knowledge and Capabilities do not fit Biopharmaceuticals, and What to do About That’, Management Learning, 38(3): 265–70. Douglas, Mary and Isherwood, Baron (1979), The World of Goods: Towards an Anthropology of Consumption, London: Allen Lane. Drennan, Katherine (2002), ‘Patient Recruitment: The Costly and Growing Bottleneck in Drug Development’, Drug Discovery Today, 7(3): 167–70.
Drews, Jürgen (2000), ‘Drug Discovery: A Historical Perspective’, Science, 287: 1960–4. Dreyfus, H. L. and Dreyfus, S. E. (2005), ‘Expertise in the Real World Context’, Organization Studies, 26: 779–92. Dubin, Robert (1969), Theory Building, New York: Free Press. Du Gay, P. (1996), Consumption and Identity at Work, London, Thousand Oaks and New Delhi: Sage. Dupré, Louis (1993), Passage to Modernity: An Essay in the Hermeneutics of Nature and Culture, New Haven and London: Yale University Press. Durand, Rodolphe, Bruyaka, Olga and Mangematin, Vincent (2008), ‘Do Science and Money Go Together? The Case of the French Biotech Industry’, Strategic Management Journal, 29: 1281–99. Durkheim, É. (1893/1933), The Division of Labour in Society, New York: Free Press. The Economist (2009), ‘Medicine goes Digital: A Special Report on Health Care and Technology’, April. Eisenhardt, Kathleen M. (1989), ‘Building Theories from Case Study Research’, Academy of Management Review, 14(4): 532–50. Empson, Laura (2008), ‘Professions’, in S. Clegg and J. R. Bailey (eds), International Encyclopedia of Organization Studies, London, Thousand Oaks and New Delhi: Sage: 1315–18. Enriquez, J. and Goldberg, R. A. (2000), ‘Transforming Life, Transforming Business: The Life-science Revolution’, Harvard Business Review, 78(2): 94–104. Epstein, Steven (1996), Impure Science: AIDS, Activism, and the Politics of Science, Berkeley: University of California Press. Eriksen, Thomas Hylland (2001), Tyranny of the Moment, London and Sterling: Pluto Press. Esposito, Roberto (2008), Bíos: Biopolitics and Philosophy, trans. by Timothy Campbell, Minneapolis and London: University of Minnesota Press. Etzkowitz, H. (1998), ‘The Norms of Entrepreneurial Science: Cognitive Effects of the New Industry–University Linkages’, Research Policy, 27: 823–33. Etzkowitz, H. (2003), ‘Research Groups as “Quasi-firms”: The Invention of the Entrepreneurial University’, Research Policy, 32: 109–21.
Ewenstein, Boris and Whyte, Jennifer K. (2007), ‘Visual Representations as “Artifacts of Knowing”’, Building Research and Information, 35(1): 81–9. Ewenstein, Boris and Whyte, Jennifer (2009), ‘Knowledge Practices in Design: The Role of Visual Representations as Epistemic Objects’, Organization Studies, 30(1): 7–30. Fagerberg, Jan, Mowery, David C. and Nelson, Richard R. (eds) (2005), The Oxford Handbook of Innovation, Oxford and New York: Oxford University Press. Falk, Pasi (1994), The Consuming Body, London: Sage. Faulkner, Wendy (2007), ‘“Nuts and Bolts and People”: Gender-troubled Engineering Identities’, Social Studies of Science, 37(3): 331–56. Ferrary, Michel and Granovetter, Mark (2009), ‘The Role of Venture Capital Firms in Silicon Valley’s Complex Innovation Network’, Economy and Society, 38(2): 326–59. Fine, Gary Alan (2007), Authors of the Storm: Meteorologists and the Culture of Prediction, Chicago and London: University of Chicago Press. Firat, A. Fuat and Venkatesh, Alladi (1995), ‘Liberatory Postmodernism and the Reenchantment of Consumption’, Journal of Consumer Research, 22: 239–67.
Fisher, Jill (2009), Medical Research for Hire: The Political Economy of Pharmaceutical Clinical Trials, New Brunswick: Rutgers University Press. Fishman, Jennifer R. (2004), ‘Manufacturing Desire: The Commodification of Female Sexual Dysfunction’, Social Studies of Science, 34(2): 187–218. Fleck, Ludwik (1979), Genesis and Development of a Scientific Fact, Chicago and London: Chicago University Press. Fligstein, Neil (1990), The Transformation of Corporate Control, Cambridge and London: Harvard University Press. Fligstein, Neil (2001), The Architecture of Markets, Princeton: Princeton University Press. Fontana, A. and Frey, J. H. (1994), ‘Interviewing’, in N. K. Denzin and Y. S. Lincoln (eds), Handbook of Qualitative Research, London, Thousand Oaks and New Delhi: Sage. Foucault, M. (1970), The Order of Things, London: Routledge. Foucault, Michel (1973), The Birth of the Clinic, London: Routledge. Foucault, Michel (1977), Discipline and Punish, New York: Pantheon. Foucault, Michel (1980), Power/Knowledge, New York: Harvester Wheatsheaf. Foucault, M. (1997), ‘The Birth of Biopolitics’, in Ethics, Subjectivity and Truth: Essential Works of Michel Foucault, Vol. 1, New York: New Press: 73–9. Foucault, Michel (2003), Society Must be Defended: Lectures at the Collège de France, 1975–1976, ed. by Mauro Bertani and Alessandro Fontana, trans. by David Macey, London: Penguin. Foucault, Michel (2008), The Birth of Biopolitics: Lectures at the Collège de France, 1978–1979, ed. by Michel Senellart, trans. by Graham Burchell, Basingstoke: Palgrave. Fourcade, Marion (2006), ‘The Construction of a Global Profession: The Transnationalization of Economics’, American Journal of Sociology, 112: 145–94. Frank, D. J. and Meyer, J. W. (2007), ‘University Expansion and the Knowledge Society’, Theory and Society, 36: 287–311. Franklin, Sarah (2001), ‘Culturing Biology: Cell Lines for the Second Millennium’, Health, 5(3): 335–54.
Franklin, Sarah (2005), ‘Stem Cells R Us: Emergent Life Forms and the Global Biological’, in A. Ong and S. J. Collier (eds), Global Assemblages: Technology, Politics, and Ethics as Anthropological Problems, Malden and Oxford: Blackwell: 59–78. Franklin, Sarah (2007), Dolly Mixtures: The Remaking of Genealogy, Durham: Duke University Press. Franklin, Sarah and Roberts, Celia (2006), Born and Made: An Ethnography of Preimplantation Genetic Diagnosis, Princeton and London: Princeton University Press. Freeman, C. and Perez, C. (1988), ‘Structural Crisis of Adjustment, Business Cycles, and Investment Behaviour’, in G. Dosi et al. (eds), Technical Change and Economic Theory, London: Pinter. Fujimura, Joan H. (1996), Crafting Science: A Sociohistory of the Quest for the Genetics of Cancer, Cambridge, MA: Harvard University Press. Fujimura, Joan H. (2005), ‘Postgenomic Futures: Translating Across the Machine–Nature Border in Systems Biology’, New Genetics and Society, 24(2): 195–225. Fuller, Steve (2007), Science and Technology Studies, Cambridge: Polity Press. Galambos, L. and Sturchio, J. (1998), ‘Pharmaceutical Firms and the Transition to Biotechnology: A Study in Strategic Innovation’, Business History Review, 72(Summer): 250–78.
Galison, Peter (1997), Image and Logic: A Material Culture of Microphysics, Chicago and London: University of Chicago Press.
Galloway, Alexander R. (2006), ‘Language Wants to be Overlooked: On Software and Ideology’, Journal of Visual Studies, 5(3): 315–31.
Garnier, J.-P. (2008), ‘Rebuilding the R&D Engine in Big Pharma’, Harvard Business Review, 86: 68–76.
Gassmann, Oliver and Reepmeyer, Gerrit (2005), ‘Organizing Pharmaceutical Innovation: From Science-based Knowledge Creators to Drug-oriented Knowledge Brokers’, Creativity and Innovation Management, 14(3): 233–45.
Gatens, Moira (1996), Imaginary Bodies: Ethics, Power, and Corporeality, London and New York: Routledge.
Geertz, C. (1973), The Interpretation of Cultures, New York: Basic.
Gehlen, Arnold ([1957] 1980), Man in the Age of Technology, New York: Columbia University Press.
Gibbons, M. et al. (eds) (1994), The New Production of Knowledge, London: Sage.
Giddens, Anthony (1990), The Consequences of Modernity, Cambridge: Polity Press.
Gieryn, Thomas F. (1983), ‘Boundary-work and the Demarcation of Science from Non-science: Strains and Interests in Professional Ideologies of Scientists’, American Sociological Review, 48(6): 781–95.
Gilman, Sander L. (1999), Making the Body Beautiful: A Cultural History of Aesthetic Surgery, Princeton: Princeton University Press.
Gitelman, Lisa (2006), Always Already New: Media, History and the Data of Culture, Cambridge and London: MIT Press.
Golan, Tal (2004), ‘The Emergence of the Silent Witness: The Legal and Medical Reception of X-ray in the USA’, Social Studies of Science, 34: 469–99.
Gopalakrishnan, Shanthi, Scillitoe, Joanne L. and Santoro, Michael D. (2008), ‘Tapping Deep Pockets: The Role of Resources and Social Capital in Financial Capital Acquisition by Biotechnology Firms in Biotech–Pharma Alliances’, Journal of Management Studies, 45(8): 1354–76.
Gottinger, Hans-Werner and Umali, Celia L. (2008), ‘The Evolution of the Pharmaceutical-biotechnology Industry’, Business History, 50(5): 583–601.
Granberg, A. and Jacobsson, S. (2006), ‘Myth or Reality: A Scrutiny of Dominant Beliefs in the Swedish Science Policy Debate’, Science and Public Policy, 33(5): 321–40.
Greene, Jeremy (2004), ‘Attention to Details: Medicine, Marketing, and the Emergence of the Pharmaceutical Representative’, Social Studies of Science, 34: 271–92.
Griffiths, Paul E. (2001), ‘Genetic Information: A Metaphor in Search of a Theory’, Philosophy of Science, 68(3): 394–412.
Grindstaff, Laura and Turow, Joseph (2006), ‘Video Cultures: Television Sociology in the “New TV” Age’, Annual Review of Sociology, 32: 103–25.
Grosz, Elizabeth (1994), Volatile Bodies: Toward a Corporeal Feminism, Bloomington and Indianapolis: Indiana University Press.
Grosz, Elizabeth (1995), Space, Time, and Perversion: Essays in the Politics of Bodies, New York and London: Routledge.
Grosz, Elizabeth (1999), Becomings: Explorations in Time, Memory, and the Future, Ithaca and London: Cornell University Press.
Grosz, Elizabeth (2004), The Nick of Time: Politics, Evolution and the Untimely, Durham: Duke University Press.
Grosz, Elizabeth (2005), Time Travels: Feminism, Nature, Power, Durham: Duke University Press.
Gubrium, Jaber F. and Holstein, James A. (eds) (2003), Postmodern Interviewing, London, Thousand Oaks and New Delhi: Sage.
Habermas, Jürgen (2003), The Future of Human Nature, Cambridge: Polity Press.
Hackett, Edward J., Amsterdamska, Olga, Lynch, Michael and Wajcman, Judy (eds) (2008), Handbook of Science and Technology Studies, 3rd edn, Cambridge and London: MIT Press.
Hacking, Ian (2003), Representing and Intervening, Cambridge: Cambridge University Press.
Hacking, I. (2006), ‘Genetics, Biosocial Groups, and the Future of Identity’, Daedalus, Autumn: 81–96.
Hacking, Ian (2007), ‘Our Neo-cartesian Bodies in Parts’, Critical Inquiry, 34(Autumn): 78–104.
Haigh, Elizabeth (1984), Xavier Bichat and the Medical Theory of the Eighteenth Century, London: Wellcome Institute for the History of Medicine.
Hannan, Michael T. and Freeman, John (1989), Organizational Ecology, Cambridge and London: Harvard University Press.
Hansen, Mark B. N. (2006), ‘Media Theory’, Theory, Culture and Society, 23(2–3): 297–306.
Hansen, Mark B. N. (2007), Bodies in Code: Interfaces with Digital Media, Cambridge and London: MIT Press.
Hanson, Norwood Russell (1958), Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science, Cambridge: Cambridge University Press.
Hara, Takuji (2003), Innovation in the Pharmaceutical Industry: The Process of Drug Discovery and Development, Cheltenham and Northampton: Edward Elgar.
Haraway, Donna J. (1991), Simians, Cyborgs, and Women: The Reinvention of Nature, London: Free Association.
Haraway, Donna (1997), Modest_Witness@Second_Millennium.FemaleMan©_Meets_OncoMouse™, London: Routledge.
Harrison, Denis and Laberge, Murielle (2002), ‘Innovation, Identities and Resistance: The Social Construction of an Innovation Network’, Journal of Management Studies, 39(4): 497–521.
Harvey, J., Pettigrew, A. and Ferlie, E. (2002), ‘The Determinants of Research Group Performance: Towards Mode 2’, Journal of Management Studies, 39(6): 747–74.
Hawkes, D. (1996), Ideology, London: Routledge.
Hayles, N. Katherine (1999), How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, Chicago and London: University of Chicago Press.
Hayles, N. Katherine (2005), My Mother was a Computer: Digital Subjects and Literary Texts, Chicago and London: University of Chicago Press.
Healy, David (2004), ‘Shaping the Intimate: Influences on the Experience of Everyday Nerves’, Social Studies of Science, 34: 219–45.
Healy, David (2006), ‘The New Medical Oikumene’, in A. Petryna, A. Lakoff and A. Kleinman (eds), Global Pharmaceuticals: Ethics, Markets, Practices, Durham and London: Duke University Press: 61–84.
Hedgecoe, Adam (2006), ‘Pharmacogenetics as Alien Science: Alzheimer’s Disease, Core Sets and Expectations’, Social Studies of Science, 36(5): 723–52.
Hedgecoe, Adam and Martin, Paul (2003), ‘The Drugs Don’t Work: Expectations and the Shaping of Pharmacogenetics’, Social Studies of Science, 33(3): 327–64.
Helmreich, Stefan (2008), ‘Species of Biocapital’, Science as Culture, 17(4): 463–78.
Henry, John (2006), ‘Educating Managers for Post-bureaucracy: The Role of the Humanities’, Management Learning, 37(3): 267–81.
Hessels, L. K. and van Lente, H. (2008), ‘Re-thinking New Knowledge Production: A Literature Review and a Research Agenda’, Research Policy, 37: 740–60.
Hirschman, Albert O. (1970), Exit, Voice, and Loyalty, Cambridge: Harvard University Press.
Hochschild, A. R. (1983), The Managed Heart, Berkeley: University of California Press.
Hodgson, Damian and Cicmil, Svetlana (2007), ‘The Politics of Standards in Modern Management: Making “the Project” a Reality’, Journal of Management Studies, 44(3): 431–50.
Hoeyer, Klaus, Nexoe, Sniff, Hartlev, Mette and Koch, Lene (2009), ‘Embryonic Entitlements: Stem Cell Patenting and the Co-production of Commodities and Personhood’, Body and Society, 15(1): 1–24.
Hogle, Linda F. (2005), ‘Enhancement Technologies of the Body’, Annual Review of Anthropology, 34: 695–716.
Hogle, Linda F. (2009), ‘Pragmatic Objectivity and the Standardization of Engineered Tissues’, Social Studies of Science, 39: 717–42.
Holliday, Ruth and Hassard, John (2001), Contested Bodies, London and New York: Routledge.
Holmberg, Ingalill, Salzer-Mörling, Miriam and Strannegård, Lars (eds) (2002), Stuck in the Future: Tracing the ‘New Economy’, Stockholm: Bookhouse.
Holmqvist, M. (2009), ‘Corporate Social Responsibility as Corporate Social Control: The Case of Work-site Health Promotion’, Scandinavian Journal of Management, 25: 68–72.
Holstein, James A. and Gubrium, Jaber F. (eds) (2003), Inside Interviewing: New Lenses, New Concerns, London, Thousand Oaks and New Delhi: Sage.
Hopkins, Michael M., Martin, Paul A., Nightingale, Paul, Kraft, Alison and Mahdi, Surya (2007), ‘The Myth of a Biotech Revolution: An Assessment of Technological, Clinical and Organizational Change’, Research Policy, 36(4): 566–89.
Hullman, A. (2000), ‘Generation, Transfer and Exploitation of New Knowledge’, in A. Jungmittag, A. Reger and G. Reiss (eds), Changing Innovation in the Pharmaceutical Industry: Globalization and New Ways of Drug Development, Berlin: Springer.
Ikemoto, Lisa C. (2009), ‘Eggs as Capital: Human Egg Procurement in the Fertility Industry and the Stem Cell Research Enterprise’, Signs, 34(4): 763–81.
James, William (1975), Pragmatism and The Meaning of Truth, Cambridge: Harvard University Press.
Jasanoff, Sheila (2005), ‘The Idiom of Co-production’, in S. Jasanoff (ed.), States of Knowledge: The Co-production of Science and Social Order, London and New York: Routledge: 1–12.
Jasanoff, Sheila, Markle, Gerald E., Petersen, James C. and Pinch, Trevor (eds) (1995), Handbook of Science and Technology Studies, Thousand Oaks, London and New Delhi: Sage.
Johnson, Ericka (2008), ‘Simulating Medical Patients and Practices: Bodies and the Construction of Valid Medical Simulators’, Body and Society, 14(3): 105–28.
Jones, Oswald (2000), ‘Innovation Management as a Post-modern Phenomenon: The Outsourcing of Pharmaceutical R&D’, British Journal of Management, 11: 341–56.
Jong, Simcha (2006), ‘How Organizational Structures in Science Shape Spin-off Firms: The Biochemistry Departments of Berkeley, Stanford, and UCSF and the Birth of the Biotech Industry’, Industrial and Corporate Change, 15(2): 251–83.
Jonvallen, Petra (2006), ‘Testing Pills, Enacting Obesity: The Work of Localizing Tools in a Clinical Trial’, PhD thesis, Department of Technology and Social Change, Linköping University.
Kant, C., Ibberson, M. and Scheer, A. (2010), ‘Building a Disease Knowledge Environment to Lay the Foundations for In Silico Drug Discovery and Translational Medicine’, Drug Discovery Today, 5(2): 117–22.
Kay, Lily E. (2000), Who Wrote the Book of Life? A History of the Genetic Code, Stanford and London: Stanford University Press.
Keller, Evelyn Fox (1983), A Feeling for the Organism: The Life and Work of Barbara McClintock, New York and San Francisco: W. H. Freeman.
Keller, Evelyn Fox (2000), The Century of the Gene, Cambridge and London: Harvard University Press.
Kent, Julie, Faulkner, Alex, Geesink, Ingrid and Fitzpatrick, David (2006), ‘Culturing Cells, Reproducing and Regulating the Self’, Body and Society, 12(2): 1–23.
Knorr Cetina, Karin D. (1981), The Manufacture of Knowledge: An Essay on the Constructivist and Contextual Nature of Science, Oxford: Pergamon Press.
Knorr Cetina, Karin (1995), ‘Laboratory Studies: The Cultural Approach to the Study of Science’, in S. Jasanoff, G. E. Markle, J. C. Petersen and T. Pinch (eds), Handbook of Science and Technology Studies, Thousand Oaks, London and New Delhi: Sage.
Knorr Cetina, Karin (2001), ‘Objectual Practice’, in T. R. Schatzki, K. Knorr Cetina and E. von Savigny (eds), The Practice Turn in Contemporary Theory, London and New York: Routledge.
Kofman, Sarah (1999), Camera Obscura: Of Ideology, trans. by Will Straw, Ithaca: Cornell University Press.
Kollmeyer, Christopher (2009), ‘Explaining Deindustrialization: How Affluence, Productivity Growth, and Globalization Diminish Manufacturing Employment’, American Journal of Sociology, 114(6): 1644–74.
Kondo, Dorinne K. (1990), Crafting Selves: Power, Gender, and Discourses of Identity in a Japanese Workplace, Chicago and London: University of Chicago Press.
Konrad, Monica (2004), Narrating the New Predictive Genetics: Ethics, Ethnography and Science, Cambridge and New York: Cambridge University Press.
Kosmala, Katarzyna and Herrbach, Olivier (2006), ‘The Ambivalence of Professional Identity: On Cynicism and Jouissance in Audit Firms’, Human Relations, 59(10): 1393–428.
Koyré, Alexandre (1992), Metaphysics and Measurement, Reading: Gordon and Breach Science.
Krämer, Sybille (2006), ‘The Cultural Techniques of Time Axis Manipulation: On Friedrich Kittler’s Conception of Media’, Theory, Culture and Society, 23(7–8): 93–109.
Kruse, Corinna (2006), ‘The Making of Valid Data: People and Machines in Genetic Research Practice’, PhD thesis, Linköping University.
Kuhn, T. S. (1962), The Structure of Scientific Revolutions, Chicago: University of Chicago Press.
Kusterer, Ken C. (1978), Know-how on the Job: The Important Working Knowledge of ‘Unskilled Workers’, Boulder: Westview Press.
Kvale, Steinar (1996), InterViews, London, Thousand Oaks and New Delhi: Sage.
Lakoff, Andrew (2006), Pharmaceutical Reason: Knowledge and Value in Global Psychiatry, Cambridge: Cambridge University Press.
Lakoff, Andrew (2008), ‘The Right Patients for the Drug: Pharmaceutical Circuits and the Codification of Illness’, in E. J. Hackett, O. Amsterdamska, M. Lynch and J. Wajcman (eds), Handbook of Science and Technology Studies, 3rd edn, Cambridge and London: MIT Press: 741–60.
Lam, A. (2007), ‘Knowledge Networks and Careers: Academic Scientists in Industry–University Links’, Journal of Management Studies, 44(6): 993–1016.
Landecker, Hannah (2001), ‘On Beginning and Ending with Apoptosis: Cell Death and Biomedicine’, in S. Franklin and M. Lock (eds), Remaking Life and Death: Towards an Anthropology of the Biosciences, Santa Fe: School of American Research Press: 23–59.
Lanham, Richard (2006), The Economics of Attention: Style and Substance in the Age of Information, Chicago and London: University of Chicago Press.
Larson, Magali Sarfatti (1977), The Rise of Professionalism: A Sociological Analysis, Berkeley, Los Angeles and London: University of California Press.
Latour, B. and Woolgar, S. (1979), Laboratory Life: The Construction of Scientific Facts, Princeton: Princeton University Press.
Leicht, Kevin T. and Fennell, Mary L. (2008), ‘Institutionalism and the Professions’, in R. Greenwood, C. Oliver, K. Sahlin and R. Suddaby (eds), The Sage Handbook of Organizational Institutionalism, London, Thousand Oaks and New Delhi: Sage: 431–48.
Leidner, Robin (1993), Fast Food, Fast Talk: Service Work and the Routinization of Everyday Life, Berkeley: University of California Press.
Lemke, Thomas (2001), ‘The Birth of Biopolitics: Michel Foucault’s Lecture at the Collège de France on Neo-liberal Governmentality’, Economy and Society, 30(2): 190–207.
Lévy, Pierre (1998), Becoming Virtual: Reality in the Digital Age, trans. by Robert Bononno, New York and London: Plenum Trade.
Lewontin, Richard (2000), The Triple Helix: Genes, Organisms, Environment, Cambridge: Harvard University Press.
Lexchin, Joel (2006), ‘The Pharmaceutical Industry and the Pursuit of Profit’, in J. C. Cohen, P. Illingworth and U. Schüklenk (eds), The Power of the Pill: Social, Ethical and Legal Issues in Drugs Development, Marketing, Pricing, London and Ann Arbor: Pluto: 11–24.
Liebeskind, J. P., Oliver, A. L., Zucker, L. and Brewer, M. (1996), ‘Social Networks, Learning, and Flexibility: Sourcing Scientific Knowledge in New Biotechnology Firms’, Organization Science, 7(4): 428–43.
Linstead, Stephen and Pullen, Alison (2006), ‘Gender as Multiplicity: Desire, Displacement, Difference and Dispersion’, Human Relations, 59(9): 1287–310.
Linstead, Stephen and Thanem, Torkild (2007), ‘Multiplicity, Virtuality and Organization: The Contribution of Gilles Deleuze’, Organization Studies, 28(10): 1483–501.
Lock, Margaret (2001), ‘The Alienation of Body Tissues and the Biopolitics of Immortalized Cell Lines’, Body and Society, 7(2–3): 63–91.
Lock, Margaret (2002), Twice Dead: Organ Transplants and the Reinvention of Death, Berkeley, Los Angeles and London: University of California Press.
Louis, K. S., Blumenthal, D., Gluck, M. E. and Stoto, M. A. (1989), ‘Entrepreneurs in Academe: An Exploration of Behaviour Among Life Scientists’, Administrative Science Quarterly, 34(1): 110–31.
Luhmann, Niklas (2000), The Reality of the Mass Media, Cambridge: Polity Press.
Lybecker, Kristina M. (2006), ‘Social, Ethical, and Legal Issues in Drug Development, Marketing, and Pricing Policies: Setting Priorities: Pharmaceuticals as Private Organizations and the Duty to make Money/Maximize Profits’, in J. C. Cohen, P. Illingworth and U. Schüklenk (eds), The Power of the Pill: Social, Ethical and Legal Issues in Drugs Development, Marketing, Pricing, London and Ann Arbor: Pluto: 25–31.
Lynch, Michael (1985), Art and Artifact in Laboratory Science: A Study of Shop Work and Shop Talk in a Research Laboratory, London: Routledge and Kegan Paul.
MacKenzie, Donald and Millo, Yuval (2003), ‘Constructing a Market, Performing a Theory: A Historical Sociology of a Financial Market Derivatives Exchange’, American Journal of Sociology, 109(1): 107–45.
MacLean, D., MacIntosh, R. and Grant, S. (2002), ‘Mode 2 Management Research’, British Journal of Management, 13: 189–207.
Markides, C. (2007), ‘In Search of Ambidextrous Professors’, Academy of Management Journal, 50(4): 762–68.
Marks, John (2006), ‘Biopolitics’, Theory, Culture and Society, 23(2–3): 333–5.
Marshall, Barbara L. (2009), ‘Sexual Medicine, Sexual Bodies and the Pharmaceutical Imagination’, Science as Culture, 18(2): 133–49.
Massumi, Brian (2002), Parables of the Virtual: Movement, Affect, Sensation, Durham and London: Duke University Press.
Maurer, Indre and Ebers, Mark (2007), ‘Dynamics of Social Capital and their Performance Implications: Lessons from Biotechnology Start-ups’, Administrative Science Quarterly, 52: 262–92.
Mauss, M. (1954), The Gift: Forms and Functions of Exchange in Archaic Societies, London: Routledge and Kegan Paul.
McNay, Lois (2009), ‘Self as Enterprise: Dilemmas of Control and Resistance in Foucault’s The Birth of Biopolitics’, Theory, Culture and Society, 26(6): 55–77.
Merton, Robert K. (1973), The Sociology of Science: Theoretical and Empirical Investigations, ed. by Norman W. Storer, Chicago: University of Chicago Press.
Meyer, John W. and Rowan, Brian (1977), ‘Institutionalized Organizations: Formal Structure as Myth and Ceremony’, American Journal of Sociology, 83(2): 340–63.
Miettinen, Reijo and Virkkunen, Jaakko (2005), ‘Epistemic Objects, Artefacts and Organizational Change’, Organization, 12(3): 437–56.
Milburn, Colin (2004), ‘Nanotechnology in the Age of Posthuman Engineering: Science Fiction as Science’, in N. Katherine Hayles (ed.), Nanoculture: Implications of the New Technoscience, Bristol: Intellect: 109–29.
Milburn, Colin (2008), Nanovision: Engineering the Future, Durham and London: Duke University Press.
Miller, Danny (ed.) (1995), Acknowledging Consumption: A Review of New Studies, London and New York: Routledge.
Mir, Raza, Mir, Ali and Wong, Diana J. (2005), ‘Diversity: The Cultural Logic of Global Capital?’, in A. M. Konrad, P. Prasad and J. K. Pringle (eds), Handbook of Workplace Diversity, Thousand Oaks, London and New Delhi: Sage: 167–88.
Mirowski, Philip and Van Horn, Robert (2005), ‘The Contract Research Organization and the Commercialization of Scientific Research’, Social Studies of Science, 35(4): 503–48.
Mol, Annemarie (2002), The Body Multiple: Ontology in Medical Practice, Durham: Duke University Press.
Morley, David (2007), Media, Modernity and Technology: The Geography of the New, London and New York: Routledge.
Mowery, D. C. and Ziedonis, A. A. (2002), ‘Academic Patent Quality before and after the Bayh–Dole Act in the United States’, Research Policy, 31: 399–418.
Mulder, Arjen (2006), ‘Media’, Theory, Culture and Society, 23(2–3): 289–96.
Munos, Bernard (2009), ‘Lessons from 60 Years of Pharmaceutical Innovation’, Nature Reviews Drug Discovery, 8: 959–68.
Murnighan, J. Keith and Conlon, Donald E. (1991), ‘The Dynamics of Intense Work Groups: A Study of British String Quartets’, Administrative Science Quarterly, 36: 165–86.
Murphy, Timothy S. (1998), ‘Quantum Ontology: A Virtual Mechanics of Becoming’, in E. Kaufman and K. J. Heller (eds), Deleuze and Guattari: New Mappings in Politics, Philosophy and Culture, Minneapolis and London: University of Minnesota Press.
Murray, F. (2002), ‘Innovation as Co-evolution of Scientific and Technological Networks: Exploring Tissue Economics’, Research Policy, 31: 1389–403.
Murray, F. (2004), ‘The Role of Academic Inventors in Entrepreneurial Firms: Sharing the Laboratory Life’, Research Policy, 33: 643–59.
Nerkar, A. and Shane, S. (2007), ‘Determinants of Invention Commercialization: An Empirical Examination of Academically Sourced Inventions’, Strategic Management Journal, 28: 1155–66.
Nesta, Lionel and Saviotti, Pier-Paolo (2006), ‘Firm Knowledge and Market Value in Biotechnology’, Industrial and Corporate Change, 15(4): 625–52.
Nightingale, Paul (1998), ‘A Cognitive Model of Innovation’, Research Policy, 27: 698–709.
Nightingale, Paul and Mahdi, Surya (2006), ‘The Evolution of Pharmaceutical Innovation’, in M. Mazzucato and G. Dosi (eds), Knowledge Accumulation and Industry Evolution: The Case of Pharma-biotech, Cambridge: Cambridge University Press: 73–111.
Novas, C. and Rose, N. (2000), ‘Genetic Risk and the Birth of the Somatic Individual’, Economy and Society, 29: 485–513.
Nutch, Frank (1996), ‘Gadgets, Gizmos, and Instruments: Science for the Tinkering’, Science, Technology and Human Values, 21(2): 214–28.
Nye, David E. (1990), Electrifying America: Social Meanings of a New Technology, 1880–1940, Cambridge: MIT Press.
Oliver, Amalya L. (2004), ‘On the Duality of Competition and Collaboration: Network-based Knowledge Relations in the Biotechnology Industry’, Scandinavian Journal of Management, 20: 151–71.
Oliver, Richard W. (2000), The Coming Biotech Age: The Business of Biomaterials, New York: McGraw-Hill.
Outlook (2010), Tufts Center for the Study of Drug Development, Tufts University, USA.
Owen-Smith, Jason and Powell, Walter W. (2004), ‘Knowledge Networks as Channels and Conduits: The Effects of Spillovers in the Boston Biotechnology Community’, Organization Science, 15(1): 5–21.
Oyama, Susan (2000), Evolution’s Eye: A Systems View of the Biology–Culture Divide, Durham: Duke University Press.
Parry, B. (2004), Trading the Genome: Investigating the Commodification of Bioinformation, New York: Columbia University Press.
Pavitt, Keith (2005), ‘Innovation Processes’, in J. Fagerberg, D. C. Mowery and R. R. Nelson (eds), The Oxford Handbook of Innovation, Oxford and New York: Oxford University Press: 86–114.
Petryna, Adriana (2006), ‘Globalizing Human Subjects Research’, in A. Petryna, A. Lakoff and A. Kleinman (eds), Global Pharmaceuticals: Ethics, Markets, Practices, Durham and London: Duke University Press: 33–60.
Petryna, Adriana (2009), When Experiments Travel: Clinical Trials and the Global Search for Human Subjects, Durham and London: Duke University Press.
Pharma 2020: The Vision (2006), London: PricewaterhouseCoopers.
Pickering, Andrew (1995), The Mangle of Practice: Time, Agency, and Science, Chicago and London: University of Chicago Press.
Pisano, Gary P. (2006), Science Business: The Promise, the Reality and the Future of Biotech, Boston: Harvard Business School Press.
Porter, Theodore M. (2009), ‘How Science became Technical’, Isis, 100: 292–309.
Poster, Mark (2001), The Information Subject: Essays, Amsterdam: G+B Arts.
Powell, Walter W. (1998), ‘Learning from Collaboration: Knowledge and Networks in the Biotechnology and Pharmaceutical Industries’, California Management Review, 40(3): 228–40.
Powell, Walter W. and Grodal, Stine (2005), ‘Networks of Innovation’, in J. Fagerberg, D. C. Mowery and R. R. Nelson (eds), The Oxford Handbook of Innovation, Oxford and New York: Oxford University Press: 56–85.
Powell, Walter W. and Snellman, Kaisa (2004), ‘The Knowledge Economy’, Annual Review of Sociology, 30: 199–220.
Powell, Walter W., Koput, Kenneth W. and Smith-Doerr, Laurel (1996), ‘Interorganizational Collaboration and the Locus of Innovation: Networks of Learning in Biotechnology’, Administrative Science Quarterly, 41: 116–45.
Powell, Walter W., Koput, Kenneth W., White, Douglas R. and Owen-Smith, Jason (2005), ‘Network Dynamics and Field Evolution: The Growth of Interorganizational Collaboration in the Life Sciences’, American Journal of Sociology, 110(4): 1132–205.
Prainsack, Barbara, Geesink, Ingrid and Franklin, Sarah (2008), ‘Stem Cell Technologies 1998–2008: Controversies and Silences’, Science as Culture, 17(4): 351–62.
Prasad, Amit (2009), ‘Capitalizing Disease: Biopolitics of Drug Trials in India’, Theory, Culture and Society, 26(5): 1–29.
PricewaterhouseCoopers (2009), ‘Pharma 2020: The Vision – Which Path will you Take?’, London: PricewaterhouseCoopers.
Rabinow, Paul (1992), ‘Artificiality and Enlightenment: From Sociobiology to Biosociality’, in J. Crary and S. Kwinter (eds), Incorporations, New York: Zone: 234–51.
Rabinow, Paul (1996), Making PCR: A Story of Biotechnology, Chicago and London: University of Chicago Press.
Rabinow, Paul (2003), Anthropos Today: Reflections on Modern Equipment, Princeton and Oxford: Princeton University Press.
Rabinow, Paul (2006), Essays on the Anthropology of Reason, Princeton and London: Princeton University Press.
Rafferty, M. (2008), ‘The Bayh–Dole Act and University Research and Development’, Research Policy, 37: 29–40.
Ramirez, Paulina and Tylecote, Andrew (2004), ‘Hybrid Corporate Governance and its Effects on Innovation: A Case Study of AstraZeneca’, Technology Analysis & Strategic Management, 16(1): 97–119.
Rheinberger, Hans-Jörg (1997), Toward a History of Epistemic Things: Synthesizing Proteins in the Test Tube, Stanford: Stanford University Press.
Rheinberger, Hans-Jörg (1998), ‘Experimental Systems, Graphematic Spaces’, in T. Lenoir (ed.), Inscribing Science: Scientific Texts and the Materiality of Communication, Stanford: Stanford University Press: 285–303.
Rheinberger, Hans-Jörg (2003), ‘“Discourses of Circumstances”: A Note on the Author in Science’, in M. Biagioli and P. Galison (eds), Scientific Authorship: Credit and Intellectual Property in Science, London and New York: Routledge: 309–24.
Rifkin, Jeremy (1998), The Biotech Century: Harnessing the Gene and Remaking the World, New York: Penguin Putnam.
Ritzer, George (2005), Enchanting a Disenchanted World: Revolutionizing the Means of Consumption, 2nd edn, Thousand Oaks: Pine Forge Press.
Roach, Michael and Sauermann, Henry (2010), ‘A Taste for Science? PhD Scientists’ Academic Orientation and Self-selection into Research Careers in Industry’, Research Policy, 39: 422–34.
Roberts, John (2005), ‘The Power of the “Imaginary” in Disciplinary Processes’, Organization, 12(5): 619–42.
Rose, Nikolas S. (2007), The Politics of Life Itself: Biomedicine, Power and Subjectivity in the Twenty-First Century, Princeton and Oxford: Princeton University Press.
Rose, N. and Novas, C. (2005), ‘Biological Citizenship’, in A. Ong and S. J. Collier (eds), Global Assemblages: Technology, Politics, and Ethics as Anthropological Problems, Malden and Oxford: Blackwell: 439–63.
Rosenberg, Charles E. (1992), ‘Introduction: Framing Disease: Illness, Society, and History’, in C. Rosenberg and J. Golden (eds), Framing Disease: Studies in Cultural History, New Brunswick: Rutgers University Press: xiii–xxvi.
Rosenberg, Charles and Golden, Janet (eds) (1992), Framing Disease: Studies in Cultural History, New Brunswick: Rutgers University Press.
Roth, Wolff-Michael (2009), ‘Radical Uncertainty in Scientific Discovery Work’, Science, Technology and Human Values, 34(3): 313–36.
Rothaermel, F. T., Agung, S. D. and Jiang, L. (2007), ‘University Entrepreneurship: A Taxonomy of the Literature’, Industrial and Corporate Change, 16(4): 691–791.
Rothaermel, F. T. and Deeds, D. L. (2004), ‘Exploration and Exploitation Alliances in Biotechnology: A System of New Product Development’, Strategic Management Journal, 25: 201–21.
Rothaermel, Frank T. and Thursby, Marie (2007), ‘The Nanotech Versus the Biotech Revolution: Sources of Productivity in Incumbent Firm Research’, Research Policy, 36(6): 832–49.
Salter, Brian and Salter, Charlotte (2007), ‘Bioethics and the Global Moral Economy: The Cultural Politics of Human Embryonic Stem Cell Science’, Science, Technology and Human Values, 32(5): 554–81.
Sams-Dodd, F. (2005), ‘Optimizing the Discovery Organization for Innovation’, Drug Discovery Today, 10(15): 1409–56.
Saxenian, AnnaLee (1994), Regional Advantage: Culture and Competition in Silicon Valley and Route 128, Cambridge and London: Harvard University Press.
Schatzki, Theodore R. (2002), The Site of the Social: A Philosophical Account of the Constitution of Social Life and Change, University Park: Pennsylvania State University Press.
Scheper-Hughes, Nancy (2000), ‘The Global Traffic in Human Organs’, Current Anthropology, 41: 191–211.
Schleef, Debra J. (2006), Managing Elites: Professional Socialization in Law and Business Schools, Lanham: Rowman and Littlefield.
Schrödinger, Erwin (1944), What is Life? The Physical Aspect of the Living Cell, Cambridge: Cambridge University Press.
Schweizer, Lars (2005), ‘Organizational Integration of Acquired Biotechnology Companies into Pharmaceutical Companies: The Need for a Hybrid Approach’, Academy of Management Journal, 48(6): 1051–74.
Science 2020 (2006), Microsoft Research, Cambridge, UK (available on www.research.microsoft.com/towards2020science/, accessed 9 March 2007).
Sconce, Jeffrey (2000), Haunted Media: Electronic Presence from Telegraphy to Television, Durham and London: Duke University Press.
Scott, W. Richard (2008), ‘Lords of the Dance: Professionals as Institutional Agents’, Organization Studies, 29(2): 219–38.
Serres, Michel (1995), Genesis, Ann Arbor: University of Michigan Press.
Serres, Michel (1997), The Troubadour of Knowledge, Ann Arbor: University of Michigan Press.
Shah, Sonia (2006), The Body Hunters: Testing New Drugs on the World’s Poorest Patients, London and New York: New Press.
Sharp, Lesley (2000), ‘The Commodification of the Body and its Parts’, Annual Review of Anthropology, 29: 287–328.
Sharp, Lesley A. (2003), Strange Harvest: Organ Transplants, Denatured Bodies, and the Transformed Self, Berkeley: University of California Press.
Shields, Rob (2003), The Virtual, London and New York: Routledge.
Shilling, Chris (1993), The Body and Social Theory, London, Thousand Oaks and New Delhi: Sage.
Shostak, Sara (2005), ‘The Emergence of Toxicogenomics: A Case Study of Molecularization’, Social Studies of Science, 35(3): 367–403.
Shostak, Sara and Conrad, Peter (2008), ‘Sequencing and its Consequences: Path Dependence and the Relationships between Genetics and Medicalization’, American Journal of Sociology, 114: S287–S316 (special issue on biomedicalization).
Siegel, Donald S., Wright, Mike and Lockett, Andy (2007), ‘The Rise of the Entrepreneurial Activities at Universities: Organizational and Societal Implications’, Industrial and Corporate Change, 16(4): 489–504.
Siggelkow, Nicolaj (2007), ‘Persuasion with Case Studies’, Academy of Management Journal, 50(1): 20–4.
Simondon, Gilbert ([1958] 1980), On the Mode of Existence of Technical Objects, trans. by Ninian Mellamphy, London: University of Western Ontario.
Sismondo, Sergio (2004), ‘Pharmaceutical Maneuvers’, Social Studies of Science, 34: 149–59.
Smith Hughes, S. (2001), ‘Making Dollars Out of DNA: The First Major Patent in Biotechnology and the Commercialization of Molecular Biology, 1974–1980’, Isis, 92(3): 541–75.
Sommerlund, Julie (2006), ‘Classifying Microorganisms: The Multiplicity of Classifications and Research Practices in Molecular Microbial Ecology’, Social Studies of Science, 36(6): 909–28.
Sørensen, Jesper B. (2007), ‘Bureaucracy and Entrepreneurship: Workplace Effects on Entrepreneurial Entry’, Administrative Science Quarterly, 52: 387–412.
Sowa, Yoshihiro (2006), ‘Present State and Advances in Personalized Medicine: Importance of the Development of Information Service Systems for the Public’, Quarterly Review, 18, National Institute of Science and Technology Policy, Japan (available on www.nistep.go.jp/achiev/ftx/eng/stfc/stt018e/qr18pdf/STTqr1801.pdf, accessed 9 March 2007).
Spar, Deborah L. (2006), The Baby Business: How Money, Science, and Politics Drive the Commerce of Conception, Boston: Harvard Business School Press.
Squier, Susan Merrill (2004), Liminal Lives: Imagining the Human at the Frontiers of Biomedicine, Durham and London: Duke University Press.
Stavrakakis, Yannis (2008), ‘Subjectivity and the Organized Other: Between Symbolic Authority and Fantasmatic Enjoyment’, Organization Studies, 29(7): 1037–59.
Stiegler, Bernard (1998), Technics and Time, 1: The Fault of Epimetheus, trans. by Richard Beardsworth and George Collins, Stanford: Stanford University Press.
Strauss, A. L. and Corbin, J. (1998), Basics of Qualitative Research, 2nd edn, London, Thousand Oaks and New Delhi: Sage.
Strauss, Anselm, Schatzman, Leonard, Bucher, Rue, Ehrlich, Danuta and Sabshin, Melvin (1964), Psychiatric Ideologies and Institutions, 2nd edn, New Brunswick and London: Transaction.
Stuart, Toby E. and Ding, Waverly W. (2006), ‘When do Scientists Become Entrepreneurs? The Social Structural Antecedents of Commercial Activity in the Academic Life Sciences’, American Journal of Sociology, 112(1): 97–114.
Styhre, Alexander (2009), ‘Tinkering with Material Resources: Operating Under Ambiguous Conditions in Rock Construction Work’, The Learning Organization, 16(5): 386–97.
Styhre, Alexander and Lind, Frida (2010), ‘The Softening Bureaucracy: Accommodating New Research Opportunities in the Entrepreneurial University’, Scandinavian Journal of Management, 26(2): 107–20.
Suchman, L. A. (2007), Human-machine Reconfigurations: Plans and Situated Actions, Cambridge: Cambridge University Press.
Sundberg, Mikaela (2009), ‘The Everyday World of Simulation Modeling: The Development of Parameterization in Meteorology’, Science, Technology and Human Values, 34(2): 162–81.
Sunder Rajan, Kaushik (2006), Biocapital: The Constitution of Postgenomic Life, Durham: Duke University Press.
Sundgren, M. and Styhre, A. (2007), ‘Creativity and the Fallacy of Misplaced Concreteness in New Drug Development: A Whiteheadian Perspective’, European Journal of Innovation Management, 10(2): 215–35.
Swann, John P. (1988), Academic Scientists and the Pharmaceutical Industry, Baltimore and London: Johns Hopkins University Press.
Taylor, Janelle S. (2005), ‘Surfacing the Body’s Interior’, Annual Review of Anthropology, 34: 741–56.
Thacker, Eugene (2004), Biomedia, Minneapolis and London: University of Minnesota Press.
Thacker, Eugene (2006), The Global Genome: Biotechnology, Politics and Culture, Cambridge and London: MIT Press.
Thanem, Torkild (2009), ‘“There’s No Limit to How Much you can Consume”: The New Public Health and the Struggle to Manage Healthy Bodies’, Culture and Organization, 15(1): 59–74.
Thanem, Torkild and Linstead, Stephen (2006), ‘The Trembling Organization: Order, Change, and the Philosophy of the Virtual’, in M. Fuglsang and B. M. Sørensen (eds), Deleuze and the Social, Edinburgh: Edinburgh University Press, pp. 39–57.
Thomas, Tom C. and Acuña-Narvaez, Rachelle (2006), ‘The Convergence of Biotechnology and Nanotechnology: Why Here, Why Now?’, Journal of Commercial Biotechnology, 12(2): 105–10.
Thomke, Stefan and Kuemmerle, Walter (2002), ‘Asset Accumulation, Interdependence and Technological Change: Evidence from the Pharmaceutical Industry’, Strategic Management Journal, 23: 619–35.
Thompson, Charis (2005), Making Parents: The Ontological Choreography of Reproductive Technologies, Cambridge: MIT Press.
Thompson, Jerry L. (2003), Truth and Photography: Notes on Looking and Photographing, Chicago: Ivan R. Dee.
Throsby, Karen (2009), ‘The War on Obesity as a Moral Project: Weight Loss Drugs, Obesity Surgery and Negotiating Failure’, Science as Culture, 18(2): 210–16.
Timmermans, Stefan (2008), ‘Professions and their Work: Do Market Shelters Protect Professional Interests?’, Work and Occupations, 35(2): 164–88.
Titmuss, Richard M. (1970), The Gift Relationship: From Human Blood to Social Policy, London: George Allen & Unwin.
Tober, Diane M. (2001), ‘Semen as Gift, Semen as Good: Reproductive Workers and the Market in Altruism’, Body & Society, 7(2–3): 137–60.
Tomlinson, Gary (1993), Music in Renaissance Magic: Toward a Historiography of Others, Chicago and London: University of Chicago Press.
Torgersen, Helge (2009), ‘Fuzzy Genes: Epistemic Tensions in Genomics’, Science as Culture, 18(1): 65–87.
Turner, Bryan S. (1992), Regulating Bodies: Essays in Medical Sociology, London and New York: Routledge.
Turner, Bryan S. (1996), The Body and Society, London, Thousand Oaks and New Delhi: Sage.
Tyler, Melissa and Abbott, Pamela (1998), ‘Chocs Away: Weight Watching in the Contemporary Airline Industry’, Sociology, 32(3): 433–50.
Tyler, Melissa and Taylor, Steve (1998), ‘The Exchange of Aesthetics: Women’s Work and “the Gift”’, Gender, Work and Organization, 5(3): 165–71.
Urry, John (2000), Sociology Beyond Societies: Mobilities for the Twenty-first Century, London and New York: Routledge.
Urry, John (2003), Global Complexity, Oxford and Malden: Polity.
Utterback, James M. (1994), Mastering the Dynamics of Innovation, Boston: Harvard Business School Press.
Van Maanen, John (1979), ‘The Fact of Fiction in Organizational Ethnography’, Administrative Science Quarterly, 24: 539–50.
Vemuri, Goutham and Nielsen, Jens (2008), ‘Systems Biology: Is the Hope Worth the Hype?’, SIM News, September–October, 58(5): 177–88.
Vestergaard, J. (2007), ‘The Entrepreneurial University Revisited: Conflicts and the Importance of Role Separation’, Social Epistemology, 21(1): 41–54.
Vico, G. ([1744] 1999), New Science, London: Penguin.
Waldby, Catherine (2000), The Visible Human Project: Informatic Bodies and Posthuman Medicine, London and New York: Routledge.
Waldby, Catherine (2002), ‘Stem Cells, Tissue Cultures, and the Production of Biovalue’, Health, 6(3): 305–22.
Waldby, Catherine and Mitchell, Robert (2006), Tissue Economies: Blood, Organs, and Cell Lines in Late Capitalism, Durham and London: Duke University Press.
Washburn, J. (2005), University Inc.: The Corruption of Higher Education, New York: Basic Books.
Weber, Max (1948), ‘Science as a Vocation’, in H. H. Gerth and C. Wright Mills (eds), From Max Weber: Essays in Sociology, London: Routledge and Kegan Paul, pp. 129–56.
Weick, Karl E. (1989), ‘Theory Construction as Disciplined Imagination’, Academy of Management Review, 14(4): 516–31.
White, Hayden (1978), Tropics of Discourse: Essays in Cultural Criticism, Baltimore and London: Johns Hopkins University Press.
Whitehead, Alfred N. (1925), Science and the Modern World, Cambridge: Cambridge University Press.
Whittington, Kjersten Bunker, Owen-Smith, Jason and Powell, Walter W. (2009), ‘Networks, Propinquity and Innovation in Knowledge-intensive Industries’, Administrative Science Quarterly, 54: 90–112.
Wiener, Norbert (1950), The Human Use of Human Beings, London: Eyre & Spottiswoode.
Williams, Robin and Edge, David (1996), ‘The Social Shaping of Technology’, Research Policy, 25: 865–99.
Winner, Langdon (1977), Autonomous Technology: Technics-out-of-control as a Theme in Political Thought, Cambridge and London: MIT Press.
Witz, Anne (2000), ‘Whose Body Matters? Feminist Sociology and the Corporeal Turn in Sociology and Feminism’, Body and Society, 6(2): 1–24.
Wolkenhauer, Olaf (2001), ‘Systems Biology: The Reincarnation of Systems Theory Applied in Biology?’, Briefings in Bioinformatics, 2(2): 258–70.
Ylijoki, Oili-Helena (2005), ‘Academic Nostalgia: A Narrative Approach to Academic Work’, Human Relations, 58(5): 555–76.
Young, Gary J., Charns, Martin P. and Shortell, Stephen M. (2001), ‘Top Manager and Network Effects on the Adoption of Innovative Management Practices: A Study of TQM in a Public Hospital System’, Strategic Management Journal, 22: 935–51.
Youtie, J., Libaers, D. and Bozeman, B. (2006), ‘Institutionalization of University Research Centers: The Case of the National Cooperative Program in Infertility Research’, Technovation, 26(9): 1055–63.
Zielinski, Siegfried (2006), Deep Time of the Media: Towards an Archaeology of Hearing and Seeing by Technical Means, Cambridge and London: MIT Press.
Zivin, J. A. (2000), ‘Understanding Clinical Trials’, Scientific American, April: 49–55.
Žižek, Slavoj (1992), Looking Awry: An Introduction to Jacques Lacan through Popular Culture, Cambridge and London: MIT Press.
Žižek, Slavoj (1994), ‘Introduction: The Spectre of Ideology’, in S. Žižek (ed.), Mapping Ideology, New York: Verso, pp. 1–33.
Žižek, Slavoj (2008), ‘Structures on the Street’, paper presented at the EURAM Conference, Ljubljana, 14–17 May.
Zucker, Lynne G. and Darby, Michael R. (1997), ‘Present at the Biological Revolution: Transformation of Technological Identity for a Large Incumbent Pharmaceutical Firm’, Research Policy, 26: 429–46.
Zucker, L. G., Darby, M. R. and Armstrong, J. S. (2002), ‘Commercializing Knowledge: University Science, Knowledge Capture, and Firm Performance in Biotechnology’, Management Science, 48(1): 138–53.
Index

Academic capitalism 195
Academic entrepreneurs 196, 198–9
Academic entrepreneurship 196, 198
Academic inventors 196
Aesthetic surgery 60
AIDS 68, 98, 237
Ambidextrous professors 196
Amgen 153, 156, 160
Attention economy 43
Autosurveillance 229
Bayh–Dole act 195
Berzelius, J.J. 8, 193
Bichat, Xavier 3, 61, 71, 75
Biocapital 15, 42, 48, 51, 54–7, 64
Biochips 81
Biocomputing 81, 227
Bioinformatics 80–1, 122, 150, 170, 226, 238
Biomedia 78–81, 248
BioMEMS (biomicroelectronic mechanical systems) 81
Biomolecular body 205
Biopolitics 46–8, 78, 98
Biovalue 1, 155
Bipolar disorder 68, 90
Boundary work 21
Boyle, R. 8
Brain death 73–6
Candidate drug (CD) 104, 186
Canguilhem, G. 259
Cardiopulmonary death 76
Central nervous system (CNS) 161, 164–9, 174, 263
Cetus 152
Chemistry 5–9
Chemo-therapeutic revolution 12
Chronic fatigue syndrome 65
Clinical research organizations (CROs) 247
Clinical trials 67, 87–8, 94–6, 104–7, 112, 120, 131–3, 150, 167, 245–7, 251, 253–4
Collective consciousness 23
Comte, A. 117, 192
Computational biology 82, 205
Corporate identities 35–6
Corporeal turn 58
Crick, F. 54
Cryobank 72
Cultures of prediction 112, 117, 120
Cybernetics 53
Cyborg 60, 80
Cyborg logic 60
D2 antagonists 174
D2-receptor 174
Darwin, C. 10, 47
Day science 113
Descartes, R. 57–8
Diogenes Laertius 6
Disciplinary society 50
Disease specificity 64–5
Doable problems 52, 94
Dolly the sheep 10
Double-bind situation 27, 115
Drug discovery 14, 93–6, 102–3, 110–11, 128, 130, 157, 180, 184, 250, 255
Du Pont 9
Economies of vitality 51
Economization 243
Edison, T.A. 6
Egg agencies 76–7, 244
Empedocles 6
Engineers 19, 34, 117, 210
Enterprise ideology 197
Entrepreneurial professor 196–7
Entrepreneurial university 196–7
Epistemic objects and things 111–12, 116, 120, 132, 263
Ereky, K. 9
Executive coaching 34
Experimental system 111–16, 132, 232
Female sexual dysfunction 66
Feynman, R. 84
Fibromyalgia 65
Fluid phase (in innovation work) 37–8
Forecast 118–19
Future work 118–19, 194
Gay gene 238
Genentech 153–4, 160
General economy 56
Geneticization 49, 68–9
Ghostwriting 91
Gift relationships 71, 77
Good clinical practice 87, 127
Herophilus 70
Heterogeneous engineering 95
High-throughput screening (HTS) 14, 81, 95–6, 104, 136, 150, 250
Humboldtian University 11, 91
Huntington’s Disease 169, 174–5, 191
Illusio 24
In silico biology 201, 203
Infertility 76, 244
Informed matter 96, 120
Institutional agents 20
Irritable Bowel Syndrome 65
Jacob, F. 113, 150
Jésus de Montréal 1–3
Junk DNA 82
Kant, I. 58, 250
Kierkegaard, S. 47
Late modernity 32
Lavoisier, A.L. 8
Lead generation 103, 210
Lead optimization 103–4, 210
Liberalism 103
Lucretius 7
Lung death 76
Manic depression 90
McClintock, B. 9, 230
Media 78–80
Medicalization 61–4, 69
Me-too drugs 52, 178
Mega-technologies 13
Mendeleyev, D. 8
Mendelian genetics 6
Meteorology 111, 119
Minorities 35
Mode of regulation 98
Molecularization 50
Monetarization 243
Mullis, K.B. 152
Nanomedicine 81
Nanotechnology 60–1, 83, 153, 158, 235
Narrative 32–3, 36, 65, 105, 186, 236, 264
Necrocapitalism 72
Neoliberalism 48
New chemical entities (NCEs) 93, 99, 164, 186
Night science 113
Occupational legitimation 118
Omics technologies 131, 134–5, 137, 162, 164, 167–8, 179, 202, 205–6, 214, 219, 231, 249
OncoMouse 10, 154, 260
Ontogeny 84–5
Operational philosophies 30
Organizational legitimation 118–19
Parkinson’s disease 167, 174, 191, 219
Paxil 67
Personalized medicine 96, 101, 123, 138, 185, 189, 217, 247, 251, 254
Pharmacogenomics 52, 78, 81, 85–6, 95–7
Pharmacologicalism 89–90
Phenotype 10, 84–5, 96, 98, 102–3, 131, 189, 205, 215, 234
Polymerase chain reaction (PCR) 96, 152, 154
Post-bureaucratic organizations 35
Post-structuralist feminism 58
Pragmatic objectivity 245
Presentational legitimation 118–19
Professional identities 149, 223, 229, 239–40
Professional ideologies 15–16, 18–19, 21–3, 25–31, 33, 35–7, 40–1, 118, 146, 239, 242
Professional organizations 19
Professional work 18–20, 26–8
Proteomics 54, 81–3, 163, 183, 246, 249
Prozac 67
Psychopharmacological drugs 28, 30–1, 67
Psychotherapeutic ideology (in psychiatry) 28
Radical uncertainty 114–15
Radiologists 22
Regime of accumulation 3, 14, 19, 36, 43, 56, 98
Reikon 74
Schizophrenia 90, 174
Science-as-experimentation 113
Science-as-theory 113
Science and technology studies (STS) 42, 112, 260
Scientific credibility 234–6
Selective serotonin reuptake inhibitors (SSRI) drugs 67
Sexuo-pharmaceuticals 66
Silicon Valley 35–6, 241
Single nucleotide polymorphisms (SNPs) 68, 82
Situated actions 114–15
Society of control 50
Socratic method 27
Somatic expertise 51
Somatic ideology (in psychiatry) 28
Specific phase (in innovation work) 37–8
Sperm banks 76
Spinoza, B. 58
Stahl, G.E. 8–9
Stem cell science 2, 40, 138, 140, 161, 165, 171, 175–9, 184–5
Storytelling 32
Strain theory of ideology 23
Subjectification 50
Surface cynicism 27
Surgeons 22, 98
Synthesis chemistry 6, 9, 94, 171
Systems biology 201–20
Thought experiment 121
Tissue economy 46, 70, 72, 75, 78, 98
Trading zone 44
Transitional phase (in innovation work) 37–8
University entrepreneurship 195
US Food and Drug Administration (FDA) 67, 92, 99, 105
Visible Human Project 1
Wet biology 201
Wöhler, F. 9
Zymotechnology 9