DIAMONDS ARE FOREVER, COMPUTERS ARE NOT
Economic and Strategic Management in Computing Markets

Shane Greenstein
Northwestern University, USA
Imperial College Press
Published by Imperial College Press 57 Shelton Street Covent Garden London WC2H 9HE Distributed by World Scientific Publishing Co. Pte. Ltd. 5 Toh Tuck Link, Singapore 596224 USA office: 27 Warren Street, Suite 401-402, Hackensack, NJ 07601 UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.
DIAMONDS ARE FOREVER, COMPUTERS ARE NOT Economic and Strategic Management in Computing Markets Copyright © 2004 by Imperial College Press All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.
For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.
ISBN 1-86094-451-5
Typeset by Stallion Press
Email: [email protected]
Printed in Singapore.
To Ranna,
For your love and dedication.
You also know when to laugh.
Contents
Preface  ix
Acknowledgments  xvi

Part I. Musings  1
1. Diamonds are Forever, Computers are Not  3
2. A Birthday Even a Curmudgeon Could Love  7
3. It has Bugs, but the Games are out of This World  12
4. The Biology of Technology  17
5. Virulent Word of Mouse  22
6. An Earful about Zvi's E-mail  29

Part II. Observations, Fleeting and Otherwise  35
7. Repetitive Stress Injuries  37
8. To Have and to Have Not  42
9. Uncertainty, Prediction, and the Unexpected  46
10. When Technologies Converge  51
11. Forecasting Commercial Change  56
12. The Tape Story Tapestry: Historical Research with Inaccessible Digital Information Technologies  61

Part III. Developing the Digital World  77
13. The Salad Days of On-line Shopping  79
14. Don't Call it a Highway!  83
15. Commercializing the Internet  88
16. Building the Virtual World  93
17. A Revolution? How Do You Know?  98
18. PCs, the Internet, and You  104

Part IV. Internet Boom and Bust  109
19. An Era of Impatience  111
20. Shortfalls, Downturns and Recessions  116
21. Explaining Booms, Busts and Errors  121
22. An Inside Scoop on the High-Tech Stock Market Bust  126
23. The Crash in Competitive Telephony  133
24. Too Much Internet Backbone?  139

Part V. Prices, Productivity and Growth  145
25. Debunking the Productivity Paradox  147
26. Banking on the Information Age  152
27. Measure for Measure in the New Economy  157
28. Pricing Internet Access  162
29. E-Business Infrastructure  167
30. The Price is Not Right  173

Part VI. Enterprise Computing  179
31. Client-Server Demand and Legacy Systems  181
32. Upgrading, Catching up and Shooting for Par  186
33. How Co-Invention Shapes our Market  191
34. Which Industries Use the Internet?  196
35. Where Did the Internet Go?  201

Part VII. Microsoft, from the Sublime to the Serious  207
36. Not a Young and Restless Market  209
37. Return of the Jaded  214
38. Bill, Act Like a Mensch!  221
39. Aggressive Business Tactics: Are There Limits?  228
40. Hung up on AT&T  233
41. Falling Through the Cracks at Microsoft  238

Part VIII. Platforms and Standards  243
42. Markets, Standards and Information Infrastructure  245
43. Industrial Economics and Strategy: Computing Platforms  270

Index  289
Preface
For many years I have been writing essays that make the economics of technical change accessible to a non-specialist. It is a big goal, so I try to get there in steps. Most of these essays are short, though not necessarily concise. I try to make them conversational in tone without losing the essence of the material. I have great enthusiasm for this field. To be sure, such enthusiasm translates into a wide array of topics. Some of them are useful and entertaining, others less so.

This collection of essays arose from a variety of motives. They are not aimed at one specific type of reader with one specific interest. Different readers will find different parts engaging. Several of these essays were inspired by issues raised in a classroom. Accordingly, many of these essays can provide supplemental reading for an economics or technology strategy course at either the undergraduate or MBA level. I have used many of them in my own classroom and I hope other teachers find them useful as well.
What is here

This volume contains eight parts. The first two parts focus on explaining events, notable birthdays and long-standing trends. Part I, Musings, contains many of the most off-beat and amusing essays. It contains many of my favorite essays and it contains most of the essays that have circulated widely. It begins with Diamonds are Forever, Computers are Not, the piece I wrote after becoming engaged in 1995.

Part II, Observations, fleeting and otherwise, ranges over a wide potpourri of topics. The unifying theme, if there is one, revolves around human economic behavior in the face of uncertainty and confusion. These were written with the intent to explain and educate, to get behind the merely obvious.
Parts III, IV and V contain writing about the Internet, the most central technical and commercial development of my time. Things were changing as I was writing, so some of these essays feel like economic postcards from a bygone and fog-bound era, where the future was distant, and we were all still learning about how it would turn out. Still, many of the observations highlight factors that remained after the chaos subsided. That is why they were worthwhile to put together.

Part III, Developing the digital world, talks about the development of the on-line commercial world, an area that I study in my academic career. It became the center of focus in popular discussion after 1996. The memory-less contemporary press misunderstood and mischaracterized events. My goal was to add my two cents, correct what I could, and move on. Together these essays do have something to say about how the Internet developed.

The essays in Part IV, Internet boom and bust, analyze the macroeconomic side of the investment boom and bust related to Internet activities, a misunderstood and difficult topic. The title for Part IV is a bit deceptive. Part III covers most of the boom, while Part IV discusses mostly the bust. Again, it is surprising how much the contemporary press misunderstood the bust and its origins. Once again, I add my two cents and move on.

Part V, Prices, productivity and growth, focuses on the measurement of economic activity in the digital economy, another topic I study extensively in my academic life. These essays are pedagogical postcards about traditional topics usually found in a macro-economics course. They seek to boil down something complex into something digestible, and, in so doing, educate by getting behind the merely obvious.

The last three parts are quite different from each other. Part VI, Enterprise computing, is about how computers get used in organizations, a big topic that I study in my academic career. In fact, all of these essays except one (Upgrading, catching up and shooting for par) summarize research I have published elsewhere. These essays are just the tip of the iceberg on each topic, but most general audiences do not have sufficient patience for anything longer than this.

Part VII, Microsoft, from the sublime to the serious, is another topic I study in my academic career, though this is also a topic that everybody talks about at dinner parties. This part begins with a couple of pieces of parody, written when Microsoft was still seen as a heroic young firm that had just dethroned IBM. It ultimately ends with lessons about hubris, one of the central themes throughout the antitrust case. While it covers events that are now somewhat dated, I include these essays because I still think the issues are durable. Also, in some important respects, the mainstream contemporary press misunderstood and mischaracterized what happened. Again, I want to add my two cents and develop lessons beyond the merely obvious ones.
Finally, Part VIII, Platforms and Standards, has two long essays from 1993 and 1998 about economic constraints on strategic behavior in markets where standards and platforms matter. The first essay got me started in this business of writing for general audiences, so I am still quite sentimental about it. The second summarizes a large body of work, and I find it very useful for framing classroom discussion. Friends who know my academic writing will see that these two papers bridge the academic work I did with Paul David on standards (published in 1990) and with Tim Bresnahan on platforms (published in 1999). I hope readers will find them useful.

After each essay I have included an Editorial note in parentheses. Since many of these articles were written a few years ago, I believe it is important to provide material about what happened after they were written. Think of these notes as postscripts or epilogues. Or, if you prefer, think about the closing credits to the movie American Graffiti, where you find out what happened to everybody after they grew up. These notes provide updates about the participants.
How to write an essay about the economics of technology

Briefly, let me explain my philosophy behind writing these essays. Every essay is about something in the economics of technology markets, principally electronics and information technology. As such, every essay wrestles with three inescapable tensions. Economics is dry, general and aloof, while technical change is engaging, fleeting and concrete.

First, start with the contrast between dry and engaging. Economic explanations are analytical, by definition. For better or worse — well, mostly worse — normal economic narrative has the innate ability to be boring, removed, and aloof by the standards of vernacular storytelling. That sort of aloofness can suck the life out of a description, even about the most engaging and dynamic of topics. It is something I try to avoid. It is especially worth avoiding because behavior in technology markets is so innately exciting. Economic growth arises from innovation, which is inherently fascinating and wondrous.

That is the core of the problem. A topic might be quite engaging, but it is easy to muffle it with a boring explanation. I have to work hard not to revert to becoming a boring economist, droning on and on. Warning! I do not always succeed in avoiding this error. Still, please give me credit for trying.
Second, technology markets give rise to many fleeting events, but a good economic explanation should live to another day. Every essay must balance the event and the explanation. High-tech markets are just filled with things that are temporary — newly founded firms that will not make more than a year's worth of payroll, badly designed new products that will die soon after launch, half-baked business plans one step from a garbage can, and many competitive situations energized with live-for-today tactics. Here today, a forgotten memory tomorrow.

To be sure, fleeting is not the same as irrelevant. Economic insight requires attention to the details, no matter how fleeting they are. And, like the next person, I like a spectacular story. I appreciate the scandalous excesses or the reckless stupidity of a cornered executive, even if such failure signals that a firm is a step away from becoming a historical business statistic. Good economics is not necessarily the same as good story-telling, however. Good economics finds the generalities in events, avoiding an emphasis on the merely ephemeral. Good economic insight should identify a lesson that is still useful tomorrow, even if it is mundane and staid. Sometimes these lessons are just obvious (to you, but perhaps not to others). Sometimes these are not immediately apparent to anyone. It is my job to draw them out.

Not trivially, durable economic insight also is not the same as good journalism. As far as I am concerned, I am not a journalist. Good journalism already takes care of itself. I assume that my readers know the basic outline of events from a good journalistic source. Instead, I am a commentator, or a color analyst, to borrow a label from sports broadcasting. I put the spotlight on a slice of life, and try to make it support a non-obvious or novel insight. So I pick topics that have both an example from today and a general point relevant to tomorrow. It is tricky to do well. I try my best.

Third, economic insight is reflective, though most of economic life is active. This contrast produces tension. Let me illustrate. It is just a fact of life that people with entrepreneurial personalities build and run most of the firms with the big money in high technology. These people hold the fortunes of many others in the balance, but still sleep well at night. Many of them are spectacularly savvy. I admire people who can do this. Some of them are my friends. I am even related to a few people who do this for a living. To be sure, these personality traits do not pose any problem for the participants. It is only a challenge for me, the observer. Frankly, my personality is made up differently. I can be entrepreneurial and outgoing, but not for twelve hours a day and not constantly, while building a business over many months or years. On my best days I have been savvy, but the best executives in the world are just savvy in a different sense.
More to the point, the process of writing requires isolation. It erases immediacy. It is infused with reflection. Reflection raises doubts. A good doubter considers alternative points of view. To see all alternatives in a fair light there must be a critical distance between an event and the observer. It turns me into a self-conscious Monday morning quarterback.

The tension is inescapable. My two central goals are in conflict unless I make a great effort. I want to be a reflective observer and I want to accurately portray events as they transpire by getting inside the shoes of a decision maker. There is never one single way to find a balance between these goals. So I pick an engaging topic and present it as I see fit.

These essays try a number of different narrative devices. Some essays use parody, and some use sarcasm. Some explain straight economics, while others include much personal reflection. Sometimes it works and sometimes not. It is up to the reader to tell me when it works and when it falls flat. To be sure, I am optimistic that this collection has something to offer. Otherwise I would not have gone to the trouble of putting them together.
How did this start?

I cannot point to many colleagues in my field and say, "That is who gave me inspiration for these essays." Do not get me wrong. There are many writers who express admirable values through their writing. They try to be careful and insightful. They try to respect the reader's intelligence by challenging it. Their professional experience allows them to learn something that others do not know, and they do not present their views arrogantly. In this broad sense — in putting their values into practice — there are many colleagues whom I can point to as influences.

That said, the model I used for this book is made up of a mix of everything and everyone. I wish to acknowledge my debt. I have always admired the ability of Nathan Rosenberg, Ed Mansfield and Richard Nelson to make the economics of technology accessible to non-specialists. It is a trait I try to copy. At formative times I also have been inspired by economic essays for wide audiences by such writers as Timothy Bresnahan, Alfred Chandler, Carlo Cipolla, Michael Cusumano, Linda Cohen, Paul David, Milton Friedman, Frank Fisher, Victor Fuchs, John Kenneth Galbraith, Steven Landsburg, Joel Mokyr, Carl Mosk, Roger Noll, Richard Rosenbloom, Tibor Scitovsky, Carl Shapiro, Timothy Taylor, Hal Varian, and Benjamin Ward. They provide inspiration too.
I have always loved reading Lewis Mumford and Stephen Jay Gould, even though they are distant academic cousins in disciplines quite different from my own. They can make their fields accessible to the outsider. They write beautifully. I can never hope to write so well, but it is something I try to emulate.

Yet, broad motivation is one thing. The actual and specific path to publication is quite another. Truth be told, this book was partly an accident. The whole story is a bit of a shaggy-dog story, but it is worth telling because it illuminates why these essays vary in topic and style. Here is how it came about.

My first experiment at writing for a non-economics audience came in 1991. I wrote for a computer history journal (sponsored by IEEE, the Institute of Electrical and Electronics Engineers). I did this because the Charles Babbage Institute, housed at the University of Minnesota, gave me a fellowship during my final year as a student. Several members of the Institute, Bill Aspray in particular, sat on the editorial board of the journal. This essay (The Tape Story Tapestry, included herein) was a partial way to pay back the computer history society. It chronicles my experience finding data for writing a PhD. It is written like a travelogue in a conversational voice, which is quite unlike my professional writing. It was enjoyable to do. It made me receptive to the next opportunity.

As it turned out, I was invited to present something at an interdisciplinary conference at Carnegie Mellon University, and decided to try again. That fell through. In that experience I met Michael Spring, a faculty member there, and through a series of steps that I no longer recall with any precision, the connection led to a special issue of Micro in 1993, also sponsored by IEEE. That essay (Markets, Standards and Information Infrastructure, also included herein) provided a summary of previous economic research about standardization. I started using that essay in my classroom, and to my delight, so did others. My appetite was whetted.

I was asked to become a regular columnist in 1995 by the editor, Stephen Diamond. He had been editor in 1993 too, but otherwise we did not have a long-standing relationship. In fact, we had never met face to face (and still have not). Over the phone he gave me a simple assignment and a lot of leeway: explain an economic topic to an intelligent non-economist (usually an engineer) in less than 2000 words. That is what I have tried to do ever since. Steve had no clue what he was going to get. Neither did I.

Follow that chain of connections? I wrote something for somebody who sponsored my dissertation, which then led to another conference that never took place, but introduced me to a faculty member who then introduced me to a magazine editor. The editor got to know me. A year later he asked for a column but left me a lot of leeway.
Starting was one thing, continuing quite another. Every time a deadline came I would find another topic and write yet another essay. Faced with the choice of stopping or continuing, every year I chose to continue. So did the editors at Micro. These decisions may look natural in retrospect, but at the time they were not as obvious as they sound. Time becomes tight when new babies arrive and classes have to be taught. After eight years it started to add up.

Some time ago I sensed that the whole would add up to more than the sum of the parts. To be honest, it was a guess, but once I put it all together, it was clear that my guess was right. The essays almost organized themselves around a few common themes. These themes did not emerge by design, but it seems to have worked out that way.

In other words, this whole thing was a deliberate accident. It was an accident because it did not come about through design or foresight. It was deliberate since I participated in it and even encouraged it. And, now, here we are. The whole collection has something to say. Please enjoy!
Acknowledgments
I am grateful to my wife, Ranna Rozenfeld. On many occasions she has lovingly let me sneak up to the den and write these essays, while I leave her alone to watch the kids. She has read many of them over the years, correcting and editing them. She knows how to tell me when I am boring and when I miss my target. She also provided the inspiration for my favorite essay, Diamonds are Forever, Computers are Not. I am lucky to have her as a lifelong companion.

My children also deserve thanks for their patience. They seem to know when to leave their Dad alone at the computer to write. They too occasionally appear in these essays as inspiration. I hope when they are old enough to read them they will not mind too much.

My parents, siblings and relatives have also played their role, though somewhat unwittingly. Whether they like it or not, they are often my target audience. When writing I do ask myself what I would have to say to make a point plain to my father, my siblings, or my aunts and uncles or my cousins. They are smart enough to understand anything, so the fault is mine if the essay fails to communicate.

This is also the moment to recognize my debt to almost every teacher and senior colleague I have ever known. In scholarship, as in most parts of life, communities play a big role. So does the community of senior scholars I have known. There are too many people to mention, but I would like to single out a few. For example, I have always been grateful to my undergraduate advisors, Benjamin Ward, Carl Mosk and Carlo Cipolla. They took the time to listen and provide advice when it would have been so easy to shut the door on a confused twenty-year-old undergraduate. I would also like to single out my dissertation committee, Tim Bresnahan, Paul David and Roger Noll. They each provide a role model for how to make extraordinary professional contributions, and they provided help to me without quid pro quo.
I just hope this whole group finds that this book is a partial return on their investment in my future. I would like to single out senior colleagues who have taken the trouble to mentor me in the first years of my professional life: Tim Bresnahan, Jan Brueckner, Zvi Griliches, Pablo Spiller and Oliver Williamson. Finally, I want to thank my present colleagues, especially David Besanko, David Dranove, Donald Jacobs, Dipak Jain, Mark Satterthwaite, and Dan Spulber. They hired me without knowing that they would have to sit through renditions of prospective essays.

Quite a few colleagues have read these essays, provided support in one way or another, and, on occasion, provided inspiration. I would like to thank William Aspray, Angelique Augereau, Severin Borenstein, Larry DeBrock, George Deltas, Tom Downes, Marty Feldstein, Steve Finacom, Frank Fisher, Christopher Forman, Barbara Fraumeni, Avi Goldfarb, Lenis Hazlett, Gretchen Helfrich, Adam Jaffe, Joyce Jacobsen, Tarun Khanna, Josh Lerner, Bill Maloney, Nancy Rose, Nate Rosenberg, Joshua Rosenbloom, Garth Saloner, Michael Spring, Ed Steinmueller, Scott Stern, Tim Taylor, David Teece, Marvin and Chris Theimer, Manuel Trajtenberg, and Ann Velenchik. I have almost certainly forgotten someone with whom I had an inspiring conversation. Many people read drafts of these essays and provided comments. I have surely forgotten someone who provided such a favor. I am sorry for the oversight. I want to thank them too.

Several editors and I have worked together over the years. They deserve thanks too. I am grateful to the staff at IEEE Micro, and particularly Marie English, Kristine Kelly, Janet Wilson, Ed Zintel, Jeff Hurt, Jenny Fererro, Stephen Diamond, Ken Sakamura, and Prodip Bose. I am also grateful to IEEE for extending copyright permission so these essays could be put together. Kathleen Corrigan and Mary Hourican helped me in the effort to put together this collection. My wife, Ranna Rozenfeld, an unbelievably patient woman, looked over the whole thing. She found lots of errors. I thank them all as well. I also want to thank Gabriella Frescura and Geetha Nair at Imperial College Press for their efforts.

Some of the work here provides small lessons gleaned from academic work sponsored by many different sources. I want to thank these sponsors, including the Bureau of Economic Analysis, the Council for Library Resources, the Charles Babbage Institute, Stanford Institute for Economic Policy Research, the Haas School of Business at the University of California, the Stanford Computer Industry Project, the Bureau of Business Research at the University of Illinois, the Institute for Government and Public Affairs at the University of Illinois, the Dean's office at the Kellogg School of Management at Northwestern University, The GM Strategy Center at the Kellogg School of Management, the National Science Foundation, The Searle Foundation, and the National Bureau of Economic Research in Cambridge, Massachusetts.
Lastly, but certainly not least of all, I want to thank students at the University of Illinois and Northwestern University, who have provided inspiration for some of these columns and patiently listened as guinea pigs to some of my half-baked ideas. The same could be said for many university and conference audiences. The hardest thing about my profession is that much of the learning takes place in public, so somebody sees the mistakes and has to critique the initial ideas that did not pan out. All audiences deserve thanks for patience and feedback.

And, of course, the usual disclaimer applies. These audiences were just trying to help. If I fail to listen, then it is my fault. I am responsible for all remaining errors.

S.M.G.
Evanston, Illinois
June, 2003
Part I
Musings
1 Diamonds are Forever, Computers are Not
Diamonds and computers have much in common. They are very expensive. They come in many shapes and sizes. Before any purchase, it is important to carefully research the market. They also have in common Lenis's law, which is: "A good diamond ring costs about as much as a Toshiba laptop."

In the interest of full disclosure, Lenis's full name is Lenis Hazlett. She is a Silicon Valley computer industry consultant, Stanford MBA, mother of two, and wife of Stanford professor and computer industry economist, Tim Bresnahan (with whom I often collaborate). Lenis illustrated this law with a particular brand, Toshiba, for dramatic effect. This should not be construed as a product endorsement.

She informed me of the law when I recently entered the market for an engagement ring. While I did not do a scientific survey, nearly all my married friends have heard Lenis's law or some variation on it. Everyone understands that Lenis's law precludes cubic zirconia. If the budget is tight, then it is a choice between marital bliss and a new computer.

Lenis's law provides an indirect entry to the main point of this column. Except for the reference to the laptop, this story seems not to have changed in a dozen years or more. Then, the typical basic PC cost about as much as a decent diamond ring. Today it still does.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, August 1995.
Markets and changing quality

So here is the interesting economics: The better and fancier products of today dominate the leading technology from the mid-1980s at any price. Diamonds have stayed more or less the same, but as everyone in this industry knows, today's typical PC hardly resembles the typical PC of a dozen years ago.

Put in a different way, the market for old diamonds is still alive, but virtually no one buys old computers. The addition of new diamond designs to the market does not drastically alter the value of old diamonds. In the computer industry, the addition of new designs does drastically alter the value of old designs. In fact, it almost completely devalues old designs. Has anyone seen a new 286-based CPU for sale recently? What about a new 1,200-baud modem?

Today's systems have faster modems, better printers, brighter and wider screens, nicer keyboards, better sound, more backup technology, and everywhere you look, software with more features and better performance. This is not just an observation about better engineering; it is also a statement about market value. Prices for old designs are so low because no one desires an old design at any price. This pattern has prevailed for so many years that it no longer seems odd to most computer industry participants. Well, it is odd to anyone with a sense of the economics of technology.

The common calculator illustrates the more typical pattern. When first invented, the calculator was expensive. Today it is so cheap that even most college students can afford several different models. To be sure, fancy calculators with lots of neat features are available. But stores mostly sell a simple product, the Chevy equivalent of calculators. Though it took a few years to reach this era, we have been here for some time now. Technical change resulted in moderate improvements in quality over the basic invention, but also in a comparatively large decline in price.

Most examples of technical change to consumer-oriented products involve a period of improvement in features followed by a comparatively brief era of sharp declines in price. Solid-state stereos, photography equipment, televisions, VCRs, refrigerators, freezers, washing machines, dryers, copiers, electric typewriters, cash registers, and fax machines all fit this pattern.
Several phases

Here is the general pattern. The first phase involves the introduction of an invention and its adoption by a small set of users. These users are fanatical. They experiment. They will pay almost any price. This phase potentially involves lots of product improvement. Price may be quite high. Phase one can last a long time.
Phase two is associated with the diffusion of a refined version of the product to most consumers. Prices decline rapidly, and, compared with the first phase, sales are massive. The second phase usually does not last very long.

Then there is the third phase. It never ends, but it is quite boring as an economic phenomenon. Now most buyers find that they have what they want. New features of new products no longer thrill them. These markets are calm, predictable, and not so turbulent. They're not dead, just not turbulent. The main firms stay the same. A large used market thrives because the old models still provide satisfying service for many people. The used products are a bit cheaper than the new ones and not much worse. In other words, third-phase markets look a lot like today's markets for automobiles, tractors, boats, construction equipment, VCRs, TVs, refrigerators, washing machines, and so on.

The microcomputer industry is past the first phase, and it is not yet in the third phase. It has been in the second phase for a long time, and therein lies the puzzle. Most inexpensive business and consumer-oriented products do not stay in the second phase for long. Consumers buy the new product and then shift their limited disposable income to other discretionary purchases. Discretionary purchases are what people buy after taking care of food, clothing, and shelter, that is, things such as automobiles, vacations, jewelry, and an occasional diamond ring.

Similarly, most businesses operate on a budget and a tight profit margin. The budget pays salaries, benefits, materials, rent, dividends, and usually interest on a loan. What is left does not go back into computer equipment unless the firm needs it to match the competition or to offer a new lucrative service to its loyal customers. Once that is done, the money gets shifted to something else.

The economic factors at work in the computer industry are the same factors at work everywhere. These factors should slow the purchase of new systems. But they do not seem to do so very much, at least not as much as one would predict from a sensible reading of the history of other major innovations. Put another way, product improvement in the PC industry keeps inducing more new purchases, and that defies every sensible reading of history.
The long view

More than a dozen years ago, very few observers could see where PC improvements were leading. The mainstream press dismissed most of the visionaries as dreamers or technofreaks who were overly obsessed with their electrical toys. The press had a good case to make then, and it still does now. This is not to say that the mainstream business press lacks imagination. (After all, lack of imagination is no sin in this industry. Thomas Watson, Sr., is famous for predicting in 1955 that there was room for only 5 computers in the world.)
Instead, this is a comment about how historically unusual the pattern of change in PCs is.

The press is right to note that there are limits to the valuable improvements possible on any technological platform. Although phase two is fun to watch, and I hope it lasts a long time, the sensible reading of history has to be right eventually. The PC joyride will be over when new applications no longer offer big changes in features that users value; when every vendor complains of being unable to find "killer apps"; when a vibrant used-computer trade dominates the new market; when surveys indicate a shift in consumer and business spending priorities; when incumbent firms desperately discount prices, observers talk about a massive "shake-out," and then, quite suddenly, many firms exit and only a few persist. Growth will slow, and minor innovations will demarcate differences between technical vintages.

Despite the forces pushing toward the end of the second phase, I predict that its end is not near. Several years ago, product improvement was associated with the introduction of laptops, and last year it was CD-ROMs. For the next few years it may be client/server applications. After that, I do not really care to speculate. Some new component or software program will hit the market, and I will be as amazed as the next person at its functionality and low price.
Down on one knee

It is best to put this prediction in terms of diamonds. On a clear, sunny, warm winter day, my girlfriend and I went for a walk along the shores of Lake Michigan north of downtown Chicago. At the appointed spot, I knelt before her, pledged my everlasting commitment, and presented her with a diamond ring.

For my purposes, a ring was much better than a laptop. While diamonds are forever, new computers are almost immediately devalued. Manuel Trajtenberg, an economist and friend, opined wistfully, "Do not tell your kids the story about the Toshiba laptop. In a few years, you will look really cheap." More to the point, I might look really obsolete.
{Editorial note: My wife and I were married in August 1995.}
2 A Birthday Even a Curmudgeon Could Love
Mr. Processor, the wunderkind of the electronics world, had his twenty-fifth birthday. Technical curmudgeons will have a field day griping that birthdays for inventions commemorate insignificant events.

Give the curmudgeons their due. Most inventions are not very inventive. Often, an invention just marks when someone familiar with the state of the art figured out how to put two and two together. Some inventions are just incremental improvements on a long-run technical trajectory. Another inventor probably would have done the same thing for some other reason at some other date sometime soon.

We should remember the magnificent inventions, historians say. History commemorates Bell, Pasteur, and Watt for a reason: they were way ahead of everyone else; they invented solutions to the interesting problems of their day and nobody got close to them for years.

So, the curmudgeons ask, Was the microprocessor a magnificent invention? And they answer, no. Despite being a difficult engineering achievement, it was not far ahead of its time. Although I hate to agree with curmudgeons, they are partly right.
Why they’re (almost) right Mr. Processor’s birthday celebrates the genius of a few Intel managers and employees 25 years ago. Whether you assign credit to Noyce, Hoff, Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, April 1996. 7
If Intel had not invented the microprocessor, technical history would not have changed greatly. By the late 1960s the idea behind the microprocessor already existed at Bell Labs, IBM, Texas Instruments, Motorola, and many other firms. Not all these firms possessed the ability or desire to produce the architectural drawings or manufacture the final product in 1970. Eventually, however, somebody would have developed the microprocessor.

Why? Because the microprocessor does (with hindsight) appear to be just one small step on a long technical trajectory unleashed by the invention of the transistor, which occurred much earlier. Observers often label this trajectory "the battle between digital and analog."

Bardeen, Brattain, and Shockley developed the transistor with a little foresight, partly to satisfy scientific curiosity, partly to satisfy military and NASA requirements, and partly to help the US telephone system grow. Doing something commercially useful mattered to the managers at Bell Labs, to be sure, but those three were after fame more than fortune, at least at first. Once the feud with analog gained commercial momentum, however, there was no contest: Analog was Neanderthal. Digital was smaller, lighter, more reliable, and eventually cheaper. Scientific instruments, broadcasting equipment, telecommunications equipment, and finally just about everything had to become digital.

Digital technology spread for one simple reason: The firm that manufactured and sold the best digital designs made money. Once this profit-making strategy became obvious — and it has been obvious to private industry for a few decades now — many firms would pursue the battle to its logical conclusion.

In other words, it was not any specific invention that mattered, but the root idea behind generations of inventions: The transistor represented a new idea and started the shift from analog to digital. Thus, the invention of the transistor was magnificent, but the microprocessor, incremental.
Why they miss the mark

That said, it is here that I usually part company with the curmudgeons. I do not think their analysis is wrong, just incomplete. The invention of the microprocessor may have been inevitable and possibly incremental; that fact does not render the actual inventive act insignificant. Inventions matter for more than the mere satisfaction of a technological trajectory.
Economic impact. To put it another way, if an invention does not have much economic impact, it deserves and receives historical obscurity. In contrast, if an invention defines the contour of economic activity — usually because it initiates dramatic industrial change — then the defining moment may be the inventive act itself.

The appropriate historical question concerns which economic contours were inevitable and which were not. Reasonable people may disagree on this question. By its nature it is speculative. It is an argument about events that never happened and could not have happened unless an actual and particular historical event had never occurred.

For example, here is an inevitable event: Today, Mr. Processor can reliably operate almost any simple mechanical device that has electrical components, be it a toaster, lawnmower, automobile engine, telephone equipment, or a large boiler in a factory. As he grows bigger and stronger, Mr. Processor flies planes in good weather, steers ships in calm seas, and guides vehicles through light-rail tracks.

These developments are fun to watch, but are not the main point. All the same gee-whiz stuff would have happened if a company other than Intel had invented the microprocessor. Maybe it all would have happened a little later if Intel had not invented it, but not much later. Thus, it would seem that this invention's birthday simply serves as a focal point to remind us of the technical accomplishments that came later, sometimes much later.

Co-invention. Focusing on gee-whiz stuff is also wrong for a related and more subtle reason: All significant technologies only become economically useful after significant co-invention of complementary technologies. So, what is more important, the initial invention or the co-invention that must follow?

Consider the personal computer, Intel's cash cow and one of the biggest users of microprocessors today. Future historians will remember the PC as the first primitive attempt to develop decentralized computing applications for workers who did not have the time, skills, or inclination to learn complex operating-system commands. Among other things, the PC allows nonarchitects to design their own kitchens, nonaccountants to track financial transactions, secretaries to send e-mail, and nontypesetters to format and publish their own writing.

Most of what the PC accomplishes comes from being easy to use. Insiders know that this simplicity is deceptive, however. Such functionality was hard to achieve and is difficult to improve. It took lots of time, energy, imagination, and inventiveness, at least as much and probably much more than it took to invent the microprocessor itself.
Admittedly, it is rather silly to compare the effort put into generations of microprocessor technology with that of generations of PC technology. Yet, the comparison illustrates an important idea: Much co-invention, quite apart from Intel’s initial invention, lies behind the microprocessor’s economic value. Thus the effort associated with the microprocessor’s invention seems insignificant in comparison.
So, what’s to remember? At the end of the day, then, where are we? The microprocessor was close to inevitable. It might have been only a small improvement in a longer trajectory. It required much co-invention to be useful. All this is true, but still beside the point. This particular invention’s birthday is worth remembering because first, it put Intel in the driver’s seat, giving that company the power to shape many industry decisions. Other firms could have assumed leadership in Intel’s absence or if Intel has failed to build on its initial lead. It matters, however, that IBM, DEC, Motorola, or AMD cannot assume such a position today. It matters to chip buyers, chip builders, venture capitalists, and stockholders. Second, the invention helped anchor the electronics industry in Silicon Valley. To be sure, many factors contributed to this outcome. Yet, many other regions of the country (particularly the greater Boston area) might have assumed that status if Intel had dropped the ball. Face it, no area comes close to the Valley now, nor could any area hope to. Many people find it relevant that so much happened there and not somewhere else. The electronics industry influences every business in the region, the labor market for engineers, and the spread of economic activity across the country. It also influences the distribution of wealth in the US. (Have you recently looked at Stanford University’s endowment?) Third, and probably most significantly, this invention occurred at what was then a small, independent firm. The door opened — ever so slightly at first and then massively later — to vertical disintegration in the electronics industry. Yes, many events contributed to that dramatic structural change. But it might not have happened at all if IBM or DEC or some other large, vertically integrated firm had technical control over the microprocessor’s development. I shudder to imagine what would have happened to this industry if a large, clumsy corporate giant with many intellectual property lawyers — such as, say, AT&T in the 1970s — had first invented and patented the microprocessor. So cheer up curmudgeons! Most inventions’ birthdays are worth ignoring, but not this one’s! The inventions worth remembering are those
So that leaves me with one question: Will we still commemorate the invention of the microprocessor when it turns 50?
{Editorial note: This was written on the occasion of the 25th anniversary of the birthday of the microprocessor.}
3 It has Bugs, but the Games are out of This World
Several decades ago Arthur C. Clarke forecast the birth of HAL, the oddly wired, soft-spoken computer of 2001: A Space Odyssey. In this science fiction classic, HAL became operational at the HAL plant at the University of Illinois on January 12, 1997. (Stanley Kubrick's movie, which most people know better, made HAL five years older.)

The last time I took a close look around the University of Illinois campus, HAL did not appear to be alive. In view of other advances in computing, not many students noticed this "nonbirth." What does this failed prediction tell us about computing?
Innovative for its time

It may be hard for the post-Star Wars generation to remember, but 2001 was an innovative movie for its time. 2001 was not supposed to be one of those silly science fiction movies, like Flash Gordon.

Sigh. Have you seen the 2001 classic recently? With several decades of hindsight, many things look wrong. The space station looks like a toy that George Lucas rejected. More to the point, the vision for the future of computer technology is inconsistent with what has happened. HAL could not have come out of the University of Illinois, Silicon Valley, or Route 128.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, April 1997.
While this is so for many trite and deeply technical reasons, one thing in particular caught my eye. Most computers today have something HAL lacks. They are more fun.

Humor me for a moment. There is a serious economic point buried in this observation. Aside from the cute videophone scene and the chess game, nobody in the movie plays with a computer. Nobody treats the advanced technology as if it were a toy. Indeed, there is nothing spirited about the interaction of human beings and their technology. In 1969 Kubrick did not, and probably could not, anticipate the culture that has since grown up around computing. The movie is simply too somber and sober.

Today's computer industry, particularly the PC industry, is in love with the culture of irreverence and adolescent behaviors. To get the point, just imagine how much more fun HAL would be if a consortium of Valley software developers were to design him today in a nine-month product cycle. HAL would still have bugs of course, but the video games would be out of this world.

Why is fun so often associated with computing technology?
Clocks and toys

First of all, for reasons that go back to the basic appeal of Tinkertoys, a large part of society just likes to build things and admire their creations. As we get older, the toys become more complex, but the appeal does not die.

The Homebrew Computer Club, a group of PC hobbyists on the fringes of the commercial mainstream in the late 1970s, is the best example of this. Except for Bill Gates, Steve Jobs, and maybe a few others, nobody joined this society at its start to make a fortune. Virtually every PC made prior to 1980 was next to useless. The makers and the buyers thought of them as electronic toys, as electronic hotrods for geeks, not as harbingers of HAL.

This spirit has never really gone away. Even today, many PC buyers wire their system, install a new card, and modify some software, all to get it to play a bar from Beethoven's Ninth Symphony when a new e-mail message arrives. It would make Rube Goldberg proud.

Many economists like me love to expound on the productivity advances in business associated with computing. After all, spreadsheets are useful, and networked computers might someday revolutionize work. Truthfully, however, many PC products are nothing more than a lot of fun (and also a big waste of time). No self-respecting industry expert testifying to a congressional committee would ever admit that users' pursuit of fun fuels the industry's growth, though that does not make it any less true.
Technical historians tell us that Europeans built clocks in town squares because many people enjoyed staring at the gears as the bells mechanically rang. (Only medieval monks cared about the time; they wanted to say their prayers together at the right moment.) Western civilization only learned centuries later that accurate time had commercial value, long after hundreds of years of innovation made clocks more precise (and more fun to watch).

When we watch a fun computer program today, we are just a step away from the medieval fascination with mechanical clocks. If this historical example is any guide, our great-grandchildren might be the first generation to actually figure out how to make computers contribute to economic growth. Go play with that new CD-ROM, and coo without guilt.
Designers' motives

Clearly, computers are fun partly because users want it that way. That still leaves open the question of how fun finds its way into products. The answer is both complex and simple.

The complex part has to do with the marketing of games, the competition between formats, and the pricing of new products. This is something nobody really understands, but we all act as if we do understand it, particularly when one teaches MBAs, as I do.

The simple part has to do with designers. Think about the self-images that propel their lives. Many of these people have almost no commercial motivation at heart. That is precisely why they are fun to have around and why they design fun products.

Designers seem to come in several forms. One is a professional problem solver. These people tolerate commercialism when it allows them to be greaseless mechanics. They would be as happy to fix a Rube Goldberg machine as they would a best-selling product, just as long as they are occupied with an engaging engineering enigma. Electronics mechanics show up everywhere in all types of firms. Sometimes their firms use them effectively, but usually not. A mechanic's definition of heaven is the arrival of frontier hardware to test beta software. Hell is duty on the product support phone lines answering questions from grandmothers.

Other designers think of themselves as artists with electronic tools. These people have the same training as the mechanics, but seem to seek some sort of aesthetic order in their work. Their true aim in life involves either creative self-expression or bragging rights. These designers take Steve Wozniak as their idol. Cheerful, creative, overworked, devoted to the holistic art of design, and just a bit off center, by most accounts the wizard of Woz was in the industry because he loved it.
Of course, the typical artist does not expect to become as much of a star or as rich as the Woz. They would not mind, but they would not complain bitterly if they did not.

Then there are the hobbyists. They are close cousins of the artists but show up in different parts of the galaxy. They may have a day job and write software at night. Sometimes they download shareware and revise it. Often they are students, unrepentant tinkerers, or quixotic stargazers. This activity is not a full-time job, just a side interest. On occasion, a hobbyist's efforts result in small-time triumphs. Judging from the number of products in mail-order catalogs and on bulletin boards, there are tens of thousands of hobbyists in the US alone.

The ironic thing is this: Designers invent, sometimes brilliantly, even if they remain oblivious to commercial factors. If they are lucky, a group of MBAs will later organize production, marketing, and finance efficiently, even if those same MBAs remain oblivious to technical issues. The success of products and the essence of commercial activity, therefore, depend on the strange brew that results from commercializing the activities of designers who have noncommercial motives.

Sometimes, spectacular products unexpectedly emerge and become popular. The computer industry is full of stories of designers who drive their supervisors crazy and produce the most original software code in the world. Of course, sometimes embarrassments arise too. The industry is also full of stories about software applications that contain hidden calls to pornographic pictures or other nonsense. Out of this chaos emerge addictive video games, engaging spreadsheets, and thousands of other products that vary in their usefulness and playfulness. This is an unbelievable economic process, one with decidedly non-economic underpinnings.

To really get the point, ponder this question for a while: Would the Internet have developed so fast if so many hobbyists had not wanted to post pictures of their babies online?

Admittedly, these observations do not explain every new product. For example, they do not fully explain how Microsoft designs its products. Then again, that topic raises all the complex questions about competitive formats, marketing, and so on. As I said, Bill Gates seems to be in this game for more than just the fun of it. Indeed, he seems to understand something that most MBAs never grasp, but that's another topic for another time.
Parting thoughts

Noncommercial motives lie behind much of what is laudable and laughable about the PC industry's entrepreneurialism. Would this industry have advanced rapidly if we were all not mesmerized by the technical prowess of the products? Would HAL have been so dull if he were designed by committee, like most client-server systems today?

More generally, the preponderance of noncommercial designs in the service of commercial motives produces much of the fun in this industry. This also explains why new ideas still emerge from unexpected places and rocket to prominence.
{Editorial note: HAL's "birthday" was celebrated on the campus of the University of Illinois in 1997. This was written for that event.}
4 The Biology of Technology
Why do commentators talk about the PC market as if it were a biological process? Industry watchers often talk about the "birth" of industries or the "death" of obsolete technology. Conservative investors discuss the stable stocks of "mature" industries (as if mercurial earnings reports are found only at "adolescent" firms). Similarly, the "product life cycle" metaphor has become shorthand for regular and repeated patterns that accompany turnover of products.

Biological metaphors, much like any metaphor, are incomplete and inexact, but suggestive. For example, the life cycle in computing has many different and overlapping meanings. There are at least three in common use today. Technology enthusiasts use the metaphor one way, buyers another, and sellers yet another. Differences in meanings say much about the person using the metaphor. Some people have warm feelings for the computing cycle, but others do not.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, August 1997.

Technologists love the life cycle

Enthusiasts focus on the invention — and only the invention — of technology. For technologists, the product cycle in the PC industry provides entertainment, loads of it.
entertainment, loads of it. Old products die and new products replace them. (The mundane part is that every new computer system contains more memory and faster chips at lower cost. This has been going on so relentlessly for so long that even the technologists take it for granted. It has become routine.) However, each new generation of products offers new features, extends the range of old capabilities, or lowers the cost of obtaining existing features. Imitations turn novelty into new standards. Technologists are also entertained by other by-products of the product life cycle. New and better programming languages diffuse to many systems. Peripherals such as printers, terminals, network connections, and countless other minor components also undergo innovation. One invention feeds on another. Somebody invents new software, fueling invention of faster chips to handle it, fueling invention of better printers to show the results. This then gives another software inventor an idea for a new CDROM program that starts the whole pattern again. It’s a virtuous circle. Technology enthusiasts typically care little about the firms that introduce a new bell and whistle. To be sure, sometimes a firm gets credit for being first. If so, that firm crows about it. (If the firm’s marketing strategy depends on having technology on the cutting edge-for example, as Apple’s did for many years — then that firm crows a bit louder.) Technology enthusiasts typically have short memories, however, and begin drooling over the latest thing as soon as it arrives. Occasionally the arrival of a new technical platform brings about the restructuring of old technologies. Technical standards change, emphasizing a new set of core applications and altering every system. Some observers call these changes technological “revolutions.” For example, today we are in the client-server revolution, and previously there was the notebook revolution. The workstation and microcomputer revolution came before that, and so on. Revolution is not a biological label, but I am not sure a good one exists for drastic shifts in technical standards or in the technical patterns of the product life cycle. Should we call these shifts a “change in the ecosystem” instead? I don’t want to split hairs over semantics, but “ecosystem” seems to stretch biological metaphors too far. That said, did you ever notice that technologists tend to value the births within a product cycle? They focus on the renewal of the cycle at the introduction of each innovation.
What do buyers see?

For a buyer, the product life cycle involves installing new systems or upgrading, retrofitting, and improving existing systems. Here is the typical pattern. Most buyers know they are perennially out of date, but do not have time to catch up. So they periodically reevaluate
their situations. Buyers learn about technological opportunities as new products are introduced and as old products become obsolete. From the buyer’s perspective, this is like a movie played by a VCR stuck on fast forward. Things change way too fast. In piecemeal fashion, buyers modify the memory and speed of their CPUs, but keep other useful investments in software or peripherals. Or buyers enhance particular software programs or peripheral components, but not other parts of their systems. Peripheral and software upgrades then induce bottlenecks in CPUs, which induce further memory upgrading. This, in turn, motivates the buyer to try further peripheral and software enhancements. For example, the recent craze for all things multimedia is just another example of the same pattern. That said, scientific and engineering users usually first take advantage of faster computing speeds and larger memories, not to mention other technical bells and whistles. At one time, these technical users were closely followed by hobbyists and then, many years later, by business users. In more recent times, the business PC market has split into so many camps — home, office, technical, sophisticated executive, administrative support, networked, and so forth — that it has become difficult to predict who will adopt which bell first and which whistle next. As an aside, the product life cycle for buyers has one particularly adolescent feature. Upgrading often occurs due to competition around the water cooler. Much the way teenage boys might compare the racing fins of their hot rods, many technical users compare the features of their work stations, upgrading to impress their buddies. More to the point, for most buyers the product cycle is like being in perpetual adolescence. Just when they think they have it figured out, something else shakes their view of the world. The future always holds promise, though it never seems to arrive.
The sellers see it differently

From the vendor's perspective, the product life cycle concerns product design, sales, and marketing issues. These arise at rapid speed, requiring decision makers to come up with quick, decisive answers, and inviting mistakes. This is an executive's nightmare.

Vendors expect a fraction of their customers to desire frequent upgrades of systems or backward-compatible improved designs. Vendors also expect a fraction of old customers and customers who are starting from scratch to compare the technical capabilities of all new systems. Therefore, vendors expect that all designs become technically obsolete with the passage of time, the entry of more competitors, the expansion of technical possibilities, and the expansion of buyers' needs.
As already noted, this process feeds on itself. If a design meets with any initial commercial success, then later parts of the cycle involve potential upgrades and sales of complementary components. All parts of this cycle — designing, prototyping, manufacturing, initial rolling out, selling systems, servicing, and customer-upgrading — involve risk to the vendor, as well as much technical and commercial uncertainty. More so than most markets, the PC market is brutally unforgiving of marketing mistakes. Yet, mistakes are inevitable. Unpleasant experiences must arise often. If the vendor sells its product to customers through third-party distributors (such as national chains of electronics stores), the third parties will mercilessly drop it when it becomes obsolete or gets bad reviews. Large discount distributors will relegate shrink-wrapped software with no name recognition to a low shelf, indifferent to the design’s underlying merits. Catalogs and magazines put components from unknown companies in the back pages, favoring long time advertisers. To avoid these fates, vendors must advertise, push products on magazine reviewers, or establish “buzz” at conventions. This activity is as dignified as selling deodorant. A vendor that sells its product with its own workforce does not fare much better. Salespeople on commission make their quotas by any means possible. They push new versions before the product is ready (even if the customer doesn’t need it), invite lawsuits by misrepresenting the features of their competitor’s product, promise more technical support than any rational firm could ever profitably offer, and conveniently forget last year’s unkept promises. And, of course, every salesperson assures customers that a bug-free upgrade will be coming round the bend within a year — even when designers think this impossible. Perhaps the most unpleasant feature of the product life cycle is that every one’s stock options are tied to the commercial success of the firm’s products, which are, in turn, tied to the rhythms of the product life cycle. Long after the designers have done their job, everyone’s financial payback depends on marketing augury and a fickle market. In sum, for vendors the product cycle is Darwinian evolution at its most brutal. The cycle is about survival of the fittest and living to fight the next battle.
Parting observations

The product life cycle in PCs has changed over time. Accordingly, so too have people's views of it. With only rare exceptions, only technologists brag about the speed with which things change today. Vendors no longer
boast about it; most just endure each new competitive episode and celebrate surviving another day. Except for the most technical user (or the most competitive office situation), the speed of change comes too rapidly for most users. It upsets old arrangements even when the old investments are still useful. A century ago, new technology was also associated with rapid change on this scale. Then, the development of national transportation and communication networks transformed the US economy, upsetting old economic relationships, eliminating old ways of doing business, and creating new ones. Is that where we are going today? If the PC revolution or the client-server revolution actually succeeds in upsetting old habits, perhaps we will stop using biological metaphors. Instead of talking about life cycles (or revolutions, for that matter), perhaps we will begin to look for metaphors associated with earthquakes and tectonic movements.
{Editorial note: For many years this was my most widely quoted essay on the Internet.}
5 Virulent Word of Mouse
Never underestimate the ability of enterprising firms to direct basic human behavior toward achieving commercial goals. It is in the spirit of that observation that this column discusses a clever strategy for the dissemination of client software built on the normal human propensity to make recommendations. This strategy is often given the cute and misleading label "word of mouse." Alternatively, it sometimes has the more suggestive label of "viral marketing." These labels have meaning, since the strategy is, in fact, a firm's deliberate attempt to build what a non-virtual marketing director would recognize as word-of-mouth or contagion marketing into Internet software applications.

The strategy has received its share of hype in the high-tech business press. For example, in June 1999, The Industry Standard called viral marketing "perhaps the most influential idea in the Internet economy right now." Such hype is not new to the Web, but it does invite a slightly more skeptical approach to understanding this phenomenon. How is word-of-mouse different from standard diffusion by word of mouth? What conditions are crucial for employing the strategy? Why is it we mostly hear of viral marketing used by new firms? Can any firm, even an old stodgy one, make use of it, and if not, why? When do companies that grow this way become valuable, and why?
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, December 1999.
Let me tip my hand. The hype exaggerates the merits of this strategy, whose usefulness is confined to uncommon (though interesting) circumstances. To be sure, when it works, this strategy makes it easy to build a new service with remarkable speed. Moreover, the types of opportunities it works well with, while specialized, will likely exist for quite some time, particularly in Internet software. However, the strategy requires extreme luck or uncommon entrepreneurial savvy. Thus, most firms won't find this strategy useful, particularly established businesses unwilling to take high risks. In other words, the strategy is hard to use, but interesting for observers to watch.
A background story

An excellent illustration of this strategy is the history of Hotmail, an early, free e-mail service, presently owned by MSN. A Hotmail user acts as an advertiser for the service in two ways: the word Hotmail is in a user's e-mail address, as in "[email protected]," and every e-mail footer contains "Get Your Private, Free E-mail at http://www.hotmail.com." Thus, with every e-mail, a user makes a recommendation, and the URL makes it easy for others to act.

Hotmail grew phenomenally fast, reaching 12 million users in a year and a half. It spread quickly throughout the US and in other countries. Indeed, the installed user base was so large that the company became an attractive merger target. Hence, its sale to Microsoft was profitable for both the founders and the venture capitalists that backed them. This is about as close as high tech comes to get-rich-quick.

As it happens, this type of growth is not as easy as it looks. Nor is it easy to replicate. The venture capital firm that backed Hotmail was Draper, Fisher & Jurvetson. In the popular telling of this story, Tim Draper, the managing partner, gets credit for suggesting the URL in the footer and for coining the term viral marketing. This firm, as well as several others, continues to back the strategy and develop the technique in new contexts. Today viral marketing is standard in many Internet start-up business plans.
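To make the mechanics concrete, here is a minimal sketch of the footer trick described above. It is illustrative only; the function name and the surrounding details are my own stand-ins, not anything taken from Hotmail's actual system.

```python
# Sketch of the "word of mouse" mechanic: every outgoing message carries
# a short self-referral footer, so each e-mail doubles as a recommendation
# that the recipient can act on immediately. Names here are hypothetical.

FOOTER = "\n\nGet Your Private, Free E-mail at http://www.hotmail.com"

def add_referral_footer(body: str) -> str:
    """Append the promotional footer once, if it is not already present."""
    return body if FOOTER in body else body + FOOTER

if __name__ == "__main__":
    print(add_referral_footer("See you at the workshop on Friday."))
```

The design point worth noticing is how little the sender has to do: the recommendation rides along with ordinary correspondence, which is why the installed base can grow without an advertising budget.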
Get behind the strategy

No single strategy is ever a panacea for every firm. The question naturally arises: when does word-of-mouse work?

The strategy works well, first, when it's easy for the first user to make a recommendation, and second, when it's easy for the friend to act on the recommendation. Third, and this will become clearer in a moment, it's
easy when it demonstrates the value of a new function. Finally, this strategy depends on many factors that influence functionality on the Internet.

The first two conditions, making it easy to make and act on a recommendation, are probably the strategy's most appealing feature. If users like a product, they show their friends. Using a product is also an endorsement. The product spreads by its very use. Popular or useful products get more attention, spreading from one user to the next. Both firms and users benefit from this pattern.

Internet greeting cards or online invitations are examples of how enterprises grow with viral marketing. In online greeting cards, one user sends another user an e-mail, notifying them about the card, which is actually a Web page that the receiver can inspect. These pages are mildly customized with names and greetings, and often include cute cartoonish graphics. If a receiver likes this, they can easily go from their card to the order page. Online invitations work much the same way. The web pages include the standard invitation with directions, an RSVP, and perhaps instructions for what type of salad to bring to the party.

Blue Mountain Arts developed the greeting card service first (though they are now heavily imitated). In the first few years with this application they tried to cross-market their non-virtual greeting cards. Eventually they capitalized on their installed base and sold it. Evite, an online invitation company, illustrates a related approach. Its success depends on user cooperation; yet, user cooperation comes more readily when commercial motives are less explicit. That is, the activity is likely to grow if the installed base develops with a softer, gentler model for revenue generation. Hence, Evite explicitly downplays its commercial motives.

If a firm offers a free service and soft-pedals its own commercial motives while the service is growing, how does this strategy lead to revenue generation, profitability, and long-term economic viability?
One key condition

To date, most viral marketing users focus on growing and developing an installed user base for a particular application. Once the user base is large, companies hope it remains — i.e., that users continue to use the original service. If the user base displays stability, the business can either go public, merge with someone else, or sell advertising.

This pattern reveals the strengths and limitations of viral marketing. The strategy presumes that the first developer can limit competitors and imitators and keep its users. But there is the rub. Wouldn't all firms love to own an application that grows fast and retains users even in the face of imitators? So why would a small firm necessarily better succeed at
developing new sticky applications than a large firm? (Sticky is the Web industry's description of users who often return to a site, stay there a long time when they do visit, and remain loyal in spite of the entry of imitators.)

The answer is the high-tech version of the old joke about the $20 bill in the street gutter. Everyone passes it up with the thought, "If that were really a $20 bill, someone would have picked it up by now." We would all like to be the finder, but doing so requires a combination of perception and lucky timing. In other words, if it were obvious how to develop a new sticky application, then many established firms would have done it by now.

Successfully developing such a product depends on having the right combination of imagination and luck, and having it before anyone else. Once a lead is built, the product's stickiness ensures that latecomers have a difficult time building similar networks. If a young firm identifies the right combination of features to make the strategy work, it can make a huge return with the sale of those assets. If not, if the idea is off by just a bit, it creates the opportunity for someone else to quickly imitate it and start the "virus" on their idea. The imitator will build the installed base and make the sale.

As it turns out, some of the stickier applications are those that facilitate communication or club formation. For example, ICQ, the Internet conferencing program, keeps users because everyone needs the same protocol to communicate. Unsurprisingly, sponsors of online voice systems attempt similar techniques for their services. Interestingly, many online pages try to form buying clubs, such as Mercata, which increase in value as the user base grows.

Getting the service right, reacting to feedback quickly, staving off competition with rapid response to imitation, building a club — these are all tough to do well at any firm, whether big or small. The successful firms have particular characteristics and are generally found in the entrepreneurial, fast-moving, and risk-taking parts of the economy. Therefore, we get what economists and market ecologists often call a "selection bias." Most successful users of viral marketing are lucky enough to get the right combinations of strategy and timing. We disproportionately hear about these firms. They also tend to be quite new. Hence, we should expect to observe an association between new uses and applications with viral marketing.
Another key condition

Now consider the other half of the equation. Why do firms like Microsoft or AOL or Cisco prefer buying firms that employ viral marketing instead
of developing these applications themselves? It is important to large firms to find new crucial applications that fit into their future platforms, so this is not a trivial question. Most of the time, large firms have a natural advantage, particularly in setting interoperability and communications standards and developing new functionality. They have experience working with the IEEE and ANSI committees, they make alliances that favor their preferences, and they have access to distribution networks down which they can push particular standards or freeze out others. Large firms such as IBM and Microsoft have done this for years, and these advantages still exist today.

There are four related issues that motivate large firms to merge with small firms employing viral marketing:

• They don't have much choice, because venture capitalists fund many different and small firms.

• It is easier, and possibly cheaper in the long run, to let a hundred new firms experiment with different applications and users, then buy the most successful. From the platform-provider perspective (a firm that maintains a portfolio of applications for a variety of users), it is rarely worthwhile to take risks on large numbers of new applications. Certainly, platform providers will try to develop new applications, but they primarily monitor others, absorb firms in mergers, and grow through acquisitions.

• Platform providers have offensive and defensive reasons for buying new functions that grow through viral marketing. For example, AOL regarded ICQ as a potential tool for its already extensive set of communities, but also as potentially useful to its competitors for the same purpose. Similarly, Microsoft perceived Hotmail as a potential complement to its extensive investments in online network software, but also as a potential tool for a competitor to build an online platform. Indeed, sometimes it seems that Cisco merges with any frontier networking application that it perceives as a potential complement or threat.

• Many online users of viral marketing are complements to some other application. Online invitations and greeting cards are a natural complement to online calendaring. Online voice has applications in online games and conferencing. Firms building these functions are not really economical as stand-alone units and make more sense as part of a large firm's platform. Large firms know this and explicitly try to act as an aggregator of these functions.
Commercial Internet technology is still in its cowboy days, characterized by widely distributed technical leadership and decentralized initiatives from unexpected corners. This is one of the Internet’s most endearing features, but it is also symptomatic of commercial adolescence. There is room for viral marketing as long as this adolescence persists, because it gives small firms a chance to succeed.
It depends on the web

The commercial immaturity of the Web also contributes to the success of viral marketing because of its constantly changing nature. How many new sticky applications can there be? At any point in time, not too many. If it were easy, then there would be many. Conditions change every week as new development tools diffuse, as a wider population gets access to broadband services, and as yesterday's new users gain greater familiarity with e-functions. A new function that did not gain much acceptance two years ago may find more receptive conditions today. Viral marketing is a perfectly good tool for building that application quickly.

This leads to the final paradox. Viral marketing depends on developers taking advantage of changes to underlying conditions. Firms need to demonstrate new functionality, while Internet functionality is expanding and expandable and while the Internet continues to enable new applications. It seems reasonable to expect conditions to change frequently for at least a few more years. After all, the Internet is full of inventions that incubated among noncommercial users for many years before migrating into commercial use. This pattern should continue; there are simply too many students, amateurs, and hackers still at work, as well as commercial firms developing new tools.

There are obvious applications which someone will someday get right: voice messaging functions like e-mail, pictorial greetings like Web greeting cards, conferencing functions with more sophistication than ICQ, and scores of others associated with enabling electronic commerce such as billing or identity verification. Of course, if I have already thought of them, then somebody else has too. I am unlikely to ever find that $20 bill.

The future will bring many more popular but unexpected applications. These will come from somebody clever, possessing imagination and the entrepreneurial spirit to develop the vision. It is a good bet that the business plan will include a hefty use of viral marketing.
{Editorial note: By 2003 the use of viral marketing had declined, but it has not faded away. In some respects it has simply been transformed. For example, it played a role in the diffusion of Napster, the free (and ultimately illegal) program for sharing music on-line. Viral marketing and related phenomena continue to play a role in the diffusion of other peer-to-peer applications. It also plays a role in the development of on-line reputations.}
6 An Earful about Zvi’s E-mail
Zvi is the last name listed in my e-mail nickname directory. Whenever I alphabetize stored mail, Zvi’s messages appear at the end, with his most recent one appearing last. So, as it happens, the last message in my alphabetized list is a short note from Zvi’s secretary informing me that Zvi was too sick to come into his office and that she would take my message to his home. I regularly clean up my e-mail account. I erase old solicitations, unfunny jokes, and complaints from students about grades. Yet I cannot bring myself to erase this note from Zvi’s secretary, even though Zvi died over a year ago. The personal and professional have crossed in such an unexpected way. The personal observation is mildly embarrassing. E-mail has integrated itself into my life — so much so that it can become an object of sentimentality. The professional observation is of more lasting significance. E-mail has become the seed for significant changes in society. Zvi would have said that the spread of e-mail resembled the spread of agricultural improvements. This resemblance is not self-evident.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, August 2001.

Seeds of change

Zvi Griliches was famous for many things, but here I will focus on his study of hybrid corn seed diffusion in the US. He showed how and why
the seed was first adopted in a few locations, spreading outward over several years. His thesis was written in the mid-1950s, and the kernel of the main idea was timeless. The diffusion of hybrid seed accompanied a broad and very significant set of changes in a vast array of operational procedures and harvesting methods. The statistical methods Zvi used to measure these changes were quite general. They provided a succinct way of summarizing how fast the seeds spread, and what circumstances boosted or hindered this spread.

What does this have to do with e-mail? Simply put, sometimes an example from the past can illuminate the present by framing the broad patterns common to otherwise distinct phenomena. New technologies transform economic activity in two ways:

• They can make existing activities less expensive, letting someone achieve something previously not possible because of declines in cost. When things become less expensive, resources can be redirected.

• New technologies can change the processes of economic activity, such as when a new technology restructures organizational routines, market relationships, and other activities associated with the flow of goods. Most technology revolutions are associated with such change in work processes.
The diffusion of both hybrid corn seed and of e-mail initiated both types of change. This is especially easy to see with the corn. Corn became cheaper to produce. Farmers were already planting corn, but better seed resulted in more output (more ears) per input (land, labor, and fertilizer). Farmers could use their spare time for other activities and their spare land for other crops or livestock. This transformation also contained the seeds — excuse the corny pun — for restructuring the process of planting, harvesting, and selling. Crop failures became less likely. Same-size harvests required less labor. Higher yields motivated the purchase of larger equipment. Greater output generated new uses for corn. These events motivated searches for new hybrids, a process that still continues today with genetically altered crops.
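As a rough illustration of the summary statistics mentioned above, diffusion studies of this kind conventionally describe adoption with an S-shaped logistic curve; the notation below is generic, a sketch rather than a reproduction of the original estimates.

$$P(t) \;=\; \frac{K}{1 + e^{-(a + bt)}}$$

Here $P(t)$ is the share of potential adopters (acres planted to hybrid seed, or e-mail users) who have adopted by year $t$, $K$ is the long-run ceiling on adoption, $b$ measures how quickly adoption accelerates, and $a$ fixes when the curve takes off. Comparing estimates of $b$ and $K$ across regions is one succinct way of asking what boosted or hindered the spread.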
Parallels

Both of these same broad types of changes are at work with e-mail. E-mail makes it cheaper to accomplish something most people were already doing — that is, communicating. It makes it easier to exchange the
information that facilitates daily commercial, professional, and private transactions. Ask anyone who uses e-mail extensively: Distant collaboration is easier. Sales people say that e-mail makes maintaining relationships easier. Administrators say it makes coordinating meetings easier. Researchers say it makes exchanging ideas easier. Then there is the little stuff that keeps society moving. Rather than receive professional news and updates at conferences or other annual professional events, it is becoming a norm in many professions to use an e-mail newsletter. None of these changes alone makes e-mail revolutionary. More often than not these characteristics make e-mail more trivial. Like most people, my e-mail inbox is full of jokes, pictures of babies, and other stuff that I would not have received if e-mail did not exist. Linguists have compiled dictionaries of emoticons, those little symbols of smiling faces and expressions that spice up many messages. Yet, the sum of all these trivial changes, which seem to have occurred ever so gradually, alters the allocation of time and the basic rhythm of activity. Do you know anyone who spends less than a few hours on e-mail every week? Do you know anyone for whom e-mail is not a central part of his or her professional schedule? Most of my friends measure their vacations by the amount of time they spend away from e-mail and the number of messages waiting for them upon return. Publishing activity, in particular, has changed dramatically, though that is a topic for another day. Just consider this example: Odd as it may seem, I have never actually shaken hands with the editors, staff, or managers of this publication, even though I have been writing this column for over six years. We do everything by e-mail. For many of these same reasons, e-mail alters the structure of leisure time. Surveys show that e-mail is the most popular home use of the Internet. A close cousin to e-mail, instant messaging, is transforming teenage life. Though instant messaging is only a few years old, many US teenagers are already quite tied to it. Forecasters see more e-mail use on the horizon. The DoCoMo revolution has made wireless e-mail a huge hit with Japanese teenagers. What will US high schools look like when students exchange these little messages all day from portable devices? All these changes are bringing about the second type of broad change, the restructuring of the flows of goods and services tied to communication. This change is gradual, and a lot of it seems trivial while it diffuses, but the accumulation of many small things over a period of time results in radically different processes. This transformation is far from over. That said, e-mail’s diffusion possesses some features that corn’s diffusion did not. For example, many psychologists have observed that the electronic message elicits a different tone from a writer than either a
handwritten letter or a voice message. This has had all sorts of unforeseen consequences. This too is a bigger topic for another day, but a few examples will get the point across. How many of you have sent an e-mail message but later regretted it? How many romances were begun by e-mail, because it brings out a sincere and almost anonymous voice in someone, only to die upon a first meeting void of physical chemistry? How many organizational discussions have been changed because the discussions are recorded in print, then dissected and analyzed by multiple eyes? To be sure, it is not obviously good or bad that e-mail transforms the words it records. For example, every piece of e-mail correspondence in the White House is saved, documenting more in a day than Nixon recorded over months of audiotape. However, instead of learning about a president’s informal manners of speech, as learned from Nixon’s tapes, future historians instead will learn about mundane legislation development. The salty stuff will be missing, edited away by a self-conscious politician or staff member who knows the message will end up in an archive. In a way, the tapes were more informative.
Diffusion becomes personal

E-mail's diffusion has directly changed my life. For example, without e-mail I would never have had much of a relationship with Zvi Griliches. Sure, I would have seen him at professional meetings. I still would have felt affection for him because he spoke with the same Eastern European accent as my immigrant grandfather. And like other Jews of my generation, I would have continued to be in awe of his life story, which included an unimaginably cruel teenage existence in a Nazi concentration camp.

Yet, the plain fact is that many people other than me had better reasons to talk with Zvi. I was not one of his many co-authors, colleagues, or past PhD students. My place was far down the standard academic hierarchy, so I had to wait my turn. In person, this meant that there was not much time for anything other than a casual remark, a quick retort, or an encouraging word. In contrast, e-mail let me occasionally send him a question, exchange ideas, and reinforce the relationship. Although this came at the start of my career and near the end of Zvi's, it still left something indelible within me.

My most treasured e-mail message came almost a decade ago, after a presentation in front of one of Zvi's workshops, when I was still pretty green. The workshop focused on topics in technology markets, and Zvi's comments regularly dominated the discussion. I do not recall all of what
happened at the workshop, except that I admitted difficulties with my research and was hammered for it by the audience. Later, I sent Zvi an e-mail in which I lamented about the pain that accompanies honesty. He sent me an avuncular response. I do not remember its details, but I do recall the tone and especially one phrase; he advised me not to become a “bullshit artist.” It was not much, just an encouraging word from an aging icon at Harvard to an insecure and naive assistant professor. But here is the bigger point — it could not have happened in person, as it was the sort of conversation that is awkward and for which the moment is never right. Over e-mail, however, it was possible. It also came at the right time. And I’ve always wished I had kept a copy.
Epilogue

A few years later, and about 40 years after Zvi's thesis on corn, a colleague at Tufts University, Tom Downes, and I traced the geographic diffusion of commercial Internet access providers across the US. We made a map, explicitly framing the analysis in terms of Zvi's corn thesis, asserting that the Internet represented a trend of similar significance. Zvi seemed both flattered and amused by our aspirations, taking the pose of a grandfather embarrassed to see his own habits imitated by his neighbor's children. Early into the project, I showed him our first map. He studied it, and after a pause he smiled ruefully and declared that a good predictor of not finding Internet access was the presence of much hybrid corn seed.

We "talked" a bit more by e-mail, which led to yet another presentation at one of his lunchtime workshops at Harvard. It was 1997. Our first map showed an uneven diffusion of commercial Internet access around the US. Someone expressed skepticism. It raised a small dispute about how fast markets work and how far-reaching they are. It was the sort of issue economists debate incessantly. As he usually did, Zvi spoke up. He noted that our map showed there were few ISPs in Cape Cod, a fact he could verify with personal experience. The previous summer he had run up a large phone bill while sending and receiving e-mail from the Cape. That settled the argument.

Some weeks later, I received a package from Zvi. There was a little post-it note with FYI on it attached to a reprint of an article: it was something Zvi wrote for Science in the late 1950s about the diffusion of hybrid corn, a piece I casually referenced in my talk about the ISP maps. Almost 70 years old, and he still wanted to make sure he was cited properly!
Zvi taught me that statistics are only a trace of an activity, not the activity itself. Zvi preached about the importance of understanding the humanity behind these traces, so that researchers can nurture their insight and present their contours in an honest light. Maybe these lessons would have accumulated without e-mail, but I doubt it. The relationship extended over a great distance, increasing the force of small incidents, turning off-hand remarks into little treasures. Maybe that is why I do not erase anything now. I do not want to lose any trace of the seeds Zvi planted.
{Editorial note: Professor Iain Cockburn at Boston University has compiled a list of Zvi's students and the students of Zvi's students. He calls it the "tree of Zvi." To see a copy, go to http://people.bu.edu/cockburn/tree_of_zvi.html. For a copy of the maps of ISPs in the United States, see Thomas Downes and Shane Greenstein, "Universal Access and Local Commercial Internet Markets," Research Policy, Vol. 31, 2002, pp. 1035–1052, and Shane Greenstein, "Commercialization of the Internet: The Interaction of Public Policy and Private Actions," in Adam Jaffe, Josh Lerner and Scott Stern (eds), Innovation, Policy and the Economy, MIT Press.}
Part II
Observations, Fleeting and Otherwise
7 Repetitive Stress Injuries
My keyboard manufacturer places a warning against improper use on the underside of my keyboard. The warning begins "Continuous use of a keyboard may cause Repetitive Stress Injuries or related injuries." Later it says, "If you feel any aching, numbing or tingling in your arms, wrists or hands, consult a qualified health professional."

This is serious stuff. The potential health problems can be painful, worrisome, and costly. Carpal tunnel syndrome, the repetitive stress injury receiving the most publicity, can cripple. Other related ailments should get immediate medical attention. By most accounts, the most vulnerable workers include journalists, typists, and computer programmers. Most observers blame the recent epidemic of RSIs on the diffusion of PCs into the workplace.

Just below the surface of these medical issues lie confusion and some tragedy. Serious injuries go untreated as the medical system bounces sufferers around. Some of the afflicted cannot raise the money to pay for care. Large employers of typists and programmers fear major medical expenses. Manufacturers feel the need to place bland warnings on their equipment. Though RSIs connect all these events, they also have another thing in common. The confusion and uncertainty accompanying RSIs stem from the peculiar way insurance works in our market-oriented society. This may take some explaining.

Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, October 1996.
The basics of insurance and RSIs

To start, we observe that human beings are biologically geared toward handling a wide variety of muscular tasks, not the same one all day. For this simple reason, RSIs have been around ever since Western countries industrialized (and probably before that too). In the past, RSIs afflicted workers on assembly lines and violinists on the road to Carnegie Hall. Today's RSIs trouble programmers under the pressure of a release deadline.

Throughout the years, these same industrialized countries have treated RSIs in a variety of ways. In the past, a sufferer's family paid for medical care and lived with the tragedy. Today, an insurance company often pays at least some of the time — which is where things get sticky.

For insurance purposes, there are basically two types of situations: those with known risks and those with unknown risks. For example, some RSIs have been around long enough so that somebody (usually an actuary) can predict with reasonable confidence what fraction of a population will experience a problem. For instance, carpal tunnel syndrome is becoming a known risk. On the other hand, other types of risk catch everyone off guard because a new technology creates a new situation. For example, it appears now that doctors can trace certain RSIs to (surprise!) the improper location or shape of a computer mouse. This is an example of an unknown risk because it is still unclear how many mouse users will need medical attention. Today's RSIs are partly known risks and partly unknown risks.

For several reasons, insurance markets have not established particularly good ways of handling unknown risks. To appreciate why, it is best to discuss how markets handle the known risks. To begin with, think sailing ships, not computers. The western form of insurance contracts began hundreds of years ago when some London-based shipping agents noticed that each year a predictable fraction of sailing ships sank at sea on the way to and from the New World. In exchange for bearing the risk over hundreds of ships, insurers agreed to reimburse the losses of shippers. On average, insurers did well, and ship owners slept easier. Soon these London firms (ever heard of Lloyd's of London?) carried the same principles into the modern world. Other firms began to imitate them.

Today, individuals can reduce their risk of financial loss in the event of a rare and unfortunate event. A big company bets that only a fraction of a particular group will suffer this problem and agrees to reimburse the unlucky few. We observe this in auto, catastrophic health, and tornado insurance, and so on. Insurance contracts get a bit more complex for work-related injuries, but the same historic principles apply. For example, firemen, policemen,
miners, steelworkers, and construction workers risk physical injury. Not surprisingly, these jobs come with insurance. However, in the modern working world, the finer points of work-related insurance are often a bargaining point between employer and employee. This is not as odd as it sounds. Firemen and miners negotiate the insurance arrangements for all known risks with their employers ahead of time. Thus, when tragedy happens, as it does occasionally, everyone handles it and life moves on. Such an arrangement is never pretty, but it keeps society going.
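Before turning to unknown risks, it may help to make the known-risk arithmetic concrete. The numbers below are invented purely for illustration:

$$\text{expected loss per ship} = p \times L = 0.05 \times \$10{,}000 = \$500$$

$$\text{premium} \approx p \times L \times (1 + m) = \$500 \times 1.2 = \$600$$

Here $p$ is the fraction of ships expected to sink in a season, $L$ is the insured value of a ship's cargo, and $m$ is a loading that covers the insurer's costs and profit. Averaged over hundreds of ships, premiums predictably exceed payouts, which is exactly what makes a known risk insurable and an unknown risk so troublesome.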
Handling unknown risks

All this machinery for handling known risks comes to a halt when something unanticipated occurs. Who pays for unanticipated injuries? Was this particular RSI part of the original insurance contract or not? In a nutshell, this is the key reason that RSIs associated with computer keyboards have everyone in such a frenzy. How does anyone take out an insurance contract against something that was never anticipated?

Until about a decade ago, RSIs were confined to typists. The entire insurance system knew what to expect among this small and generally select work group. It consisted mostly of secretaries, administrative assistants, and editors — workers generally trained in good typing habits. Insurance companies did not anticipate that so many programmers, journalists, and other administrative workers would do their own typing on PCs. Neither did they anticipate that many such workers would be poorly matched to their equipment.

Matters are worse because the medical profession is still not 100% certain what causes every RSI, nor does the insurance industry know what alternative equipment to recommend. Advising programmers to adopt better typing habits and more ergonomic equipment is sound advice (as far as it goes) but hardly a panacea.

Many of the injured are going to court to figure out who is legally liable. In our litigious society, this is basically a fight over the insurance arrangement in the labor contract. The fighting is nasty because many people can lose big. Physically disabled employees need to pay for their medical care. Insurance companies need to estimate their future liabilities. Large employers of administrative help fear insurance rate hikes.
Potential resolutions

Once we understand this confusion as an insurance market problem, it becomes clear that the situation can only resolve itself in one of a few ways.
One possibility is for courts to hold keyboard manufacturers liable for RSIs. My keyboard manufacturer’s warning is a hedge against this possibility. Such an outcome, however, seems unlikely. Though the law is never settled definitively, it appears that US courts will not hold keyboard manufacturers liable for RSIs contracted while typing. Another possibility is that courts will hold employers responsible for all RSIs and for the insurance costs. The US has already partly moved in this direction. Workman’s compensation plans at many firms have (effectively) assumed the expenses associated with RSIs. Simultaneously, these same firms have watched their insurance bills increase dramatically. Clearly, this is not the whole answer. Too many employers do not carry this type of insurance, so it does not cover all US workers. A third possibility is that the US government will mandate universal health care coverage, and thus these insurance coverage problems would no longer exist. It is possible to have an interesting debate over whether this solution will really solve the issue or reshuffle the problems to another part of the US health care system. However, we conduct such a debate only for its academic merit, since the US Congress will not pass a universal health coverage bill anytime soon. A fourth possibility is that the computer industry will find a technical fix to the problem. We are not there yet, but recent advances keep hope alive. For instance, today’s voice recognition software does replace keyboards for the severely disabled. Handwriting recognition software seems a bit less developed, but shows promise for simple writing tasks. New keyboard redesigns are also emerging each day, though the market is far from settling on an alternative. For most people it appears that these technical fixes are still too expensive, weird, or inconvenient. (Perhaps that is too harsh. I concede that, after experimenting with several experimental keyboard designs, I found one that eliminates my particular RSI 95% of the time.)
Muddling through

Most likely, everything will continue to muddle through. This outcome really scares me, because so many people are at risk for uncovered injuries. Many employees cannot get their employers to pay for insurance and effectively become liable for the expenses associated with their injuries. Freelance programmers already assume this risk, as do many of the self-employed.

I anxiously await the day when this epidemic ends, but it will probably not end soon. Meanwhile, heed this advice: Educate yourself in proper typing habits and make certain that you have the appropriate insurance.
{Editorial note: There is a large and ever increasing fraternity of people who have suffered an RSI. Sometimes it is severe and sometimes just a minor nuisance. After suffering one myself and writing this piece, I found myself openly part of this fraternity. In any given collection of computer geeks it is always possible to find several who have suffered something. Conclusion: This is extraordinarily common among my generation.}
8 To Have and to Have Not
During the first two months of my son's life, I became acquainted with the joys of walking through our home late at night with him crying. One evening my son and I happened upon my wife's computer, inexplicably left on for the evening. It played a screen saver, catching his attention for a moment, quieting the room. I was grateful for the distraction, so we both stood there staring at the computer. Pointing at the PC, I turned to my boy and said in somewhat sanctimonious tones, "Someday son, this will all be yours."

I grinned at how silly that sounded. As bequests come and go, this PC would not be a valuable gift. The computer was already a few years old. This PC was certain to be obsolete by the time my son was out of diapers or even old enough to play with a computer. (I wonder which will come first.)

Of course, that is too literal. These thoughts of inheritance were not just about my son's access to PC technology; they were about an attitude. My wife and I intend to buy our children the latest educational software and a top-of-the-line PC when they get old enough to type their ABCs. Alas, my son's infant mind, temporarily amazed by movement on the screen, could not comprehend my sense of humor or my ambitions for him. He did not remain quiet for long.

That night illustrated something simple and important. I want my son to have access to the best things possible, including the best technology. More to the point, I want my son to be a "techno-have." This may take some explaining.

Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, February 1998.
What is a techno-have and why is it important?

Let me render the discussion concrete with a short autobiography. My wife and I, like most of our friends, are walking clichés. We are white, over-educated, middle-class, dual-career professionals who laugh at Dilbert. We use cell phones regularly in our cars, bought a camcorder when we had a child, pump our own gas to save money, and know little about how the plumbing in our home actually works. We are not engineers, but we are part of our society's techno-haves. My wife successfully employs the latest medical technology at her work. I touched a PC before any executive at IBM ever thought of making one. (Actually, I study technology markets for a living, which is pretty unusual, but let's not get hung up on that detail.) In short, we are skilled professionals and regularly use technology in our lives.

It turns out that there are millions of techno-haves in the United States. Being a techno-have is not about politics; we are Democrats, Republicans, and Indifferents. Being a techno-have is not about spirituality; some of us are religious and some are not (though in all likelihood we are not Amish). Being a techno-have is about being familiar with technology. It's about having no fear of the newest design, and a desire (and income) to explore gadgets, whatever shape they take. That cuts across many dimensions of being American.

Here is what interests me: We are not everybody. Indeed, we are probably not even a majority. While three quarters of Americans use a computer at work in some (even rudimentary) capacity, just over a third of American households have a home PC. Of this number, no more than half have Internet access from home. As a point of comparison, cable TV goes into two thirds of American homes. Further, over 90 percent of all US households have at least one TV and a telephone. In other words, for many households a computer is a luxury. It's not even in the same class as having a phone.
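To put those shares in rough numbers, using the approximate figures cited above:

$$\tfrac{1}{3} \times \tfrac{1}{2} \approx 17\% \text{ of US households with home Internet access,}$$

versus roughly two-thirds with cable TV and more than 90 percent with a television and a telephone. The arithmetic is crude, but it shows the order of magnitude of the gap.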
Does any of this matter?

Yes, this matters, but only in some subtle ways. In the short run, most households will survive just fine without Internet access at home. After all, we all know people who cannot program their VCR, but somehow make it through the day.

Also, and this is not a trivial digression, it is important to view this issue in the long run. That means we should ignore related questions that frequently pop up in the news.
For example, occasionally a reporter will write about an activist who tries to get free e-mail accounts for the homeless. Similarly, one may read about the lack of computers in inner-city public schools that are also plagued by violence and high drop-out rates. These are extreme and deplorable situations. Yet, these examples do not really illustrate the subtle differences between the techno-haves and techno-have-nots for the average Jane and Joe.

The important differences have to do with access to economic opportunity, and the rewards from economic activity over an individual's lifetime. This observation does not make it into newspapers because the issues are too subtle. Obscure government statistics bury most of this evidence. Here are the facts in a nutshell: The earnings of those in the lowest 20 percent of income brackets are moving further away from those in the upper 20 percent of income brackets. This movement has been increasing for almost two decades now. There is also some evidence that regional growth where the techno-have-nots live is slower than the growth where the techno-haves live.

Here is the provocative interpretation: Since PCs and other digital innovations are the fastest growing industries in the country, they are the main factor of growth in the higher income occupations. In other words, the wages for techno-have-nots are lower than the wages of the techno-haves. This gap has widened in the last two decades due to the diffusion of advanced information technology. Every year the techno-haves come out better. That adds up over a lifetime.

In the background are issues about the quality of life. The techno-haves may wash dishes during the summer break from college, but many techno-have-nots never get jobs outside the kitchen. The techno-haves may change the oil in their cars if they have the time, but the techno-have-nots might do it to earn money. In short, the techno-haves get access to better opportunities. While this is no guarantee for a better life — existence is far too uncertain for that — it is an advantage year after year.

The birth of the PC did not start these divisions. It is just this era's flashpoint. In previous generations, the flashpoints involved access to transportation (cars and trains), telephones (at work or at home), and other modern goods.
Are these divisions entirely bad?

While they may not strike us as democratic, divisions in economic attainment are a fact of market economies. The harder question is whether these divisions represent a temporary phenomenon or permanent problems.
Here is one reason to think it is temporary. Lead users adopt most new technologies. They are the first to buy a technology, fund its initial development, and experiment with different designs. Divisions between lead users and others are a natural outcome for a while, but eventually go away. What determines who is a lead user and who is not? Countless studies show that lead users tend to have higher income, more education, and different social networks — in other words, the same conditions that correspond with the differences between the techno-haves and the techno-have-nots. Unfortunately, this explanation also gives us one reason to think the problem could be permanent. Some people, from an early time and onward, start on a path toward better jobs, high-paying careers, and wealthier retirement. Those people, by and large, have access to the latest technical toys and tools. Their schools, parents, and initiative contribute to this outcome. In other words, many techno-haves come from our society’s traditional elite families and cultures. How long will this division last? The history of technology does not offer many clues. Some technologies, such as radio and television, diffused quickly. Other new technologies took a generation. For example, the telephone was not in even half of all American households fifty years after its invention. It did not creep into 90 percent of US households until ninety years after its invention.
Looking forward

If given the choice, I would rather be a lead user than not. It is fun to play with new gadgets and, yes, it pays off in the long run. I am going to encourage my son to do the same. I hope it will make him a techno-have.

I do not expect the divisions between techno-haves and have-nots to go away in my lifetime. But I do hope that the issue will change by the time my son has kids (if he and I should be so fortunate). If his generation eventually takes for granted the diffusion of PCs, by historical standards that would represent impressive technical progress.
{Editorial note: By 2003, close to 60% of American households had a PC at home. Also, my forecast came true. My son could, in fact, manipulate a PC mouse before he was fully potty trained.}
9 Uncertainty, Prediction, and the Unexpected
There is a common (and I think, accurate) perception that today’s high technology markets contain an irreducible amount of uncertainty. It’s easy to explain part of this uncertainty. Few people are considered experts on many technologies. Thus, most high-tech watchers are frequently surprised, disappointed, and delighted by commercial developments in fields about which they know almost nothing. That said, more than just lack of expertise affects uncertainty. Uncertainty has comprised the industry zeitgeist for decades. Many young programmers living on Internet time may not believe that, but it’s no less true today than it was in the past. Only the sources of uncertainty change in each era — the presence of uncertainty does not. Previously, observers associated disruption and instability with the introduction of, for example, notebook computers, PDAs, networked computers, laser printers, PCs, minicomputers, or a range of other technologies. If you need further evidence, consider this: Frank Fisher, the foremost market analyst of the commercial mainframe era, characterized the previous age as one of constant disequilibrium. Even the mainframe market was volatile! If uncertainty is not associated with fleeting fads or with one wave of technological diffusion, where does it come from?
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, August 1998.
Competitive environments
Every reasonably knowledgeable engineer can describe the next five to ten years of frontier technical developments in their own field of expertise. Most can do this without prior preparation for the question. All experts understand the dimensions of frontier problems because they all refer to the same prototypes of new technology. Every expert knows about prototypes from reading trade publications and going to conferences. Those prototypes may be expensive, unwieldy, and unreliable, but their existence defines key issues in problem solving.

Having said that, opinions (about details) differ dramatically from this point forward. Very few experts would dare to act so cocky outside of their own narrow field of expertise, nor would most technology experts pretend to know much about marketing a product. Put another way, in primitive prototype stages, widespread disagreement usually emerges about the appropriate mix of features, technologies, and prices that users want.

For related reasons, successfully translating a complex prototype into a product takes intelligent guesswork. It usually takes a well-functioning, cohesive enterprise, populated by experts in a variety of fields, who trust each other's judgement. It is not surprising that two different organizations and two different sets of experts translate the same prototype in different ways. The ultimate twist? These experiments are not conducted in laboratories. Performed in commercial markets, these differences among firms either multiply during competitive interaction or diminish during imitation. There are several layers to this interaction and imitation.
The difference between prediction and inevitability
For one, prediction and inevitability aren't the same thing. Have you ever sat through a cocktail party where an inebriated and lugubrious former Apple employee observes that the development of a Windows-based operating system on an IBM platform became possible only after the Macintosh's introduction? So the argument goes: since a working prototype existed in 1985, it was just a matter of time and managerial attention before somebody introduced it to an IBM platform. While sympathizing with any former Apple employee is essential — how the mighty have fallen — sympathy should not obscure how incomplete the argument is. Predictable technological developments do not necessarily imply predictable winners. In other words, even if people foresee technical developments well in advance, they can't predict which firm or product will profit from commercialization, nor when success will occur.
More concretely, in 1985 the open question was who would make a Windows operating system for an IBM platform. It could have been Microsoft, IBM, Apple, or some team of frustrated undergraduates. Most bets in 1985 were on IBM. As it turned out, only one person foresaw IBM’s vulnerability: Bill Gates. For his strategic foresight and perseverance, Gates deserves credit. Of course, he was also lucky that his competitor at the time could not design and commercialize a reasonably competent operating system in a modest amount of time. The next layer of complication arises when products work together and eventually form a system of interactive functions. To use another simple example, remember the early days of word processing on the PC? Several vendors other than word processing firms marketed spell checkers, grammar checkers, thesauri, and all sorts of tools. Many of these tools were useful additions after several refinements, but foreseeing how all the refinements would work together proved difficult. Imagine looking at a 1982 version of WordStar and trying to predict the design for a 1997 release of Word. The latter product borrows, copies, and steals ideas from all those tools invented over the years. Any prediction in 1982 would have been laughably wrong.
Ideas from unexpected corners
Competitive processes are often open to a certain form of serendipity. A technology oriented toward one set of uses may unexpectedly develop capabilities valued elsewhere. If the technology adapts well to new applications, all hell breaks loose.

The Internet's diffusion is a current example of such serendipity. It may seem hard to believe, but this level of disruption was not foreseen even ten years ago. TCP/IP technology was widely diffused in research communities by the mid-1980s, and even then most experts foresaw many commercial applications for TCP/IP. Yet most experts projected applications in text-oriented services that resembled the bulletin board industry or something similar. It looked like an interesting future, but not anything too exciting.

A funny thing happened on the way to the market. A physicist invented Web technology in the late 1980s, a bunch of undergraduates pushed the browser further in a shareware environment, and many firms tried to commercialize parts of the whole. TCP/IP hardware and software became increasingly devoted to sending pictures and icons. Now nobody knows where this technology is going from a commercial standpoint.
While it is true that Internet time seems faster, this is an artifact of its young age and the scope of the disruption associated with it. The big source of uncertainty at the outset of the commercialization of the Internet was that most firms (with the notable exception of those who sell the equipment) lost money on TCP/IP experiments. I predict the industry will settle down into regular commercial patterns as soon as somebody figures out how to make money or gain a competitive advantage with online technology. Of course, everyone will then imitate this successful business model.
Survival and planning
Commercial survival depends on planning for the unexpected. Many well-known managerial techniques do this, though these techniques are not always easy to execute. The last source of uncertainty, therefore, is a plain old managerial mistake.

The most common mistake is made by inexperienced CEOs who introduce new products without making any plans to react to other firms. There are countless historical examples, but the saddest story belongs to Bob Frankston and Dan Bricklin, the inventors of VisiCalc, the first spreadsheet for a PC. These guys did not patent their invention (a challenging feat under the legal constraints of the time, but worth trying), made it for the Apple II, but did not make it fast enough for an IBM platform, and on and on. In the end, they became famous, but not rich.

A rarer mistake involves a major product launch without a reasonable prototype for the technology. For example, John Sculley committed this cardinal sin when he agreed to link key Apple Newton performance features to a breakthrough in frontier handwriting technology. All the experts at the time said that three generations of change could not produce a technology that could adapt to the idiosyncrasies of the average user, not to mention the typical physician. Normally these types of mistakes sink a product quickly without any fuss, but Apple had a high profile at the time and the company mounted an extraordinary marketing campaign. This induced a remarkable number of people to try Apple's PDA. Then users revolted against this high-priced toy because they wanted the technology to conform to them, not the other way around.

Of course, what is so maddening about the Newton episode, especially in retrospect, is that other companies stripped their PDAs of frontier handwriting technology or employed extremely primitive versions of it. Then the concept behind the PDA sold rather well, with many other firms benefiting from the hype surrounding Newton's launch.
The unexpected is a dependable part of the landscape. Good and inexperienced firms take chances with their investors’ money. Stockbrokers and venture financiers put odds on new products and other uncertain events. Occasionally everyone makes a spectacularly wrong prediction about IBM, Intel, and, amazingly enough, Oracle.
{Editorial note: Nathan Rosenberg piqued my interest in this topic with his essay “Uncertainty and Technological Change” in the book Mosaic of Economic Growth, edited by Ralph Landau, Timothy Taylor and Gavin Wright. It is the wisest essay on uncertainty I have ever read.}
10 When Technologies Converge
Pregnancy makes expectant parents nervous. Much medical technology plays to their anxiety and medical needs. Since my wife and I have recently gone through this for a second time, I know something about it, and she fills in the holes in my knowledge when I ask. (She’s a physician.) New medical technologies let expectant parents and their doctors probe, record, and investigate many details about an unborn child’s features. Ultrasound has received much attention from commentators. In practice, most expectant parents see only the tip of the technological iceberg during an ultrasound exam. Here, I consider only that tip. More to the point, ultrasound technology motivates my discussion about technological convergence.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, February 1999.

Convergence and markets
Two products converge in substitutes when users consider the products to be interchangeable. This happens when a product develops features increasingly similar to those of other products. It also occurs when users put together common components to perform functions already performed by existing products.

Two products converge in complements when they increasingly work together better than they worked alone. This occurs when different firms
develop products or subsystems within a product that forms a larger system. The system's output can potentially exceed the sum of the output of its parts.

A modern PC performs most of a typewriter's functions. A modern cell phone performs most of the same functions as a landline phone. At a simple functional level they are examples of convergence in substitutes. Modern medical imaging equipment, such as an ultrasound or CT scanner, combines advanced processing capabilities with traditional sensor devices. This marriage of capabilities provides an example of convergence in complements.

Convergence in complements is often associated with the creation of new capabilities. Yet such creation is rarely unaccompanied by at least some convergence in substitutes. For example, the increasing replacement of the X-ray by the CT scanner is an example of convergence in substitutes. At the same time, the CT scanner evolved so as to permit many functions previously unobtainable with even the best X-ray technology.

In most markets convergence is difficult to date because there is always unavoidable ambiguity about the feasibility of specific products at a particular price at any given time. In addition, these products often are technically complex, so they produce an inherent ambiguity about what users know and when they know it. Similar issues arise in dating progress in the diffusion of new innovations through their life cycles. As such, the issues are not unique to convergence.
System and market levels
Convergence may occur simultaneously at the functional and system levels. In a particular instance, convergence may be construed as convergence in complements at one level of analysis and, equally appropriately, as convergence in substitutes at a different level. For example, an operating system (say, Windows) may be a complement to a particular hardware platform (say, an Intel x86 chip), together performing the functions of a server. At the system level, different combinations of operating systems and hardware may also perform similar server functions. Hence, Wintel-based servers converge with servers built on older platforms, such as mainframes.

In a "network of networks" system such as those linked by Internet protocols, convergence typically occurs over time due to the actions of different decision makers. This happens because the necessary scope and breadth of technical and market expertise are widely dispersed among firms and users, raising the possibility of initiatives for technological or market convergence from many corners.
For example, the Internet was first used primarily for electronic mail and later for World Wide Web applications. The first developers and users of these capabilities were academics and researchers. Developments in commercial applications, which exploded with the Internet's privatization by the National Science Foundation, came from different sources and were aimed at nontechnical users. These new capabilities replaced some old communication methods and also offered new channels for communicating. Faxing, broadcasting, and telephony using Internet protocols came later, building new capabilities onto the larger system.

Since market and technological risks in any technically evolving market are already high, particularly when capabilities are widely dispersed in a network of networks, there is an open question over whether convergence in a network of networks raises any additional strategic and managerial issues. In this respect, one factor seems particularly salient: convergence in complements on a network of networks can lead to significant discontinuities in the competition to deliver traditional products and in how we evaluate their relative performance. This arises due to the emergence of new capabilities having little precedent.

Discontinuous change alone is not unusual in a market for technically complex goods. However, such discontinuity might arise due to initiatives from many providers of complementary goods. This would then feed the perception among established companies that markets can change rapidly for unanticipated reasons. This is a market risk that cannot be reduced significantly. At best, management can be alert to changes in external conditions by making strategic investments in information gathering, in tools for tracking market and technological trends, and in flexible organizational structures. That does not, however, eliminate the risk that a garage entrepreneur may invent a piece of software that will obliterate an established firm.
Ultrasound revealed
Ultrasound combines specialized sensors with microprocessor-based computing hardware and display technology. As you probably know, the sensors emit sound waves, then record the echoes bouncing off biological structures. The computer reassembles these signals on the screen in a form meaningful to humans. In brief, this technology resulted from the marriage of routine display technology with some clever technical advances in signal processing, workstations, and sensors. If ever there was an example of the convergence of medical equipment with computing, this is it.
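To make the signal-processing half of that marriage a little more concrete, here is a minimal sketch of the echo-ranging arithmetic that underlies such imaging. It assumes a nominal speed of sound in soft tissue of roughly 1540 meters per second; the numbers and function names are illustrative only and do not describe any particular machine.

    # Minimal sketch of echo ranging, the arithmetic behind an ultrasound scan line.
    # Assumes a nominal speed of sound in soft tissue (~1540 m/s); real scanners add
    # beam forming, filtering, and image reconstruction on top of this.

    SPEED_OF_SOUND_M_PER_S = 1540.0  # typical average for soft tissue (assumption)

    def echo_depth_cm(echo_delay_us: float) -> float:
        """Convert a round-trip echo delay in microseconds into depth in centimeters."""
        delay_s = echo_delay_us * 1e-6
        # The pulse travels to the structure and back, so halve the total distance.
        depth_m = SPEED_OF_SOUND_M_PER_S * delay_s / 2.0
        return depth_m * 100.0

    if __name__ == "__main__":
        for delay_us in (13.0, 65.0, 130.0):
            print(f"echo after {delay_us:5.1f} us -> structure ~{echo_depth_cm(delay_us):4.1f} cm deep")

Everything else a scanner does, from steering the beam to painting the image in real time, is computing layered on top of this simple conversion, which is why the device is such a clean example of convergence in complements.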
Despite its apparent novelty, ultrasound is a relatively mature technology. In most offices a "technician" operates the ultrasound equipment, which is moderately expensive. It's so easy to use that the technician only needs a short period of training to learn how to get the equipment to produce a meaningful image in real time. Like much training in medicine, the basics don't take long to learn. Most of the training time focuses on preparing the technician for rare, but terribly important, non-routine events.

Ultrasound has become a routine experience for modern parents-to-be. Like magic, out comes an image of the baby growing inside the mother (and every image looks like the alien inside Sigourney Weaver). Next comes the agonizing debate about whether to have the technician reveal the baby-to-be's gender. You might even hear comments from expectant grandparents about how this wasn't possible in their day.

Three desires usually motivate our using ultrasound: a picture for the parents, pictures for the grandparents, and clues about gender. However, a fourth, less prominent, motivator is medical necessity. Ultrasound reveals information about things such as the heart, kidneys, bone structure, and even the amount of amniotic fluid. The technician routinely checks all this, and if everything looks fine, says nothing. If there are problems, the technician alerts a doctor.

Ultrasound exemplifies a useful convergence of medicine and computers, hence its development. This extraordinary tool assists in one of life's most basic and nerve-wracking experiences. In fact, convergence in various forms has found its way into many routine areas of our lives. That ubiquity makes it worth trying to understand the moving parts that go into convergence.
More than technological determinism
As noted, convergence may arise, in part, due to changes in key technological constraints. These could be increases in computing capabilities, reduced cost of data transmission, and technical improvements of integrated circuits. As shorthand for describing convergence at a system level and for making predictions about market developments, analysts often make a sweeping generalization and ascribe causation for convergence to a very few technological trends. This shorthand can carelessly become an incomplete theory of convergence with strong elements of technological determinism. The development of new technical opportunities must play a role in any analysis of the computer industry's history. Yet the development of technological
opportunities cannot provide, by itself, much of an explanation for changes in a firm's behavior, changes in buyer choices of vendors, and changes in the locus of profitable opportunities.

A subtler problem with technologically deterministic arguments is that they may also fail to point to non-technical bottlenecks to further developments. For example, many worldwide communications and broadcasting systems involve significant government regulation of partially or wholly monopolized communications markets. This means that the rate and direction of convergence in networking applications often depends on critical government decisions. The historical emergence of convergence in complements in wireless technology in the United States depended on rules governing the development of analog cellular and digital wireless applications over the publicly governed spectrum. Similarly, convergence in substitutes in alternative modes of voice transmission depended on rules opening market segments, such as long distance telephony, to entrants using technologies other than traditional landline-based facilities.

The next time you hear about convergence, consider carefully. Is it about putting things together or about replacing the old with the new? Is it about a single product or about sweeping changes to an entire system or market? Focusing on the questions' scope and precision will help you understand the market factors that organize the evolution of change.
{Editorial note: Our first daughter was born in January 1999. We went through many ultrasounds. This essay was inspired by one of those office visits.}
11 Forecasting Commercial Change
Why is so little in the commercial world as dependable as Moore's law? Doesn't that seem odd? Why is it that a savvy engineer can forecast the rate of technical change, but it is impossible to find a market analyst who can correctly (and reliably) forecast anything about market events? Three very important facts shape this pervasive uncertainty. They are well known, though they are so obvious that nobody thinks to comment on them any longer.

Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, June 1999.

Fact 1: Information technology comprises a stunning variety of distinct technologies — much more than just the microprocessor. These technologies define the technical frontier. They include hardware, software, networking and communications, digital and analog systems, operating systems, operations software, tools and applications, communications software, central switches and PBXs, mainframes and microcomputers, storage devices, input devices, routers and modems, TCP/IP-based technologies, and proprietary and open standards, among others. In addition, a very wide variety of technical specialties, kinds of IT firms, and modes of invention help to advance these technologies. This variety means that simply characterizing the rate and direction of technical progress in IT is not a trivial activity. Rates and directions differ across products and components. Frontier technology may involve new products or processes, combinations of existing ones, retrofits on vintage components, or new systems of interrelated components.
Fact 2: Users adopt new IT for different reasons. In many applications IT advances are not primarily aimed at reducing costs. Often the use of new IT permits improvements in the quality and reliability of products or, especially, services. Furthermore, frontier IT frequently enables the invention of entirely new services and products, which some users value and others avoid. At the firm level, these new services may provide permanent or temporary competitive advantages. When the new services are reasonably permanent, the firm may see returns to the investment in the form of increases in final revenue or other strategic advantages. If all firms in an industry imitate a new product or service, it quickly becomes a standard feature of doing business in a downstream market. The benefits from the new technology are quickly passed on to consumers in the form of lower prices and better products. In this case, the benefits to a firm do not appear as a revenue increase, but they exist nonetheless in the form of losses the business avoided.

Fact 3: Adopting complex IT also involves a variety of investments by users and vendors. Many of the highest-value uses involve more than simply buying and adapting market IT capital goods. Complex IT is rarely sold as a turnkey system. More typically, it involves taking general technology and adapting it to unique or complex circumstances for which it may or may not have been designed. Users often contribute substantially to this adaptation process.

These last two factors mean there is no direct relationship between investment in IT and productivity. More generally, the future does not arise solely because something is invented; it also comes from the adoption and adaptation of technology by users. The point is not to debunk the importance of invention, but to increase our understanding of the unpredictable factors that influence the flow of services from new technology. More to the point, changes to the flow of services evolve slowly. Only after the passage of time, and the gradual accumulation of many incremental improvements in processes and outputs, does a dramatic change result. It takes time to translate an invention into a viable commercial product. Business models must be developed and new distribution channels created to spread the invention geographically from its region of origin, one set of users must learn from another distinct group, and so on.
Waves of IT advances
Due to its complexity and variety, there is not one adoption pattern for characterizing all IT, nor are patterns necessarily similar to some important
historical episodes of diffusion, such as those of radio, television, and the automobile. It is tempting to think of IT as the simultaneous diffusion of several tightly coupled, interconnected technologies, each with an adoption curve strongly dependent on the other. This has been the pattern for, say, the interrelated development of airframes and jet engines, but this model too is deceptively simple. In reality, new waves of IT invention set off other new waves of IT invention by users, and each wave has its own diffusion curve of adaptation and adoption.

For example, the invention of cheap fiber optic cable, one of the key elements in the communications revolution, did not immediately change the capability of phone service nationwide. Performance and features changed in fits and starts, as digital-switching technologies, repeaters, and software that increased fiber's capabilities were developed and adopted. Economic value changed slowly too, because new fiber networks brought about new services from phone companies and, more importantly, investments from users in digital equipment. These new services and new investments could only be built, tested, and marketed after the underlying infrastructure improved.

Similarly, such important contemporary technologies as the World Wide Web and enterprise resource planning (ERP) have set off entirely new waves of invention. The Web is inducing a great deal of new application development. Along with TCP/IP-based technologies, new business models are emerging for delivering and using data-related services. Similarly, the unification of distinct systems associated with ERP is permitting a new wave of IT and business control. These changes are not merely the tail end of a diffusion curve that began long ago; they represent a renewed process.
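The diffusion-curve language used here has a standard textbook formalization: the S-shaped, or logistic, adoption curve. The sketch below is a minimal illustration, with parameter values that are purely hypothetical rather than estimates for any technology discussed in this essay, of how two overlapping waves can each follow their own curve.

    # A minimal sketch of the S-shaped (logistic) adoption curve commonly used
    # to describe technology diffusion. All parameter values are illustrative only.
    import math

    def logistic_adoption(t: float, midpoint: float, rate: float, ceiling: float = 1.0) -> float:
        """Fraction of potential adopters using a technology at time t (in years)."""
        return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

    if __name__ == "__main__":
        # Two hypothetical waves: a fast early wave and a slower follow-on wave
        # that starts later, echoing the idea that each wave has its own curve.
        for year in range(0, 25, 5):
            wave1 = logistic_adoption(year, midpoint=8.0, rate=0.8)
            wave2 = logistic_adoption(year, midpoint=15.0, rate=0.4)
            print(f"year {year:2d}: first wave ~{wave1:4.0%}, second wave ~{wave2:4.0%}")

The point of the exercise is not the particular numbers but the shape: a second wave that starts later and climbs at its own pace is not the tail of the first curve, which is exactly the claim made above about the Web and ERP.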
Why forecasting is difficult
When a new wave advances, it enables applications that have no historical precedents. So, today's users of a new technology find it difficult to imagine or estimate the future demand for complementary products arising out of future inventiveness. Even if early versions of a general-purpose technology have partially diffused to leading adopters, whose inventive activities have been carefully observed, it will still be difficult to forecast the future population of adopters. They will be using the technology when the prices drop and the capabilities expand, and may have different characteristics and needs from the first users. History is full of such examples. Among the best known are those in which early users and industry leaders badly misforecast future demand.
For example, in the US cellular phone industry, leading industry experts at AT&T and at the Federal Communications Commission vastly underestimated the demand for inexpensive cellular-based mobile communications. The consensus of many experts was shaped by their observation of mobile phone use over radio bands. The largest users of radiophones were ambulances, taxis, and wealthy real estate agents. As it turned out, this was hardly a representative group of users for predicting the adoption pattern for cellular phones as prices declined.

In another well-known example, IBM's management vastly underestimated the demand for inexpensive personal computing. Again, this was quite understandable in historical context. Even at the world's largest and most commercially successful computer manufacturer, it was difficult to foresee the character of the demand for low-cost personal computing technology by extrapolating from the demand for high-cost, centrally managed computing in minicomputers and mainframes. The former had an independent software industry developing many custom and shrink-wrapped applications, and the latter had the manufacturers controlling the supply of both hardware and software.

It is easy to bring the examples forward to recent events. It is very difficult now to forecast even the qualitative nature of the demand for inexpensive, capable long-distance networked computing applications. Forecasters can look at earlier experiences with inexpensive computing (PCs and workstations) and with expensive and difficult networked computing applications (NetWare and Electronic Data Interchange). However, this hardly represents the cost conditions and economic opportunities that future users will face after the deployment of extremely inexpensive computing capabilities and low-cost, high-bandwidth fiber and wireless communications technologies. These deployments will induce (and already have to some extent) the entry into this market of thousands of firms trying to solve previously nonexistent problems. The early users of TCP/IP were scientists and engineers, primarily in higher education and laboratories. These user groups engaged in inventive activity, to be sure, but the issues found in an engineering setting differ significantly from those found in today's business setting.
Epilogue
These observations are about more than forecasting under conditions of extreme uncertainty; they also relate to the central role of market behavior in producing that uncertainty and resolving it. Investment and use differ over time and are associated with different economic goals. The final output from organizations that use IT may also change over time.
Some changes may generate new revenue; some may induce the entry into the market of new firms with business models using the new IT in a radical way; and some may induce a market exit. The key features of the final output of the new IT may, therefore, change radically over time. This makes measuring the information economy difficult, to say the least. It makes comparisons across countries almost impossible. So, we’ll have this topic with us for a long time.
{Editorial note: This essay was inspired by research with Tim Bresnahan, originally written for the Organization for Economic Cooperation and Development. It was called “The Economic Contribution of Information Technology: Issues in International Comparisons.” A later version appeared as Tim Bresnahan and Shane Greenstein, “The Economic Contribution of Information Technology: Towards Comparative and User Studies,” Journal of Evolutionary Economics, Vol. 11, pp. 95–118.}
12 The Tape Story Tapestry: Historical Research with Inaccessible Digital Information Technologies
We all recognize that digital technology has lowered the cost of storing data and helped produce an increase in the amount of research data available. Yet, the invention of digital technologies alone was not sufficient to make the data useful. In the first place, stored data has to be inherently "useful" for answering a worthwhile research question. Second, just because information is "less costly" to store does not necessarily make it more "accessible," especially for future research efforts.

Source: © 2003 IEEE. Reprinted, with permission, from IEEE Annals of the History of Computing, Fall 1991.

This essay recalls a shaggy dog story. It has two simple observations, neither of which is widely appreciated. First, if machine-readable data is to be made accessible to future generations, then a number of complementary activities must be performed, many of which we all take for granted when using other storage media, but which do not exist for machine-readable data. Second, digital technology makes it incredibly easy to destroy what could not have been gathered without its invention in the first place. While digital technology makes the storage of more information less costly, it also makes the misplacing, erasure and elimination of historical data less costly. The result is that much useful information stored in machine-readable form is becoming lost to future generations of researchers. Often when the original purpose of private data is fulfilled, the user has little incentive to protect the data for historical purposes. When data is placed in the public domain, if it is placed in the public domain at all, it is not organized with
the appropriate complementary aids. Hence, despite all the formal mechanisms designed to help researchers retrieve original archival sources, researchers find that unsystematic informal communication is still essential for retrieval of digital information that is not kept with historical interests in mind.

I personally experienced the consequences of these problems in the spring of 1987 and was fortunate enough to succeed despite them. In August of 1987 I received thirteen computer tapes in the mail. These were the last remaining copies of data recorded between 1971 and 1983. The hunt for these tapes is worth describing because the data has great research value. The story contains a number of improbable events that led to success — including personal contacts, chance encounters, near misses, 11th-hour rescues and a maze of government agencies. Like any decent mystery, this story also has its share of heroines and heroes, many blind alleys and red herrings, and an ending full of lessons for the wise. Yet, in the end, it is those lessons that make the story worth the telling.

The tapes were obtained not through the use of a special index, not with the aid of some of the best reference librarians on the Stanford University campus, and not as a result of a formal records management system. All those mechanisms failed because the information never was properly stored for future generations, or more to the point, the historical value of the data was never recognized. The required documents were obtained because I was fortunate enough to contact a number of people who were associated with developing and using that information and who were kind enough to help with my search.
The unwinding of the tape story
Since the late 1950s, when it was a relatively easy task, up until the present, when it has become nearly impossible, one Federal Government agency or another has tracked the state of the entire government's computer equipment inventory each year. The reasons they did this, and consequently the form of the collection, changed over time, but the basic task did not. Each year every federal agency and department completed a survey designed to record and categorize the holdings of computer systems. The oversight agency tabulated and published a summary of these answers. The sum of these surveys constitutes a federal inventory of Automatic Data Processing (ADP) equipment. Each year's inventory is complete, carefully assembled, and informative.

The inventories provide an unprecedented record of the federal government's involvement with information technologies. For example,
they document the growth of total federal government holdings from 531 "processors" in 1960 to 18,474 in 1982. The inventories show the significance of the government's early involvement in purchasing computer equipment when the industry was young and technologically immature. To the knowledgeable observer, they provide a record of system configurations that track the changing technological norms of computer equipment use in the country. Later inventories contain evidence of the commonly held perception that the government's systems have aged, heavily burdened by older investments that were not replaced as rapidly as in private firms. In other words, a collection of these inventories provides a historical record of the purchasing and management patterns of the largest buyer of computer equipment in the United States. It is a valuable tool for marshalling evidence of the technological history of computers and for studying developments in the industry as a whole.

The inventories were recorded electronically. Each year the complete inventory was recorded, updated and disseminated on machine-readable magnetic tape. In addition, books containing parts of the information were published and distributed to a limited number of buyers throughout the country, as well as government depository libraries. None of these details were known to me when this project began, but they are essential for understanding my efforts to retrieve this data. As is explained below, the books were relatively "easy" to find if you knew where to look. In contrast, the existence of any copies of the tapes was much in doubt throughout my search. Much of my effort focused on finding copies of these data tapes.
Serendipity in the archives
As a matter of fact, my involvement with the government's use of computer equipment was rather serendipitous. I was a graduate student in economics at Stanford University, and an apprentice to the world of research. I had chosen to become a specialist on economic issues in the computer industry. During my third year of graduate school I often found myself in the Johnson Library of Government Documents in Stanford's main library doing research for my thesis advisor. It was a great place to do research because I had learned through frequent visits that it served as a haven for some of the best reference librarians and reference materials on campus. Perhaps not by coincidence, it also tended to be where most of the largest and most important statistical tables were stored. While wandering through the stacks one afternoon, I found some infrequently used soft-bound books, each about the size of a telephone book,
resting in a corner. That these books were used infrequently was obvious: the binding cracked when I opened each book, and it was still possible to smell the scent of ink on the pages. The books, it turned out, were the published versions of the inventories of computer equipment owned by the federal government, described above. They included not only aggregate summaries of government holdings, but detailed descriptions of the processor equipment: its type, make, and manufacturer, and the office and geographic location at which each model was held. Two more days of systematic searching showed that the library contained not just one book, but almost a complete set of inventories from 1960 to 1983 spread across four locations on the stack shelves (the series ended in 1983 because legislation changed the nature of data collection and halted publication). This data provided detailed descriptions of the holdings and changes in the holdings of the federal government, the single largest buyer of computer equipment in the United States. This data could support more than just one research paper; it could support an empirical research program for years.

In time more supporting documents were found. As the historical record began to grow, I wondered how it all got here and why no economist had ever used it. Stanford's role as a Federal Depository library partially explains it. All Government Printing Office publications are supposed to be deposited there. Since such depositories are not too common, only a small number of researchers have ever had access to these books, and even if they did, they would have had to know what to look for.

The second part of the answer is contained in what might be called "the necessity for assiduous archivists." That is, there is no substitute for a documents librarian who diligently collects and saves every conceivable document, even to the point of filling the shelves with material of no apparent present value. It must be done indiscriminately because it is not obvious today what the historian of tomorrow will find of value, and it must be organized so that arbitrary documents can be found. Joan Loftus, government documents librarian for the Johnson Library, her colleagues, and the librarians who preceded her are just such librarians. They are the people responsible for placing and preserving the printed inventories on the shelves. As a consequence, Ms. Loftus becomes the first heroine of this story.

Of course, everyone takes for granted that assiduous librarians serve a useful purpose when information is stored in books. They not only collect, but also organize and categorize those books for us (indeed, they are generally not indiscriminate in their efforts and often specialize their collections, but this usually comes to good effect as well). Yet, digital information presents several new challenges when applying this old principle. As this story will show, participants in the age of machine-readable
information have not yet made a conscious effort to find room for librarians like Joan Loftus in data processing center libraries, with possibly terrible consequences. If this generation wants to preserve and organize computerized information for future generations, it would be foolish not to have someone like her there in some capacity.
Searching for electronic records
Close inspection of the printed data, especially the summary tables, led to the first disappointment. The published inventories were incomplete, setting down on paper but a fraction (perhaps a third) of the available information in any given year. It was obvious from summaries contained in the printed books that the remainder of the inventory information had originally resided on a computer data base at the government office that collected and assembled the inventories. That office was the General Services Administration (GSA).

I first spoke with GSA employees who had worked with the current (1986) version of this inventory data. They were generally helpful and friendly. They were accustomed to talking to computer firms who wanted information about current machine-readable files. Vendors use this information to direct marketing and sales efforts. To support this function, GSA employees knew how to get the information about every office in every federal agency. They could give a complete list of an office's present processor and peripheral holdings. Indeed, if I was ever willing to buy this information, no matter what I wanted to do with it, they would be delighted to provide it.

After our initial interactions, my questions quickly left the familiar territory of GSA employees and entered a zone of talk sans communication. I could hear the eyes of the GSA liaison glaze over whenever I used the word, "history."
• "Historical data?" she said, "Why, the old inventories were thrown out when we moved offices."
• "Old data tapes?" she said, "Why, we just update our data base every year. We don't keep track of any of the changes. We don't keep backups from past years. The government only needs the present year's data."
• And "What happens to old versions? No one knows. Why, most people have only worked in this office a few years."
Participants in history are not conscious that they are making history and as a consequence, do not make provisions to preserve records of their actions. Clearly, government agents in charge of developing information for government use are not an exception. The government was apparently
in the business of selling its information and using it for its own purposes. It was not in the business of monitoring that data's change over time. This attitude had a straightforward implication for my search. Because historical information had no direct use for government employees, the current year's tape contained little data of pure historical relevance. The present data supplemented little of what was published in the older inventories. What was of historical interest was preserved only by accident.

Not surprisingly, automation has exacerbated the pattern. Computerized data tables provide an insidiously easy medium for updating and deleting. As a consequence, researchers cannot usually infer, from the state of the table in the present, the path taken to that state — unless the table is designed to record explicit changes in the data base. Hence, this data base of computer inventories was not designed to keep track of what changes were entered, because the table-keepers who designed the inventories had their own interests in mind and not the historian's. This was much like getting the running score of a basketball game without any information about the path the game took to that point. Virtually all the useful historical information had been eliminated because it was so easy to do so.

How does this problem usually get solved? In past eras, all of the information was stored in books, and several years of books would allow researchers to isolate important changes across time. In this case, however, the remaining books that had been kept were not quite up to the task. A complete printout of the data tapes from the past years would also supply historical information, but the printouts were incomplete, and thus, inconclusive. Nobody thought to send each year's tape to any Federal Depository (or its equivalent for machine-readable data).

Several other characteristics of the published books motivated me to continue to search for the data tapes. What existed in published form was extensive, containing more information than most applied researchers ever expect to find, but it was less than 40% of all the information that was originally collected. As already noted, there was a great deal of data on the tapes, especially about system configurations and peripherals. In addition, the published form was expensive to analyze. How was the relevant data going to be entered into a computer for statistical analysis when the source documents were as thick as telephone books, and the cost of putting the data in machine-readable form for statistical analysis was certainly more than I could afford?
Private incentives to keep historical data
My investigation to obtain the information in machine-readable form branched in many directions. One stop along the way was the International Data
Corporation (IDC), a well-respected private firm that regularly surveys the computer industry. Maybe they purchased and kept old copies of the tapes? Since the data was non-current, I naively thought, why wouldn't they mind lending a copy to a purely academic project? In pursuit of this avenue, I wasted the time of a helpful and gregarious — yet anonymous — Washington IDC representative, the second heroine of this story. In truth, the conversation wasted her time, but not mine. She said that if IDC kept such tapes, it was at the head office in Framingham, Massachusetts. She gave me the number of the person who would know the answer. Oh, and incidentally, she suggested that I might get a good overview of some issues in the area from a recent publication of the (generally respected) Office of Technology Assessment (OTA), a congressional research office. They had just published a book with the odd title "Federal Government Information Technology: Management, Security, and Congressional Oversight."

Her first suggestion turned out to be one of many time-consuming false leads on this hunt. "IDC no longer collects or sells such information," said the man in Massachusetts who had no time for me, the nonpaying supplicant. That answer should not have surprised me. People in the business of selling information about an industry have no reason to care about historical data unless there is a market for it. Hence, they are not likely to keep historical information. On the other hand, if the business did keep historical information and the information was valuable to someone, then they would make certain that they made a profit from selling such information, whether their customers' interests were academic or otherwise. In that case, the information would cost money. Therefore, because IDC was in the business of keeping industry information that could be sold, it had no reason to preserve historical data bases or information of purely academic interest. Moreover, even if it did keep information, it would have no compelling reason to disseminate it through public libraries. Indeed, it would have every incentive to keep such information proprietary.

The irony is that the people best at collecting information and analyzing it — after all, it is their job to do so — do the worst job at preserving it for historians. If it is valuable in the marketplace, then it is usually too expensive for academics. If it is worth nothing in the marketplace, as is typical for much historical information, it won't be disseminated, or even kept.

In an elementary economics course, IDC would not be vilified for its focus and practices. Instead, the above anecdote would be used to illustrate the conceptual difference between the social and private incentives firms face when providing a public good, historical data in this case. In the classroom economists would say that the private incentives for collecting
historical data were less than the social incentives. In a classroom this observation would lead to the following simple policy prescription: since no private firm has incentives to provide historical data that society values, a benevolent government-sponsored agency should do so.

I already knew that the simple prescription had not been followed. Because the inventory has value to vendors who sell to the government, federal employees had adopted an attitude similar to IDC's. The government did not keep historical data, not only because employees were myopic in their database's design and updating, as mentioned above, but also because they could not regularly make money from selling historical information, even if they could conveniently collect it. Has the whole government lost its mind? I thought. What an attitude for the government employees to take! Fortunately, it would turn out that some government employees did have a bit more foresight.

The other recommendation, the OTA publication with the odd title, opened up another avenue to pursue. The OTA booklet — it will come as no surprise that this was on the Stanford government documents library shelves — contained an extensive and current bibliography, with references to much of the important recent work, from government and academia, written on federal automatic data processing (ADP). One among these many citations caught my attention. A five-year-old special report from the National Bureau of Standards included in its title the phrase "Federal ADP: A Compilation of Statistics."
The title held promise
The actual publication and the author's name were found (on the government documents library's shelves, of course). Sure enough, the author had used the GSA data base quite extensively in the development of her compilations of statistics and their comparison with private firms' holdings. At the time I had hoped that the writer, Martha Mulford Gray, could be helpful in locating other copies of this data. In secret I had hoped that she might actually have the data, though I also suspected that such an outcome was unlikely. It would have been too easy, a departure from the patterns so far.

Ms. Gray's old office was identified in the front of the publication. The federal telephone directories indicated the office still existed. I prepared to call all the numbers in that office until I reached someone who knew her. So I chose an arbitrary phone number and asked for her. "Oh, why you have the wrong number," the first voice in Gaithersburg, Maryland replied, "her number is …" Thus, I located the third heroine of this story, Martha Mulford Gray.
From the first time we talked to the last, Ms. Gray has been the type of government contact every researcher dreams of finding. She knew almost everything about this data base, and where to find the answer if she didn't have it. As it turned out, from 1977 to 1982, she had written four analyses of the compiled statistics, only the last of which had been cited in the OTA publication, and she had a good sense of the data's accuracy and limits. This would be the first of many times that personal recollection of the data would aid my understanding more than did printed matter.

Yet, future generations will not have the luxury of being able to talk to Martha Gray about this generation's computer tapes. This luxury illustrates what might be called "the necessity of informal contacts." There is no substitute for talking to someone who has lived through the event you are studying or worked directly with the data you need. These people often know the solutions to anomalies and understand the idiosyncrasies of their data. In other words, there is no substitute for someone who really knows what is going on.

Martha Gray had taken an interest in the federal computer inventory partly as a result of an independent political occurrence. The 1965 public law 89–306 (The Brooks Act) had requested that the agency study and provide technical recommendations to other government branches about federal automatic data processing. Because of this legislative directive, Martha Gray had produced four studies several years ago — today she researches another area and no analyst at the National Bureau has taken her place. This was not the last time that someone else's completely independent actions would have direct consequences for the storage of this data. In fact, another such action was about to become apparent.

The first time I inquired about the existence of the tapes, the answer from Maryland went something like this, "Hmm, we used to have some copies, but were going to get rid of them. Maybe we can just give them to you." If Martha Gray had actually possessed the tapes, this would have shortened this story's length and lessons. But, when we discussed it again the next week she said "I have some bad news for you. The tapes were thrown out last month. They were running out of storage space in the basement, so they asked to get rid of them. Isn't it always that way? As soon as they get rid of something, someone always wants it."

This hunt began in March of 1987. The last existing copy in the federal government of virtually the government's complete holdings of computer and peripheral equipment from 1972 to 1981 was thrown out in February of 1987! Even the office that printed the old versions no longer holds any copies. Stanford, and perhaps several other federal depositories with compulsive librarians, probably have the only remaining complete
collections of published materials. And some of this data was less than five years old!

After I explained the hunt so far and its goals, Ms. Gray realized that the tapes in the basement probably had been the last existing copy of the data base in the government. Then she got upset. She had spent a good chunk of her professional career doing painstaking analysis of that data, she lamented, and she didn't like the thought that it no longer existed.

What happened at NBS suggests that two factors affect the preservation of such data files. First, a person with a sense of a dataset's value (sentimental or economic or otherwise) will tend to preserve it. However, as soon as physical control of a dataset is given up by the original user to someone else, the probability of destruction becomes much larger. Second, material is put in archives by people who think they are making history, not necessarily by people who quietly go about their business. Thus, those who value history can never start too soon collecting data, or making an effort to preserve material that will be of obvious use to future researchers, if the assiduous librarians are not there to do it instead. Martha Gray took the episode as a lesson. She vowed to start sending materials to the National Archives or Charles Babbage Institute.

Ms. Gray generously sent me her own printouts of parts of the database for 1971 to 1979. These printouts, along with published sources, resulted in a data set covering approximately 80% of the original material collected on processors for those years. Her printouts and the published sources together constituted about 50% of the original data base for the 1970s. She became very interested in my data hunt and had a splendid idea for a new lead. She suggested contacting several private firms to see if they still had copies of tapes they bought annually in the past. She was certain that some large firms had purchased those tapes.
The search focuses on IBM
Her idea looked like a long shot. Why should a buyer keep the information after buying a new inventory each year? I followed through on the suggestion, despite my pessimism, because it was my last chance. This resulted in unsuccessful inquiries with archivists and librarians whose institutions had large interests in the inventory — at Digital Equipment Corporation, the Department of Defense, again at IDC (because one of my dissertation advisors had received information suggesting that they might have the data), and at the IBM archives located in Valhalla, New York.
The inquiry at IBM met with particular skepticism from friends who had followed the search to this point. How does one ask for this information over the phone from IBM, a corporation that learned from frequent litigation to play its cards close to its chest? One friend sarcastically suggested the following question: "Excuse me, would you happen to have any information on a set of government tapes that might help me demonstrate that IBM had market power?" (There is no doubt that the question would have received attention from the other end. However, I had not intended to focus on whether IBM had market power. At any rate, my later research with this data suggested that IBM's market power with the government was limited.) The irony of this was apparent to me: just as IBM's unanticipated actions dominated the thoughts of so many firms in the industry, I too would soon find my future hinging on the capriciousness of decisions made at IBM.

At this point, I had given up hope of locating the tapes and began planning to spend several weeks keying a small subset of the data from the published inventories into the economics department's computers so that rudimentary statistics could be compiled and checked against Martha Gray's original compilations. Even while I made plans to use the limited information available to me, I decided to pursue each possible angle in the hunt to the end, propelled, in retrospect, by what might best be described as an irrational obsessiveness. After chasing down several red herrings that I won't describe here, the search focused on one last possibility: that IBM had the tapes. Following this avenue to its logical end is what led to the most improbable stretch of an already improbable sequence of events.

The archivist in Valhalla gave me the number of the IBM Library in Washington, an office run by Ms. Nan Farley, the fourth, albeit reluctant, heroine of the story. In our first of many conversations, Ms. Farley did for me all she ever did for me, though it was more than enough. In response to the tape question, she replied effusively that IBM had bought tapes of the government's inventory in the 1970s and had entered them as evidence in the antitrust suit brought against IBM in 1969. She knew this to be true, she continued, because she had been the representative for IBM who annually traveled to GSA to pick up the tapes. Yes, IBM had bought the tapes, she assured me, but it was anyone's guess what had happened to them after they were entered as trial exhibits. Of course, it was sheer luck that I had happened to talk with virtually the only person in the country who knew that the tapes had existed, that IBM had owned them at one point, and that they had been entered in the trial as evidence.
The trial record involved literally hundreds of thousands of records, and as a consequence, it is quite impossible to locate anything without exhibit numbers. Ms. Farley said she would try to find me IBM's exhibit number for the tapes. It was still a long shot, but why not try it, I thought. So I waited for her return call, and phoned her periodically to remind her not to forget me. My patience ran out after a month of these phone calls.

Perhaps, I thought, there was another way to get that exhibit number, such as directly from accounts of the trial or from the federal district court in New York. Perhaps the tapes were odd enough that someone familiar with the trial record would recognize them right away. As with virtually everything at this stage of the search, it was worth a shot, even though it was a long shot. I first consulted the two computer industry histories that based much of their analysis on information in the US v. IBM trial exhibits. There was no mention of the federal inventory tapes in either of them (Folded, Spindled, and Mutilated: Economic Analysis and US vs. IBM by Fisher, McGowan, and Greenwood, or IBM and the US Data Processing Industry: An Economic History by Fisher, McKie, and Mancke), so I called the federal court in New York.

A few awkward calls and a few unbelievable days of uninterrupted busy signals finally resulted in a discussion with a clerk of the court who was familiar with the IBM record. She had one of those fascinating Brooklyn accents that swallow the vowels somewhere between the throat and nasal passages, and I had a hard time concentrating on her words as a result. The attitude, however, was definitely that of someone who had lived in New York all her life. The confidence of her answer seemed to finish the search. The court had kept three rooms full of documents, she asserted, but when the trial was dismissed all non-paper evidence was sent back to the lawyers. The court most certainly did not have any tapes now, even if it once did, which she could not recall.

The search then shifted to the defendant's lawyers. Even if the tapes once existed, the prospects of retrieving them from the law firm that defended IBM against the government seemed low. There was no good reason for this suspicion, but nothing in the momentum of preceding events gave me cause for optimism at this point. The law firm of interest was Cravath, Swaine & Moore, and it had quite a reputation. This was the law firm that strategically used information to successfully defend IBM against the government antitrust lawsuit. The firm's strategy involved an ironic tactic: it entered as evidence so many internal IBM office memos that it overwhelmed the government's ability to process and organize hundreds of thousands of pages of documents. If the tapes were entered as part of this tactic, I conjectured, then they would be of no real value to the lawyers after the case
was dismissed. Hence, they should not mind sharing the tapes. Unfortunately, this conjecture also implied that the lawyers might not have considered the tapes worth saving after the trial.

Faced with this situation, I reviewed what I knew: (1) Because IBM and not the government entered the data, someone at the law firm probably saw no harm in the data at one time. (2) Even though the data was pretty harmless to IBM, there was still no reason to believe the lawyers today thought so. (3) Even though the data had historical value, there was no reason to believe that anyone at IBM thought it was worth saving once the trial was done, unless, perhaps, there was some sort of paranoia at IBM about never throwing away past trial evidence, no matter what its apparent value.

To repeat, the assignment now was to trace what happened to the tapes once they passed to the lawyers' control after the trial. Since firms that have been through multi-million-dollar antitrust suits are usually reluctant to surrender their proprietary information, finding the tapes would require getting inside the law offices and the corporation. However, there was no conceivable way a stranger like me could call IBM's law firm and hope to get the kind of cooperative response needed to resolve the issue.

Thus, at the end of the trail, burning to know one way or another whether these tapes still existed or had been destroyed, and not very optimistic to boot, I took the actions of a desperate man. I got in contact with the only man in the nation who could possibly find those tapes. That man was Dr. Frank Fisher. He is the last hero of this story.
Unlocking the archives at IBM

Professor Fisher is no Lone Ranger, nor any other masked superhero with a cape. He is a well-known and widely respected professional economist and MIT professor, who happened to be IBM's chief economic witness in the antitrust trial. He prepared many of IBM's economic arguments and has published, along with others, the two books mentioned above and a series of articles inspired by the issues in this case. He is a large figure in the economics of the computer industry. Any work about the market must be certain to review his writings. If there was anyone in the profession who could open a door at the law firm or at IBM, it had to be Frank Fisher.

So then, how does one go about asking a well-respected researcher, though a total stranger, like Frank Fisher for a favor? It just so happened that Professor Fisher had attended Harvard as an undergraduate and had graduated summa cum laude at the same time as Paul David, a future member of the Stanford economics faculty. Professor David is a hero
to me in his own right, though in another story only loosely connected to this one. It also just so happened that Professor David had been following my tape hunt and had an interest in my work. More importantly, he sat on my dissertation committee. Professor David had told me that his old friend and I should talk about issues in the computer industry. He had frequently stated that he could ask his friend for help if IBM's cooperation was needed for something I was doing. I had always hesitated at this suggestion in the past because there had not yet been a concrete task that I could not, in principle, do myself. So when I reached the end of my options, as was now the case, it finally seemed appropriate to ask Paul David to ask Frank Fisher for a BIG favor, one that required a bit of effort from Dr. Fisher.

Professor Fisher's end of the story is also interesting and amusing, involving several telephone calls through the law office and across divisions of the corporation to the litigation chief, Eugene Takahashi, who had to give final approval. Anyway, the bottom line is this: Professor Fisher made a direct hit. IBM not only held copies of tapes for most of the years of interest (1967–1979, 1983), but was willing to donate copies of them to Stanford so I could do my research. This last outcome defied all the logic that had shaped the character of events up to that point. Why had IBM archived these tapes (back to 1967?) when they seemingly had no real historical value for IBM? Why had IBM become so cooperative on this request when it had a reputation for the opposite? Professor Fisher really proved to be something of a trump card.
After information is stored, will anyone be able to retrieve it?

The transactions with IBM from here on out went so smoothly that they stand in stark contrast to every event that preceded them. Professor David put me in contact with one Tony Bouchard, who worked with Eugene Takahashi. He knew about the tapes' location and some of their technical features; Martha Gray knew about the rest (informal contacts again). We worked out the transfer of a copy to Stanford in about a month's time, a virtual record for speed if such speed records are kept. In total, the tapes arrived five months after the government inventories were originally discovered on the shelves of Stanford's government documents library.

For all intents and purposes, the end was a happy one. The hunt was successful. I wrote my thesis using the data. In the world of research, this is equivalent to living happily ever after — that is, if you don't count the two years it takes to write the thesis.
A retrospective view of the situation inevitably leaves me dazed. The last existing copy of the history of changes in the government's complete inventory of computer equipment did not reside with any government agency, with any government archive, or with any firm that specialized in selling information about that industry. Rather, it resided in the hands of a private firm that bought the tapes for an antitrust trial and its own internal use. That firm kept the tapes for no apparent reason, and was gracious enough to donate them for research after being prodded by an economist it had hired for its own antitrust defense. He, in turn, just happened to be good friends with one of my thesis advisors.

Those tapes were located and obtained not through the use of a special index, not with the aid of some of the best reference librarians on the Stanford campus, and not as a result of the compulsive collecting of a librarian. All those mechanisms failed because the information was never properly stored for future generations. Rather, the actual path successfully taken to retrieve these tapes required twelve stops. It started at the Johnson Library of Government Documents at Stanford University and ended with Tony Bouchard, an IBM employee, who sent the tapes to Stanford. Those twelve stops count only the successful path. They do not count the numerous false leads, many of which were not recounted in this telling.

All this leads to the two maxims of doing research with computerized information. The first sums up all the difficulties I encountered when archives failed to be accessible, documentation failed to be available, and information about the tapes failed to be preserved. One might call it "the necessity of complementary investments." Just because data is "less costly" to store does not necessarily make it more "accessible." If computerized data is to be made accessible, then a number of activities complementary to digital storage must be performed, many of which we take for granted when using other storage media but which do not yet exist for computerized data. These include the systematic and organized preservation of digital information and related materials.

The difficulties I encountered retrieving this data also lead to the second maxim, which is best called "the necessity of informal contacts." In a world dominated by electronic storage instead of books, informal and personal interaction still provides the background necessary to locate, assemble, and understand machine-readable information. Future generations may not have it so easy, because all the potential heroes for their hunts will be gone.
{Editorial note: Today one set of tapes from IBM sits in my basement. Another copy of each tape also sits in the archives of the Charles Babbage Institute at the University of Minnesota.}
References

Fisher, McGowan and Greenwood. 1983. Folded, Spindled, and Mutilated: Economic Analysis and US vs. IBM, MIT Press.

Fisher, McKie and Mancke. 1983. IBM and the US Data Processing Industry: An Economic History, Praeger Publishers.

General Services Administration. 1972–1982, 1986. Automatic Data Processing Activities Summary in the United States Government, GPO, GS 12.9:972–982.

General Services Administration. 1960–1983, 1986. Automatic Data Processing Equipment Inventory, GPO, GS2.15:973–983, GS2.15:967–972, Pr EX2.12:960–966.

Gray, Martha Mulford. 1981. An Assessment and Forecast of ADP in the Federal Government, GPO, NBS C13.10:500–579.

Gray, Martha Mulford. 1982. Federal ADP Equipment: A Compilation of Statistics—1981, GPO, NBS C13.10:500–597.

Gray, Martha Mulford. 1979. Computers in the Federal Government: A Compilation of Statistics 1978, GPO, NBS C13.10:500–546.

Gray, Martha Mulford. 1977. Computers in the Federal Government: A Compilation of Statistics, GPO, NBS C13.10:500–507.

Greenstein, Shane M. 1989. Computers, Compatibility, and Economic Choice, Ph.D. Dissertation, Department of Economics, Stanford University.

National Bureau of Standards. 1977. A Ten Year History of National Bureau of Standards Activities Under the Brooks Act (Public Law 89–306), NBSIR 76–1113, NTIS, C13.58:76–1113.

Office of Technology Assessment. 1987. Federal Government Information Technology: Management, Security and Congressional Oversight, GPO, Y3.T22/2:2F31/.
Part III
Developing the Digital World
13 The Salad Days of On-line Shopping
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, February 1996.

I have seen the future of on-line shopping, and it is green. The logos are green. The sweaters are green. Two people walking abreast look like two peas in a pod. This may take some explaining.

There are many competing visions of on-line shopping. The most down-to-earth version goes like this. Consumers use their modem to examine the merchandise on their computer screen, order the goods, pay for them electronically, and specify a convenient delivery time to their door, all for a low fee. The concept has been talked about in information technology circles for some time, but reality usually disappoints. For instance, experiments with interactive shopping on television have been maddening at best and cartoonish at worst. Shopping by catalog — L.L. Bean, Lands' End, and countless others — does not need to be on line because most customers prefer to talk with an operator. And while Internet shopping is fun and exciting, it does not yet influence the shopping habits of America. On-line shopping sometimes sounds like a candidate for a future that everyone discusses but no one really wants.

That is where the green logos come into play. They belong to a Chicago-based company called Peapod, which claims to have the right formula for on-line grocery shopping. So let's discuss Peapod, or more
accurately, Peapod’s experience. The company’s history illustrates some of the economic constraints shaping the on-line shopping market. I recently visited Peapod as part of a project examining innovative computing. Before we begin, please note this disclaimer: Despite all the nice things I say in this column about Peapod, buyers should make up their own mind about the product. Investors should also do their own homework. Do not sue me. Got that?
Now we get to the interesting economics

Consider the basic service. What does the customer experience? She — 70 percent of Peapod's users are women — calls up a complete list of the local grocer's products and that day's prices (including specials). She has the aid of some slick software that includes category icons (for example, meat, bread), sorting tools (sort by price, sort by fat content), and product lists (his, hers, the kids). An unobtrusive ad may occasionally appear at the bottom of the screen. She is free to ask for more information or ignore it. She chooses items, delivery time, and payment medium. She can also tailor her order in special ways (two ripe tomatoes and two green ones). In effect, she hires a shopping agent to select, bag, and deliver the desired items. If she is unhappy with the delivery for any reason (tomatoes are too ripe), Peapod guarantees to fix it, even to the point of going back to the store.

Look for the wizard behind the curtain. What makes this product work? In a nutshell, it takes technology, management, profitability, and competition.

Technology. As with all high-tech ventures, good technology is necessary for commercial success. In this instance, software is crucial. With a good piece of software Peapod has a chance; a bad piece of software would sink the whole venture. Fortunately for Peapod, the PC software (which the customer sees) has many attractive features and works quickly, despite accessing each store's enormous database. It is simple enough for any educated and non-technical computer user to figure out. So far so good.

Management. As with all entrepreneurial ventures, the organization must be good at several things, not just R&D. In this instance, Peapod is in the delivery business — just like UPS, Federal Express, and Domino's Pizza. There is an important difference, however. Peapod delivers frozen foods, perishable fruits and vegetables, dairy products, potato chips, and anything else found at a grocery store. This business operates well only when the delivery people — and there are hundreds of them in Chicago — know what they are doing. Peapod's management has to train their workers, screen out the few bad apples in their workforce, supervise the regulars, and organize the
whole operation daily. Those workers must follow the customer's idiosyncratic directions, figure out what to do when the store is out of stock, correct billing errors quickly, and smile in front of the customer. It is hard to profitably run a service like this; it is a daily grind. Hundreds of employees must be efficient, conscientious, and not overpaid. This operation would be enough to scare off most high-tech entrepreneurs.

Profitability. The venture has to make money: enough to pay the bank loan, pay the management, and buy new equipment. Peapod, like most young firms, has bet on growth — not rapid growth, but enough to keep the enterprise afloat and moving forward. Since Peapod charges a minimal flat fee per delivery and a small percentage on top of the total sale, Peapod makes its money from big orders. That brings us to the core economic problem: the economics of the grocery business carry over into Peapod's business too. Since most home computer users tend to be from higher income families with little free time, Peapod tends to see the demographic group with the largest grocery bills. Yet, even these people are not willing to pay very much to have their groceries delivered. Peapod needs many deliveries a day to make up for a small return per transaction. Even after six years of growth, the biggest uncertainty for Peapod's future is simple to state: Will enough people sign up and use the service often?

Competition. All high-tech entrepreneurs must be able to fight off the potential competitors that will inevitably imitate whatever succeeds. In this instance, what will happen if IBM, Microsoft, America Online, or UPS throws serious money at a competitive alternative? Consider Peapod's chances.

First, Peapod's PC technology will not be hard to imitate superficially. The software is good, to be sure, but there are many talented programmers in Silicon Valley who could write it. However, it might be expensive to hire them because most artistic programmers would prefer top dollar from a game company.

Second, Peapod's backbone computer operation will be hard to copy. Certainly it is not impossible. Several firms have employees who can set up the appropriate links between databases — which were a nightmare to create — and update them daily. However, it will be a time-consuming task.

Third, and not incidentally, does any other firm really want to be in the delivery business? It is laughable to imagine IBM, Microsoft, or UPS in the grocery business. More to the point, those big companies may not possess the appropriate entrepreneurial drive and flexibility of a small, growing firm. Perhaps they would need to set up separate subsidiaries with stock options for the management. Again, this is not impossible; it is just a big hassle.

Fourth, the arrangement between Peapod and a store requires a good working relationship with the store's management, and also with their
information technology people, cashiers, and butchers. Can any of Peapod's competitors also establish these links — overcoming the idiosyncrasies of grocery stores in different neighborhoods, cities, and states?

In sum, all this is not impossible for another company to do, just very costly and time consuming. Some other companies might put it together; then again, they might decide that it is not worth the trouble.

Peapod's experience illustrates the most binding economic constraints on the growth of on-line shopping. How many retail markets other than groceries could generate enough volume to make on-line shopping and delivery profitable? Books? Records? Videos? Laundry service? Flowers? Furniture? Customers often prefer live operators in these markets, and none of them generate enough high-volume (or repeat) business to make it profitable to set up and maintain an on-line shopping and delivery network. It is also not surprising that on-line Internet shopping for these goods has so far produced more hype and experimentation than profitability. On-line banking is about the only other market where the high volume might make up for the low returns per transaction, and the jury is still out on the profitability of that on-line application.

The management at Peapod is no longer green — in the sense of being new at this business. They are taking their company into its adolescence with good technology and several important lessons under their belt. Soon enough, their business may mushroom. Then potential competitors may also be green — with envy.
{Editorial note: After a decade of entrepreneurial existence, and after some expansion to other cities, Peapod sold its operation to Royal Ahold, the Dutch grocery chain. As part of the Internet boom, many other entrepreneurial firms, such as Webvan, tried similar modes of grocery and goods delivery in the late 1990s, often at different scales or with different service models. All of them failed commercially. The basic economics of volume delivery were very binding. The only successes, such as they were, came to those firms that remained small and focused, as Peapod did. The other successful approach was taken by Amazon and its many imitators, who relied substantially on a third party, such as UPS or FedEx, to deliver the order.}
14 Don’t Call it a Highway!
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, December 1996.

Where did the information superhighway label come from? Highways and information technology seem to have nothing in common. Ask yourself: would it help me understand my IT business if I compared it to a highway? The answer, of course, is probably not. IT business strategy has little to do with driving anywhere. Even modern sages such as Bill Gates figured this out. (But after stating such good sense in his book, Gates then let his publicists title it [groan] The Road Ahead.)

Try a related question: Did it help any government policymaker to think about IT as if it were a road? Again the answer should be probably not. However, that does not stop Al Gore and his staff from pushing the highway metaphor; they use it to justify subsidizing IT. President Clinton even used the label in his acceptance speech at the Democratic convention in Chicago. While I find it entertaining to watch the president and vice president act as cheerleaders for my favorite industry, it is still not immediately obvious why highways offer a relevant comparison.

<sarcasm alert> My own personal theory is that this label is part of a grand conspiracy by Nebraska business people. This is how I figure it: Nebraska is so long that its motto is "Our state adds an extra day to your drive across the US." No other state benefits more from the federally subsidized national highway system (with apologies to California and Texas).
Subsidizing IT will also help Nebraska's biggest business today. Due to an abundance of workers without accents and the presence of a strong work ethic, the state has become the center of US telemarketing. Federally subsidized fiber optics will definitely help telemarketing. If it worked with highways, they figure, why not again? <end of sarcasm alert>

By now, we have all had tongue-in-cheek fun talking about "detours" on the "I-way" or describing "traveling" on the "Infobahn." Yet — call me an old-fashioned professor — I believe a good label should do more than keep headline writers busy or subsidize a Cornhusker's business. It should also inform, which this one does not. We need a contest to rename the information superhighway. Let's discuss how misguided this metaphor really is.
Highway economics

Governments subsidize roads for four reasons: Everyone uses them, governments can easily finance their expansion, private industry can only play a secondary role, and everyone holds governments responsible for them. IT has none of these four features. It is not even close.

First, virtually everyone uses highways. Sure, some industries use them more extensively than others (the trucking industry comes immediately to mind); yet, everyone needs good roads. In turn, governments throw taxpayer money at road maintenance to increase road quality. Too many people depend on them.

In contrast, increasing the quality of IT does not benefit everyone. Many visionaries claim that someday advanced IT will be a household necessity, but so far the information revolution is still a phenomenon for the educated white-collar professional. While over three quarters of the US population use a computer somewhere at work, only a third of all households include a PC. Furthermore, about the same percentage of the US population uses the Internet regularly. Yes, the Net is growing, but it will be a long time before the Net is as ubiquitous as, say, the telephone. When we all leave cyberspace and come down to earth, we can plainly see that most people live just fine without advanced IT. More to the point, there are simply too few users of advanced IT to justify government subsidies with taxpayer money.

Second, the financing of highway operations and building is closely tied to government taxation and revenue. This is so natural you may not even notice it. When we buy gas at the local pump, we also pay taxes for road maintenance. When we register our cars, we also pay for highway patrols.
In contrast, no natural financial ties exist between government and IT, apart from two interesting exceptions, which I will discuss in a moment. To get the point, examine the operation, maintenance, and funding of computing or communications networks where you work. Further, think about the supply of computing and communications equipment, the development of third-party and in-house software, the market for outsourcing or for system integration, and the sale of on-line access either through Internet service providers (ISPs) or through other on-line services. In these markets, government intervention is largely incidental.

There are only two places in IT where governments regularly intervened in the past, but it is hard to see how a few exceptions justify the comparison to highways in the future. One exception is the Internet backbone. In the past this was managed by a sub-agency within the US Department of Defense. It was recently privatized, which means the government plays almost no role in future capacity plans. The other exception is local telephone service. Most states use the taxes on local phone calls to subsidize phone services for the poor, cover the costs of rural telephone hookups, or pay the salaries of 911 operators. Neither exception looks like government financing of roads.

This is not an argument to remove government from the IT industry altogether. For example, the Department of Defense may want to fund research and development for a new chip, or the Federal Communications Commission may need to reallocate spectrum for new types of communications equipment. All that activity is fine, but this is still small potatoes compared with financing every thoroughfare.

Third, governments plan the highway backbone or put limits on product designs, and count on private industry to fill in the details. For example, governments set aside zones for service stations, but private firms operate them. Furthermore, government mandates safety and gas-mileage requirements, but we all choose the car we want. This arrangement occasionally has its tensions, but seems to work pretty well.

In contrast, private industry's role is not secondary in IT markets; it is primary. Private firms regularly develop new technical frontiers, design new products, and test the boundaries of business models. Most IT firms do not have to check with a government bureaucrat before setting up a host, hooking up a modem, or taking a hundred other actions. Private industry has its biases, to be sure, but nobody argues that government employees should help plan or design most advanced IT.

Fourth, governments cannot escape responsibility for planning and maintaining most highways and roadways. While jurisdiction for highways falls into many different laps — federal, state, and local — those splits are well organized. Most state police look after their highways.
Federal transportation officials worry about national safety and pollution standards. Local officials are in charge of fixing potholes and clearing snow. (Indeed, some years ago a mayor of Chicago, Michael Bilandic, was summarily voted out of office for his perceived inability to manage the snow-removal department.)

As far as IT markets go, there is nothing like the perceived governmental responsibility and organized federalism that characterizes policy for roadways. Information infrastructure policy is an assortment of the Telecommunications Act passed last year by Congress, the 1934 act that established the FCC, international treaty, state telephone regulatory decisions, patent and copyright law, the IBM consent decree of 1956 (which was recently softened), the AT&T divestiture decree (plus later modifications), and tons of related court decisions. What a mess to sort out! By the time any regulatory arrangement became comfortable, it was obsolete.

Oddly, this is not as bad as it sounds. The day we do not see a regulatory mess — as if one government agency has organized the whole structure for us, just like with highway policy — is the day that this industry has stopped its unguided and disruptive growth toward new frontiers. Thus, it seems especially absurd for federal policy commissions to issue recommendations as if IT policy were well organized. For example, several commissions recently issued portentous pronouncements calling for a greater federal role in "building the Information Super-Highway." The interesting issues are really these: Are ISPs doing a good job of providing Internet access? If not, why is the federal government in any position to do anything about it? Should the feds pay for local communities to hardwire local libraries and public schools? Should they pay for rural communities to adopt digital telephone switches? The comparison to highways adds little to the debate of these issues.

In summary, the comparison between IT and highways breaks down at virtually every turn, especially when it comes to public policy.
What is the right metaphor?

If the highway metaphor is a dead end, what label do we use? What is the best metaphor: plant, animal, or mineral? I am serious about this in a tongue-in-cheek sort of way. The new label should describe IT's salient features, whatever you think those are. In the meantime, resist the temptation for bad puns and take the high road. Make a New Year's resolution: pledge to stop calling it a highway.
{Editorial note: As part of the 1996 Telecommunications Act, Congress set up the "e-rate" program. It taxes long-distance telephone bills and uses the revenue to subsidize the development of Internet access in rural areas and inner cities. This provision of the bill was a favorite of Vice President Gore. It survived court and other political challenges and proved to be quite popular. It collects and distributes over two billion dollars a year in revenue. Not trivially, after passage of this act the use of the term "information superhighway" lost its political expediency. Gore stopped referring to it, and the phrase has largely fallen out of use.}
15 Commercializing the Internet
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, December 1998.

The "commercialization of the Internet" is shorthand for three nearly simultaneous events. They are the removal of National Science Foundation restrictions on the use of the Internet for commercial purposes, the founding of Netscape, and the rapid entry of tens of thousands — perhaps hundreds of thousands — of firms into commercial ventures using technologies that employ the suite of TCP/IP standards. These events have now turned every PC into a potential client for Internet applications.

The explosion of activity in 1994–95 caught many mainstream and potential market participants by surprise. Until then, the Internet simply failed to make the radar screens of many legal and commercial futurists in the computing and telecommunications industry. For example, as has been widely noted (in the context of antitrust scrutiny), TCP/IP received almost no attention in the first edition of Bill Gates' 1995 book, The Road Ahead, which ostensibly provided a detailed look at Microsoft's vision of the future. As another example, the US 1996 Telecommunications Act, the first major overhaul of federal regulation for the communications industry in 60 years, mentions the Internet only a few times. This occurred even though the legislation is over 1,000 pages long and was the subject of several years' worth of lobbying from all the major incumbent telecommunications firms.

What happened and why? Enough time has
passed for us to look at these events from a distance. Several coincident events led to this unexpected explosion. Understanding this past also helps us understand why the future may be very different.
TCP/IP origins

By the time of commercialization, many of the technologies associated with the Internet were already 20 years old. As is well known, TCP/IP technology dates back to DARPA (the US Defense Advanced Research Projects Agency) experiments aimed at developing communications capabilities using packet switching technology. For all intents and purposes, DARPA issued the first contracts for Arpanet in 1969, and the network involved a few dozen nodes by the time of the first e-mail message in 1972. After a decade of use among a small group of researchers in the 1970s, DARPA and its working oversight group established the protocols that would become TCP/IP, orienting them toward moving text between hosts. By 1984, DARPA had established the domain name system, the DoD (the US Department of Defense) sponsored the TCP/IP standard, and we used the term Internet to describe the system.

In the early 1980s, the DoD began to require the use of TCP/IP in all Unix-based systems, which were in widespread use among academic research centers. These policies encouraged extensive use of the Internet among virtually all research universities in the United States by the mid-1980s, which, in turn, encouraged the further adoption of complementary applications such as FTP, Telnet, and so on. In 1986, oversight for the backbone moved to the National Science Foundation, leading to a dismantling of Arpanet and the establishment of a series of regional networks. The NSF pursued policies that encouraged use in a broad research and academic community, subsidizing access to the Internet at research centers outside of universities and at non-research universities. The NSF also subsidized deployment of Internet nodes to many institutions. This had the intended effect of training a wide set of network administrators, students, and users in the basics of TCP/IP technology. In conjunction with this spread, the NSF sponsored development of changes to TCP/IP that let it be applied to more varied uses. By 1989, the Internet connected more than 100,000 hosts worldwide (most were in the US). This is a slightly deceptive number since almost any computer with an IP address can act as a host.

In the late 1980s, the NSF began working with a wide variety of private companies, including IBM and MCI in prominent roles. The goal was
to develop practical standards for network backbone communication that could be deployed on a large scale (routing protocols and addressing systems). At the same time, the NSF retained policies restricting use of the Internet backbone to research purposes. No advertising for commercial products was permitted, nor was widespread use of the technology for the sale and distribution of products. As insiders knew, this restriction was widely disobeyed by many students who operated bulletin boards on campuses. However, this activity was small and decentralized.
Commercialization of the Internet

By the early 1990s, the Internet was confined to the scientific, academic, and military communities. The text-based activities in use provided only an inkling of TCP/IP's commercial potential. There was no consensus among insiders or futurists over where commercialization would lead.

Commercial interest in the Internet began prior to the invention of Web technology and, by itself, would have motivated commercialization of the Internet in some form. By the late 1980s, the TCP/IP suite offered an alternative technological approach to existing online services, which had demonstrated the efficacy and desirability of online activity. For example, bulletin boards already numbered in the thousands in the US. TCP/IP technologies also offered an alternative to services such as Prodigy, CompuServe, and America Online, which had several million home customers across the country by the early 1990s. There was also a less concrete interest in developing any technology that might fuel further developments in electronic commerce, especially in business-to-business transactions. However, EDI (electronic data interchange) had not widely caught on by the early 1990s, so the fuel for electronic commerce was weak. Perhaps most importantly, the client-server revolution was beginning to take hold and gain momentum with business users around the early 1990s. TCP/IP offered an alternative or possibly complementary technology for achieving further connectivity in a networking environment.

The demand for commercial applications based on TCP/IP technology took a leap in a new direction with the unanticipated invention of the browser standard. As is now widely recognized, using the Internet for visual-based applications opened up a whole set of commercial possibilities. This capability was first invented in 1989 for the purpose of sending scientific pictures between physicists. That was followed by a set of experiments with browsers at the University of Illinois, leading to the basis for Mosaic, a browser using Web technology. This browser was widely
circulated as shareware in 1993–94, exposing virtually the entire academic community to the joy of sharing pictures. Marc Andreessen, who was part of the University of Illinois laboratory team that wrote Mosaic, later went on to become chief technology officer at Netscape. This browser experiment also served as the technical basis for licenses from Spyglass, Inc. to Microsoft. Microsoft put TCP/IP technology into Windows 95 and sold a browser soon thereafter. (Microsoft, in the heat of antitrust scrutiny, claims it would have adopted a browser even if it had not licensed the technology from Spyglass.)

Plans for commercializing the Internet were put in place in the early 1990s. Commercialization principally involved lifting the NSF's restrictions on commercial activity, which could only come about when the NSF passed governance over the backbone to private hands. These plans were made prior to the invention of Web technology and were implemented at about the same time as the diffusion of the browser. Thus, when Netscape commercialized its first browser in 1995, it diffused to a receptive set of users, making a spectacular impression on most computing and communications industry participants. At the same time, most of the planners for commercialization had not seen it coming.
Why commercialization was so explosive

TCP/IP technology was developed in a research community that focused narrowly on a set of uses and users whose needs were quite different from those of the average commercial user today. When the technology became commercialized in 1994, many new uses and users outside of the research community became potential users of a new set of applications, thus creating a mismatch between the frontier and potential users. In addition, the unexpected invention of Web technology, which occurred just prior to commercialization of the Internet, further exacerbated the gap between potential uses and the technical frontier.

In the environment of 1995, we saw the primary open issues as commercial, not technical. What business model would most profitably provide Internet access, content, and other services to users outside the academic environment? No consensus had yet emerged, so firms experimented with different business models for mediating between the frontier and the user. The frontier had developed without much use outside of the research environment, and there were questions about its application in other settings. What would users be willing to pay for, and what could developers provide?
The commercial opportunities in 1995 called for what looked like a one-time expenditure to set up connections and access for commercial and home users. This involved (and still involves) setting up a network in many different locations for many different applications, and customizing it to existing information networks.

Seen in this light, the value of being in the Internet business is fleeting in one sense and possibly enduring in another. It is fleeting because some of it depends on a one-time business opportunity: translating Internet access into a reliable and dependable standard service for nonacademic users. This opportunity is an artifact of the development of the technical capability in an environment that expressly forbade its use for commercial purposes. The value of long-run business models is still uncertain because TCP/IP technology touches on many business processes, generating experiments in the use and delivery of Internet access. These experiments should spur other complementary developments whose economic value will remain unknown for some time. If any of them succeeds, future generations will remember this era as the explosion that preceded enduring and general advance.
{Editorial note: The Internet stands as one of the few cases where a government-sponsored technology transferred into commercial use with such ease. Those origins have much to do with how commercialization played out. As it turned out, and as forecast here, there was a one-time business opportunity (in the US) associated with converting private IT capital over to TCP/IP compatibility. That boom in investment lasted until 2000, after which it declined dramatically.}
16 Building the Virtual World
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, August 1999.

Stop and think about the building and delivery of electronic commerce, all the activity that goes on behind your PC. It is not just a routine stringing of computer to network server to backbone connection. The framework for characterizing that activity is more complex and involves commercial and market factors along with the technology that makes electronic commerce possible. Any reasonably comprehensive framework would have to answer four general questions. Has the vertical chain for delivering electronic commerce settled? Does the creation of value depend on how firms commercialize technology? Do vendors approach similar commercial opportunities with similar or different strategies? How important is adaptive activity to a firm's growth strategies?
Vertical chain

Consider the vertical chain for delivering electronic commerce. It comprises all the various pieces that are active when a user engages a producer in some sort of electronic activity. Is this chain settled? I think not.

Conservatively, the vertical chain comprises at least two dozen distinct categories, including client applications, operating systems, hardware, processors, etc.; server hardware, database management software,
system software, etc.; enterprise software, middleware, office applications, etc.; and network access, operations, and data transportation, etc.

This vertical chain is not settled for several reasons. There is no single firm that dominates all phases. Leading firms include Microsoft, Dell, IBM, Intel, Sun, Compaq, AOL, UUNET, MCI, AT&T World Net, Cisco, Novell, Oracle, Hewlett-Packard, EDS, Andersen Consulting, SAP, PeopleSoft, Baan, and Computer Associates, as well as many others. Because firms specialize at different layers of the vertical chain, there is no consensus on how the chain should be organized, nor is there any stability in its market dominance. Rather, there is a situation of divided technical leadership, where many firms possess roughly similar technical capabilities. With a few notable exceptions, if a firm doesn't maintain its place at the technical frontier or doesn't satisfy its immediate customer base, it will quickly be replaced by a more viable and more highly organized entity from a nearby competitive space.

The structure of the vertical chain also influences its instability. There is new technology here, especially in the wires, cables, hubs, routers, and new switching equipment. Yet, it is also a retrofit onto the old telephony communications system, as it is an incremental change to the established data transport system and to many users' existing client/server systems. This design makes everyone unhappy; if firms were going to build a system from scratch around Internet protocol, this is not how it would be done.

Why is this vertical chain so confusing? It is because it defies existing classifications of economic activity. It changes too rapidly for stable definitions. Moreover, economic activity involves a mix of tangible durable assets and intangible, ever-changing business processes, a combination that tends to defy documentation by all but the most dexterous consultant. In addition, the mergers of the last few months garnered headlines, but this is hardly the end of the restructuring.
Creation of value

Does the creation of value along the vertical chain depend on how technology is commercialized? I think so. Yet, this bland question masks complex detail.

The first detail is well known to most telecommunications insiders, but new to some PC industry watchers. That is, data transport is cheaper at high volume, because of economies of scale in aggregation and density. This was true in the voice network and is still true of data networks, whether the endpoint is a PC or a mobile intelligent Internet device. It should continue to hold true in the future. Therefore, we can
expect this part of the vertical chain to contain only a few suppliers in any given location.

Second, the "last mile" of data transport, where the densities are not too large and the volumes are not too high, often offers a good business position. This is particularly true for some electronic commerce applications such as business-to-business automation. Yet, sometimes it is not profitable, which gives rise to hundreds of vexing business decisions. Does it make sense for small and medium-size firms to have a large online presence? Does it make sense for someone to get into the business of helping a rural farmer check the financial markets regularly with a Palm Pilot? Does it make sense for a cable company to deliver high-speed data over its cable lines to neighborhoods where only 25% of the households have a PC? This part of the market is getting so much attention right now for good reason; this is where the have/have-not split gets decided.

Third, key technical enablers for this structure are the processes underlying data interchange. These comprise TCP/IP, W3C, and many other nonproprietary standards. There is no natural reason why it had to be this way, nor for it to stay this way. Indeed, plenty of firms would love to overlay this system with their own proprietary processes. That said, at least for now, interconnection is easy and doesn't reside in any firm's unique domain, not even Microsoft's. At least, not yet.

Though everyone understands that electronic commerce creates value, it is quite difficult to document even the basic trends. Prices change frequently, but it is not clear what economic activity has produced them. The diffusion of new technology has rapidly moved across income, geographic space, and type of application. There is no consensus on the best way to regard these commercial developments.
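The scale-economies claim above can be summarized with a stylized average-cost expression. The functional form and symbols are illustrative assumptions for exposition, not figures from this essay: let F > 0 denote the fixed cost of building transport capacity along a route, c the marginal cost of carrying one additional unit of traffic, and q the volume of traffic aggregated onto that route. Then

\[
AC(q) = \frac{F}{q} + c, \qquad \frac{d\,AC(q)}{dq} = -\frac{F}{q^{2}} < 0 .
\]

Under these assumptions, average cost per unit of traffic falls as volume rises, so the carriers that aggregate the most traffic have the lowest unit costs, which is consistent with the expectation that any given location will support only a few transport suppliers.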
Same opportunity, different strategy

The Web can be overwhelming because there is so much variety. This arises for good reasons. To put it mildly, electronic commerce is just like any other uncertain competitive market where vendors approach similar commercial opportunities with different strategies. This is partly a reflection of the newness of the market and its gold-rush hype.

In the present era, it is not hard to start something on the Web. Entry costs are low in all but the most technical of frontier activities. It is inexpensive to put up a Web page. It is inexpensive to open an ISP. In most urban areas it is not hard to find programming talent. And for most users of electronic commerce, new applications are only an incremental change in their lives. Hence, many small firms can, at least, get a start.
Variety exists for another, somewhat less faddish, reason. Many vendors are deliberately targeting unique user needs: tailoring their service to the peculiar needs of local markets, to their own special niche, or to their own particular background. Whether this is associated with different visions of the future or a different core capability, it results in variety. To put it more starkly, variety thrives because of divided technical leadership.

Two types of strategies characterize electronic commerce entrants. First, a firm may try to combine electronic commerce with non-virtual activity in some unique way. For example, established media is full of efforts like The Wall Street Journal, The New York Times, and Business Week that try to maintain both a virtual and non-virtual presence. There are also plenty of less-established media doing the same, such as The Industry Standard, Wired, and Red Herring. Endeavors also exist in retailing, where some firms use their Web presence to complement users' non-virtual shopping experiences. Dell Computer's presence on the Web is one of the best-known examples. But many other examples stand on their own and also complement the specialty of the home office, such as E-Schwab, Microsoft Network, AT&T Worldnet, and so on.

Second, there are the pure electronic commerce undertakings, tailored to visions of unique online needs. Amazon.com is among the largest of these firms, as it builds a large one-stop shopping experience across books, videos, and other things. eBay, the auction house, is another example. AOL, Priceline, Travelocity, eToys, and many others are making similar efforts. There are tens of thousands of these, and it is not an exaggeration to say that all are a dynamic moving target.
Adaptiveness

Finally, how important is adaptive activity to most firms' growth strategies? This last question is perhaps the most important. Adaptive activity is central to growth. Yet, it is often the activity that goes undocumented.

What is adaptive activity? Firms take an adaptive role when they use the many possibilities of technology to meet the unique needs of the user. Firms accomplish this when they package new offerings, when they launch new services, and even when they survey a business and tailor the new technology to the user's peculiar needs.

There is one type of adaptive role in which entry is not common and another in which it is frequent. The uncommon role is that of tool builder, a specialist in an upstream section of the vertical chain: defining standards for equipment makers, ISPs, operators of Web services, network users,
and others. Cisco, Microsoft, Oracle, Sun, IBM, and other well-known firms desire this role.

The more common adaptive role is consulting. To put it bluntly, everyone is a consultant in this industry. This is not a bad thing; indeed, I think it is the essence of economic development. Consulting services are either sold as a stand-alone service or bundled with the sale and installation of equipment. The key observation is that in a dynamic environment, every active market participant sells knowledge along with other services.

My opinion is that adaptive activity matters a great deal in the current era. Some firms specialize in making electronic commerce easy to use, while others push the technical frontier. Some vendors specialize in a small set of activities, while others seek to offer general solutions. What should a vendor do? For now, there is no right answer.

It is important to drop any hardware-centric view of electronic commerce. The development of intangible assets, especially new business processes, is a key part of economic growth. The transfer of knowledge, especially about the processes necessary to make use of this new technology, is the activity that leads to that development. The biggest irony is that the transfer of knowledge is key to building the Information Age, and yet it is the transaction about which it is most difficult to collect information.
{Editorial note: This essay was inspired by interviews with ISPs, whom I studied for academic purposes, and by the boom in hiring at business schools by consulting firms doing IT projects. I watched many of my MBA students take jobs affiliated with “adaptive activity,” often labeled as “consulting.” This column summarizes conversations with many of them about the types of jobs they were taking and their place in the value chain. Unlike the typical job in operations, these consulting positions seemed founded on a value chain in transition from an old frontier to a new one, which surely was a temporary foundation for business development. Sure enough, this value chain was temporary. Yet even I did not see how fast that transition would take place, how fast the boom would diminish, and how fast the consulting houses would have to retrench.}
17 A Revolution? How Do You Know?
How do we know that electronic commerce is a revolution? While Karl Marx wouldn’t recognize the meaning of the word “revolution” in this context, many firms that promote e-commerce often use the word. It’s interesting to consider why. Even Business Week and The Wall Street Journal, the usual beacons for mainstream commerce in the US, have noticed e-commerce’s ubiquity. Although mainstream publications mostly concentrate on the activities of empires and their builders (Microsoft, AOL, AT&T, Intel, Cisco, IBM, Sun, and their friends), the sea change in focus is unmistakable. Some Internet valuations are too high by standard metrics. There are more IPOs this decade than ever before. Something dramatic must be happening. Another information source is the publications of the unestablished. As public discussion of e-commerce has grown, a loose coalition of prophets for the new economy has emerged. They write for such publications as The Industry Standard, Business 2.0, Wired, Red Herring, Fast Company, and more electronic magazines than anyone can list. Spend some time reading these magazines and you will notice a worldview containing two principal features. First, the prophets declare a business revolution in all information-intensive activities, such as broadcast entertainment, retail marketing, supply-chain management, other coordinative activity, and research.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, April 2000.
Next, and this is related, these same prophets proclaim that e-commerce technology’s novel characteristics dilute standard lessons from the past. That is, because of its many unique features, it’s ushering in a new commercial era that operates with new rules. There is probably a grain of truth to these declarations, and the euphoria is intoxicating. However, as a commentator on economic activity I am obligated to be skeptical, particularly of the second declaration. Euphoria does not, and should not, justify too simplistic a retrospective view of what actually happened, or of what is about to happen. We are experiencing a revolution, but probably not for the reasons most people think. Why does this market phenomenon look like an actual revolution to a long-time and professionally skeptical observer of computing markets? Because of the three trends that comprise this revolution:
• E-commerce grew rapidly, attracting the interest of tens (and possibly hundreds) of thousands of firms and millions of users, quickly achieving the trappings of mass-market status.
• The infrastructure behind e-commerce, that is, the Internet, has almost become geographically and industrially pervasive, a diffusion pattern rarely found in new infrastructure markets.
• Firms didn’t quickly settle on the offered menu of business services, indicating no consensus about the appropriate business model for commercializing Internet services.
These trends don’t usually arise together in a new product or service, whether it’s high or low technology. For example, the jet engine is nearly pervasive in all economic activity today, but that took several decades to achieve. The Furby market grew rapidly, but that’s because there was a pretty clear consensus about how to commercialize a Furby. Nobody knew how to commercialize cellular telephony for two decades after its invention, and it still is not pervasive — still less than half of US households have a cell phone. Rapid, pervasive, and unsettled conditions don’t appear frequently and certainly not together. They are symptomatic of a revolution. Explaining all three traits should provide insight about the wider forces at work and where this revolution might be going.
Pervasive and rapid change
The diffusion of the Internet infrastructure influenced the origin of e-commerce. Originally a government-sponsored project, the Internet backbone initially served only noncommercial purposes. Its commercialization began in 1992 with selling the backbone, rescinding management
responsibility for any public data exchange points, and privatizing domain name registration. This let private decision making further develop the contours of network infrastructure. How did e-commerce’s pervasive and rapid traits show up together after commercialization? Two contributing factors deserve special mention: the absence of significant technical incongruity and the absence of significant commercial incongruity. Most technologies developed under government auspices have significant technical incongruity during commercialization. Government users, procurement, and subsidies often result in technological features mismatched to commercial needs. From a technical or engineering viewpoint, technology used exclusively for noncommercial purposes may appear primitive in civilian use and require considerable complementary invention or entrepreneurial imagination. Commercial incongruity arises when commercial markets require substantial adaptation of operations and business processes to use technologies. That is, government or research users often tolerate operational processes that don’t translate profitably to commercial environments. The Internet incubated under government auspices for over two decades, but wasn’t technically incongruent with private computing use. Why? Because academic modem pools and computing centers tended to use technologies similar to their civilian counterparts, such as bulletin board operators, buying most equipment from commercial suppliers. Therefore, moving this activity into the civilian sector didn’t necessitate building a whole new Internet equipment industry. Similarly, computer users had already developed a routine set of applications for the Internet protocol (e-mail, FTP, Telnet, and so on). These immediately transferred into commercial use because many similar applications were already in use in networking environments. The Internet-access business lent itself to small-scale commercial implementations, as was already occurring in academic computing centers. The marginal costs of providing (and using) dial-up services were low, and the marginal costs of expansion fell quickly. The feasible economic thresholds for commercial dial-up service encouraged small firms and independent ISPs.
Pervasive and unsettled characteristics
Against this backdrop, two key events in 1995 set the stage for pervasiveness and unsettled markets. The first was the Netscape IPO. The other was the marketplace entry of AT&T WorldNet. The Netscape IPO brought extensive publicity to this technology and, not trivially, the commercial success of this firm prior to the IPO caught
Microsoft unprepared. The World Wide Web began to diffuse early in the history of Internet commercialization, providing an unexpected and potentially lucrative set of opportunities. This new technology opportunity provided firms with strong incentives to experiment with new business models. Web technology developed extraordinarily fast, fueled by decentralized adaptation. Not all localities where the Internet was available experienced the same competitive choices, nor did all firms perceive the same opportunities, but many companies developed opportunities quickly, seeding lessons for other localities. Pervasiveness in the form of widespread adoption came about as a by-product. The number of firms maintaining national and regional networks increased and moved into almost every regional market. At the time of the Netscape IPO, for example, most of the national access firms were recognizable. Such established firms as IBM, AT&T, Netcom, AOL, and others entered the ISP business. By 1998 many entrepreneurial firms maintained national networks, and few of these new firms were recognizable to anyone other than consultants for this service. These two types of firms could bring the Internet to almost anyone almost anywhere in the country. AT&T’s actions mattered for another reason. It developed a nationwide Internet access service, opening with as large a geographic spread as any other contemporary national provider. It also grew quickly, acquiring one million customers after a few months of publicity on the strength of its promise to provide reliable, competitively priced, and easy-to-use service. It was deliberately aimed at households, providing a brand name and a professionally operated implementation of the ISP-service subscription model. AT&T’s entry as an ISP had consequences, but more for what didn’t happen than for what did. AT&T’s action didn’t end the growth of other ISPs, such as AOL, nor end entry of small new firms, such as Mindspring. It also didn’t initiate a trend toward consolidation around a few national branded ISP services from AT&T, IBM, and other well-known incumbent players. The inference was clear: the known empires didn’t immediately dominate the offerings from other firms, nor did their entry end the business-model restructuring that e-commerce had prompted. In brief, the “old boys” were not controlling the commercial side of this new technology, at least not immediately. Now that really was a revolutionary trend.
Business experimentation moving forward
The unsettled part of Internet growth followed quite naturally. For one, the Web was a new technology, and a primitive one at that. It still needed lots of test drives before it would work well.
Second, and ultimately of more significance, geographic pervasiveness entered commercial calculations soon after the Netscape IPO. Nationwide (and eventually worldwide) Internet pervasiveness changed the economic incentives to build applications on the backbone and altered the learning process associated with its commercial development. All users and Web pages are now loosely tied to each other daily and are interdependent in terms of network security and reliability. Many new applications, such as virtual private networking or voice telephony over long distances, can require coordinating quality of service across providers. Sites like eBay, which make their mark by matching buyers and sellers who otherwise would not transact, were largely unthinkable without a pervasive network. Third, and this has not played itself out yet, the rules for empire building also changed. Microsoft got its vengeance on Netscape, but seemed to lose its focus as a result and has not yet disentangled itself from the US Department of Justice. Cisco just kept getting bigger as it bought other companies. AOL bought Netscape, ICQ, Time Warner, and almost every other content source. AT&T cut Lucent loose to fight Cisco, and began buying TCI, @Home, and MediaOne. This indicates how different the future will be. It’s still unclear whether new business models are needed to take advantage of applications that assume geographic pervasiveness. If so, and if these new services are valuable, it will provide a commercial advantage to firms with national backbones and assets. If not, and if new services continue to diffuse and require outside advice and integration, then local firms with geographic focus may continue to thrive. Moreover, the diffusion of broadband access, the widely forecast future for the Internet, has an entirely different set of economic determinants and constraints from the dial-up market. The origins, the costs, and the preconditions differ significantly. There will not be two decades of incubation of broadband technology by noncommercial researchers. Competition for broadband access is taking on a more typical pattern for new technology, where limitations to the pattern of diffusion arise from both technical and commercial issues. What is the low-cost method for delivering broadband services? What type of services will motivate mass adoption of costly high-speed access to the home? To summarize, we know broadband is coming, but we are unsure how fast it will arrive, what it will cost, and what basic functionality it will provide. This may take a while to shake itself out. The fantastic and explosive growth of the recent past may represent a special pattern and not necessarily serve as a useful predictor of the near future. With the passage of time the Netscape IPO may indeed seem a
unique event. Perhaps someday we will regard the date of that IPO, 9 August 1995, as the unofficial birthday of e-commerce and the present age as kindergarten for the new baby.
{Editorial note: This essay was inspired by the increasingly unrealistic discussion in the trade press. Public discussion had stopped being careful about why something was or was not revolutionary. The dot-com bust started soon after this was written, though I cannot claim to have seen that coming (if I had, I could have made quite a killing by selling many stocks short). Several magazines of the new economy struggled and then went under, The Industry Standard and Red Herring being the most notable of those bankruptcies. Of the experiments mentioned above, only Cisco and Microsoft would be regarded as finding sustainable commercial success in the world beyond 2001. The moves by AOL, AT&T, @Home, MediaOne, Time Warner, Mindspring, and Lucent met with considerable problems.}
18 PCs, the Internet, and You
As I sit and drink my eggnog at the start of the new millennium, I pause to reflect on where the PC industry is going. For the last two decades private firms have developed PC businesses relatively unimpeded by the government. For all intents and purposes, the PC suffered almost no government regulation — with the possible exception of the Microsoft antitrust trial. It seems unlikely that this freedom will continue much longer, particularly as the PC begins to embody more communications capabilities in one form or another. That is, because communications industries in the United States have been subject to government regulation for almost one hundred years, this history will tempt politicians to meddle in the PC industry as they have in other communications markets. I’m not sure I like this future, but it appears unavoidable. The future of the PC is tied to the future of the Internet, a future that, in turn, is tied to whether the Internet is regulated like all past communications industries. This raises concerns that soon somebody will try to eliminate the asymmetric treatment of different access modes, promote common carrier regulation over the Internet, and apply principles of universal service to the Internet. We have quite a future in front of us.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, December 2000.
Asymmetric treatment of access
Not all PCs connect to the Internet at the same cost, nor at the same speed. Therein lies a tangled tale and the possibility for future regulatory intervention. The first firms offering Internet access to the home simply copied their academic predecessors. These firms charged for a password, of course, but otherwise Internet access looked the same as a university modem pool. As with the previous systems used by researchers, the first Internet systems ran on top of the local phone system. This was expedient, inexpensive, and easy. It was also quite lucky for the US. To the surprise of many observers, it turned out that flat-rate pricing encourages Internet use. This wasn’t obvious when the Internet started, but it is now, especially when compared with the rest of the developed world’s pricing structures and Internet use. Do you have friends in Europe or Asia? They will tell you that the phone systems usually charge several cents a minute. A one-hour Internet session can easily cost several dollars in phone charges alone. Hence, potential Internet users in many countries don’t bother to use the Internet at home because the phone call costs too much. This is where politics enters the discussion. The pricing of telephony is an extremely sensitive political issue. To be simplistic, politicians hate raising prices. People get upset when that happens and vote against politicians who permit it. This has understandable consequences. Ergo, politicians just can’t help but meddle if they can find a way to take credit for keeping prices down or making services widely available. When politicians saw the Internet grow, at first they were caught off guard. But enough time has passed, and now many see an opportunity for action. Some simply see a possibility to grandstand, while others see a chance to make good on their vision of what’s right for society. For purposes of this discussion the distinction won’t matter. Here’s what worries me. When politicians look at Internet access, they see it through their experiences with telephone pricing. They see that they got lucky last time. Local phone calls had flat-rate pricing for a variety of reasons, none of which were associated with the unanticipated diffusion of the Internet. It was the right choice for encouraging Internet diffusion, but for unrelated reasons. To put it another way, how should future access technology be priced? The future probably won’t involve thousands of ISPs retrofitting access over the existing telephone network, as in the recent past. DSL can use existing phone lines, but it is difficult to implement, and only a couple dozen firms will do this across the country. Another possibility is Internet
access delivered over cable, which has even fewer providers. A third is some sort of wireless system modeled after NTT DoCoMo in Japan. In any of these modes it’s not clear what type of pricing makes sense for the next generation of technology. Don’t get your hopes up that such confusion will keep the government from meddling with pricing. There’s too long a history of it.
The end of common-carrier regulation
Through the eyes of long-time telephony regulation, the Internet is also part of a greater change, the end of common-carrier regulation. I’m not saying that this is good or bad; it’s just different. Meddlers in the government don’t know what to do about it, but will be tempted to take action. Common-carrier regulation has a few distinctive features. In particular, it draws a distinct line between content and distribution, largely restricting any single firm from owning too much of both parts. This is the regulatory structure familiar to telephony, TV, and broadcasting prior to the diffusion of the Internet. Common-carrier regulation arose out of concerns about the concentration of ownership over media assets or bottleneck broadcasting facilities. The thinking isn’t very complicated. Society benefits from multiple sources of information. Separation of ownership brings more players into the game, reducing the likelihood of a bottleneck. It’s not obvious that the Internet should have the same regulatory structure. No such principles have ever applied to the Internet, and it’s not clear that any will. Therein lies the potential tension: such principles could be applied, at least in theory. But should they be? Let me be clear. At the moment several large firms explicitly combine ownership of content and distribution. For example, AOL/Time Warner owns substantial facilities in both access and content. Microsoft also owns both access facilities (through WebTV and investments in AT&T) and content (through MSNBC and MSN). AT&T’s ownership of the Tele-Communications Inc. (TCI) cable TV network raises many of the same issues. Does this need to be regulated? First of all, there’s really not much to discuss as long as multiple channels and a competitive supply are maintained. Competition assures multiple options and obviates any need for regulation. But supply isn’t competitive in the short run anywhere except, possibly, the Bay Area and Manhattan. After all, the number of DSL and cable providers is quite limited in almost every neighborhood in every city of the country.
Second, a limited supply of access is actually not a problem if access firms interconnect with multiple content firms without friction. Yet, it’s naive to count on that. Neither the cable companies nor the telephone companies have stellar histories in this regard. In truth, there’s almost no positive history here and only a future with many unknown technical constraints. Technical unknowns also make these issues hard to understand, much less settle. Said another way, it’s hard to get a firm to promise anything to the government, but it’s even harder when even the country’s best experts cannot forecast what the technology will look like in a few years.
The end of universal service
Through the eyes of long-time telephony regulation, the Internet also raises multiple concerns about universal service. To put it more starkly, we got lucky with flat-rate pricing; it helped diffuse the Internet. It’s not obvious what policy will accomplish the same goals as we move forward. In brief, universal service means putting a telephone in every household. Recognized many decades ago as an important social goal, it became the excuse for keeping local telephone prices low. This is accomplished through a variety of mechanisms, including flat-rate pricing. The equivalent of a universal service debate for the Internet arises under the label “digital divide.” This debate has two flavors. One is about the geographic dispersion of Internet access across the country. The other flavor is tied to training, education, and income. The geographic issues are easy to understand. As it turns out, dial-up access is more expensive in low-density areas, if it’s available at all. So if universal service is politically desirable, it is relatively easy to address this issue in the present era. For example, the federal E-Rate program presently collects money from long distance telephone bills and distributes it to access firms in low-density areas to subsidize Internet connectivity. As it has turned out, this amounts to two billion dollars a year. Pretty soon dial-up access will be available in every rural library, hospital, and public school. It just isn’t expensive to do this. The next generation of broadband Internet will be a much different story. Nobody forecasts that DSL upgrades will be inexpensive or necessarily feasible in low-density areas. Cable TV doesn’t even exist in many rural areas. In the absence of a new satellite technology, the country is headed toward a different quality of access between its cities and farms. Therein lies a political time bomb.
Universal service issues are even harder to address when they are attached to lack of income, education, or training, as they are in inner-city areas of the US. These sorts of problems are much harder to solve with just a little bit of money. It’s no exaggeration to say that government experts today are not sure what to do about it, if anything. So where does this leave us all? The start of the millennium is also the beginning of a new era in competition policy. The more the PC resembles a telephone, the less competition in this market will resemble the unfettered market we have all known until now. So let’s all celebrate the new era and try not to be too wistful about the bygone past.
{Editorial note: Rarely, if ever, is it wise to predict anything about regulatory issues, but the end of the millennium seemed to call for it. This column was motivated by a venture capitalist’s declaration that he invested in the Internet, but not in areas where the value of the investment could be shaped by government regulation. Gee, I thought, the Internet was already a creature of government meddling, and was at increasing risk for more. These predictions were surprisingly prescient about regulatory matters. The final consummation of the AOL-Time Warner merger took place just after this column was published. It placed a number of restrictions on the openness of access to cable lines. In addition, in 2003 the Federal Communications Commission began to alter the rules for interconnection between incumbent telephone firms and those who purchased resale services from incumbents. The FCC has also sought to reduce restrictions on cross-ownership of multiple modes of media distribution, such as newspapers, television stations, and radio stations. Surely, this is not the end of the meddling.}
Part IV
Internet Boom and Bust
19 An Era of Impatience
Impatience is not a new characteristic of high-tech markets, but Moore’s law used to take all the blame. Today’s marketplace impatience is related to something new: the overriding belief that we are living through a once-in-a-lifetime opportunity. Many players are anxious to develop their business now, while they can, irrespective of future stability. Rapidly developing technology still adds to impatience somewhat, but in a different way. The first time I encountered this new impatience, the messenger was a student with a dilemma. It was 1994 and I was on the faculty at the University of Illinois when he arrived in my office. College students are impatient, almost by definition, but this was something unique. This student wanted to drop out of school and start a business. He wanted to move to Silicon Valley, where I had lived until a few years earlier. He said he wanted my advice. I tried to respond in an avuncular manner, repeating the standard mantras: start-ups come and go; there will always be new opportunities; it’s important to get a degree while you can; all students think that a revolution is occurring outside the university’s boundaries; this is just an illusion; Bill Gates is a famous college dropout, but he’s an exception; someday you will look back on this decision and regret leaving; and so on.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, May 2000.
Of course, nothing I said made a difference. This twenty-year-old student was impatient for a justification to leave school. There was, in fact, a revolution just starting. It was called the Web, and he did not want to miss a second of it. The message he heard was “Invest, young man!” In his ears that also meant “Go West, young man!” As it turned out, he was the first of several students I would know who had similar dilemmas. Increasingly, I would observe students who attempted to stay in school while holding a lucrative job on the side. Others simply disappeared from the university and sent e-mails to me a year later, greeting their old professor. Was it an illusion, or was this happening with alarming frequency among my most imaginative and technically sophisticated students? This is a roundabout way of introducing my topic: how impatience alters economic conduct and changes the way businesses develop. It makes everyone grab the day and willfully ignore the future.
The source of impatience
If Moore’s law is not to blame, then what is? Dot-coms, networking businesses, and business wanna-bes help explain this impatience in four ways.
Today’s gap between the technical frontier and user needs: Many firms are better educated about their technological capabilities than the user and would like to profit from this. However, this gap will only exist until it is filled, which will occur reasonably soon. This theme arises especially among those in networking markets. Business is lucrative because filling technical gaps can involve more than one project or transaction. In most cases, the situations involve a periodic and planned review of the user’s state of technology relative to new frontier developments. This reinforces impatience because there is a nagging sense that some of today’s business opportunities will not exist tomorrow. There is a related sense that tomorrow’s business will not occur unless the user feels some loyalty to the provider, loyalty that business can best develop today.
Buyers want answers now: Many purchases today are part of a larger project associated with implementing electronic commerce, which may provide permanent or temporary competitive advantages. If temporary, the benefits to business do not necessarily appear as an increase in revenues but may come from avoided losses. When users are in a business, their needs depend on legacy information technology applications, their product line, and many other features
of the firm. These all influence their purchase. The costs from new applications will also differ for the same reasons. Buyers are impatient because they may lose business if they do not implement electronic commerce quickly. Since the value to users depends on the competitiveness of their market, this gets translated into impatience.
The unpredictable value proposition: Because a general technological advance may enable applications that have few or no historical precedents, contemporary users of a new technology find it difficult to imagine or estimate future demand for services. Even early versions of a technology that have partially diffused to leading adopters may not provide businesses with information about future value. This is because other potential adopters, who will use the technology when the price drops and the capabilities expand, may have different characteristics and needs. More subtle still, future users may require entirely different complementary inventions. Firms may face a different set of problems tomorrow than today, and these problems may or may not use the lessons learned in the past. Contingency planning is hard enough in mature industries, but in this uncertain environment it leads to pathological paranoia. As tools develop or as customer needs evolve, firms in the dot-com world are especially uncertain about the appropriate commercial model for their own services. As a result, every participant is obsessed with guarding against potential new developments.
The myth of the first-mover advantage: Even though some markets lend themselves to advantages for those who move first and others do not, most people are convinced that their industry does. This is especially true in the dot-com market. Many markets, particularly in custom software, provide first-mover advantages. Usually these are markets where users make large application-specific investments. Hence, the first application provider to get to market often locks in many users. It is tough to be second. However, there are many markets, especially in dot-com software tools and related equipment, where it is advantageous to be an imitator. A young upstart may have a new idea, but an old firm with a solid reputation may have more assets to bring to the market and may benefit if it isn’t too slow to imitate. Hence, the general case for first-mover advantages is mixed at best. Yet this mixed evidence does not seem to matter. Most dot-coms do not want to take the chance that second movers may win. There is an almost mythical appeal to “first mover,” and it shows in every business plan. It fuels paranoia about getting to market quickly before the next guy has the same idea.
Trade-offs everywhere
Why does impatience matter? Because impatience changes the usual trade-offs between investing and incurring operating costs today, and receiving revenues tomorrow. As a routine matter, every technology business, from pharmaceuticals to ISPs, must consider investing a fraction of funds in equipment and other durable goods. If it’s a pharmaceutical company, the investment goes into laboratory equipment. If it’s an ISP, the investment goes into servers, software, and modem pools. If it’s a dot-com, the investment goes into software, hosting facilities, and soda machines. In mature businesses, like much of pharmaceuticals or chemical manufacturing, the firms simply allocate a regular fraction of revenue to long-term investments. The accountants understand this, as do the CEO, the employees, and the Wall Street analysts. In these businesses, impatience represents potential imprudence, which stockholders do not like. Firms make their investments, and there are rarely quick payoffs. That is the way of business. What is so striking about today’s business environment is the impatience of so many young firms. Some of this impatience is warranted, but some of it seems over the top. I see it in cavalier attitudes about equipment purchases that will become obsolete tomorrow; in IPOs where the raised money is burned with abandon by MBAs; in stock valuations that presume tomorrow’s sales will be exponentially higher; in unethical competitive tactics, which everyone knows are too expensive to police; and on and on. I teach in a business school, and I see impatience in the hearts of MBAs. There are stories of headhunters walking into classes and trying to entice students to take jobs beginning the next day. And every major business school in the country is experiencing an increase in the number of students who decide to leave before earning their degree to grab that once-in-a-lifetime opportunity.
Carpe diem and all that
So why do I have this nagging feeling that today’s impatience may not offer an immediate payoff tomorrow? Eventually, there must be a return on all this investment, excitement, and zealous burning of energy. Some outcomes will eventually prove revolutionary, and I am willing to wait a while for them. Then again, I do not work at a firm with a business plan that presumes exponential revenue growth, so that gives me more patience than your average Joe at a dot-com.
More to the point, “eventually” falls somewhere between tomorrow and not quite forever. That could be a very long time indeed. Yet I worry that few businesses seem to acknowledge this or put it into practice.
{Editorial note: I should have titled this “worrying out loud.” After the dot-com bubble burst in financial markets, IT investment fell dramatically in 2001 and 2002 in comparison to earlier years. As I have repeatedly been told, businesses had little need for more investment, since they were still “digesting” their previous purchases.}
20 Shortfalls, Downturns and Recessions
Many high-tech firms have recently issued poor earnings reports. Industry analysts wonder if high tech is in a recession or headed into one. The answer depends on the definition of recession. Finding a definition is not so easy. Most people wouldn’t recognize a recession if they saw one. The people who lived through the last deep recession in high technology markets are the same people who now listen to oldies on the radio. There is little collective memory of that time. To be sure, bad news happens. The big question is what to call it. In popular discussion it seems as if bad economic news is called an earnings shortfall when it happens to your commercial rival. It’s called a downturn when it happens to both your firm and a commercial rival. It’s called a recession when it happens to you, your rival, and your brother who works in another city. This is probably too loose and confusing. A remedial review is called for.
Official definition
The official definition of a recession is used in all the newspapers and is endlessly quoted by reporters. But this definition does not connect to a reality that anybody (except US Federal Reserve chair Alan Greenspan) recognizes.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, June 2001.
A recession officially occurs in the US when its economy sustains two successive quarters of negative aggregate growth. That’s it, precisely. Notice a few peculiar things about this definition. First, it’s retrospective. The status is not official until after six months of recessionary activity. So this definition might be useful for economic history, but it’s useless for forecasting. More to the point, your firm and your nearest rivals could be doing badly long before any official declaration emerges in The Wall Street Journal. These lags reinforce a certain public skepticism about official declarations. What is the good of somebody declaring a recession six months after you and your friends have been laid off? Second, this definition only considers the aggregate economy in all 50 US states plus territories. It’s not about any specific sector or any particular location; nor does it consider the global economy. So, even though the US economy may not be (officially) in a recession yet, it’s entirely plausible that specific sectors of the economy, such as the PC industry, could be in a serious decline. It’s also plausible that specific locations, such as Northern California or the greater Boston area, could experience extraordinarily painful economic adjustments even when most other places, as well as the average economy, are just fine. This is another reason why an official declaration is disconnected from your daily business life. Finally, some official has to formally declare a recession. Since it is quite a sensitive declaration to make, a decision maker in the US executive branch could find it very tempting to change the definition to suit their needs, especially around election time. As a result, the US government does not make these pronouncements; instead, the National Bureau of Economic Research, a private nonprofit think tank, has the official authority to declare a recession. It uses the same government statistical data as everyone else, so there is no mystery to these declarations. Nonetheless, part of the caution you may sense in the newspapers’ use of the term arises from this institutionally peculiar assignment of duties. As I discussed previously, official declarations of recessions have only a weak relationship to the present downturn in many facets of high tech. Is the current situation a recession? Officially, the answer is “no” or “not yet,” and that is how the picture will stay for a while. In this case, officialdom is useless.
High-tech downturn
Is there presently a high-technology market recession? Possibly yes, but why? The present downturn is widespread. It started innocently, as
downturns usually do, and is now known as the dot-com crash. It took more than a Web page to build a successful e-tailing business. Online advertising was less lucrative than first thought. Many dot-coms developed businesses but found a lack of demand. In other words, unsteady revenue streams and too many costs are a recipe for bankruptcy in a new business. Many firms cooked the same goose. The dot-com decline produced a domino effect, hurting the equipment and consulting firms that supported the boom. It was what business professors call a classic decline in a whole value chain. That is, demand for the end product dropped. So all the upstream suppliers lost a fraction of their customers at the same time. This spread the pain into different markets. In particular, the dot-com crash pushed a lot of almost new, used computing equipment onto the secondary market. This hurt sales of new PCs, exacerbating sales problems at major PC and equipment firms. So it all adds up. Bad news has arisen at PC suppliers, such as Dell or Intel; at Internet equipment suppliers, such as Cisco; at software firms, such as Oracle; and so on. It has also arisen at the consulting houses, such as Accenture and EDS, which used to make a lot of money implementing solutions to e-commerce and ERP problems, among others. The decline in the value chain coincided with (some say caused) venture capitalists’ withering interest in dot-coms. The money spigot turned off, exaggerating the decline’s speed and sharpness. That said, it’s not just dot-coms running out of cash and declaring bankruptcy. If this were just a burst bubble in an isolated sector, it could work its way out in a few months. But now the situation seems to be more serious. The present downturn seems to have multiple causes. For example, the dot-com crash coincided with a steady rise in interest rates, hurting demand for durable goods such as cars, trucks, and machinery. When Ford Motor, General Electric, Unilever, Boeing, and their cousins see demand dropping, they buy fewer PCs. This would have had consequences for PC manufacturers, and hence the aggregate economy, even without the dot-com crash. To make matters worse, the broadband revolution spread more slowly than anticipated. Demand for data is growing, but not quite as fast as the biggest forecasts from a few years ago. Many vendors anticipated too much demand for their service and their equipment. Many ISPs, hosting companies, competitive local exchange carriers (CLECs), and so on are now left with equipment, debt, grand plans, and not enough revenue. It’s not easy to fix multiple causes of an economic downturn. If you were Alan Greenspan, where would you start trying to fix this sort of mess? Lowering interest rates will help but may not make much difference in the long run.
Why downturns are bad
It’s commonly recognized that high tech is a volatile sector of the economy. Entrepreneurs seek to develop markets where they have no identified customers, just sketched business models, and endemic technological disruption. It’s apparent that there will be ups and downs in conducting this type of business. Until recently, however, the “down” part of volatility was isolated to specific firms or product categories. It had been a long time since the bad times visited many firms at the same time. So why does this matter? What is the real difference between good and bad times? In good times business failure comes about for an assignable reason. There is someone or something to blame and lessons to be learned. Not necessarily so in bad times. So, when firms fail in good times it’s possible to identify why. Reasons could include a business plan that was too optimistic about a specific customer’s needs, a chief financial officer who set up an inept cash-flow tracking system, or a product whose second generation simply did not work. These are isolated mistakes, unrelated to each other, though they can often be enough to kill a young business. During a recession, in contrast, firms can do everything right and still fail. It’s as if success or failure is out of the firm’s control. The margins for error are small, so luck plays a large role. A few examples will make this idea concrete. Pets.com was going to fail under all circumstances, good or bad. Many people own pets, but few need to buy their food online. I always liked eToys, but there were not enough people like me out there. eToys was doomed because there was not enough online toy business to justify warehousing expenses. The business model behind Webvan is also not set up to succeed: any delivery business is expensive and not profitable without huge volume, which Webvan is not getting. Webvan is toast when it runs out of cash. And, let’s face it, Go.com had little going for it that could have made it a success. But there are many firms that are still working out their business models and might squeak through if the times allow it. NetZero, for example, is not yet profitable, but it has not nose-dived either. It still has considerable cash to survive losses for a while. A few experimental years would help NetZero work out its kinks, but a bad overall economy makes its business marginally more difficult. The economic times influence what the company can do and how it experiments. Related to this idea of the times, many firms built their capital structures in anticipation of large growth. Now they need a return on those investments. If that growth does not materialize for a few more years
because of an unanticipated decline in economic conditions, these firms may have insufficient cash to cover debt. Victims of this type of problem could be Internet backbone firms, such as PSINet, Level 3, and Qwest. These firms were perfectly good companies prior to the downturn, and all are about to be squeezed for cash. Some may survive, some may not. I pity their managers, many of whom have done reasonably good jobs. These difficulties are not management’s fault. One other observation scares me: recessions are self-reinforcing. If most investors believe that the climate is hostile for growth and for risky business, they will make fewer investments in new firms. Established firms also will be less adventurous when organizing new projects. If fewer firms are visibly growing, it fosters the perception of a less-than-rosy future, reinforcing the hesitation to make investments. After a while, this feeds on itself. Indeed, it may eventually become difficult to identify causality: are the bad times causing low investment, or is it the other way around? That is the sort of recession we all want to avoid. Once an economy sinks into a self-reinforcing funk, it’s difficult to overcome.
{Editorial note: The NBER web page dates the beginning of the recession to March 2001.}
21 Explaining Booms, Busts and Errors
How will I explain dot-bombs to my kids? A dozen years from now my children might be old enough to understand complex human behavior. Yet I anticipate that they won’t look back on today’s events with awe. Rather, they will view them as history, with the disdain that comes with twenty-twenty hindsight. Why did so many Internet-related firms lose so much money? To be sure, the irrational exuberance of the various participants is part of any explanation. However, that alone seems an unsatisfying answer. Many analysts thought companies and their investors were behaving reasonably. Tens of thousands of people thought that dot-coms were creating economic value. As the stories of economic loss roll in, three systematic errors common to many of the players emerge. Participants erred by misforecasting adoption behavior, by underestimating operational requirements, and by holding overoptimistic expectations of success. Although all these errors arose from exuberance, only the last looks irrational. Understanding these errors will help us comprehend the origins of business booms and busts, a fact of life in technology markets.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, September 2001.
Adoption behavior
There was considerable merit behind the initial optimism surrounding Internet commercialization. The US government began formally privatizing
the Internet in 1992, finishing by 1995. The Netscape IPO was in August 1995. By the end of 1998 nearly one-third of US households had an Internet connection. By any measure, the Internet-connected world grew at a remarkable rate. This phenomenon received attention, and rightly so. It was easy to forecast that 50 percent of US households would connect by 2001, a prediction that did, in fact, come true. In other words, the initial reports were mostly accurate. But accurate beginnings bore the seeds of major errors. Indeed, the one main error is now transparent: many business plans assumed households using one part of the Internet would also use most other parts just as intensively. Unfortunately, e-mail and surfing did not necessarily beget online shopping. Why not? The full story is quite complex and too long for one column. Let me provide a brief synopsis; the curious reader can find more detail in an article by John de Figueiredo, who first explained this concept to me (“Finding Sustainable Profitability in Electronic Commerce,” Sloan Management Review, vol. 41, no. 4, 2001, pp. 41–52). Shopping requires considerable judgment by the shopper. It can be a very information-intensive activity, requiring touching, smelling, and observing. It may even require that the user experience the good in a simulated environment prior to purchase, such as test-driving a motor vehicle. As it turned out, the easiest categories in which to build online businesses were the most routinized goods, for which the user required the least information. Books and CDs do their best business when the user knows precisely what they want. In contrast, the most difficult purchases to put online are idiosyncratic ones, such as used cars, housing, and many types of furniture. It is not impossible to purchase these products online; it is just more difficult. Quite a few technically adept souls do, but most people don’t bother. Do not misinterpret my last statement. Although it is difficult to sell a used car online, it’s not impossible for the Web to help these types of transactions occur more smoothly. Even if only a small fraction of a company’s total sales are online, it may still be a profitable endeavor. At the same time, most sales will still occur offline. Trials with the easier online purchases are growing, though not spectacularly. Trials with the more difficult online purchases are growing too, just at a snail’s pace. Someday the less technical population will use this stuff, but not tomorrow or the day after. The right scale is years from now. More to the point, slow adoption is the rule, rather than the exception, when it comes to large masses of consumers. The fast
diffusion of e-mail and Web browsing was unusual for the speed with which the mainstream adopted it. The slower growth rates we see now are almost normal. The error was in assuming that fast adoption of the first technologies implied fast adoption of all Web-based activities. It did not. To be sure, there is an irony in the commercial success of a new information technology being correlated with the need for less information, but we will not dwell on it. So let’s move on to the next error.
Operational requirements
The second error relates to the first one. Netscape’s experience as a browser vendor initially went quite well. Indeed, Amazon began selling books online early in the history of household Web use, and its experience is well known. Online offerings of CDs, toys, plane tickets, greeting cards, cheese, and tons of other stuff followed rather quickly, often from other firms. As I recall, mainstream financial analysts endorsed this trend. Mary Meeker at Morgan Stanley, among others, began writing reports about this phenomenon. Though this whole group of analysts has been taken down a peg by recent events, I dare anyone to go back and read their earliest reports. These early reports were accurate. So what error occurred? In short, too many analysts went too far in their support, further than the initial positive experience warranted. They confused successfully launching a business with successfully operating one. There were two versions of this error, one at new firms and one at established firms. New firms began sprouting up everywhere. The act of pitching a plan and growing a new business became an art form (and it still is). To be sure, exuberance played a role. Some venture capitalists should have been skeptical, raised questions, and put a stop to all the good money following bad ideas. Yet for every venture capitalist who said “no” there were three others who said “yes,” and on it went. The story at established firms was different. Virtually every major retailer in the US established a new e-commerce division, treating these experiments as if their organizational lives depended on it. Again, exuberance played a role. More consultants should have spoken up and raised skeptical questions in the minds of old-economy retailing executives who rushed to develop the new channel. Again, for every calm voice, three others pushed for urgency, and on it went. All those trials encountered every possible mistake, and every possible match and mismatch between business goals and operations. Some
online businesses turned out to be easy to operate, but most were difficult. Fulfilling orders, for example, requires execution. This is a complex story that is too much for one column. After years and years of operating retail outlets or catalog businesses, established companies had developed operations that were complex and refined. Handling daily complaints, for example, requires trained and motivated staff. Shipping goods and controlling quality requires careful processes. Most neophytes could not replicate that type of operation, no matter how good their Web page looked. Maintaining and refreshing a broad product line across a wide scope of goods, as Wal-Mart does, is extraordinarily difficult and not easy to recreate from scratch. As it turned out, the already established catalog firms, such as L.L. Bean or Victoria’s Secret, were comparatively successful in making the transition to electronic formats. So too were small niche businesses with simple back-office operations. In contrast, many of the pure plays, such as eToys, and the big click-and-mortar experiments, such as Webvan, quickly drowned, running up costs in excess of revenues. Although a few of the entrants did OK, most were in over their heads. Some of these failures arose from the first error, misunderstanding adoption speed. But much of the failure had to do with the second error, misunderstanding the business’s operational requirements.
Expectations
Vibrant businesses must experiment during uncertain times. To not do so risks strategic atrophy and eventual economic decline. So it is inevitable that extraordinary opportunity leads to extraordinary experiments by a wide variety of businesses and entrepreneurs. This experimentation, in turn, leads to a wide range of outcomes. After the fact, failure alone does not invalidate an experiment’s merit. Companies and entrepreneurs had to carry out these experiments to learn what works and what does not. Though it is not necessarily pleasant to live through failure, it is almost always interesting to watch, at least at a distance. Perhaps this seems overly philosophical, but it helps explain part of the dot-bomb boom and bust. That is, the Internet’s diffusion initiated an extraordinary set of economic opportunities, something people see only a few times in a lifetime. In such circumstances, everyone should have expected a massive set of experiments, which in turn should have resulted in a wide distribution of mistakes and insights. In other words, we should have expected many failures from the outset. We should also have understood that they were not necessarily symptomatic
of irrational exuberance, or stupidity, or even excessive courage. Economists note with empirical regularity that some fraction of new firms always fail. However, this is not the same as saying that at least one new firm, or possibly even many, will succeed or even survive. And that’s where many analysts and investors made their worst miscalculations. Said another way, a cultural belief, reaching religious fervor in some quarters, maintains that some entrepreneurs can work their way through any new opportunity. It is as if there is a law against an excess of bad business decisions. During the dot-com boom you could hear analysts make this error repeatedly. It typically occurred like this: The analyst would examine an online retailing segment crowded with many entrants, then grade each of them. Finally, the analyst would make a declaration about one or two, designating these as “best of breed” or “most likely to survive a shakeout.” The analyst would then recommend investing in the best of the lot. The analyst could have misunderstood which was best, but that sort of error is almost expected; after all, caveat emptor applies to investors acting on analysts’ recommendations. Rather, the true error comes from the analysts’ implied assumption that the best will survive at all, an issue separate from whether their grading was correct or not. It is always quite possible that all the entrepreneurs would face similar adoption problems and operating complexities, and that all would experience the same results, each generating costs in excess of revenues. No preordained reason makes it inevitable that at least one worthwhile business will arise out of multiple failures. Analytical completeness and proper caution require considering the possibility of a very gloomy outcome. For a few years such completeness and caution were missing. It is in this sense that the irrational exuberance of the times interfered with clear thinking, making the boom and bust worse than it needed to be. In other words, some failure was inevitable, but when naive optimism springs eternal, it is more likely that more failures will follow.
{Editorial note: Surely historians in a later era will look back on the present with twenty-twenty hindsight and wonder how this could have come to pass. Simply calling this era the result of "irrational exuberance" is too simple an explanation. This essay was inspired by a session at the Academy of Management about the topic. It is part of an attempt to find a deeper explanation for what happened. This particular essay explores specific facets of business activity.}
22 An Inside Scoop on the High-Tech Stock Market Bust
The market capitalization of many high-tech firms came down in a short period of time right around the turn of the millennium. There is no question that this was a shock to many people and disruptive to the economy as a whole. Stock options became worthless. Some investors lost a fortune in paper wealth. Other people lost their jobs. To be sure, some part of the reevaluation of technology companies involved the absence of clothes on the dot-com emperor. Many did not want to admit their earlier, overoptimistic economic forecasts, and so did not recognize the beginning of the fall. Further, the events of 11 September made an already bad situation worse. That said, this event only provides part of the story; the reevaluation of high-tech firms started prior to September. The reevaluation also extended beyond new firms to the big leaders among them, such as Yahoo. Large established firms — Intel, Cisco Systems, Microsoft, Lucent Technologies, Sun Microsystems, IBM, and JDS Uniphase, for example — were also revalued. The decline in these widely held firms hurt many sophisticated investors. In theory, such unexpected crashes and large losses should not have hurt smart money. These events illustrate some general principles about how to value ongoing, operating firms. To understand those principles, however, you need to address many simple myths about stock values.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, January 2002.
In particular, it is
commonly stated that stocks were bid up by the Internet investment fad. That statement captures a grain of truth, but it also raises a big question. Fads are social phenomena, the stuff of hula hoops and Beanie Babies. When smart money is at risk, its investors cannot afford to lose money following social fashion. In other words, fads alone should not lead to the revaluation of firms. In what sense did a fad influence high-tech stock value? Can we reconcile these events with conventional explanations, which are usually skeptical about the existence of such fads?
Just the facts
The facts are pretty startling on the surface. Table 1 summarizes these facts, derived from data sent to me by Janet Kidd Stewart, a Chicago Tribune business reporter. The table shows the recent market capitalization peak and trough for several high-tech stalwarts. First, note the extent of the decline in market capitalization. Each company experienced a decline in market value in the hundreds of billions of dollars. Second, notice the timing. Every peak occurred prior to or in early September 2000, with most happening in early 2000 or late 1999. Every trough occurred during or after December 2000. Most of those lows were prior to or during autumn 2001. In every case the trip from peak to trough took between one and two years. That is fast. Third, notice the spread of the pain. Every major firm experienced a drop in market capitalization. To be sure, each of them experienced its pain in some unique business crisis, whether it was failing to reach a sales target for a new upgrade or failing to manage excess inventory in the supply chain.
Table 1. Recent market capitalization for some leading high-tech firms.

High market cap     Date of      Low market cap      Date of      Loss from peak
(billions of $)     high         (billions of $)     low          (billions of $)
610.9               Mar. 2000     82.4               Sept. 2001   528.5
664.2               Dec. 1999    224.2               Dec. 2000    440.0
531.2               Aug. 2000    130.4               Sept. 2001   400.8
250.8               Dec. 1999     17.0               Oct. 2001    233.8
222.1               Sept. 2000    24.4               Sept. 2001   197.7
124.9               Mar. 2000      6.8               Sept. 2001   118.1
260.2               July 1999    143.3               Dec. 2000    116.9
153.3               Jan. 2000      4.6               Sept. 2001   148.7
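The arithmetic behind the last column is easy to check. The minimal sketch below recomputes the loss from peak and the elapsed time from peak to trough for each row of Table 1; the capitalizations and dates come directly from the table (rows appear in the table's order), and the month arithmetic is only approximate because the table reports months rather than exact dates.

```python
from datetime import date

# Rows of Table 1: (peak cap, peak date, trough cap, trough date).
# Capitalizations are in billions of dollars; dates use the first of the month
# as a stand-in since the table gives only month and year.
rows = [
    (610.9, date(2000, 3, 1),  82.4,  date(2001, 9, 1)),
    (664.2, date(1999, 12, 1), 224.2, date(2000, 12, 1)),
    (531.2, date(2000, 8, 1),  130.4, date(2001, 9, 1)),
    (250.8, date(1999, 12, 1), 17.0,  date(2001, 10, 1)),
    (222.1, date(2000, 9, 1),  24.4,  date(2001, 9, 1)),
    (124.9, date(2000, 3, 1),  6.8,   date(2001, 9, 1)),
    (260.2, date(1999, 7, 1),  143.3, date(2000, 12, 1)),
    (153.3, date(2000, 1, 1),  4.6,   date(2001, 9, 1)),
]

for peak, peak_date, trough, trough_date in rows:
    loss = peak - trough  # loss from peak, in billions of dollars
    months = (trough_date.year - peak_date.year) * 12 + (trough_date.month - peak_date.month)
    print(f"loss ${loss:6.1f}B over {months:2d} months "
          f"({100 * loss / peak:4.1f}% of peak value)")
```

Run as written, the losses match the table's final column, the declines range from roughly 45 to 97 percent of peak value, and the peak-to-trough interval runs from 12 to 22 months.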
But looking beyond that, the similarity of the timing suggests some commonality in the underlying causes. Fourth, notice the type of firms under the microscope. All these firms have a significant presence in the markets for information equipment and services. All of them have large fractions of revenue tied to business customers throughout the US economy. In other words, these companies are upstream in the value chain for IT services. Their experiences are probably linked because their downstream buyers are the same business organizations. These facts would seem to suggest that an unexpectedly large industrywide drop in demand caused a common reevaluation of all these firms. There is only one big problem with that explanation: This type of drop is not supposed to happen at these large firms this quickly. Why? Because the biggest investors — mutual funds, growth funds, retirement accounts, insurance companies, and so on — own the biggest corporations in high-tech America, such as Intel, IBM, and Sun Microsystems. Large stockholders do not play around with their investments. These stockholders keep portfolios for the long haul, and they only move money around to spread risk. Sophisticated buyers and sellers do not act like your average day trader. They do not buy and sell in response to a hunch, act on an anonymous tip, or retreat as part of a short-run panic. These buyers and sellers anticipate the future, make careful assessments, invest and hold for the long term, and develop portfolios of investments spread across a wide variety of firms. More to the point, sophisticated buyers and sellers do not usually change their minds quickly about a firm's value. Sophisticated buyers estimate a firm's value based on anticipated profits, growth opportunities, comparative standing in the market, and a few other indicators, such as the track records of key executives. To be sure, there is a range of opinions about these matters, but when everyone does their homework, the variance of this range is usually small among smart investors. Finally, sophisticated traders have the money to back up their opinions with actions that move stock prices. These movements are usually countercyclical. Sophisticated investors get out of the stock market when stock prices are too high and get into the stock market when stock prices are too low. Their actions effectively mute volatility — turning upswings into selling opportunities and downswings into buying situations. Altogether, the smart money brings stability to long-term stock market prices and, hence, to company valuations.
That is how things work. Sophisticated investors invest heavily in market research that frequently takes the pulse of vendors and customers. These investment analysts make a living out of not being surprised. Every textbook says so. And, obviously, in the latter part of the millennium, it did not exactly work that way in high-tech stocks. This is the interesting mystery here.
Explaining the decline
Even sophisticated investors must use significant guesswork and prognostication about future market prospects. They always have. But the basic principles never change. A firm's valuation must eventually reflect how much revenue it collects over the long haul and whether that revenue exceeds the costs incurred. In this light, what happened? The latter half of the 1990s brought together five coincident events that kept optimism high — way too high, as it turned out — about future revenues.
Interest rates
Interest rates were low in the latter 1990s, particularly short-term rates. Low rates enable cheap borrowing and more risk taking by investors. This leads to more stock purchases. Low interest rates also make capital equipment purchases cheaper. Therefore, the setting was ripe in the latter part of the decade for an active market. But interest rates started going up in early 2000. This alone could not have caused the dramatic reevaluation, but it was a contributor.
Deregulation
The deregulation of telecommunications took a large yet somewhat halting step forward with the passage of the Telecommunications Act of 1996. This act fostered restructuring in the delivery of many basic data services and the entry of many new intermediate-sized firms, which generated several sustained years' worth of new equipment sales. By late 1999, however, insiders were beginning to understand that this act would not enable as much entry as originally forecast. Market analysts rescaled their estimates downward.
Internet diffusion
Perhaps the most surprising event contributing to the great optimism about high-tech revenue was the diffusion of the Internet, which really
took off in 1995. The commercialization of the browser and the US government's full privatization of the Internet brought about this diffusion. From that point forward, many observers forecast growth in demand for data services. To be sure, these forecasts were based on guesswork about how far diffusion would go. Certainly the guesses were too optimistic. By early 2000 most households and businesses with PCs had adopted the Internet. This was a onetime event, however. Upgrades to broadband and other new services occurred at a slower pace. The large growth associated with Internet diffusion was over.
Y2K
Somewhat to the IT consulting industry's amusement, fears about Y2K generated a temporarily large increase in IT budgets at major corporations in 1997 through 1999. Under the umbrella of "fixing potential Y2K problems," many CIOs used these increased budgets as a way to correct problems they had wanted to fix for years. These budget increases were temporary, of course. With the Y2K nonevent, many CIOs did not get those elastic budgets in 2000 and beyond.
Attitudes
Lastly, and not trivially, the Internet's commercialization spawned a social movement about future change, and this movement worked its way into estimates of many upstream firms' prospects. At first, analysts were cautious. Anybody can go back and read the early 1996 reports coming from Mary Meeker of Morgan Stanley Dean Witter and Henry Blodget, then of Merrill Lynch, and find analyses that are measured in tone, financially careful, and accurate. By 1999, however, this movement took on a different momentum. It involved a collective vision about the future that was unabashedly optimistic and on the edge of being unrealistic. This later, undisciplined form was revolutionary in tone. The leading prophets foresaw a new economy. They claimed that many old rules had become obsolete. Several of these prophets derided skeptics as old-fashioned and narrow-minded fuddy-duddies who "didn't get it." Revolutionary fervor was intoxicating, and it was useful for recruiting employees to entrepreneurial firms. However, any sober-minded investor could read the financial reports of data sector companies and see a potential problem. All the growth forecasts were founded on projections about increased use of data services, which, in turn, depended on the rapid diffusion and commercial success of many of the new business models using
Internet technologies. Those projections were unrealistic and based on the revolutionaries’ assertion that every business and household in the western world would move to Internet services.
What happened?
Of course, this shift did not happen. Why did this become so suddenly apparent in late 1999 and early 2000? The insiders played an interesting role. By 1999 many of the dot-coms were not showing the revenues their venture capitalists had anticipated from 1996 through 1998, the period when those firms were founded. Quietly, the smart venture capitalists started pulling their money out in 1999, refusing to start new firms. Sure, the market for initial public offerings was still strong in 1999. So some venture capitalists did not stop trying to go public with the firms they already had investments in, at least not right away. Yet, by mid-2000, short-term interest rates were rising. This was enough to make some big investors reconsider whether they wanted to invest in these IPOs. Once the IPO market collapsed in early 2000, the skeptics' views became public, and everyone started to reevaluate their previous assessments. And, lo and behold, they discovered that they had founded their previous beliefs about the potential of dot-coms on excessively optimistic forecasts. It was not a pretty thing to watch.
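One way to see how quickly a revaluation can follow from a downgraded forecast is a minimal discounted-cash-flow sketch. Everything in it is hypothetical: the cash flows, growth rates, discount rates, and horizon are illustrative assumptions, not estimates for any actual firm.

```python
# Hypothetical sketch: the present value of a revenue stream under two sets of
# growth assumptions. None of these numbers describe an actual firm.

def present_value(first_year_cash_flow, growth_rate, discount_rate, years):
    """Discounted value of a cash-flow stream growing at a constant rate."""
    return sum(
        first_year_cash_flow * (1 + growth_rate) ** t / (1 + discount_rate) ** (t + 1)
        for t in range(years)
    )

# An optimistic forecast (rapid growth, cheap money) versus a revised one
# (slower growth, slightly higher discount rate).
optimistic = present_value(100.0, growth_rate=0.40, discount_rate=0.10, years=10)
revised    = present_value(100.0, growth_rate=0.10, discount_rate=0.12, years=10)

print(f"optimistic valuation: {optimistic:8.1f}")
print(f"revised valuation:    {revised:8.1f}")
print(f"drop: {100 * (1 - revised / optimistic):.0f}%")
```

With these illustrative numbers, the revised valuation comes to roughly a quarter of the optimistic one, a drop of the same order of magnitude as the declines in Table 1.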
It all came down
Optimism is a psychological state of mind. It is vulnerable to short-run influences and social pressure. It persists for any number of reasons, but not in the face of serious financial losses — at least, not with smart investors. In other words, once the smart money saw that the most optimistic forecasts were unrealistic, they inconspicuously withdrew their money, selling to others who still were in the optimistic group. Other, slightly less smart money then saw what was happening, and within a few months the new realism began to spread. The result was a massive and comparatively quick reevaluation of many high-tech firms. Ironically, the new economy overvaluation was sunk by some pretty old textbook economics. It arrived just a little later and less gradually than most of the old-fashioned fuddy-duddies expected.
{Editorial note: This was originally titled "The Ride Before the Fall." What is missing from this essay, and what became apparent later, was the misrepresentation by stock analysts at several Wall Street firms. Jack Grubman and Henry Blodget, in particular, seem to have issued optimistic assessments in order to help the banking business in other parts of their firms, even when their own private assessments were pessimistic. Both have since been removed from their analyst jobs and barred from returning. To her credit, Mary Meeker was not implicated in such behavior, and she continued to practice, albeit a bit more quietly.}
23 The Crash in Competitive Telephony
In the latter half of the past decade, the electronics business became a part of the telephone business. Quite unexpectedly, many parts of the electronics industry — particularly data switch makers — found their economic welfare tied to four decades of regulatory experimentation. In this experimentation, regulators wanted to develop competitors in various parts of the telephony industry. One catalyst for the change was the US Telecommunications Act of 1996 — the first major alteration to US federal communications law in more than 60 years. At the same time, the Internet exploded into commercial use, fueling the demand for bandwidth affiliated with data transmission. The general public does not widely understand these changes. This is understandable, because the events themselves were genuinely complicated. Nonetheless, if you have an eerie sense that these changes did not go well, you are right. Many of these new competitors have gone bankrupt. Some analysts now view US transmission capacity as needlessly overbuilt. Others view this crash as marking the beginning of the end to deregulation for local telephony. This situation will take some explaining.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, August 2002. 133
The setting
Every US city has at least one incumbent local telephone provider. Deregulation of local telephony attempted to increase the number of potential providers for local voice services beyond this single monopolistic option. Deregulation of data services followed a similar logic as US telephone networks became increasingly digital during the 1980s. Advocates of these changes expected such competition to yield lower prices, increase customer focus, and offer a wider array of services than a noncompetitive environment. The primary new competitor in this deregulated world was the competitive local exchange carrier (CLEC). CLECs typically serve customers in geographically designated local areas. They always compete with a local incumbent, such as SBC or Verizon, and they sometimes compete with each other. CLECs quickly became substantial players, accounting for more than $20 billion in 1999 revenues in the telephony and digital services industry. More to the point, CLECs became the deregulatory movement's focus. Many CLECs grew quite rapidly and often took the lead in providing solutions for provisioning broadband's last mile. At the risk of oversimplifying a complex situation, the most common approach to deregulation attempted to split the telephone network into two parts. This approach divided the network between central-switch operation and the delivery of services. Services use a line that interconnects with the switch. The incumbent companies delivered services over the switch and so did CLECs. Hence, government regulators had to forbid incumbents from interfering with CLECs, setting rules to govern conduct during transactions. Despite difficulties in regulating conduct, a general movement toward deregulation ensued in many US cities. Business users were the biggest beneficiaries at first, so they were the biggest advocates. Indeed, by the end of the millennium, the largest US cities had dozens of potential and actual competitive suppliers for local telephone service. These competitors all interconnected with the local incumbent. As it turned out, 1999 was the last year in which a consensus of Wall Street analysts expressed optimism about the entry of CLECs and their expansion. Much of the enthusiasm was affiliated with
• anticipated growth in Internet transport or transmission control protocol/Internet protocol (TCP/IP) data traffic,
• anticipated development of digital subscriber line (DSL) connections, or
• the growth of affiliated hosting or networking services.
Virtually every CLEC offered or was preparing to offer these services. Virtually every CLEC also offered voice services to business customers seeking to avoid the incumbent — but CLECs did not regard this as an especially profitable part of their business. By 1999, there was no question that major cities such as Chicago or San Francisco could support some CLECs. But would CLECs spread to medium- or small-sized cities? As it turned out, CLEC survival in less densely populated areas was more difficult than many initially thought; here's why.
Economics of local carrying capacity
The delivery of services from the switch to the customer involves costly activities that are not easily recouped. First, setting up and maintaining the infrastructure for offering services incurs engineering and management costs. Second, every entrant incurs costs associated with administration and marketing. Consequently, CLECs require a minimum number of users to generate sufficient revenue to cover variable expenses and fixed costs. More to the point, the need to cover these fixed costs limits the number of entrants in specific locales. Simply put, large cities generated sufficient revenue to support more entrants than small cities. Cities vary in their carrying capacity, their ability to support a given number of service providers. To illustrate, Table 1 lists several cities that fall into each capacity category, based on a 1999 survey. This data shows that large US cities had the greatest number of entrants and small cities the least.

Table 1. Number of CLECs supported by major cities in 1999.

No. of CLECs   Cities
20 or more     New York, Dallas, Chicago, Boston, Atlanta, San Francisco, and Washington, D.C.
10 to 19       Cleveland, St. Louis, New Orleans, Cincinnati, Tucson, and Norfolk
5 to 10        Little Rock, Knoxville, Fresno, Madison, Omaha, and Tallahassee
3 to 4         Bakersfield, Gainesville, Santa Barbara, Des Moines, Reno, and Waco
2              Biloxi, Fargo, Kalamazoo, Naperville, Wellesley, and Wichita Falls
1              Bangor, Bismarck, Champaign, Dubuque, Pleasanton, Redding, and Yonkers
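The carrying-capacity arithmetic behind Table 1 is simple enough to sketch. In the minimal sketch below, every revenue and cost figure is a made-up illustration, not data from the survey: a city supports roughly as many entrants as its addressable revenue pool, net of variable costs, can cover in per-entrant fixed costs.

```python
# Illustrative carrying-capacity arithmetic; all dollar figures are hypothetical.

def carrying_capacity(addressable_revenue, gross_margin, fixed_cost_per_entrant):
    """Rough number of entrants a local market can support: the revenue pool
    net of variable costs, divided by the fixed cost each entrant must recover."""
    contribution_pool = addressable_revenue * gross_margin
    return int(contribution_pool // fixed_cost_per_entrant)

# Hypothetical annual CLEC-addressable revenue by city size, in millions of dollars.
cities = {"large metro": 400, "medium city": 60, "small city": 15}

for name, revenue in cities.items():
    n = carrying_capacity(revenue, gross_margin=0.5, fixed_cost_per_entrant=10)
    print(f"{name:12s}: supports roughly {n} entrants")
```

With these invented numbers, the large metro supports about 20 entrants, the medium-sized city three, and the small city none, which is the same qualitative pattern the survey found.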
Even in the midst of a boom year like 1999, the plain facts were sobering: There were finite limits to the potential amount of telephony and digital services business, and many potential suppliers were vying for this business. This situation is a recipe for the equivalent of musical chairs — these markets did not have enough business to support the plethora of participants who had come to the party.
Economics of geographic scope
The next limit on the CLEC business was geographic. In how many cities does a CLEC provide service? National firms existed even before the Telecommunications Act. TCG (now an AT&T division) and MFS (now a WorldCom division) provided service in all large metropolitan cities (and, until recently, captured almost half the revenue in the industry). What about firms that entered after 1996? As it turned out, CLECs pursued one of three distinct geographic strategies. Local CLECs focused on a narrowly defined geographic market, sticking to a few cities. Regional CLECs expanded beyond a few cities into a wider geographic reach. This approach usually began in a large metropolitan city and grew out into the suburbs. So a medium-sized city might support more local and regional CLECs simply because it was near another large metropolitan area. The third group — national CLECs — pursued opportunities across the entire country without much regard for geographic continuity. These firms typically provided service only in large metropolitan cities. In general, competing CLECs first emerged in the largest cities and spread to other areas after 1996. Which group mattered most? As it turned out, today about 30 national CLECs provide services in more than 20 cities. Most other CLECs focus on a local or regional area. These circumstances mean that plenty of national providers vied for customers in most major metropolitan areas, but competition did not arise in many medium-sized cities. This happened because national CLECs typically bypassed medium-sized cities. Competition in these cities depended on whether a nearby local or regional firm bothered to grow.
Economics of differentiated service
By 1999, the CLEC business had expanded beyond voice services alone to carrying data traffic as well. This latter type of service was especially valuable to businesses and of less value to consumers.
Consequently, CLECs varied in their target customers and portfolios of services. Some CLECs built facilities targeted solely at businesses; others targeted a mix of consumers and businesses. CLECs that offered different services were not perfect substitutes for each other. Indeed, that was the point: CLECs deliberately tried to differentiate themselves to protect margins on their most profitable services. The net effects of differentiation are familiar to anyone who watches local businesses. On the one hand, cities with differentiated CLECs have a greater carrying capacity than cities where CLECs aren't differentiated, resulting in more potential suppliers. On the other hand, differentiation can shield CLECs from having to price their services competitively. As it turned out, most of these differentiation strategies were not very successful in stopping pressure on prices. Consequently, many CLECs had lower-than-anticipated revenues.
Economics of the local regulatory environment
The Telecommunications Act prescribed pro-competitive regulatory rules designed to foster investment in local data and voice services. The act anticipated that entrants would either build their own facilities or sign resale contracts with incumbents for access to unbundled elements of the incumbents' networks. Both required regulatory approval. Most entrants had licenses for resale. However, in 1999, resale voice services were not a high-margin opportunity in most cities. Most major CLECs had already abandoned strategies to build customer bases via resale; only a few based their entry on resale alone. Resale was already transforming into an adjunct service to facilities-based operations in expanding and new markets. This outcome did not diminish the importance of state regulatory environments. Rather, changing economic conditions continually highlighted the importance of state regulatory idiosyncrasies in rules, approaches, and histories. Various state regulatory agencies made it easy or difficult to interconnect with incumbents or to enter the market through the resale of voice services. Other state regulatory agencies made it easy or difficult to become a seller or value-added reseller of DSL services. However, nobody ever really identified which type of local regulation worked best. State regulators can make a big difference in a business's success, but they cannot do much when too many local firms are playing musical chairs for a small amount of revenue. After 1999, there was not much anyone could do to fix the situation.
Too much speculative entry
It's apparent that the demand for CLEC services — underpinned by anticipated rapid growth in DSL and Internet data transport markets — did not grow as fast as the most optimistic 1999 forecasts predicted. Many CLECs did not realize revenues sufficient to cover the debts incurred building their facilities. After 1999, some CLECs curtailed expansion plans announced in previous years. Others left the business altogether. It makes me wonder whether the carrying capacity of cities was largely fleeting. For example, such fleeting capacity would arise if most of a city's CLECs were seeking simple resale arbitrage opportunities. Overall, the downturn that began in 2000 largely arose from anticipating a demand that never materialized. It's now apparent that CLECs entered cities that did not need any CLEC. It's also apparent that many CLECs expanded into cities that already had too many entrants. These cities' carrying capacities simply were not large enough to support several firms. It was a recipe for bankruptcy. Going forward, it's unclear where this history leaves competitive telephony in the US. Most survivors feel lucky to still be here. Most will wait for business to settle into predictable patterns in the next few years before undertaking any more expansion.
{Editorial note: Once again, historians in a later era will look back on the present with twenty-twenty hindsight and wonder how this could have come to pass. This essay is about a facet of this topic that is quite complicated and rather obscure to most people.}
24 Too Much Internet Backbone?
The last issue of the now defunct Red Herring magazine in March 2003 — may it rest in peace — devoted its last breaths to a cover article discussing the strategies of entrepreneurs who were buying up each other's networks at fire-sale prices. The article seemed to imply that commercial providers had overinvested in backbone capacity in the US. Is there any economic sense in claiming that there is too much backbone? The commercial Internet consists of hubs, routers, switches, points of presence, and a high-speed, high-capacity pipe that transmits data. People sometimes call this pipe a backbone for short. The Internet's backbone connects servers operated by different Internet service providers (ISPs). It connects city nodes and transports data over long distances, mostly via fiber optic lines of various speeds and capacities. No vendor can point to a specific piece of fiber and call it a backbone. This concept is fiction — one network line looks and functions pretty much the same as any other line — but the fiction is a convenient one. Every major vendor has a network with lines that run from one point to another. It is too much trouble to refer to such lines as "intermediate transmission capacity devoted primarily to carrying traffic from many sources to many sources."
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, March 2003. 139
The backbone's presence and structure provide information about which cities are playing the most prominent role in the Internet's development and diffusion. The geography of these connections provides insight into the network's economic determinants. That said, interpreting the size of backbones and comparing the backbones in different regions is not straightforward. A region's backbone capacity depends on many parameters, such as population size, type of local industry, and other facets of local demand. More to the point, delivery resources are concentrated in some areas. Networks overlap in many locations. This redundancy looks inefficient, but that is too simple an observation.
Features of the Internet backbone
The backbone for US networks grew out of the network built for the US National Science Foundation by IBM and MCI. They designed NSFnet to serve the research needs of universities, and it connected several thousand universities by the early 1990s. NSFnet had concentrated much of its communications infrastructure around a dozen supercomputer centers. This infrastructure connected to other major research universities in many locations, resulting in a geographically concentrated transmission network with wide dispersion of access points. The few loci of concentration in NSFnet were quite different from the concentration that developed after commercialization. The commercial Web's connectivity did not arise from a single entity supporting a single purpose. Instead, multiple suppliers invested in the Web to support a myriad of targeted customer needs. Four institutional factors shaped network infrastructure in the commercial era: First, the commercial network developed distinct cooperative institutions for exchanging data. It retained several of the NSF-developed public exchange points but transferred their operation to associations of private firms. Second, a few other large firms exchanged traffic using private peering — a bilateral agreement between two firms to exchange data at no cost to save each of them monitoring costs — but did not discriminate on any basis other than flow size. Third, many smaller ISPs paid for access to larger networks, entering into long-term contracts. Fourth, and quite significantly, popular Web sites, such as Yahoo, made deals with cache/mirror sites operated by firms such as Akamai, Digital Island, and Inktomi. These deals eliminated many of the differences in
performance among locations — at least for the most popular sites that could afford to pay these mirror sites. They also eliminated many of the competitive differences among the backbone locations. To the delight of some and the dismay of others, a multi-tiered system emerged. The tiers were associated with footprint size and traffic volume; they largely provided shorthand designations for those that carried data over long distances, and those that collected charges from others for transit service. For example, the largest firms all became tier 1 providers. These firms included AT&T, IBM (before it sold its provider service to AT&T), MCI, UUNet (before WorldCom purchased both), and Sprint. Most regional and local ISPs became lower-tier ISPs, purchasing interconnection from one of several national providers. Rarely do engineers and economists see the world in precisely the same way, but in this one instance there was general agreement. My engineering friends commented that data interchange preserved the architecture's end-to-end consistency — that is, it did not alter performance based on the user's identity or the application type. Stated simply, transmission capacity built anywhere interacted with transmission capacity situated anywhere else. Appropriate contracts handled the costs of interconnection among ISPs and backbone firms. Facility ownership did not induce discrimination based on the origin or destination of the data or type of application. Economically speaking, the preservation of end-to-end consistency had three significant consequences: First, it did not discourage vendors from building geographically overlapping networks. Second, vendors could specialize with regional footprints. Third, this built-in consistency prevented firms with national footprints from having big advantages in the deployment of end-to-end applications that require low amounts of signal delay, such as virtual private networks or videoconferencing applications.
Geographic dispersion of capacity
During the first few years of the Internet's commercialization, a handful of US cities or regions dominated backbone capacity. Specifically, San Francisco/Silicon Valley, Chicago, New York, Dallas, Los Angeles, Atlanta, and Washington, DC, contained links to the vast majority of backbone capacity. As of 1997, these seven areas accounted for 64.6 percent of total capacity. By 1999, even though network capacity quintupled over the previous two years, the top seven still accounted for 58.8 percent of total capacity.
From the outset, the distribution of backbone capacity did not perfectly mirror population distribution within metropolitan regions. Seattle, Austin, and Boston have a disproportionately large number of connections (relative to their populations) while larger cities such as Philadelphia and Detroit have disproportionately fewer connections. In addition, the largest metropolitan areas are well served by the backbone while areas such as the rural South have few connections. On a broad level, this design feature looks familiar to observers of other US communications networks. As in other land-line networks, there are economies of scale in high-capacity switching equipment and the transmission of signals along high-capacity routes. In addition, there are a few (commonly employed) right-of-way pathways (along rail lines, highways, and pipelines, for example) that are available for long-distance transmission lines. Thus, straightforward economic reasons dictate that each firm have only a few major trunk lines running along similar paths. Remarkably, despite the entry of many firms and the growth of the total capacity, maps of the US Internet backbone have not changed much from 1997 to the present. This speaks to the durability of economies of scale in high-capacity switching and transmission, even during a period of high growth. In other words, as with many communication networks, the present backbone network is a hub-and-feeder system with only a few hubs. Another notable feature of competitive backbones is multiple data-exchange points. Choices for data exchange locations have persisted into the commercial era. To be sure, some of these points would have arisen under any system because of their central location amid traffic flows. For example, data exchange in New York or Washington, DC, makes sense for traffic along the US East Coast. The same is true for Dallas or Atlanta in the South, Chicago in the Midwest, and San Francisco and Los Angeles in the West.
Coordination or crazy building?
Commercial firms in most industries do not coordinate their building plans with each other; the firms building the commercial Internet were no different. This led to replicated transmission capacity along similar paths. The backbone network of the late 1990s embodied features that strongly reflected both the lack of coordination and the presence of dynamic incentives. Both forces arose from the absence of monopoly. To be sure, the US backbone is not a monopoly. There is a multiplicity of players building it, competing with each other. This competition
enhances the dynamic incentives of each player to grow quickly, price competitively, experiment broadly, and tailor network services to fill customer needs. The impatient environment of the late 1990s provided strong incentives to grow quickly, even when the growth was redundant. In other words, a competitive market among network providers will necessarily lead to uncoordinated build-outs, such as overlapping footprints and other redundancies. Is this outcome necessarily bad? Yes and no. To be sure, these overlaps and redundancies appear inefficient after the network is built, but this is a myopic interpretation. A competitive market gives all players strong incentives to quickly build their network, price it low, and figure out how to customize it to user needs. It should be no surprise, therefore, that the commercial US backbone looks so extremely redundant. By the same token, the backbone might not have been built so quickly in the absence of such competition. Relatedly, uncoordinated sunk investments can lead to price wars in the event of overbuilding. Indeed, it would be surprising if uncoordinated investment resulted in too little infrastructure during a period of sustained demand growth, as occurred in the late 1990s.
Where are we today?
These are not pleasant times for backbone providers. Spring 2000 saw the decline of financial support for dot-coms. The 11 September terrorist attacks shook business confidence in long-term investments. This low continued as the media publicized the WorldCom financial scandal. This down cycle is not over, which leaves an open question about the long-term state of US backbone networks. Qwest, Level 3 Communications, Sprint, Global Crossing, WorldCom's MCI, Williams Communications, PSINet, AT&T and others all invested heavily in redundant transmission capacity during the boom. UUNET, a WorldCom division, was the largest backbone data carrier in the US until the scandals that led to WorldCom's bankruptcy. PSINet overextended itself and had to declare bankruptcy. Although nobody expects ownership consolidation to completely eliminate redundancies, this shakeout will reshape the precise ownership of the backbones carrying future data traffic flows. Observers lament that much of the installed fiber remains unlit, while those with financial acumen prowl the bankruptcy courts like jackals after a kill, gathering together the assets of others. The final ownership configuration remains unknown as of this writing.
Yes, it looks crazy, but this is what you expect in a market with uncoordinated investment and a period of intense growth followed by a period of collapse. That is not so notable. The notable outcome is that the players built the network as fast as they did.
{Editorial note: The boom and bust in the Internet transmission market has often been compared to the boom and bust that occurred when the railroads were first built in the US. This essay was an attempt to begin to be precise about what really happened.}
Part V
Prices, Productivity and Growth
25 Debunking the Productivity Paradox
The party line is mistaken. The “productivity paradox” is not responsible for all the problems with computers. When all the evidence is in, there probably is almost no productivity paradox at all. Now that I’ve got that out of my system, let’s have a civilized discussion.
The party line
At dinner parties I like to talk about the computer market. The topic usually initiates a lively conversation. Computer chatter is governed by an easily overlooked but dependable regularity. Every computer user at every dinner party has at least one complaint about a computer system. It does not matter who the guests are — engineers, administrators, mechanics, medical doctors, lawyers, sales representatives, or warehouse supervisors — everyone has at least one complaint. It never fails. Actually, this regularity holds not only for dinner parties, but for any moderately small gathering over, say, lunch or coffee — and even in settings where these things should not be discussed. (Once a reporter from a famous newspaper called to solicit my opinion about the productivity paradox. Instead, we spent most of the interview talking about his newspaper's new malfunctioning computer system.)
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, August 1996. 147
This little social tonic has led me to take a somewhat unscientific survey of many users' complaints about the computer market. Here is the unexpected observation: Many people seem to think that their complaints have something to do with the productivity paradox. Usually they do not. This has got to stop.
A little intellectual history
The productivity paradox began as a by-product of an economic conundrum known as the "productivity slowdown." Then, almost without evidence, the paradox took on a life of its own. Think of productivity as the average output per worker for the whole US economy. Economists pay attention to productivity because it underlies the growth of worker income and predicts the success of US industries. If productivity grows slowly, the US economy is in for a tough ride. In the early 1980s, economists began documenting that US productivity had not grown as rapidly from the late 1960s onward as it had in the two decades after World War II. This trend alarmed many economic Cassandras. (You are forgiven if you have never heard of the productivity slowdown. While this is a compelling economic question, it is a topic best suited to late-night CNN.) Anyway, sometime during the mid-1980s, the research shifted focus. The hype surrounding the information age found its way into productivity slowdown research. Economists began to ask, Why haven't the improvements in computing translated into US productivity growth? At first this was nothing more than a question about a simple anomaly. The rest is harder to trace. Out of this query arose the productivity paradox. Not only did the productivity paradox show up in academic journals, but it eventually started showing up in newspapers, magazines, and on CNN. Maybe headline writers loved the alliterative way "productivity paradox" rolls off the tongue. Maybe it really hit a nerve. In any case, the paradox began taking the blame for everything — low wages among the computer illiterate, downsizing at major corporations, and everything wrong with computers. (Nobody has yet blamed the failures of the Chicago Cubs on the productivity paradox, but this seems just around the corner.)
What's wrong with the paradox
Let's confront the paradox on its own terms. How strong is the evidence that the paradox exists? Mixed, at best. For example, most economists now
agree that the productivity slowdown disappeared from US manufacturing in the 1980s. Yes, that's right: There has been no productivity slowdown in manufacturing in the last decade. Manufacturing is doing just fine. Of course, that then means that the question behind the paradox is badly posed. Paradox supporters must maintain that computers helped manufacturing, but nothing else. At best, manufacturing industries might be unusual. Manufacturing accounts for only a third of the US gross domestic product (GDP) and a smaller fraction of employment. It might just be that manufacturing is not representative of the rest of the economy, but this is a tricky argument to sustain. The question behind the paradox also seems badly posed for another reason. Was there really such a dramatic change in computers? Everyone knows about the changes in the prices and quality of computer hardware, but is that all that is relevant? Shouldn't we also examine the costs of everything else? Here's a comparative analogy: If jet engines got dramatically better and cheaper over many years, would anyone seriously expect the price of a whole plane ride to fall dramatically as well? Not if the cost of everything else — pilots, airplane crews, fuel, airframes, airport maintenance, and so on — did not change. The quality of airplane rides might improve slightly, but that will hardly show up in GDP. So here's the crux of it: Computers must be combined with many other things to be useful. Most of the complementary inputs in computing have not gotten cheaper or better at a dramatic rate. Software, programmers, supervisors, basic R&D at user establishments, invention of new uses, operations, and management — all the stuff that really costs money in computing — is still expensive today. Yes, it is better than it was decades ago, but vendors are still promising the big breakthrough just around the corner. Is it any wonder that the total output from computer-intensive enterprises does not grow as fast as hardware improves? Hardware is just a small piece of the puzzle. Finally, why should computers take the blame for a slowdown in productivity growth in non-manufacturing industries? Why not blame something more obvious, like poor accounting? Output is difficult to measure in service industries, where computers are most widely used. Banking, insurance, medical services, higher education, engineering services, on-line services, and financial forecasting are all computer-intensive industries. They all generate lots of revenue. However, revenue is a poor measure of the economic benefits associated with market activity in these sectors. The growth of total revenue is an even worse way to measure the increase in quality that results from introducing better computers. Maybe GDP accounting needs fixing, not the computers.
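The jet-engine analogy is just cost-share arithmetic, and a minimal sketch makes the point concrete. The 10 percent hardware share and 90 percent hardware price decline below are illustrative assumptions, not measurements.

```python
# Illustrative cost-share arithmetic: if hardware is only a small share of the
# total cost of delivering computing services, even a huge drop in hardware
# prices moves the total cost only modestly. The shares below are assumptions.

hardware_share = 0.10          # fraction of total computing cost that is hardware
hardware_price_decline = 0.90  # hardware gets 90% cheaper
other_cost_decline = 0.00      # programmers, software, management: roughly flat

new_total_cost = (hardware_share * (1 - hardware_price_decline)
                  + (1 - hardware_share) * (1 - other_cost_decline))

print(f"total cost falls by {100 * (1 - new_total_cost):.0f}%")  # about 9%
```

Under those assumptions, a 90 percent drop in hardware prices cuts the total cost of computing by only about 9 percent, which is why spectacular hardware improvement need not show up as spectacular measured output growth.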
Keeping the paradox in its cage
Some things belong in the productivity paradox, but most things do not. More precisely, most dinner party complaints are about unrealized expectations, not the productivity paradox. Unproductive investment and unrealized expectations are conceptually different. If most investment in computing were unproductive, we would observe enormous sums of money wasted on machines that no one ever uses. We would see wizened and repeatedly burned administrators vowing never, ever to invest in this technology again. In fact, this is not what we observe. Only in a few isolated instances do new computers go unused. Rarely do administrators rip out the new technology in favor of the old. Experienced administrators continue to invest in computers. This is not symptomatic of a problematic addiction to new technology. It is just that administrators are happy with their past investments and want more. Most user complaints highlight two features of the market: unscrupulous vendor behavior and poor management of technological uncertainty. Unscrupulous vendors regularly promise more than their technology can deliver. This has something to do with salesmanship, but more fundamentally, it arises from the gaps between the culture of designers and the actual needs of most users. On the one hand, these instances are unfortunate and frustrating for the duped users. On the other, these types of promises have been a feature of the high-technology sector of the US economy for most of this century. These situations hardly qualify as a burning public policy issue, except in the extreme case when a vendor makes patently fraudulent claims. Managing technological uncertainty is a different matter. There is a learning curve associated with new computing technology. Much of the networking revolution — often called client-server technology — is close to the technological frontier and inherently risky. By definition, an occasional user will find it more difficult to install and use than anyone — even the experts — expected. By definition, many new applications will take time — perhaps years — to perfect. Do most companies have the resources to expose their managers and their workers to this risk? Would a company be better off playing it safe as a technological laggard instead of a risk-taking technological leader? Are most managers competent enough to manage their way through these risks? These are hard questions, and the answers affect everyone's work lives. A hard question, however, is not a paradox, mystery, or conundrum. It does make for good dinner party conversation nonetheless.
Improving the party line
Many users have legitimate complaints about their systems. If you and I should ever meet at a dinner party, I will happily listen to your stories about your company's computer system. There is much to learn from these stories. However, let me state this emphatically: It may be a headache. It may cause you pain. Yet, whatever is wrong with your computers, it is probably not the productivity paradox! This is not the party line. It is just a more careful use of concepts.
{Editorial note: The productivity paradox continues to get attention because it is difficult to estimate the private and societal return on investment in information technology. There are many reasons for that, so the mystery is not easily distilled in one essay. This is the beginning of an attempt.}
26 Banking on the Information Age
Alan Greenspan, chair of the United States' Federal Reserve Board, has commented on the pervasiveness of information technology and on the country's entry into a new era. This theme first appeared in a few of his dinner speeches around 1995. It found its way into policy speeches a few years later. Nowadays, you can even find it in Greenspan's congressional testimony. Your first reaction to this might be, "Hey, welcome to the information age." The IT revolution is not, after all, recent news. It doesn't become news just because Greenspan came late to the party. Actually, Greenspan's opinion indicates that something fundamental is happening here. A bit of background might help. Greenspan is a savvy, cautious thinker imbued with instincts for circumspection. He tends to deliver deliberately elliptical speeches that are suspicious of untested theories and respectful of hard facts. Common wisdom says financial markets might misconstrue any rash remark or offhand humor as a signal of a major or even a minor shift in Federal Reserve policy, so he plays it straight. All the time. Thus, it comes as something of a shock when Greenspan publicly changes his mind about something. To wit, the country's central banker has decided that IT really does matter for economic growth and for fighting inflation. Why?
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, October 1999. 152
Let me put Greenspan's view in some perspective. It is not just that IT investments have made financial markets more efficient. More precisely, he is focusing on three distinct long-term trends:
1. IT has become critical infrastructure for many facets of the modern economy.
2. Just as many great things today resulted from IT inventions that are two decades old, many great inventions today hold seeds for the future.
3. Changes to IT bring about largely unmeasured benefits to society in the form of new goods and services, and many of these benefits keep prices down.
IT as critical infrastructure
A PC is necessary, though not sufficient, for conducting high-technology activity. That said, in the US as in most of the developed world, PCs have become so abundant as to make the issues concerning critical infrastructure rather subtle. First, let's get one thing straight. The popular discussion about critical infrastructure arose from a marketing ploy. The simplest version shows up in brazen commercials from several computer companies. You see, poor Johnny has parents who don't own a PC. Johnny doesn't learn to search the Internet, doesn't learn to think properly, and gets failing grades. Now crank up the music. Here comes Cindy, whose parents invested heavily in online educational programs. Cindy's school report is a full-blown multimedia event. It includes a video stream, Web pages with professional layouts, and grammatically correct poetic prose laced with quotes from Shelley. This child is obviously headed for great things. Not so poor Johnny, the PC-deprived. The point, of course, is that sellers want consumers to think of the PC as a critical component of their children's future success. Well, maybe. Then again, maybe not. Certainly, Greenspan doesn't care about your child specifically — he worries about whole neighborhoods of children. The concern is that neighborhoods of households lacking PCs will produce workers lacking the abilities to step into high-paying jobs later in life. These neighborhoods tend to occur in inner cities and in rural areas. So the argument goes: the US will hamper its future if children in these areas lack access to basic training in using IT. Policy advocates therefore want to put PCs in schools and Internet connections in libraries to give Johnny a chance, even if his parents can't.
The validity of this argument is certainly debatable, but undoubtedly many policymakers believe it. The present federal program to subsidize Internet access in disadvantaged areas’ schools and libraries spends two billion dollars a year based partly on this premise. Greenspan evidently has another version of this concern at the top of his agenda. He can see the pervasiveness of PCs. They have changed almost every occupation from architecture to zoology. PCs have become critical infrastructure for many activities in most regions of the country. Greenspan doesn’t worry about abundant things such as hardware. He worries about scarce things such as adequate software, trained personnel, and so on. With good reason: In some regions it’s difficult to find programmers in frontier languages. In some remote cities it’s difficult to find engineers trained to service an unusual computing program. In some industries it’s difficult to find consultants who understand how to translate the latest computing technology into useful applications for that industry. Where labor is difficult to find, costs go up. Too much of that is bad for economic growth.
Technical change feeds on itself
Greenspan, for good or ill, is old enough to remember how the wave of invention associated with mainframes changed many essential features of financial markets — the markets he knows best. He must also have observed the patterns of the past two decades, when PCs transformed financial markets yet again. Watching E-trade and E-Schwab today, he might well feel a sense of deja vu. That long-term perspective informs some of the comments Greenspan makes today. Changes evolve slowly, especially in service industries. Only after the passage of time, and the gradual accumulation of many incremental improvements in processes and outputs, does dramatic change result. For a variety of reasons, experimentation and learning often can occur only within a market setting. It takes time to translate an invention into a viable commercial product: time to develop business models, create new distribution channels, let one set of users learn from another, and so on. No one adoption pattern characterizes all IT, nor are these patterns necessarily similar to important historical episodes of diffusion such as those for radio, television, or the automobile. Actually, new waves of IT invention set off new waves of IT invention by users, and each wave has its own diffusion curve of adaptation and adoption. For example, the invention of inexpensive fiber optic cable did not immediately change the capability of phone service nationwide. Performance and features changed in fits and
starts as digital switching technologies, repeaters, and software that increased fiber's capabilities were developed and adopted. Thus, new services showed up at different companies and in different regions at different times. Similarly, such important contemporary technologies as the World Wide Web and enterprise resource planning have set off entirely new waves of invention. The Web (or at least technologies arising from it) prompts a great deal of new application development. Along with TCP/IP-based technologies, whole new business models are emerging for delivering and using data-related services. Similarly, the unification of distinct systems associated with enterprise resource planning permits a new wave of control over IT and businesses. These changes are not merely the tail ends of a diffusion curve that began long ago; they represent a renewed process. Such processes differ across regions of the country and across industries within specific regions. Even a central banker can get excited about the cost conditions and economic opportunities that future users will face after the deployment of extremely inexpensive computing capabilities and low-cost, high-bandwidth fiber and wireless communications technologies. These deployments will induce (and already have induced, to some extent) the entry into this market of thousands of firms trying to solve technical and commercial problems that never previously existed.
How IT benefits society Ultimately, Greenspan cares most about changes to the price level from one year to the next. Will the IT revolution reduce inflation or not? This is the trillion-dollar question. How could anyone answer this precisely? It’s extremely difficult to find a direct relationship between investment activity yesterday and economic benefits from new technology today. Adding another computer in an office is not like adding another truck to a construction firm, nor is developing a better wide-area network like adding robotic machinery to an assembly line. Real value comes from more than just a few new dollars earned by an enterprise that adopted a WAN; it also comes from the value created by the new services consumers use relative to what they would have used had the IT investment not occurred. This value is a hard thing to measure. Rarely does the standard productivity model provide an informative answer. Moreover, investment and use differ over time and are associated with different economic goals. In other words, hardware-based measures of IT,
such as the number of computers, data communication lines, or Internet servers, presume that different types of IT capital are not strongly heterogeneous in their capabilities. This simply isn't so. The final output from organizations that use IT may also change over time. Some of these changes may generate new revenue; some may induce the entry into the market of new firms with business models using the new IT in a radical way; and some may induce exit. Thus, the key features of the new IT's final output may change radically over time. It's easiest to assess the value of these changes if the new technology simply changes the prices of existing goods. Yet, because IT is pervasive, fundamental changes in it lead to widespread and complex change throughout the economy. Further, IT can change the economy in qualitative ways not easily converted into quantitative measures like price changes. Many of these benefits keep prices down — something that really matters to the main inflation fighter in the country. These benefits show up slowly and only through indirect means. Greenspan appears to have bought into the perspective — an opinion that he only expresses cautiously, elliptically, and with many qualifiers — that the past investment in IT gave us lower prices today and may again tomorrow. That's quite an observation from the Federal Reserve chair — one that we can take all the way to the bank.
{Editorial note: It was well known by insiders that sometime in the mid 1990s Greenspan developed an interest in understanding the returns to IT investment. This interest had something to do with his policies. When the Internet boom began, he was convinced that it would lead to a large increase in productivity. So he did not take action to discourage investment in IT.}
27 Measure for Measure in the New Economy
Even with the recent restructuring in the dot-com economy, nobody doubts that new IT has improved the US economy. On that everyone agrees. But, of course, that’s the end of the agreement. It’s one thing to recognize the difference between a gain and a loss. It’s another to recognize the difference between little improvement and a big one. How much impact has IT had on an economy? Should a sophisticated observer err on the side of optimism or pessimism? Should a very sophisticated observer, somebody like Alan Greenspan, believe the numbers reported to him on a weekly and daily basis, or should he read better or worse things into them? What should an investor with a large portfolio think, such as someone at Lehman Brothers or Morgan Stanley? The answer is less obvious than it might seem. It depends on understanding what IT does to the economy and what it does to government statistics. In practice this topic isn’t as arcane as it sounds, but it’s a bit subtle.
Why IT’s improvements are hard to measure Let’s start with a simple example of how hard it is to measure the true impact of IT. Is there any statistic about how much time society saved
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, February 2001.
from the diffusion of IT in government? Nope. The basic problem is that it’s inherently impossible to figure out. IT makes one of the most onerous parts of society just a bit less onerous. That is, it makes government more accessible and less bureaucratic. Specifically, it has become much easier in the last five years to get copies of government data and copies of government forms — simply by going online. This is true at the federal, state, and local levels (though to different degrees, depending on where you look). Anybody who spends time working with governments knows about this. Is this an improvement to society? Sure it is. Accountants, taxpayers, lawyers, and permit applicants can get their tasks completed faster and with more efficiency. That’s a net gain. Lawyers will have a bit more free time to take new cases, accountants to consult, and construction supervisors to spend on site instead of at city hall. Now here is the tricky question: How much of an improvement is this? It might be big since there are many accountants, taxpayers, and lawyers out there. If each of them saves 10 hours a year, that adds up to a big savings in the aggregate economy. Then again, it might be a small improvement, particularly if less bureaucratic procedures simply lead to longer rules and no net gain in efficiency. More broadly, is more efficient government a big boon for society? Sure. These sort of changes should matter, since government activity involves such a huge fraction of GDP and employment (over 20% when education, police, and other local services are added in). But how do we know how much improvement can be attributed to IT? That’s inherently ambiguous. Now consider the broader question. Is it possible to measure the impact of IT in another part of the economy such as hospital administration or publishing? Sure, anyone who works in those sectors knows about improvements in the last five years. Is there likely to be a statistic for those sectors or any other sector of the economy soon? Don’t count on it.
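To see why the government-paperwork example "might be big," a back-of-envelope calculation helps. The sketch below uses entirely hypothetical inputs (the number of affected professionals and the value of their time are made up for illustration); only the 10-hours-a-year savings comes from the example above.

```python
# Back-of-envelope illustration of how small per-person time savings aggregate.
# All inputs are hypothetical except the 10 hours per year used in the text.
people = 1_000_000      # assumed number of accountants, lawyers, and applicants affected
hours_saved = 10        # hours saved per person per year (figure from the text)
value_per_hour = 40.0   # assumed dollar value of an hour of their time

total = people * hours_saved * value_per_hour
print(f"${total / 1e6:,.0f} million per year")   # prints: $400 million per year
```

Even modest savings per person scale into hundreds of millions of dollars a year, yet none of it shows up as measured output in any agency's statistics.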
New economic phenomena require new statistics
More generally, information technology alters what we ought to measure because it alters fundamental economic behavior and the patterns of economic life. This seems like a simple principle, but it's quite subtle to put into practice. For example, if better IT leads to faster feedback between customer and supplier, then it ought to lead to faster product cycles and upgrades. This is as true in consumer electronics as it is in women's fashion. Clearly
everyone is better off. The shelves have the right products, the retailers don't waste inventory, and the manufacturers match demands. Now then, how do we measure that gain? How much better off are we? It is hard to know with conventional statistical techniques, which are what the US government largely uses. All of this simply shows up as less material for the same amount of output, that is, a productivity improvement. This is part of the gain, to be sure, but not all of it. There's nothing in there about satisfying demand more precisely or lowering the transaction costs of coming to a decision, that is, about saving somebody time. Here's a related version of the same problem. If the diffusion of the Internet lets someone in a low-density area get a collector's item on e-Bay, is that a benefit to society? Sure it is. This transaction benefits e-Bay's bottom line, which is a gain. But it also leads someone to avoid a flea market or trade show. That might mean that a hotel that hosts trade shows made less money. There's more. If the user saved money at e-Bay (or by not getting a hotel room), then that savings also shows up somewhere else in the economy. That's great for the economy overall, though not necessarily good for the hotel business. This is what an economist would call an example of a "decline in the cost of transacting." Present statistics only partially estimate how big the savings are.
It’s easier to look where the light is brightest So that gets us back to the source of the problem, government statistics. Statisticians end up measuring economic activity simply because it’s all they can measure with confidence. So what’s wrong with that? If we don’t think about it, we end up underemphasizing many other important things happening just a bit outside the spotlight. Don’t misunderstand me. Good statisticians know about this bias, so there are many self-conscience attempts to avoid it, especially within government statistical agencies. Nonetheless, this bias often creeps into a statistic. Let me use one more example to illustrate the point, one that’s well known in the online world. Because pornography is one of the few revenuegenerating activities online, it’s tempting to conclude that this is where the Internet has made its largest contributions to economic activity. While titillating, that inference is grossly misleading, since every household survey in the last five years shows that pornography is an activity that involves less than 10% of the total time spent online.
Yet, that also shows just how hard the real statistical problem can get. How big is the non-pornographic part of the Internet? So-called free Internet activity or advertising-supported online activity accounts for over two thirds of household activity online. The important contributions of this activity to the improvement in economic welfare are largely unmeasured. That is, online technology involves a qualitative change in the way people communicate and receive information. For example, have you seen a teenager use instant messaging recently? That's not what statisticians measure. Revenue is what statisticians can measure, so this is how statisticians see it, at least for now. The household subscriptions to America Online, Mindspring/Earthlink, and other ISPs go up. The accounts for advertising services on the Internet also go up. Then there's a loss in advertising to television, radio, and magazines, whose viewership has subsequently fallen. Will any government statistician change this approach soon? Do not count on it.
There’s no typical experience Some industries are more influenced by IT than others. It’s already clear to every observer that music, news, entertainment, electronic games, and consulting services will look very different ten years from now, maybe even five years from now. But what about other activities that are essential to the economy, such as warehousing, transportation, medical services, or publishing? Those probably will change dramatically too. There’s a natural human tendency to understand the world in terms of a few typical examples, but that tendency can lead one astray if every example differs from the other. As we watch this revolution work its way through our economy, it will be difficult to keep track of the stories of how one market influences different submarkets and their interactions. What good are comparative statistics if the statisticians need to make a number of corrections and qualifications in every industry? This is the big problem. Even the best statistical agencies in the world are relatively overmatched by present events. So for the foreseeable future government agencies are under siege, trying to produce some numbers, even if they’re only partially informative. More to the point, the changes wrought by IT are often not measured using conventional approaches. That’s because most of the change associated with new IT involves qualitative change to the delivery of a market activity or changes to the features of hard-to-measure services.
This isn’t a story where the government does a bad job. On the contrary, the US statistical agencies are among the best in the world. They deserve our sympathy because their work receives so little glory. Rather, this is a story in which real events place virtually unsolvable challenges in front of the best statistical agencies in the world. Said more soberly, our government’s present numerical habits will, at best, only provide a partial answer. At worse, they may direct our attention away from what really matters.
{Editorial note: Robert Solow was known for quipping that PCs were everywhere except in government statistics. It shall soon be said that the Internet is everywhere except in government statistics.}
28 Pricing Internet Access
As a rule, economic facts are often the equivalent of a cold shower. Economics is affectionately known as the dismal science for a reason. The plain fact is that not many Internet firms were profitable at any time in the last five years. The list of unprofitable companies (and spectacular investment failures) is embarrassingly long, including Amazon, AT&T Broadband, e-Toys, Dr. Koop, and too many other dot-coms to mention. Indeed, positive profitability was so rare that we all know the names of firms that achieved it: Cisco, Yahoo, e-Bay, and AOL (if you count them as an Internet company). So I am struck by the comparative success of one general class of companies that continues to collect revenue, compete vigorously, achieve mass-market status, and not implode in spite of frequent restructuring. I refer to Internet service providers, or ISPs for short. How in the world can hundreds of these firms survive in this market over so many years, particularly in light of free alternatives, such as Netzero? What pricing mechanisms do access providers use to collect revenue? Why do these mechanisms work? More to the point, what do these mechanisms tell us about the source of sustainable economic value in Internet activity?
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, April 2001.
Market structure and pricing
First, we need a little background. ISPs differ greatly. AOL, AT&T, Mindspring/Earthlink, and many other large ISPs focused on building a large national presence, investing heavily in capital and marketing expenditures. They primarily located in urban areas and in some low-density areas. These are the major ISPs, which most people recognize and know about. At the same time, there are thousands of ISPs that ply little niches and do quite well. These often have a regional or city-specific focus and deliberately concentrate on new services, such as network development and maintenance. These approaches may go hand in hand with local marketing. The ISP market structure has a very unusual shape. AOL (along with its subsidiary, CompuServe) has signed up about half of US households that use the Internet. The remaining households tend to split between big and small providers. By any measure, the industry is extremely competitive at the national level. In urban areas, local ISPs compete with the national firms. However, in rural markets, the small ISPs largely compete with each other, if anyone at all. Hundreds of these small ISPs entered the low-density locations, which the large ISPs eschewed.
Two myths about prices
So how do most ISPs make their money? They offer subscriptions for service and charge for it. What are subscriptions like? Well, that gets us immediately to the two myths of pricing: That all contracts are $19.95 a month and all are flat rate. Let's start with the myth of $19.95. US government surveys of household ISP use find considerable variation in the prices paid for Internet access. To be sure, the most common level of expenditure is around $20 a month, but this is far less dominant than assumed in common discussion. For example, a December 1998 survey performed by the Bureau of Labor Statistics shows that about one third of all US households with Internet access report expenditure between $19 and $22 per month. Another third report expenditures under $19 with spikes around $15 and $10. The remainder report expenditures above $22, with spikes at $25 and $30, quickly tapering off to levels not exceeding $50. In other words, households pay a range of prices for Internet access, with close to 90 percent falling between $10 and $30, inclusive. More recent data also looks similar. The myth of flat rate pricing is also worth examining. Flat rate pricing emerged as the predominant default pricing contract at the
industry’s outset. Even AOL abandoned pricing by the hour, moving to flat rate pricing in 1996 and 1997. By this time, it was already regarded as the norm for dial-up service at most other ISPs. Flat rate pricing arose for three reasons. Most dial-up Internet access in the United States operates over local phone switches where per-minute use is not metered much, if at all. Hence, local ISPs don’t incur any costs from offering the user un-metered service. Next, some observers believe that it’s a hassle for the ISP to monitor data flows for each user and to administer customized billing. This is thought especially true for many small ISPs that find it costly to implement something other than user-level flat rate pricing. My own guess is that the engineering is not difficult to implement, but it is a hassle to administer, which is what stops many small firms. Still, that does not explain why large firms don’t do it. Last, many observers believe that users don’t like monitoring their own use or being metered in any way. This is especially true of families with kids, where the parents would have to police their children’s use.
Hourly limitations
ISPs could discount the $20 a month price. For example, if users were willing to monitor their own use, then hourly limitations have benefits for both user and ISP. Low-volume users agree to a monthly hourly restriction in exchange for lower prices, and higher-volume users agree to a higher limit and pay higher prices, and so on. Why does this make sense? Because low-volume users make fewer demands on modem capacity. Modem capacity provides a limit on the maximum number of phone calls at a point of presence, that is, a place where local phone calls are first routed to the Internet. Related backbone connections are another constraint on the maximum flow of data. When traffic reaches the ISP's maximum capacity, it translates into slower connection speeds, Web page downloads, and response times. Data flows correlate to the number of users but the correlation is weak. In other words, some users consume the majority of capacity and are much more expensive to serve than others. For example, the median household spends a total of 10 to 15 hours online during a month, with the upper quartile at approximately 30 hours. The skew is quite pronounced: Ten percent of users spend over 60 hours a month online and 5 percent, over 90 hours. Session length is also skewed. The median session is approximately 10 minutes with the upper quartile at approximately
30 minutes; 10 percent of users have sessions over 75 minutes; and 5 percent, over 100 minutes.
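To make the skew concrete, here is a minimal sketch that fits a lognormal distribution to the monthly-hours figures quoted above (a median of roughly 12.5 hours and an upper quartile of roughly 30 hours) and then asks what share of total hours the heaviest users consume. The lognormal form is an assumption made for illustration; only the quoted quantiles come from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
median, upper_quartile = 12.5, 30.0             # monthly hours quoted above
mu = np.log(median)                             # lognormal location parameter
sigma = (np.log(upper_quartile) - mu) / 0.6745  # 0.6745 = 75th percentile of N(0,1)

hours = rng.lognormal(mu, sigma, size=1_000_000)
print(f"90th percentile: {np.percentile(hours, 90):.0f} hours")   # roughly 65
print(f"95th percentile: {np.percentile(hours, 95):.0f} hours")   # roughly 105
top_decile = np.sort(hours)[-len(hours) // 10:]
print(f"share of all hours used by the top 10%: {top_decile.sum() / hours.sum():.0%}")
```

Under that assumption, the simulated 90th and 95th percentiles land near the figures quoted above, and the heaviest 10 percent of subscribers account for roughly half of all hours online, which is why a single flat price makes light users subsidize heavy ones.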
Contracts for use prices The traditional way of summarizing communications prices provides a way to frame the phenomenon here. The traditional view distinguishes between connection charge and use charge. In ISP markets, the connection charge is a substantial part of cost. A typical, limited, 10 to 20 hour per month contract costs approximately $10 plus change. The difference between this contract and unlimited contracts (over 100 hours) is approximately another $10, depending on service quality and the ISP. In other words, the use charge is approximately $0.12 an hour or even less (up to a maximum of $20). The typical ISP bill unifies these two expenses. So why doesn’t any ISP simply offer a variable contract with $10 per month and twelve cents an hour after that until some maximum price limit? It would be cheaper for most users. Yet, would this business model succeed? Not necessarily. It would add an administrative expense to ISP operations, since now the ISP would have to monitor use and pass those charges on to users with customized bills each month. Since most users find their telephone bills incomprehensible, it isn’t obvious that many ISP subscribers would regard this billing practice as an improvement. Moreover, the bills would get even more complicated, once typical ISP activity was added to the mix. For the last five years ISPs faced many unsettled questions about what to bundle in the standard contract and what to charge for separately. The standard contract for dial-up service tends to come bundled with a wide variety of un-priced services such as e-mail account functions, games, home page links, standard servicing, local news, and other costly custom features. Many ISPs boast about their connection speed, modem availability, and other service guarantees. These are costly to provide but are implicitly bundled in the price of basic service. Many ISPs charge separately for a variety of additional services, such as hosting on a large scale and for extensive consulting, especially for business. Many charge set up fees for basic service, although many do not. In this environment, simpler bills are better. So what does this tell us about the sources of value in ISPs? Revenue does not shower down on these providers easily. It takes hard work, clear execution, and a mechanism for getting users to give up their money. None of this should be taken for granted.
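A minimal sketch of the metered contract just described: a connection charge of about $10, a use charge of roughly $0.12 an hour, and a cap at the $20 price of an unlimited plan. The exact figures are illustrative and do not come from any real ISP's rate card.

```python
# Hypothetical two-part tariff: connection charge plus a per-hour use charge,
# capped so the bill never exceeds the price of an unlimited plan.
def monthly_bill(hours, connection=10.0, per_hour=0.12, cap=20.0):
    return min(connection + per_hour * hours, cap)

for h in (10, 30, 60, 100):
    print(f"{h:>3} hours -> ${monthly_bill(h):.2f}")
# 10 hours -> $11.20, 30 hours -> $13.60, 60 hours -> $17.20, 100 hours -> $20.00
```

For the median household at 10 to 15 hours a month, the metered bill comes to $11 or $12, well under the flat rate, and only users beyond roughly 80 hours hit the cap. The savings are real, but so is the administrative burden of metering and billing that keeps ISPs from offering the contract.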
{Editorial note: I have a long standing academic interest in understanding the behavior of ISPs and the value of the Internet. This essay was a by-product of those academic pursuits. See, for example, “Commercialization of the Internet: The Interaction of Public Policy and Private Actions,” in (eds) Adam Jaffe, Josh Lerner and Scott Stern, Innovation, Policy and the Economy, MIT Press, 2001.}
29 E-Business Infrastructure
Economic statistics usually involve something almost obvious, but not immediately apparent. Their construction often resembles that old saying about elephants sitting in trees. The elephants are invisible until someone points them out. After someone points them out, the elephants seem to be everywhere, and we wonder how we missed them before. Accordingly, recent federal government initiatives to measure electronic business throughout the United States get a mixed reaction. A skeptic would correctly ask whether these initiatives measure anything other than the obvious. A supporter also correctly responds that if society wants an accurate count of all the electronic elephants in trees, then the US government is in a good position to measure the obvious. These initiatives raise one of the oldest questions about government agencies collecting statistics: What should these agencies collect and why? Many federal agencies already collect data on different aspects of electronic commerce. These agencies include the Bureau of Labor Statistics, the National Telecommunications and Information Administration, the Bureau of Economic Analysis, and the Federal Communications Commission, just to name a few. To illustrate the issues related to statistics collection, I will focus on the US Census Bureau's E-Stats program to measure e-business. If you want to observe E-Stats in action, go to http://www.census.gov/ and click on "E-Stats."
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, November 2001.
What do they do?
It is important to understand that the US Census does more than count the entire US population every decade. As a division of the Department of Commerce, this agency also spends considerable time gathering economic data about business establishments throughout the country. This is especially true in the nondecennial years, when the staff has time for special projects, such as an economic census of business establishments. This data is of use to marketing experts, regional economists, and certain macroeconomists. When the US Census first began these economic censuses many decades ago, they primarily surveyed manufacturing establishments. As time has passed, however, the US economy has become less devoted to manufacturing and more focused on services; thus, the US Census now surveys more than just manufacturers. The E-Stats program is one of several programs tracking statistics for manufacturing and nonmanufacturing commerce. The motivation comes from a variety of sources. Some agencies, such as the Federal Reserve, need more accurate statistics for the part of the US economy behind recent productivity gains and, more recently, volatility in economic activity. Other agencies, such as the Bureau of Economic Analysis, need a better sense of what fraction of gross domestic product (GDP) is attributable to e-commerce and similar phenomena. Many public-policy programs — such as the $2 billion E-Rate program — hinge on having unbiased data that sorts between hyperbole, hypothesis, and mere speculation. For example, debates about broadband would benefit from more accurate censuses of Internet use at homes and private business establishments. These initiatives at the US Census could ground the discussion in facts, orient debate toward the right set of questions, and, at a minimum, avoid common misconceptions. In this sense, we can view these initiatives in an overall positive light. That said, this type of measurement is not easy to carry out. To illustrate, I now focus on a US Census proposal to measure e-business infrastructure. Why is this so interesting? To oversimplify this a bit, different regions of the country need to know whether they are ahead or behind economically. Economists will use this data to explore whether the more information-intensive industries underwent more or less growth compared to other industries. Economists are also curious about whether investment in e-business infrastructure helped induce more revenue in some regional industries and not in others. We cannot make such determinations without data.
What is infrastructure?
To understand some of the difficulties with measuring elephants, let's dig below the surface. Consider the measurement of software and the problems that software's attributes pose in constructing an e-business infrastructure index. Whatever way we define it, e-business infrastructure comprises a wide variety of disparate technological pieces with a wide set of uses. This makes it difficult to identify impact, trace the lines of causality from infrastructure to outcomes, and so on. One way to more easily identify these
linkages is to collect information about many circumstances in many different markets or industries. Yet, in practice this will be challenging. Infrastructure is made up of routers, computers, optical communications, and software. But how about fax machines or CT scanners? Are these part of e-business infrastructure? How about equipment used for radio and television broadcasting? What definition is appropriate, a broad or a narrow one? Here is another challenge. The general rule of thumb within most corporate information groups is that packaged software and hardware constitute a large but not overwhelming share of overall expense. Conversely, maintenance, administration, support, and software programming are the largest expenses. These assets involve a lot of human capital, complex investments, and the construction of idiosyncratic assets. Often, they have little resale value but a high service value to the owner. Unfortunately, these patterns are quite problematic for the US Census when carrying out measurements. What is the proper way to estimate the value of software? I do not think anyone has a perfect answer. While packaged software, such as MS Office, has a set market value attached to it, the market value of much contract software from consultants is less clear. What is the value of old, yet functioning software acquired a few years ago? What is the best way to benchmark the value of a program that took years to debug and refine? These seemingly simple accounting questions will have enormous consequences for any estimate of e-business infrastructure. This is an art at best and not yet a science. In addition, there is tremendous variance in ownership structures for IT services delivery. Normally, this would not cause data collectors problems because there are ways to correct for rental/ownership differences across firms and industries. However, many firms outsource only parts of their business computing and communications. Many firms do not have clean boundaries between assets owned by the firm, rented from others but kept on the premises, and effectively rented from others who provide end services. Data back up; hosting; and many routine data functions in payroll, benefits, insurance claims, and other administrative tasks fall into this category. Some firms own these tasks, and some outsource them. The same work can occur in-house or off-site, but in each case, there is a different asset owner and a different work location. This trend is already significant, at least as measured by the many firms supplying outsourcing. The most optimistic forecasters predict that outsourcing will spread into many other facets of IT applications in the form of application service providers. So the US Census has to figure out what part of corporate IT goes to a firm such as EDS or Perot Systems, and what part stays in-house.
In other words, a census cannot attribute a specific amount of e-business infrastructure to a specific industry without also accounting for some fraction that is missing because the service is rented from an upstream supplier. Because IT outsourcing is a huge industry, the companies in it are constantly changing their organization, merging, disbanding, and recombining in new variations. In such an environment, nobody wants an e-business infrastructure statistic that is sensitive to arbitrary decisions about vertical integration or disintegration.
What should be left to markets?
IT markets differ from other infrastructure in one other way: A lot of commercial companies are already tracking considerable information. For example, a trend among commercial data-collecting firms is to learn much about Internet users, sometimes by going directly to the Internet to learn about conduct. More to the point, there are now commercially available data sets about whether households are online, whether businesses have access to Internet services, how much time users spend online at work and at home, their surfing habits at advertising-supported sites, their spending habits, etc. These data come from such firms as Jupiter, Forrester, Media-Matrix, Plurimus, ComScore, Harte Hanks, IDG, DataPro, PNR and more firms than I can recall. Most of these data are tailored to the needs of marketing departments and some of it is experimental, but that does not make them useless for every government purpose. Said another way, if there are many commercial providers of information about electronic commerce, what should a government agency spend its time on? The short answer goes something like this: governments ought not to collect data about things which commercial firms can track just as well. On the other hand, there are plenty of things — such as GDP measurement and unemployment statistics — that matter to society at large, but commercial firms do not compile. Relatedly, the US Census has a comparative advantage in being comprehensive — covering every establishment in every location. Does this project play to the comparative advantage of the Census? In other words, could e-business infrastructure be measured using data from private firms? Do we need a government program to do this? These are open questions and still being debated today. In the meantime, some very well-meaning government employees are trying to figure out whether it is feasible to measure e-business
infrastructure. So now they are facing a hundred-billion-dollar problem with quite a few ten-billion-dollar questions. Most of all, they do not want to make a big white elephant.
{Editorial note: I sit on an advisory panel for the Center for Economic Studies at the US Census. This column was motivated by a presentation about the E-Stats program. I should add that I find E-Stats useful in my own academic work for its concise presentation of facts about e-commerce. This essay should not be construed as criticism.}
30 The Price is Not Right
One of the poetic signposts of my misspent youth had a chorus that began “The revolution will not be televised.” This song described the 45 different ways in which media firms and government agencies would fail to record a social revolution. It was an ironic and jaded poem, filled with attitude and rhythm. Never did I imagine it would come true. But today I see that it has in a way the poet never expected: It has happened to the Internet revolution. It is not being recorded. Taped to a filing cabinet in my office is a copy of the official price index for Internet access in the US. I show it to my students and tell them that they should pity the government economists who compile it each month. The Internet access industry is roughly a $10 billion industry. The price index includes errors that produce an even greater mismeasurement of gross domestic product (GDP). More to the point, the numbers are completely accurate and, simultaneously, almost unrelated to the revolution that virtually every online surfer has experienced. I reproduce the index below in Table 1. Take a look. The index has hardly declined (from its normalized level of 100) since the US Bureau of Labor Statistics first started it in 1997.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, September 2002.
Table 1. US Internet access price index.

Year    Index
1997    100
1998    102.5
1999     92.2
2000     96.4
2001     98.1
Here, I explain the errors in this index, which will involve unfettered criticism. It also will be a bit unfair because the cures are hard to devise. However, the explanation yields a general lesson: Price statisticians talk to firms instead of users. Firms talk about changes in their own service, while users talk about changes in their own use. The difference is enormous during revolutions. Hence, a price index is the last place to find a revolution.
Counting accurately but wisely?
To be fair, the official price index is accurate in a narrow sense of the word. A huge weight goes to America Online's prices. Even though everyone else is dirt cheap, an official consumer price index cannot decline much as long as AOL's market share is so large. AOL's prices have not declined much in quite a few years. They actually rose last year, which is why the index rises too. To put it bluntly, even with this weighting scheme, this approach is just wrongheaded. The official index presumes that the quality of the service has not changed. Even though I am no fan of AOL, I am the first to concede that even AOL has not stood still. Over the years, its software has become easier to use, faster to access, better at sorting mail, and more reliable in terms of staying online. You can do all sorts of things with instant messaging, streaming music, digital pictures, and so on. AOL has also reduced the probability of busy signals when using dial-up access. Yet none of these improvements play any role in the official price index. Moreover, every other Internet service provider (ISP) has also gotten better over time and in much the same dimensions. Many ISPs charge nothing extra for these added benefits, whether it is Joe Schmo's ISP or EarthLink. Again, these changes play no role in the index. In other words, qualitative improvement goes unrecorded.
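A minimal sketch of the arithmetic, using hypothetical market shares and prices rather than the official figures: a price index weighted by market share barely moves when the firm holding roughly half the market keeps its price flat or raises it, no matter how aggressively smaller rivals cut theirs.

```python
# Illustrative share-weighted price index; all shares and prices are made up.
shares = {"AOL": 0.5, "other large ISPs": 0.3, "small local ISPs": 0.2}
prices_1997 = {"AOL": 21.95, "other large ISPs": 19.95, "small local ISPs": 18.00}
prices_2001 = {"AOL": 23.90, "other large ISPs": 17.95, "small local ISPs": 14.00}

def index(prices, base):
    # weighted average of price relatives, normalized to 100 in the base year
    return 100 * sum(shares[k] * prices[k] / base[k] for k in shares)

print(round(index(prices_1997, prices_1997), 1))   # 100.0 by construction
print(round(index(prices_2001, prices_1997), 1))   # about 97, despite double-digit cuts elsewhere
```

And nothing in this calculation credits any provider for service that got better, so the measured index sits nearly still while the user experience improves.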
This insight extends in all sorts of directions. What do you do with Internet access? You surf. How many Web pages could you reach in 1997? How about today? How much easier is it to use today’s portals compared to those in 1997? All that improvement is not part of the price index, though it is a part of every user’s experience. Let me illustrate the principle with my favorite example. Like many of my friends, I cannot imagine how I survived before Google. While writing this article, I went to Google, and in seconds, it informed me that Gil Scott-Heron wrote the poetic signpost of my youth in the early 1970s. Moreover, he was born the first of April, 1949. His poetry was aimed at angry inner-city blacks, not cozily suburban, white youths like me. Scott-Heron is alive today, lecturing rappers about how to take responsibility for their lyrics. I also learned that Scott-Heron’s chorus had been used by scores of other authors. It had descended into common use. So I concluded that I could keep his phrase in this column, even though it was contextually inappropriate. This was informative. I love Google. It did not even exist in 1997. Today I pay nothing extra for it; I would if I had to. That is the point. Officials should ask what sort of price decline is equivalent to the improvement in user experience associated with the increased capabilities. Officials do not ask that question. Here is another example: Officialdom ignores changes to advertising, a serious point. Most people find advertising annoying. Almost everyone I know cannot stand Web sites that induce another browser window to display the latest in digital cameras. For many Web users, last year’s drop in online advertising improved the Internet experience. Indeed, many people actually face the choice between paying for something and having it free with advertising; many do pay and pass up the freebie. Some actually take the freebie, too. The price index should account for both types of users. In other words, the right question is: What sort of price decline is equivalent to the improvement in life from fewer ads? That is how much prices should adjust. There is also a dark side to this example. Ads involve a transaction between the firm that is advertising and the Web site that carries the ad. This transaction represents an honest-to-goodness, accurate contribution to expenditure, which is a part of GDP. In other words, according to official statistics, the decline in online advertising last year represented a decline in GDP. Any change in your online experience, however, remains unmeasured in the Internet price index. So officialdom records the event’s bad portion, but ignores the good.
Doing new things with speed
If we know one thing about user experience, we know that users value speed. Yet, the official index also does not entirely account for what happened to speed. The key word here is entirely. To be fair, statisticians get some things right. They do maintain separate accounts for the prices of cable, digital subscriber line (DSL), and dial-up access, aggregating them properly into the official index. They deserve credit for doing this. But that does not completely resolve all open issues. Popular sites are much better at caching their material, so Web pages refresh faster for users. The backbone capacity across the country has also grown large, so that your e-mail packet is less likely to bottleneck at a public exchange point. Once again, qualitative improvement goes unrecorded. Moreover, five years ago, most city dwellers had no broadband option. Today, most have access to at least one high-speed possibility, either cable or DSL. Seven to eight million take advantage of that now and more will in the future. Increasing availability is the same as lowering price from an extremely high level to a much lower one. The price index should record a decline in price, but it does not. Once again, recorded prices do not decline as much as the user experience improves. By the way, since the official index starts in 1997, it also misses an earlier revolution, associated with the availability of commercial dial-up access. That event went entirely unrecorded.
The list goes on and on
The list of unrecorded improvement goes on and on. Here is another subtle one: The official index does not account for the availability of better contracting practices. Today, online transactions are easier, more reliable, and less uncertain. As an example, e-Bay is simply better than it used to be at weeding out cheaters and shills, which makes its guarantees more reliable. As another example, many ISPs no longer must limit so-called unlimited users to 100 hours a month by automatically logging them off to save on capacity. There is a sense in which the fine print on the ISP contract is less relevant to today's user experience. All this improvement goes unrecorded in official indices. This observation has its dark side, too: The fine print in contracts can also be a negative. For example, it's hard to lose that creeping feeling that online privacy has eroded in the past few years, because a few Web sites
are becoming more cavalier in handling private user data. I would happily pay to fix this problem. Other new capabilities go unrecorded for mundane reasons. For example, many users can buy additional services from their ISP. It's now cheaper to get extra email accounts, additional space for saving pictures, and other optional services. Similarly, someone other than the ISP might offer complementary services. In these cases, the price agency does not make any effort to collect data. Of course, to the user, such a distinction between suppliers is meaningless. Have you tried to get some hosting services recently? Prices have fallen; this stuff is cheap, and it's much better than it used to be. It improves the online experience. These examples raise a question. Perhaps a price index for Internet access should not include changes to hosting, better contracting practices, shifts in privacy norms, and improvements in broadband availability. Such a view is fine with me so long as this data is part of a price index somewhere else. But it is not anywhere else either. It's just unrecorded.
Changing communities
Of all unrecorded improvements, one beats them all: The official index also does not account for the changing composition of the online-surfer community. This oversight is a big deal. The composition of the user community influences each user's online experience. For example, though I am a longtime online user, the growth of the mass market dramatically improved the quality of my experience. Specifically, my welfare improved three years ago when my parents, aunts, and uncles finally got online. Sending them online baby pictures is a real thrill. To be fair, this one has a dark side, too. I know a lot of technically adept users who lament the growth of the mass-market Internet in the late 1990s, because the massive growth interrupted their intimate and cozy Internet world. For some, this permanently interrupted and destroyed something valuable. Moreover, this dark side seems to have the potential for endless growth. Many online users would be happy to be rid of spammers, for example. I, for one, would gladly pay somebody to rid my in-box of unwanted advertisements for sexual enhancements and low-cost office supplies, as well as emails from Nigeria asking for my bank account numbers.
Still, I bet the online community has grown in ways that improve life for most users. As in most communications networks, the value of communicating rises as the number of participants increases. Once again, this is completely unmeasured by the official index.
Epilogue
None of these observations is a secret. Government statisticians know that they err by talking exclusively to firms instead of users. More to the point, they know that they are not recording the revolution; they just do not know how to fix it cheaply. Sadly, nobody else knows, either. Isn't there something inherently ironic here? Shouldn't improvements in information technology make it easier to record those improvements? The answer is no if officialdom asks the wrong question.
{Editorial note: This essay was motivated by a longer study I did for the Bureau of Economic Analysis about the construction of the official US price indices for Internet access. I really have great sympathy for the problems associated with constructing price indices, though I also wish it were done better in an industry I cover so closely.}
Part VI
Enterprise Computing
31 Client-Server Demand and Legacy Systems
The most important computer users in the US today are business organizations. Virtually every business enterprise buys something from the computing market. Banking and finance are the most computing-intensive sectors, though many sectors of manufacturing, wholesaling and retailing, transportation, and other services are now not far behind in computer intensity. Information systems underlie most efficiency gains, new products and services, and other startling economic improvements in these organizations. For my money, it is important to understand the demand for all computing platforms: mainframes, minis, and micros. While mainframes and minicomputers have declined over the last decade, they are not dead. To be sure, personal computers are likely to be the most important sector in the future, but the early systems are not disappearing as fast as everyone thought. This column considers the competition between old and new. Think of "old" as a generic mainframe, like an IBM 4300 or 3900 or any centrally managed large system, and "new" as client-servers. Though no two engineers agree on this definition, think of client-servers as networks of microprocessor-based workstations or personal computers hooked up to more than a common printer. If the server is a centrally managed mainframe routing e-mail, it does not count.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, October 1995.
Some years ago, Stanford economist Tim Bresnahan and I began studying the disappearance of users of large systems. We obtained surveys of tens of thousands of computing sites from Computer Intelligence Infocorp, an information provider in this industry. We pored over thousands of records of purchases and buyer behavior. We wanted to identify who got rid of their systems. Why some users and not others? What did this say about the nature of competition between old and new? Our study had some general lessons and some lessons specific to this case. The general lessons are interesting because they will likely arise again in other competitive episodes in the computing market. The specific lessons are interesting because they make good stories (and many people’s livelihoods ride on the outcome).
Enterprise computing and operations The overriding general observation is this: When an organization makes a major computing equipment acquisition, it puts many of its routines at risk, potentially revisiting an enterprise’s core strategy for structuring operations. Such issues do not resolve themselves easily if management, staff, and the user base do not share a common view of computing priorities. The details are complex because the technology choices are complex. Effective use of modern computing equipment involves communication equipment and networking technologies, old and new software, and large doses of human intervention. The networks often involve private and public communication lines, private and public switches. On the human side, effective use of computing technology means countless hours of training, learning, and maintenance by staff, and frequent restructuring of important and minor routines by programmers. On the software side, this means producing application-specific and organization-specific programming to refine and retrofit old software. The management side is also complex. Changing equipment potentially alters an organization’s operations, its staffing, and its final product. It has consequences for many potential expenses far beyond the actual purchase of the equipment. Though generalizations are incautious, for most computing, the yearly expenses associated with managing staff, as well as employing software programmers and support help, usually exceed the hardware expenses. Finally, different management patterns, which are associated with different applications and outputs, tend to incur very different magnitudes of adjustment costs. For example, there are large differences in the costs of
changing systems for engineering-based applications, batch-oriented back-office functions, or essential real-time applications.
Some specific lessons Now for some lessons particular to this case. Here is what we learned in our surveys: In the early 1990s most large computing users stayed with their large systems, surprisingly unwilling to move to a new client-server platform. This seems to make little sense at first blush because the new systems ostensibly had higher technical benefits across a wide set of uses. The key to understanding this puzzle was to have the right view of both the costs and benefits of client-server technology. Many users knew about the potential benefits of client-servers, but abandoning large systems came at larger internal adjustment costs than technologists and engineers anticipated or cared to admit in the trade press. “Internal” is the important point. No market exists for costs incurred. Resolving problems instead depended on organizational incentives for inducing employees to bear those (often hidden) expenses. We found that the only types of users who initially moved to client-servers were engineers and academics (including most of our friends and colleagues). They tend to have decentralized organizations, and lower, more dispersed internal adjustment costs, but make up only a small fraction of total demand. Many big-system users either did not change or took a more cautious attitude to investing in their computing stock. This helps explain why the aggregate mainframe market demand shifted downward, but much less than predicted by those in the majority of the trade press who predicted (and are still predicting) a revolution in computing technology. We further investigated whether a user’s ties to a manufacturer made a user more resistant to new technology. While stories on this issue abound, there is little systematic statistical investigation of it. One can easily see why this is an important question, since a single firm provides roughly two thirds (by the number of boxes) of all large general-purpose systems in the US. To put one version of this hypothesis bluntly, IBM has few proprietary rights in the most widely used client-server solutions. So, is IBM holding up the diffusion of the technology to customers with whom they have the closest ties? This issue, even in its blunt form, is difficult to pose in practice because the market structure for complete computer systems is so complicated. Hardware vendors provide software services and maintenance; some even provide customized services. A large third-party software industry for large systems also exists; some of it is available on multiple
platforms and some is not. Many users program their own system tools, but buy packaged application software from one vendor or another, and on, and on. The main point here is that the real world was simply too complicated for the most sweeping conspiracy to possibly be true; IBM could not be managing so many different complex situations. To construct the hypothesis at a reasonable level, we had to work hard and understand at a significant level of institutional detail how this market operated. Anyway, the overriding finding is that no matter how you cut it, IBM took a big hit to sales from traditional customers. Now that is not news, I admit. More to the point, ties to vendors do not matter as much as internal adjustment costs. The one possible exception to this finding occurs among large-system users who buy IBM-proprietary communication technology, often hardware and software products that are complementary to large databases and a large user base. These buyers tended to resist abandoning mainframes for client-servers. However, it was not obvious that the tie to a vendor was as essential as the scale of computing activity, which made for enormous adjustment costs. Also, the new client-server technology had difficulty satisfying these particular types of users’ needs.
Competition between old and new In general, what did we see here that we are likely to see again? Computer users see constant competition between old and new technology because the appearance of a new technology offering lower costs or superior capability rarely leads to instant replacement of the old technology. After all, some users may be reluctant to retire computers that continue to offer a flow of useful services, even if technical change apparently depreciates the market value of those services. In addition, sellers of the old technology may find their competitive circumstances changed, but react quite naturally with new pricing or new technology strategies. Buyers may also delay purchasing the new technology until anticipated price/performance improvements appear. The larger point is that the pace of new technology adoption and retirement of the old depends on all the factors that shape the competition between old and new. While it may sound didactic for an economist to emphasize the importance of market processes, my experience has been that these factors arise in many different facets of demand research. Most of these processes are out of a buyer’s and user’s control, but they influence the costs and benefits of different purchase decisions — even the competition between mainframes and client-servers and, in all likelihood, future events.
{Editorial note: This is a synopsis of work that was eventually published in 1996 as "The Competitive Crash in Large Scale Commercial Computing," in Ralph Landau, Timothy Taylor, and Gavin Wright (eds), The Mosaic of Economic Growth, pp. 357–397, Stanford University Press: Stanford, CA.}
32 Upgrading, Catching up and Shooting for Par
My father is not starry-eyed about technology. He does not have time for it. He would prefer to run his business or play golf. He buys computers when they are cheap, simple to use, and obviously better than something he already owns. His golfing partners seem to be about the same. My father and his golf partners are not odd people. They are like most business executives. They are much less enthusiastic about PCs than you, me, or anyone else who builds and sells computers. In general, it is not odd for buyer and seller to have different perspectives. Indeed, ever since PCs migrated from the hobbyist community into the business community, such a difference has existed in the PC market. Most PC engineers, who spend the bulk of their time designing best-practice technology, have few friends like my father's golfing buddies. This is not an editorial on the social circles of PC engineers. Instead, it is something simple and somewhat obvious to most observers of this market: Buyers and sellers are not playing on the same course. Why does this matter? Because most PC engineers I know do not talk to their biggest customers: business managers. As a result, most PC engineers do not understand how their biggest customers think — on the most basic level. Most PC engineers do not have my father's computer problems (nor do they want them). These problems illustrate a more general economic
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, June 1996.
phenomenon that goes to the heart of how computer technology spreads in the US economy.
Upgrading grudgingly
Several months ago, my father and I began discussing the computers at his office, where he is the managing partner. My father wanted to know whether the firm should adopt Windows. My father did not really care which version. He just wanted to know if he should junk all his DOS programs once and for all, retrain his staff, and purchase new software and hardware to run it. Was it worth his trouble to change now, or should he wait? My father was very clear-headed in his reasoning. Change was inevitable, but not essential at the moment. His customers are not worried about whether he uses DOS or Windows, as long as the office delivers the services. Moreover, the firm is not a technical backwater, so there was no technical constraint on upgrading. A few years ago, for example, the introduction of laptops revolutionized the firm's field work, allowing employees to do much more at customers' premises. The office also uses a moderately sophisticated LAN. None of this requires Windows, but would be enhanced by it. If one spends enough time with the average business manager, this type of story comes up repeatedly. The underlying economic issue is not unique to our time, to this product, or to my father. This year it was Windows, last year it was notebooks, next year it will be something else. The underlying problem is always about the timing of upgrades. Just below the surface is a puzzle that engineers often overlook because they spend so much time with best-practice technology. If most executives are not enthusiastic about technology, and most PC customers are executives, how does new technology ever find its way into America's offices? Think about it. When something new comes along in the PC industry (about every five minutes), business managers are caught between two impulses. On the one hand, a manager's day is busy, filled with appointments, deadlines, and tee times. Who wants to take the time to change a computer system if it is not required? At the same time, most office managers hate to fall behind. They feel compelled to catch up to the vanguard. It does not really matter why: Perhaps the marketplace is very competitive, and new technology is necessary for survival. Perhaps customers demand new services associated with new technology. Perhaps it is a payoff to employees who like the latest fancy toys.
Eventually the impulse to catch up overwhelms the reluctance to invest — not all at once, but piecemeal. Today it’s an operating system, tomorrow it’s printers, the next day it’s something else. This process putters along, bringing steady, relentless, and technically uneven improvement.
Best versus average practice Students of technology use the terms “best practice” and “average practice” to describe a whole market going through this process. Best practice refers to the best available technology for the lowest price at some point in time. Average practice refers to the typical technology in use by the average user. The first focuses on what is technically possible; the second on what is typically in use. Average practice is not as exciting as best practice, and it is not as technically adventurous. Not surprisingly, most computer engineers would rather spend their day thinking about best practice. It is their job, after all. Yet, most executives spend their day worrying about average practice. Today’s purchases are only a small part of that whole stock of equipment. Consider average practice in the PC market. While it is not hard to believe that most users keep up with new technology to some degree, it is hard to answer the question, what is the typical user’s experience? Only a small percentage of all users own only new equipment. New users buy new stuff almost by definition, but most old users upgrade at different rates. The least one can say is that everyone eventually upgrades and throws away old equipment. For example, virtually every experienced user I know has a closet full of old parts, obsolete systems, or useless old software. The more precise question to ask (and the harder one to answer) is, how far does average practice fall below best practice for most users? If users replace their systems often, average practice is not far below best practice. If users are basically satisfied with very old designs, they won’t buy new ones quickly. In that case, only very few users, if any, will put best practice into general use. Does anyone have any idea how big the gap is today? With millions of PCs, workstations, and software programs, it is hard to measure or observe average practice across the United States. Occasionally small surveys hint at the gap between best and average practice, but it is hard to pin down such a complex moving target. My bet is that average practice is only a few years behind best practice and has generally stayed very close in the last decade. This is based on one unscientific hunch and one scientific observation.
Here's the hunch: My father's story is not too unusual, and at worst, he is not far behind the average. Most of the offices I visit in my field work look technically similar to his, give or take a couple years of technology. Now here's the scientific observation: It is possible to compare PCs to a similar technology in an earlier age. Several years ago I documented average- and best-practice computing in the US during the mainframe era, roughly 1968 to 1983. While use varied somewhat, the typical mainframe computer was between five and seven years older than best practice. While we could argue about the main causes, this much was true: Most users tended to own old equipment for only a few years. Some stayed very close to and others stayed far away from best practice. On average, however, they were never far behind.
Narrowing the gap … a little Why does this tell us something about the modern era? Because PCs are a lot easier to replace than mainframes (and a lot cheaper). Probably most PC users have narrowed that five- to seven-year gap; possibly they are no more than two to three years behind best practice. There is a fundamental observation here about how technology spreads. Technology diffuses to business whenever a product appeals to the majority of customers — executives who are not starry-eyed about technology. Many good products missed their mark because the designers forgot that their users did not want the best practice, but merely something better than what they had. Put simply, the products that fill the gap between best- and average practice technology bring the biggest advances to the mass of users. These are the applications that businesses decide to buy and adapt to their existing systems. Many well-intentioned (but misguided) engineers have encouraged their companies to design new, technically sophisticated products without worrying about who would use them or how. By the way, my father’s office decided to switch to Windows sometime this spring. However, nobody wanted all the (anticipated) difficulties to interfere with the firm’s ability to meet spring deadlines, so the switch was put off for a few months. It will probably have taken place by the time this column is published. From what I can tell, this seems like a sound and rational business decision. This is not an advertisement for my father’s business; it is just an example of how business decisions lead to changes in average practice in computing today. Progress is cautious, incremental, but inevitable. That’s about par for the course.
{Editorial note: The academic paper referred to above is “Did Computers Diffuse Quickly? Best versus Average Practice in Mainframe Computers, 1968–1983.” Working Paper No. 4647, National Bureau of Economic Research, Cambridge, MA. February 1994.}
33 How Co-Invention Shapes our Market
According to a popular view of technology, suppliers and buyers play distinct roles in sequence. Vendors first invent, sell, and improve. Buyers then adopt, upgrade, and retrofit. The popular view is wrong. Users do not take such a passive and narrow role, nor are vendors the only inventors. Users actually do much of the inventing in the computer market. They invent for a simple reason. Rarely is a new technology useful immediately after purchase. It needs customizing, reshaping, and improving. Users change things after trying them out for a while. Users do a lot of "co-inventing," as colleague Tim Bresnahan and I have termed it. Co-invention makes technology useful. It is impossible to understand the computing market without appreciating why and how users co-invent. The costs of co-inventive activity can be substantial and can easily swamp the initial outlay for the technology. More to the point, Bresnahan and I believe that co-invention is the key to understanding the diffusion of client-server systems. In particular, we examined this diffusion to large-scale users. Why did some buyers act boldly and others cautiously? Why did some experiment early and others late? Our answer is all wrapped up with co-invention. We studied establishments from 1988 to 1994. We started with 1988 because there was no real commercial action in large-scale networking of
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, February 1997.
PCs before 1989. We stopped with 1994 because we started this project in early 1995. (It took more than a year to analyze and organize the data on over 12,000 establishments.) To save on space, I’ve summarized our main findings here.
Large-scale computing Why did we study large-scale computer users? First, this was an interesting and complex place to examine the diffusion of client-server computing. These applications tend to be supported by professional staff and large supplier networks. If we could analyze the behavior here, we could take the same insights to simpler situations. Second, marketing folk know a lot about centrally managed facilities. They know the names of their CIOs or DP manager, and even what software they use. One of these marketing firms, Computer Intelligence Infocorp, provided us with very good information for as far back as 1984. We can never do empirical research without good data. Third, centralized computing facilities came into existence decades ago when prices were higher and functionality was lower. These places first developed some extremely important applications, for example, on-line transactions processing, financial modeling, automatic payroll and check writing. Even if some of these applications are not on the cutting edge of technology today, all of them are still important to the economy and especially to the companies that run them.
Patterns of behavior Figure 1 contains our principal observations. We tracked the type of largescale computing that a fixed group of establishments were using over time. Based on the type of computing they were using in 1994, we classified them into four categories: bold, curious, cautious, and gone. The bold establishments had done two things. They had experimented with client-server systems quite early. Then sometime later these users retired their mainframes — mostly by 1994. The curious were not as quick to experiment, but did not resist client servers for long. Most of these establishments began to experiment sometime in the early 1990s. However, by 1994 very few of them had retired their mainframes. The cautious did not experiment, nor did they even consider retiring their mainframes. As of 1994, these users intended to stay with mainframes, upgrade them as they always had, and so on.
Figure 1. Type of computing used by the establishments in the survey group.
The gone were a more mysterious group. They stopped answering the surveys. Nobody is certain why, but it is easy to speculate. Some of them belonged to firms that went bankrupt. Some closed the computing establishment and moved it elsewhere. Some simply got tired of the surveys.
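For readers who want the grouping spelled out as a rule, the sketch below shows one way it could be coded. It is a hypothetical reconstruction under stated assumptions: the field names, the per-year record layout, and the 1990 cutoff for "early" experimentation are inventions for illustration, not the actual Computer Intelligence Infocorp data or the code used in the study.

```python
# Hypothetical reconstruction of the four-way grouping; the field names,
# record layout, and the 1990 "early experimenter" cutoff are assumptions.
def classify_establishment(history):
    """history maps survey year -> {"mainframe": bool, "client_server": bool}."""
    if 1994 not in history:
        return "gone"        # stopped answering the survey
    cs_years = [year for year, record in history.items() if record.get("client_server")]
    if not cs_years:
        return "cautious"    # never experimented with client-server systems
    kept_mainframe = history[1994].get("mainframe", False)
    if min(cs_years) <= 1990 and not kept_mainframe:
        return "bold"        # experimented early and retired the mainframe by 1994
    return "curious"         # experimented later, still running a mainframe in 1994

# Toy example: experimented in 1991, still on a mainframe in 1994.
example = {
    1988: {"mainframe": True, "client_server": False},
    1991: {"mainframe": True, "client_server": True},
    1994: {"mainframe": True, "client_server": True},
}
print(classify_establishment(example))  # -> curious
```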
Why co-invention matters The bold had one thing in common: very low co-invention expenses. Their computing organizations tended to be simple. That is not the same as saying their computing needs were simple. It means that their computing tended to take place in small groups and to be nonsynchronous (for example, unscheduled). These buyers predominantly used computers for simulation and numerically intensive tasks, not complex data warehousing or other communication-intensive tasks. Most of their computing did not require simultaneous and extensive coordination of hundreds of employees. In addition, the bold were usually scientists and engineers. Despite many idiosyncrasies in their needs, the bold did their own co-inventing and relied less on market solutions than did the typical commercial user. Also, these users were accustomed to boldly going where no one had gone before — in this case, to client-servers before everyone else. The curious tended to be a more heterogeneous group, unified mostly by their willingness to try a piece of the client-server pie but not swallow
the whole. They had more complicated computing organizations than the bold and fewer scientific applications. Instead, the curious had some mix of back-office accounting and on-line information delivery. They had a few applications that resisted the new platform and a few that did not. They usually had a few idiosyncratic applications — either written by in-house staff or by a small third-party mainframe software vendor. The bottom line: Not all applications could be recreated on a new platform. The curious eventually made some effort to benefit from client-servers. They experimented, started small, and tried to grow. However, as of 1994 the co-invention costs of customizing all their computing on the new platform made an entire switch away from mainframes prohibitive. Virtually all of the cautious looked alike, and yet they differed greatly. Their computing facilities ran idiosyncratic and extremely valuable on-line applications (such as reservation systems). The computing applications were tied closely to the functioning of the organization (such as banking services). The computing coordinated hundreds of employees' actions. At the cautious establishments, co-invention costs of even the simplest experiment on client-servers were very high. These users could not change to client-servers even if they really wanted to. Most did not bother to experiment, for what would be the point? All in all, three factors drive up co-invention costs: the complexity of the computing, idiosyncrasy of computing demands, and thinness of vendor markets for software tools. The first two are features of buyers that change very slowly. The last one has gotten much better over time.
Some implications
Our analysis provides important clues as to why the experiences of the bold offered only limited lessons for the curious and the cautious. Engineers and scientists tended to be among the bold, while commercial users tended to be among the curious and cautious. Is it any wonder that vendors, who had early success with the bold, completely underestimated the difficulty of the transition to client-servers with the commercial users? The applications are different, and organizations must change more dramatically to take advantage of client-server functionality. Co-invention also explains why the appropriate business model changed in client-servers in the last few years. In the early years, the sales were predominantly from computer engineer to computer engineer. Many engineering firms and software start-ups thrived in this situation.
In more recent times, commercial users dominate. Commercial users prefer the reassuring handshake of a salesman in a suit and the reliability of a proven company. This plays to the comparative strengths of firms that integrate the co-invention of the past with the needs of commercial users. Firms such as IBM and Andersen Consulting could get into this game late and still do well. The future of this market now depends on whether some of the early high-flyers (SAP, Cisco Systems, Oracle, or Sun, to name just a few of the many companies that seem to be making the transition) will adjust to a new type of customer or competitor. Finally, this leaves me optimistic about the future prospects for client-server computing. The market for software tools has been growing, and at a pace that makes it hard to track. The consulting market is thriving and still growing. Lessons are being shared across enterprises. A few more key software tools, protocols, and design doctrines could further reduce the costs of customizing client-servers to user needs. The next time you look at this market, remember the role of co-invention. Vendors invent and sell. Buyers co-invent and customize. The vendors and technologies that succeed are the ones that reduce a buyer's co-invention expenses.
{Editorial note: This is a synopsis of work that was eventually published in January 1997. It is titled "Technical Progress and Co-Invention in Computing and in the Use of Computers." It appears in the Brookings Papers on Economic Activity: Microeconomics, pp. 1–78.}
34 Which Industries Use the Internet?
Advances in frontier technology are only the first step in the creation of economic progress. The next step involves use and adoption by economic agents. Adoption by users usually takes time, as more inventions and resources are typically necessary before economic gains in public welfare are realized. This principle applies with particular saliency to the Internet, a malleable technology whose form is not fixed by location. To create value, the Internet must be embedded in investments at firms and households that employ a suite of communication technologies, TCP/IP protocols, and standards for networking between computers. Often organizational processes also must change. The Internet will have a greater impact if and when it diffuses widely to commercial firms. This is particularly true because commercial firms make the majority of investment in Internet infrastructure, and at a scale of investment reaching tens of billions of dollars. With that as motivation, two colleagues — Chris Forman and Avi Goldfarb — and I sat down one day for what turned out to be a lengthy conversation. We discussed learning about industries that lead and lag in Internet use. This was important to each of us because we regularly analyze Internet industries and teach in business schools (Forman at Carnegie Mellon University and Goldfarb at the University of Toronto). While every industry association tracks its own members, and many trade magazines report on frontier users, we had grown tired of being unable
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, November 2002.
to compare across reports. Nobody had constructed a census of all industries that allowed for comparisons between users. We decided to construct such a census for ourselves and find out who really leads and who lags.
Definitions
To be precise, we constructed a census on adoption, the most common yardstick for measuring a new technology's use. We examined data on establishment use. An establishment is a single point of contact, usually using a single mailing address. Most large firms have multiple establishments, in which case they appear in our study multiple times. We analyzed the dispersion of Internet use in two distinct layers, participation and enhancement. Participation involves basic communication, such as e-mail use, browsing, and passive document sharing. It also represents our measure of the basic Internet investment required to conduct business. This was easy to measure. Next, we tried to measure investment in and adoption of Internet technology for enhancing sophisticated business computing processes. In spirit, enhancement uses Internet technologies to change existing internal operations or to implement new services. It is often carried out for competitive advantage. We considered several different measures of enhancement. We tried to identify enhancement from substantial investments in electronic commerce, or e-business, applications. In the end, we settled on a very conservative measure. The true rate of adoption is likely to be higher, but qualitatively similar to our ranking. We looked for commitment to two or more of the following projects: Internet-based enterprise resource planning or TCP/IP-based applications in customer service, education, extranet, publications, purchasing, or technical support. (See our appendix if you are curious.) Employing the Harte Hanks Market Intelligence Survey, we examined 86,879 commercial establishments with 100 or more employees at the end of 2000. Using routine statistical methods, we projected results to all establishments with more than 100 employees. This covers the workplaces for two-thirds of the US labor force at that time.
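To make the two yardsticks concrete, here is a minimal sketch of the kind of tabulation they imply, written in Python. It is illustrative only: the field names (has_email, has_browsing, num_ebusiness_apps, industry) and the toy records are hypothetical stand-ins for the Harte Hanks data, not the actual survey layout, and the real study also applied survey weights when projecting to all establishments.

```python
# Illustrative sketch only: hypothetical field names and toy records,
# not the actual Harte Hanks survey layout or the study's weighting scheme.
from collections import defaultdict

def classify(est):
    """Return (participation, enhancement) flags for one establishment."""
    # Participation: any basic Internet use (e-mail, browsing, document sharing).
    participation = est["has_email"] or est["has_browsing"]
    # Enhancement (conservative yardstick): commitment to two or more
    # e-business projects, such as Internet-based ERP or TCP/IP applications.
    enhancement = est["num_ebusiness_apps"] >= 2
    return participation, enhancement

def adoption_rates_by_industry(establishments):
    """Tabulate the share of establishments participating and enhancing, by industry."""
    counts = defaultdict(lambda: [0, 0, 0])  # industry -> [total, participating, enhancing]
    for est in establishments:
        p, e = classify(est)
        c = counts[est["industry"]]
        c[0] += 1
        c[1] += int(p)
        c[2] += int(e)
    return {ind: {"participation": part / n, "enhancement": enh / n}
            for ind, (n, part, enh) in counts.items()}

# Toy usage with made-up records:
sample = [
    {"industry": "finance", "has_email": True, "has_browsing": True, "num_ebusiness_apps": 2},
    {"industry": "finance", "has_email": True, "has_browsing": False, "num_ebusiness_apps": 0},
    {"industry": "food services", "has_email": True, "has_browsing": True, "num_ebusiness_apps": 0},
]
print(adoption_rates_by_industry(sample))
```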
Participation
Our first finding was overwhelming. Participation is high in every industry, reaching near saturation in a majority of them. The average rate of participation is 88 percent. Establishments in all but four industries are at 90 percent or higher. With rare exception, the Internet reaches almost
everywhere. We concluded that participation is virtually ubiquitous in all establishments except, at worst, a few industries. This dispersion is consistent with some popular perceptions about the Internet and its adoption, including its low cost, its availability almost everywhere, its benefit to almost any business, and its necessity for US business. You might add an additional observation: This rate of diffusion is remarkable for a technology that is less than a decade old. No major historical technology diffused this fast to such a disparate set of industries — not electricity, telephony, the steam engine, or the automobile. The fax machine was the closest related experience, but even the fax did not spread this fast.
Enhancement The lead adopters of enhancement exceed rates of 25 percent. These areas include management of companies and enterprises (27.9 percent); and media, telecommunications, and data processing (26.8 percent). These first two lead user industries are remarkably different. The former represents the financial side of the Internet revolution. It includes corporate headquarters for multidivisional firms, securities firms, and financial holding companies. The latter includes publishing firms, representing the change the Internet brings to media. It also includes information and data processing services, a category that includes firms like America Online and other Internet access providers. This variety at the top is not a surprise; the business press has largely described the wide impact of this technology’s diffusion. A variety of industries use the Internet to create competitive advantage. However, it does confirm just how varied the impact of Internet technology diffusion was. The second tier of lead users again includes a wide mix: finance and insurance (19.9 percent), professional and scientific services (19.6 percent), utilities (21.1 percent), and wholesale trade (17.2 percent). This latter sector of the economy includes establishments that heavily use sophisticated applications combining database software with communication technologies. The third tier of enhancement adopters includes manufacturing (approximately 15 percent on average). Within this group, notable lead categories include computer and electronic manufacturing (23.5 percent), printing and related support activities (18 percent), and oil and gas extraction (18.4 percent). These are all long-time lead users in computing, but for very different reasons. The lead users in retailing were also just as interesting. This included motor vehicle retailing (20.5 percent), electronics and appliance stores (25.6 percent), book and music stores (20.4 percent), and non-store retailers, such as catalogue companies (21.2 percent).
To be sure, low adopters (under 6 percent adoption) also did not surprise us. These include transit and ground passenger transportation (4.7 percent), food services and drinking establishments (5.6 percent), and social assistance (5.9 percent).
Comparisons
We compared our findings against other well-known studies of Internet use. Our estimate of enhancement is close to estimates for manufacturing undertaken by the US Census Bureau in 1999. Also, according to estimates by the Bureau of Economic Analysis, industries spending the most money on computing tend to be those where a high fraction of establishments are adopting the Internet for enhancement. Most interesting of all, we compared our list of lead industries with similar lists from more than two decades ago. The list of leading computer users in the late 1970s to early 1980s remains on our list of medium to large adopters. These lead industries are banking and finance, utilities, electronic equipment, insurance, motor vehicles, petroleum refining, petroleum pipeline transport, printing and publishing, pulp and paper, railroads, steel, telephone communications, and tires. Historically, wholesale trade is a low user; so too is water transportation. Both are lead users in our study. How did these industries change status in two decades? In short, they include many establishments that use communications to enhance logistical operations, something that was difficult to do electronically more than two decades ago. Here is one other remarkable fact: Aside from these exceptions, a historical list of laggards also corresponds with our list. We concluded that, with a few notable exceptions, the leading and lagging users of the new economy look a lot like leading and lagging users in the old computing economy. It appears that the costs and benefits of innovative information technology have not changed much in decades. Why did anyone ever think the new economy would be different?
Broader perspective
Our findings lead us to four conclusions. First, our findings warn against inferring too much from the use of Internet technologies in manufacturing, as tracked by the US Census Bureau. Establishments in manufacturing are medium to high adopters, neither leaders nor laggards. Establishments in other industries are outside the picture developed by the census. Finance and media have far more lead adopters — as a fraction of total establishments within each sector — and possibly a very different set of applications.
Second, we find that Internet technology producers, as well as their distributors, are frequent adopters. This echoes a common observation that information technology and electronics manufacturers are intensive users of computing and communications. However, there are two differences between the common observation and this study: Both manufacturers and distributors of electronics are lead adopters, and these establishments are far from being the only lead adopters. They are just two among a crowd. Third, the composition of this distribution raises a question. Familiar lead industries in information service; science and technology; and finance, insurance, and real estate readily adopted the technology. Conversely, most laggard industries (that is, infrequent computing and computer adopters during the last two decades) did not suddenly become Internet intensive. The reasons that some laggard industries eventually became adopters are telling. These exceptions give insight about the nature of building competitive advantage using Internet technologies. The appearance of transportation and warehousing as lead industries shows that the Internet influenced establishments in which logistics played a key role. At the same time, this exception proved the rule. Every other industry followed historical norms, a pattern that raises a large open question about why there is such durability in the factors shaping the dispersion of innovative information technology across industries. Finally, these estimates foreshadow the geographic distribution of participation and enhancement. Because participation is almost at a saturation point, the same will have to be true across most locations, simply for the sake of statistical consistency. Some lead industries in enhancement, such as corporate headquarters and financial firms, disproportionately locate establishments in dense urban settings. Hence, there must be an urban bias to the Internet's use. That said, many industries from disparate settings are close to these leaders in terms of their adoption behavior. If the locations of establishments from these industries do not overlap much (and they will not), then adoption will disperse widely across locations. Once again, this is remarkable for a technology that is so young.
{Editorial note: The full study is called "Digital Dispersion: An Industrial and Geographic Census of Commercial Internet Use." It is available at http://papers.nber.org/papers/W9287.}
35 Where Did the Internet Go?
What happened to the Internet after it grew up? Where did it go? This is not like trying to identify a former child star after his cherub cheeks fade with adulthood and then find his home. Internet technology’s form is far more malleable and far more capricious in its movement from location to location. To create value, this technology must be embedded in investments at firms and households. So, by definition, each user of Internet technology always has a physical location. When the Internet first commercialized, many predicted it would bring the development of a new virtual world. Some predictions were ridiculous, a by-product of hype and self-interested soothsaying. Today, I am interested in examining the more serious of these predictions, particularly those about geography. On the one hand, this technology was supposed to encourage interaction without physical presence. It brought everyone closer together into one global village. On the other hand was the historical tyranny of urban density; that is, the users in the biggest elite cities — New York, Chicago, and San Francisco — got access to the latest technologies while everyone else got a second-rate version. For several reasons, almost every previous revolution in voice communications technology diffused first to urban areas. This was true of cell phones, digital switches, the original telephone, and even the telegraph. Et tu, Internet?
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, January 2003.
With that as motivation, two colleagues — Chris Forman and Avi Goldfarb — and I talked about our frustration at the lack of good information about where the Internet went. Although several censuses had measured Internet use in households, nobody had conducted a census of business users. This was a big oversight, because business was the larger investor in Internet infrastructure. The three of us decided to construct such a census and find out who really led and who lagged. I wrote about one aspect of this survey in the past. But in case you didn’t read it, here are the basics about our survey. In our survey, an establishment is a single point of contact, usually using a single mailing address. Most large firms have multiple establishments; they appear in our study multiple times. Participation involves basic communication, such as e-mail use, browsing, and passive document sharing. It also represents our measure of the basic Internet investment required to conduct business. Enhancement measures investment in sophisticated business-computing processes. We looked for commitment to two or more of the following projects: Internet-based enterprise resource planning or TCP/IP-based applications. Employing the Harte Hanks Market Intelligence Survey, we examined 86,879 commercial establishments with 100 or more employees at the end of 2000. Using routine statistical methods, we projected results to cover all establishments with more than 100 employees, a threshold that included the workplaces for two-thirds of the US labor force at that time.
An arising global village The global village is almost here, at least across the US. It is remarkable how close we have come. Virtually all establishments in major urban areas (those with populations greater than 1 million) participate in the Internet. In all but 10 of 57 major urban areas that we studied, close to 90 percent of all establishments participate in the Internet. The bottom 10 large metropolitan areas range from 89.1 percent in Pittsburgh to 84.6 percent in Nashville. Although these are the lower adopting areas, an 84.6 percent usage is still considerable. Table 1 presents a summary of the main results for different-sized areas. These are metropolitan statistical areas (MSAs), as designated by the US census. The tyranny of urban density also shows up. On average, large MSAs are at 90.4 percent. Participation in medium-population MSAs averages 84.9 percent. Low-population MSAs have even lower participation
rates: 75.5 percent, on average.

Table 1. Participation and enhancement in US cities.

Population                 Average participation    Average enhancement    No. of areas
                           (percentage)             (percentage)
Greater than 1 million     90.4                     14.7                   57
250,000 to 1 million       84.9                     11.2                   116
Less than 250,000          75.5                     9.9                    143

In other words, where very low participation exists in urban settings, it is primarily in small MSAs. Rural establishments show a similar pattern. Participation is quite high in some rural establishments, but there is a spread. In five states, the rate of participation in rural areas is lower than 80 percent, and 18 areas are below 87 percent. About a third of the states have rural adoption rates noticeably worse than in urban areas. In other words, the Internet is in almost every medium-to-large business establishment, whether urban neighborhood or rural countryside. E-mail is almost as common as postal mail, with small exceptions for some smaller cities and rural areas. So, in terms of participation, score one for the global village. But as we shall show, enhancement tells a slightly different story. The Internet's ubiquity has not quite vanquished urban density's unrelenting tyranny. In other words, participation does not cost much, so it is common everywhere. Could we say the same for enhancing business computing, which costs a great deal more?
An unrelenting tyranny Our look at enhancement seems to give urban density the upper hand. Overall, establishments in urban settings are more likely to adopt enhancement than those located outside major metropolitan areas. Table 1 shows the adoption of enhancement in MSAs of different population size, highlighting again that large MSAs are somewhat exceptional. Overall, the average adoption rate for enhancement was 12.6 percent. Establishments in large MSAs have 14.7 percent adoption rates. Medium MSAs average 11.2 percent. The rates are even lower in small MSAs, 9.9 percent on average. Just to settle bragging rights, Table 2 lists the top 10 locations in the US for major urban areas, using urban counties as the unit of observation. To nobody’s surprise, San Jose, California, home of many high-tech firms, wins.
Table 2. Top 10 enhancement adopters among MSAs.

MSA                                  Adoption rate (percentage)    No. of establishments observed in county
San Jose, California                 20.0                          638
Denver, Colorado                     17.1                          778
Salt Lake City/Ogden, Utah           16.7                          535
San Francisco, California            16.5                          608
Houston, Texas                       16.2                          1,320
Seattle/Bellevue/Everett             16.0                          799
Minneapolis/St. Paul, Minnesota      15.9                          1,411
Portland, Oregon/Vancouver           15.6                          683
Oklahoma City, Oklahoma              15.4                          339
Atlanta, Georgia                     15.4                          1,426

Table 3. Lead users and enhancement adoption.

Population                 Establishments in top quartile (percentage)    No. of areas
Greater than 1 million     27.5                                           57
250,000 to 1 million       19.5                                           116
Less than 250,000          19.0                                           143
Interestingly, San Jose does not win by much, though that is not easy to see from the table. Many cities have comparatively high rates of Internet use for advanced business applications. For example, Cleveland/Akron, Ohio, which few would mention as a high-tech place, is 13th, with a 14.7 percent adoption rate. The greater Chicago area comes in 21st place with 14.1 percent, while the greater Los Angeles area comes in 31st, at 13.5 percent.
Rural view
At the other end of the spectrum, rural establishments are closer to smaller than to larger MSAs. The exceptions are interesting. Enhancement rates in the leading states are comparable with the leading metropolitan areas. The lead state is Minnesota with a rate of 15.5 percent, followed by Rhode Island, South Carolina, Louisiana, New York, Ohio, West Virginia, Wyoming, Utah, and Alaska. The differences arise at the lower end of the spectrum. Of the bottom half, 24 states have rural enhancement rates below 10 percent, whereas only three states have urban rates under 10 percent.
We thought we should score one for the tyranny of urban density. But then we looked closer. Looks are deceiving.
Industrial location It’s no secret: Different industries are concentrated in specific parts of the country. For example, auto manufacturing is disproportionately located in Michigan, film production and distribution in Southern California, and financial services in New York. To be sure, you can find some businesses, such as retail and retirement centers, almost everywhere. But the point is obvious to geographers: The biggest cities contain a different mix of industries than found elsewhere. More to the point, it just so happens that those cities also contain the industries with the highest enhancement adoption rates. These include management of companies and enterprises; media; telecommunications; data processing; utilities; finance and insurance; professional, scientific, and technical services; and wholesale trade. To get the point across, we figured out the portion of these leading establishments as a percentage of the total number of establishments in an MSA. We took only from the upper quartile of lead industries. This percentage is highest in large MSAs (27.5 percent). Indeed, these industries account for much of the difference between larger and smaller MSAs.
Global village 1, urban tyranny 1: Who’s winning? So what is really going on? Medium-to-large commercial establishments have a high propensity to invest in Internet technology. If we were going to find a global village anywhere, it was going to be with these users. In addition, we observed their behavior late enough in the diffusion cycle to overwhelm geographic biases associated with very early experimentation. That is, early experimentation tends to favor areas with a high proportion of technical and scientific users. We did not find such a bias. Finally, we observed technology use, not new product design or production. The latter tends to receive much more attention in public discussion, but leaves a false impression about the geographic distribution of users. Much high-tech production is concentrated in Silicon Valley, California; Redmond, Washington; Boston; Austin, Texas, and in a few other well-known places. Users are much more spread out.
The global village is almost with us, but our study still offers meager support for the tyranny of urban density: Distinct differences exist between the establishments found in the most populous urban centers and the least dense, even within the same state. Because participation was not costly, it was surprising and disturbing to find low participation in any establishment in any area. To be sure, if these disparities persist, then it is worrisome for business prospects in those locations, because every other establishment in the US takes this technology for granted. The dispersion of enhancement is much more skewed. Yet, this fact is more understandable as an economic matter. It could arise from the thin technical labor markets in smaller MSAs and rural areas. This difference would drive up costs of operating facilities employing Internet technology. Preexisting single-establishment organizations would hesitate to open their own complex Internet facilities until costs decrease. We speculate that enhancement diffusion will follow a traditional pattern, taking time, innovation, and resources before the majority of geographic areas realize economic welfare gains. To be sure, the concerns about low growth are real for the areas in which adoption lags, but this has little to do with the experience in the majority of areas, which do not lag at all or will not lag for long.
{Editorial note: The full study is called "Digital Dispersion: An Industrial and Geographic Census of Commercial Internet Use." It is available at http://papers.nber.org/papers/W9287.}
Part VII
Microsoft, from the Sublime to the Serious
36 Not a Young and Restless Market
Forget the technical details about Windows 95, forget the hype, and lighten up. For the next few minutes, view the computer industry as a soap opera. This is a surprisingly great way to learn about the economics of operating systems. Watch an average daytime soap and compare it to the latest computer trade publication. They are not much different. The soap opera takes place in a region called Grass Valley, while most computer market stories occur in a region called Silicon Valley. All the soap actors review events with the utmost seriousness and drama; everyone involved in our industry also takes it too seriously. In both cases, day-to-day developments are hypnotic. In both, the main players and the basic stories change very little from one year to the next. (Good soaps are also occasionally interrupted by commercials for laundry detergent and floor wax, but that is not relevant for this comparison.)
The software salesman
Bill sells software. Bill is energetic, opportunistic, and combative. His company can single-handedly make or destroy any other firm. Bill has vision, vision that has infuriated virtually everyone in the Valley for the last 15 years. Bill is rich and famous. He has been on the cover of Time.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, December 1995.
Bill’s life looks like something from Dallas, except that some details are more unbelievable, and nobody has tried to do away with Bill. Bill’s company has a new product called Windows 95. According to the press releases, Windows 95 is the most revolutionary operating system since Charles Babbage invented computers. According to Valley gossip, it involved the most expensive marketing campaign since MacArthur’s invasion of the Philippines. Though Bill would never admit it, Bill’s product is an imitation of the operating system for a computer Apple Corporation made several years ago. However, Bill expects to sell many more operating systems than Apple ever sold computers, even though, by every engineering measure and usability standard, Apple’s old computers look as good as Bill’s new operating system. Not surprisingly, this really frustrates the employees of Apple. Yet, they cannot complain; Apple’s founders made most of their money selling imitations of a computer developed at Xerox Laboratories.
The chip builder
Andy sells chips at Intel. Like Bill, Andy has thrived over the last fifteen years. Unlike Bill, Andy is neither filthy rich nor abrasive in public. This explains why Andy is both more popular in the Valley and not as famous outside it. Andy obeys Moore's law, even though it is not a law. It is, in fact, an old prediction about how fast Intel could cram transistors on chips. Moore's law has important repercussions for the decline of chip prices and the growth of the personal computer market (and perhaps for the US economy's long-term health). Andy's biggest concern is that he can't continue cramming in transistors as fast as Moore's law says he should. Everyone in the Valley believes that Andy is partly right: Chip evolution will probably slow down someday, maybe in our lifetime. However, those in the Valley aren't holding their breath. While Windows 95 is fine and dandy with Andy, he has not lost much sleep over its fate. Most computers will still have Intel inside whether this version of Windows succeeds or not. Andy calls this "guaranteed demand." Others call it monopoly.
The lawyer
When Janet, the Attorney General, first arrived on the scene, she was Bill's enemy. She briefly became his friend until a judge disallowed their
peace treaty. Then they started feuding again. Janet will not draw a clear line in the sand because she wants to keep her options open. In some circles this is called antitrust policy. Janet recently visited Bill’s company and asked him to explain its behavior. If she had stopped shifting the line, thinks Bill, he could have built Windows 95 to avoid a future visit. Bill is right and so is Janet, and neither is backing down. Some call this a standoff. Others figure that Bill is waiting for Janet’s boss to lose the next general election.
The specter The ghost of Thomas Watson Jr. haunts the halls of IBM. The ghost wails, “What have you done to my company?” Many investors, white as ghosts, echo him. Some people say these visits began when Andy and Bill became prominent in the Valley. IBM’s official position is that there are no ghosts. Though nobody there will admit it, IBM may have nurtured its own downfall. Many years ago, IBM had several good opportunities to control the direction of technical change. Instead, it sold Bill and Andy the keys to the store and helped their companies grow. Oddly, no executive was ever fired for this blunder. Instead, scores of workers received layoff notices or retired early. (Even though this is a soap opera, sometimes truth is stranger than fiction.) Windows 95 resembles an operating system that IBM has been marketing for a few years. IBM expects to sell many fewer operating systems than Bill’s company, even though by every engineering measure and usability standard, IBM’s old operating system looks as good as Bill’s new one. Nobody at IBM complains too loudly, however, because IBM’s operating system arose from a failed joint venture with Bill’s company. Some say IBM managed the venture badly. Others whisper that Bill outmaneuvered IBM. As the Roman philosophers said about their empire’s collapse, this would be a comedy if it were not such a tragedy.
The nightmare
The Blob continues to eat everyone. This monster is Bill's biggest nightmare. It involves a cast of thousands. The Blob is often called networking. Who controls networks? No one controls networks! Who controls the Internet? No one controls the Internet! The Blob just keeps getting bigger and bigger and eating everything. Windows 95 is Bill's latest attempt to control this monster. (We shall see.)
The cure A bartender works the local watering hole where industry has-beens and wanna-bes swap past glories and future plans. Some of these people were actually stars and miss the limelight. Some are stars only in their own minds. Many claim to have an idea that will change the market for years to come. The bartender shrugs his shoulders and nods in agreement. The bartender moonlights down the street at the country club where the big players socialize. In the last few years the country club has not changed much. Most of the members joined over a dozen years ago, including Andy and Bill. The country club members look and talk just like the little guys in the bar. Patrons of both places dress in jeans and open-collar shirts. Everyone believes in the virtues of entrepreneurial capitalism. Everyone believes that US government trade policy should favor American monopolies over Japanese monopolies. Nobody in either place admits that these two beliefs are incompatible. Yet, the bartender notices a big difference. None of the country club members worry about revolutionizing the industry, and none of them worry whether the little guys in the bar will ever grab the limelight. If he could just bottle the country club confidence and certainty, thinks the bartender, he would give it to his little customers in the bar. Then the patrons of the two places would be exactly alike.
Closing credits
Tune in tomorrow, and the market will not be much different. Bill will still be selling software; Andy will still be selling chips. The antitrust issues that bother Janet will not have disappeared. Ghosts will still haunt IBM. Bill will still be trying to stop the networking Blob from eating everything, but Bill and Andy will still be members of the elite, for good or ill. Viewing Windows 95 as one episode of a soap gives us a new perspective on recent events. Is Windows 95 truly revolutionary? Hardly: The same characters who controlled the industry yesterday will control it tomorrow. Is Windows 95 a well-designed product? Possibly, but good design is no guarantee of popularity. Does Bill have vision? Yes, but so what? It is easier to succeed when a product inherits marketing advantages that everyone envies and nobody can emulate. More to the point, Windows 95 arose from this market's structure and cannot change it in fundamental ways. Nonetheless, these events will continue to hypnotize us. So please stay tuned.
{Editorial note: This parody was inspired by a particularly over-the-top bit of hype prior to the release of Windows 95. To be sure, I thought Windows 95 had its virtues, but it solidified something that was already in place. It did not change the world in a fundamental way, just as one episode in a long-running soap opera does not really change the basic plot. Microsoft's and Intel's centrality had been established earlier. The players in this piece of parody were Bill Gates, CEO and founder of Microsoft; Andy Grove, CEO of Intel; Janet Reno, Attorney General during most of the Clinton years; and Thomas Watson Jr., the second president of IBM. The bartender is fictional.}
37 Return of the Jaded
Events in the PC market have all the elements of a good Hollywood movie. Yes, that’s right, a movie. While making a Hollywood movie about the PC market and its executives might sound farfetched, it really isn’t. After all, the actions of capitalists occasionally serve as grist for Hollywood scriptwriters. Look at Wall Street and Jerry Maguire, for example. If Michael Douglas and Tom Cruise can keep audiences interested in an ambitious, overpaid, self-absorbed stockbroker or sports agent, just imagine what they could do with some of the personalities who head major software companies. If Hollywood could make a film about the PC industry, what would it look like? My speculation is that the PC business would resemble a science fiction trilogy. Even though science fiction movies usually contain exploding spaceships (which are much more engaging than imploding business plans), it sometimes seems as if Bill Gates is from another planet. More to the point, such movies can actually teach us a few things about the way commentators describe recent market events. But to get the point, you must suspend disbelief for a moment and consider the following reviews of three possible future films.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, October 1998.
Soft Wars
This campy comedy laced with a few tragic undertones captures the imagination of many moviegoers for its heart, originality, and good intentions. The film focuses on the adventures of the lead character, Jobs, a calculating, socially adept, self-absorbed entrepreneur of the street. Jobs possesses a touch of flair, an ego the size of Connecticut, and a boyish desire to be at the center of the future. The movie borrows many themes from old western capitalist ideals about an individual entrepreneur pursuing a dream, but in a new twist, translates these themes into a suburban garage in a remote valley on a Silicon moon. There Jobs and his buddy, Woz, dream of achieving glory by fomenting a quixotic revolution against the established order known colloquially as the empire. The revolt against the empire is more than just a business proposition to Jobs; it is also a social statement. In the empire, all the soldiers wear the same blue suits with red ties and loafers, eat Shredded Wheat for breakfast, golf regularly, and do not recycle. In contrast, Jobs and his followers never wear ties, start their mornings with Pop Tarts and a latte, jog sporadically, and know how to separate cans from bottles. Many moviegoers can identify with Jobs' socially liberating creativity and irreverence. Others identify with his status as the outsider battling the stultifying order of the establishment. The film has dramatic tension, lots of action, and several outrageous business prospectuses. With the help of his friends, Jobs launches several missile attacks from his garage. He calls these attacks the Apple I and the Apple II, and they get the attention of some mid-level managers in the empire. Then, tragedy strikes! Just as the audience begins rooting for Jobs, he meets a jobsian fate. The Apple III explodes on launch. Next there is a poignant and melodramatic rescue of Princess Lisa, an obsessive love interest of Jobs. Unfortunately, this rescue comes too late, and Lisa dies. The Lisa episode leaves Jobs vulnerable to doubts from his backers, who are also growing tired of his histrionics. Besides, most of them were perfectly happy to see Princess Lisa snuffed. She was not that pretty, and there wasn't that much under the hood, either. Nobody was ever sure what Jobs saw in her. In the climactic scene, Jobs tries to delay a revolt from the faithless by launching his most creative attack yet, which he calls the Macintosh. But the revolt happens anyway. In the midst of it, Jobs utters a prophetic line, "If you strike at me, I will grow more powerful than you can possibly imagine." Most audience members have no idea what Jobs meant by this, but as with anything Jobs says or does, it oozes panache.
Audiences are riveted by this movie. It contains a mix of personal hubris, higher ideals, and unexpected irony. Indeed, critics have dissected the final scenes ever since the movie’s release. Jobs is banished from his company, which subsequently acquires many trappings of a corporate setting. The credits roll as a man in a gray suit sells Jobs’ last great product as if it were soda. Is this irony or tragedy? The product begins to sell successfully, planting the seeds for the spread of recycling and the ultimate destruction of the empire.
Return of the Jaded

This film chronicles the same events as in Soft Wars, but they are viewed from the perspective of Bill, a calculating, socially awkward, self-absorbed scion of an established family. Bill possesses a bad haircut, an ego the size of Connecticut, and a boyish desire to be at the center of the future. The film makes no social statement.

The story opens with the empire's generals sending a few managers to clean up the mess caused by Jobs' explosions. They disapprove of Bill's haircut and his attitude, but they overlook it because they are in a rush. They hire Bill on a temporary contract to help with the cleanup. The film contains a comic subplot that revolves around the tension between Bill's demeanor and his handlers' needs. While an insolent attitude like Bill's will later lead to good music and bad haircuts among those in Seattle who do not have trust funds, in this film it sparks a series of hilarious confrontations. Much to the dismay of the empire's generals, Bill insists on disobeying orders. Yet, at every turn, the empire finds it cannot get rid of him. Most audiences laugh heartily as the empire's managers make Chaplinesque mistakes and as Bill escapes every banishment attempt.

The film's defining moment occurs one night when Bill has a dream. In it, he meets Yoda, a new-age Jim Henson puppet who looks like a mix between Buddha and Kermit the Frog with age spots. Yoda tells Bill about reaching a "jaded night," the most enlightened state any programmer can hope to achieve. Yoda declares the film's main theme: "To have a jaded night, a programmer must do what is good for the industry, not what is good for his company. He must use the force of open standards."

Bill wakes up in a sweat after this vision. "Open standards?" thinks Bill as he shivers at the thought. "What a radical and senseless act." Then Bill pauses and reconsiders. "But what the hell? I don't have stock options in the empire, and it might lead to some profitable opportunities for me in the future."
In broad daylight, Bill puts an open software standard into the empire's systems. This soon wreaks havoc in many far-flung corners of the universe. Movie audiences enjoy watching the farce unfold, as the empire's managers flail against a force they cannot control. Indeed, in another hilarious act of buffoonery, the empire's generals design a new weapon, the OS/2 Death Star, and invite Bill to help them build it! (Most audiences find this part of the movie utterly implausible but suspend disbelief readily enough anyway.)

As Bill sabotages the empire's Death Star, a rebellion forms. It is composed of anyone who had ever been slighted by the empire's managers — which is almost anyone who does not wear a blue suit and has not tasted the sweetness of an Apple. Bill emerges as the unlikely leader of this new rebellion, which lives in a galaxy of open standards.

The final scenes involve a dramatic battle loaded with exploding egos. The rebel alliance's attack is intelligent, compact, and indelible. The empire appears to have better weapons and more resources, but its attack is slow, clumsy, and misdirected.

Many audiences cheer the decline of the empire but find the movie's ending somewhat unsettling. Bill helped accomplish what Jobs set out to do — that is, loosen the grip of the empire on the universe — but Bill did it while making money and not with any higher ideal in mind. Additionally, there was never any chance Bill could become a teen idol for anyone except other socially inept geeks. Bill, to put it bluntly, holds no appeal for mass audiences.
The Umpire Strikes Back

This sequel is more notable for the story behind the movie than the movie itself. The producers and financiers, desperate to move the film forward, ceded creative control to Bill in the middle of filming. Bill then released several new versions of the script, and each new rewrite left less room for other actors. There were whispers that many of the other actors found Bill to be heavy-handed.

The movie is set several years later in the so-called open standards part of the galaxy, as uncertainty fills the air. There is an uneasy truce between the empire and the open galaxy, which has just launched attack 3.1. The members of the open galaxy endlessly debate how to overthrow the empire, while the empire plots to regain universal hegemony with a rebuilt Death Star.

This film introduces a new foil for Bill named Garey, who claims to speak for millions. At the start of the movie, Garey threatens to report Bill
to the Umpire, also known as the DOJ (the Department of Jaded). Bill, tired of Garey's whining, orders him to be frozen in ice, which leaves a permanent scowl on the latter's face. While Garey is totally and utterly unlikeable, audiences enjoy his tirades. Some movie critics see Garey's role as analogous to that of Greek mythology's tragic Cassandra, who carried the truth of impending doom to an unhearing audience. Others see Garey just as Bill does, as a fool in lawyer's clothing.

At the start of the film, Garey visits the DOJ, initiating an investigation. Garey's visit ends with him in a huff, as the DOJ finds Bill guiltless of any transgression. The DOJ then writes a consent decree, which does nothing. With the DOJ in retreat, Bill is free to do as he wishes. What will he do next?

Many of Jobs' original followers are stunned by Bill's success and resist joining his rebellion. Yet, many friends of Bill's parents resist joining his rebellion too, unsure of the reliability of a boy with a bad haircut. Bill then makes a fateful decision and announces it to the papers. He reveals that he has built a new Death Star with 95 windows. It looks remarkably like the empire's OS/2 Death Star that Bill helped build and destroy in the last movie. In a press release Bill declares: "My ship is good for the industry. It recycles many old programs, shreds data like never before, and includes golf in the standard package. These are exactly the functions that many users have requested." Many of Bill's parents' friends are impressed.

This announcement brings dissension to the open galaxy. Garey screams betrayal. The remainder of Jobs' followers plan a rebellion, but their plans lack gravity. Many users choose to worship the Sun, joining another open standard run by a band of eunuchs. Always practical, Bill forms an alliance with the Sun worshippers against the empire, secretly vowing to destroy them as well. The feeling is mutual. The Sun worshippers secretly vow to sue Bill at the DOJ again.

The movie ends as Bill's transformation becomes complete. The empire signs a peace treaty with Bill. The empire agrees to abandon its own Death Star, making room for the ship with 95 windows. Both sides release press announcements describing the treaty as "win-win." As a final gesture, Bill announces that he is getting married, building a castle on Lake Washington, and taking up golf. Bill's parents are relieved; he is embracing his heritage!

Even though it lacked the flair and originality of the previous movies, audiences flocked to see this big-budget spectacle. Yet, many viewers
complained about all the loose ends. Would they have to return in three years for an update? Indeed, as it was released, this movie started rumors about more sequels. The latest rumor says that Bill prefers a script called Gone with the Windows. This movie is set during a civil war, but largely takes place at a coffee plantation inhabited by Sun worshippers. Rumor has it that Bill’s minions pick a fight with the eunuchs over the making of cappuccino. Bill’s Death Star then turns the java into dry roast. Unfortunately, it is difficult to be more precise about Bill’s plans, because, as usual, Bill has refused to share the details with anyone who lives more than 15 miles in any direction from Redmond. There is also talk of making Back to the Future IV in which Jobs returns to his company to recapture his past. This suggestion appeals to the many people who are desperate for an alternative to Bill. There are reports that Bill will financially back this project because he needs a few new ideas from Jobs.
Lessons learned

What do we learn by looking at the PC industry through the lens of science fiction movies? Bill Gates and Steve Jobs are both odd and unprecedented figures, to be sure, but the activities of capitalists make for good movies only in rare cases. Movies need plots. Plots must have character development and stories. It is a stretch to fit real events into these requirements. In other words, the PC's market battles do not begin and end according to any schedule, nor do the wars follow any script in which the actors change, learn, and mature. Indeed, fights may simply go on and on; they do not have to end before newspapers grow tired of reporting about them. Reality does not need to fit into a movie script.

More to the point, even when market events are entertaining, their raison d'être is not entertainment. Yet, only the entertaining stuff, such as dramatic personal triumphs and quirky confrontations, makes headlines. Such reports are amusing and gripping, but they reveal more about the archetypes and stereotypes reporters use to tell stories about our world than about anything actually happening. It makes one wonder how often reporters get frighteningly close to turning real events into fiction. Sometimes it is enough to make an observer jaded.
{Editorial note: This piece of parody was inspired by Gary Reback's crusade against Microsoft and by news articles about him and Bill Gates. The reporting descended into the sort of celebrity journalism more common to a tabloid. Some reporters did not do their homework and relied more on archetypes than facts. It was easy to portray Bill as a fallen hero and usurper of the revolution, but the truth was more subtle than that. I used the movie review as a motif for making that point. Paul David suggested the pun "The Umpire Strikes Back." That said, never did I think this topic would actually make a very good movie. Eventually, however, movies were made about the rivalry between Steve Jobs and Bill Gates. Sure enough, many of these are not very engaging. Of all of these, my favorite is "Triumph of the Nerds," inspired by the book Accidental Empires, written by Robert X. Cringely.}
38 Bill, Act Like a Mensch!
Dear Bill,

I hope you do not mind an intrusive letter that appeals to your well-known exaggerated sense of paranoia. Microsoft and the PC industry are threatened with destruction. I have some friendly advice on how to avoid a complete loss.

The threat is not technical, nor commercial. It is political. Nineteen state attorneys general are trying to get reelected by beating up on your firm. Senator Orrin Hatch made points with his constituents by making you look awkward. This is just the beginning. Whether or not you win this trial, a significant group of people in the policy community are now considering alternative ways to regulate the industry and your firm in particular. I am less interested in discussing the merits of the federal antitrust case and more interested in the managerial response to the political economy behind the case.

Your political problems could be solved with one bold stroke: Microsoft needs to adopt the mensch strategy. More precisely, Bill, your company needs to hire a mensch on its management team. Better yet, you could simply become a mensch yourself — or at least occasionally act like one. It also probably would be okay if you just faked being a mensch and did not tell anyone. Allow me to explain why the mensch strategy is a good one for a firm in your situation.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, April 1999.
What is a mensch?

A mensch is a bit of Yiddish my ancestors brought with them from Eastern Europe. It literally means "a human being," but that does not do it justice. It is a compliment reserved only for uncommonly wise, warm, and strong human beings.

The greatest fictional mensch was Humphrey Bogart, a.k.a. Rick in the movie Casablanca. Why is he a mensch? Because, even though he loves her, he gives away Ingrid Bergman to another man for the sake of the French resistance. He makes a good speech when he does this, comparing life to just a hill of beans. Every time he finishes that speech, my wife tears up and says, "Ah, what a mensch!"

It is hard to find a nonfictional mensch in the PC industry — or any industry, for that matter, especially among CEOs. Donald Trump is not a mensch. Leona Helmsley certainly does not have it in her. "Chainsaw" Al Dunlap seems to define the anti-mensch. Yet, a mensch will pop up from time to time in unexpected places. When Ted Turner promised to give one billion dollars to the United Nations because his stock appreciated, he was acting like a mensch. Never mind that nobody thought he was much of a mensch before this particular action, except maybe Jane. It inconvenienced him, and we respect him for putting his money where his ideals are.

Do not misinterpret me. Mensch-hood is about more than charity. When Steve Wozniak decided to finish his college degree at Berkeley after he achieved fame and fortune at Apple, he too was acting like a mensch. It's obvious that Wozniak did not go back to college to advance his career. It involved sitting at uncomfortable desks at a public university, listening to boring professors, and chatting with twenty-year-old engineering majors who all wanted to go work for Microsoft. He almost certainly finished college for the love of knowledge or possibly just to make his parents happy. Like Turner, he put his time and money where his ideals were, at some personal inconvenience.

Finally, many people think Bob Metcalfe is a mensch, period. I have never entirely been certain why, but he does seem to be very busy. Ever since he left 3Com, he has used his fortune and energies for the sake of one cause or another. He also seems to be frequently inconvenienced by putting his money where his ideals are. Look, even if you do not think he is a mensch, you must admit that he talks a good game. That is something you can aspire to, too.

Bill, everyone thinks you are smart, clever, and dogged. Everyone also thinks you possess uncommon business acumen. Yet, when it comes to politics, "smart, clever, and dogged" is not necessarily the right set of attributes. It is often the wrong set. The right attributes are those
associated with being a mensch. Please do not take offense, Bill, but nobody has ever said you or your company possesses uncommon wisdom, warmth, or character. And, yet, to be fair, you seem capable. Moreover, the PC industry will be in deep trouble if you do not acquire some of these qualities. That should at least motivate you to try.
The political economy of computing

Why did nineteen state attorneys general and the senior senator from Utah go on record stating that they dislike your business practices? It is a bit of an oversimplification, but a useful one for this discussion, to say that the politicians who jumped on the anti-Microsoft bandwagon were reacting to the company's clueless political behavior. It is not difficult for any company to act in a politically savvy manner, and Microsoft should be no exception. It struck me that the very things Microsoft is good at — being smart, dogged, and clever in business and confrontational in legal battles — are the very things that get in the way of your acting in a politically savvy manner.

Has no one ever told you, Bill, that your firm is more of a government to many people than, well, the federal government? Because of that, like it or not, you are held to many of the same standards as a government. I can see no better way to explain to you how to react to these additional responsibilities than to advise you to start acting like a mensch.
Basic lessons in being a mensch

How would a mensch run Microsoft? Well, pretty much the same way as you do now — that is, keep the organization lean, innovative, and efficient — but your company has to start recognizing political realities. Microsoft should buy goodwill with community leaders, not just in Seattle, but all over the country. Making significant donations to libraries, as you recently began to do, is a good start.

Next, a mensch would not be obnoxious to people except those who are the most offensive. If you want to blow off Gary Reback, most people would understand, but it would not be statesmanlike of you. Microsoft needs a large staff whose sole job is to patiently defuse the arguments of the leaders at firms and groups who ostensibly hate you. This also means the company has to soften its style in public — that is, it should not use in-your-face rhetoric every time there is a dispute with a rival.
A mensch would also buy goodwill both with uncooperative ISVs and OEMs, and with users who do not like your products. A mensch would extend a warm hand to rivals like Sun and to ISVs like Real and Corel. You do not have to have lunch with Scott McNealy or Larry Ellison on a regular basis, but the name-calling should stop. The basic point is this: Some users like Sun's products, use Real Networks' software, and persist in using WordPerfect. When somebody screws up their favorite product, they get angry. Many of them call their congressional representative because they think Microsoft is to blame. You must take these accusations seriously, or at least refute them without appearing arrogant.

Finally, Bill, a mensch would hire a few business school professors from the University of Washington to take your upper management through a remedial course in the "Social Responsibility of Business" or "Management in a Non-Market Environment," or whatever they call it there. That might be hard for the egos of your management team. But believe me, it would be good for the company's image.
Act like one too

The true test of a mensch is that he acts like one at defining moments. Bill, it would sure be nice if, now and again, you did a few things that were in the interests of the industry, broadly construed, even when it inconvenienced Microsoft. Why would you possibly do that? A mensch knows better than to push every single competitive advantage at every single competitive moment.

I occasionally poll my students just to find out what the public reaction is, and you are not looking good. My MBA students have come to class with news stories about the negotiating style of Microsoft. It is apparently common for Microsoft executives to baldly state in the midst of negotiations that Microsoft is capable of crushing the person sitting across the table from them. I partly sympathize with what you are trying to do; being frank about the future can save you and your rivals some difficult times. But for goodness sake, some of these stories make it appear that you need a remedial course in "Negotiations 101." A basic lesson of every negotiating class is that there is no point in stating what is already known when the statement offends, humiliates, or otherwise angers the other party. It makes it unnecessarily hard to come to an agreement.

A mensch would not violate a basic norm of street commerce. Even a con artist will keep the victim's ego intact while he takes the victim's
money. (And if that is not enough, it is also quite dangerous to advise one's competition about where not to compete. It borders on illegally attempting to steer them away from competing with you. Anyway, never mind that. That is for a court to decide.)

Next, Bill, a mensch would turn down the volume in your public relations department. While a court case brings out the most strident side in all of us, a mensch would still remind the staff of some obvious stuff. Your firm has one of the highest, if not the highest, market capitalizations in the world. And, not unrelated, Microsoft is one of the most profitable companies in the two-hundred-year history of western capitalism. In light of this, most politicians think it is completely implausible that Microsoft should be paranoid about every young programmer sitting in a garage somewhere. If you want to keep your employees from becoming complacent by motivating them with stories about vague threats coming from all directions, that is fine with everyone. But even if you actually believe such a paranoid view of the world, as a mensch, you should know better than to admit it in public.
Managing like a mensch

Finally, remember when Sun announced the creation of Java? It has now come to light that Microsoft knew it was negotiating a contract that gave it a right to come out with its own version. Microsoft intended to use its version to "pollute the Java market." As it turned out, Microsoft's action just earned it the enmity of many ISVs. It immediately became clear that such action would hurt, and possibly kill, the diffusion of Java by confusing the future path of development. A mensch would have known better than to take such baldly selfish actions that angered so many people.

What would a mensch do instead? For the sake of argument, pretend that you were cynically faking your mensch-hood. In that case, you would act as if you had the industry's welfare in mind. Why is this a good strategy? If Sun's idea fails, which is, after all, what happens to most new computer languages, then nobody can blame you. And if the coalition of former Unix ISVs behind Java fractures, you also do not have to take the blame. You then would have an excuse to come out with your own version. You could even have put on airs that you were trying to unify everyone under your new standard. A mensch appears to play fair, even when it inconveniences the mensch's own business.

Finally, if you were cynically acting like a mensch, you could be Machiavellian about it, and do some subtle things to discredit Java — that is, upgrade reluctantly, fix bugs and incompatibilities slowly, hide some
code Sun needs, provide support in name only but never put good people on it, and so on. Look, we all know that most of the time nobody at Microsoft does this kind of stuff on purpose (ahem, right, Bill?). So if you do, in fact, intentionally act this way in just this one instance, make sure you do not get caught. It would ruin your reputation for being a mensch, and it would land you in antitrust court all over again.
The future

The industry buzz is that this trial "proves" Microsoft will always use its discretion in a selfish manner. This perception will hurt Microsoft for a long time. For example, it has given the open-source movement a jump start, even though that movement would probably have failed on its own. It has also made Sun's position in the Java suit seem sympathetic and motivated more interest in Java, even though that technology also probably would have failed on its own. Finally, some of my MBAs are certain they do not want to work for Microsoft because they do not want to work for a company that is disliked by everyone else.

Bill, I understand how much you must have resented the intervention of the Department of Justice lawyers. Your view is that Microsoft was only doing what many other firms would have done in the same situation. But, like it or not, Microsoft is no longer just any firm. That means Microsoft cannot blow off the DOJ lawyers or dismiss the views of its critics. It also means that Microsoft spokesmen have to articulate a simple political message or face worse consequences.

Bill, we are all in this together, and many of us are depending on you to do what is good for the industry. Here's looking at you, kid. Best of luck!
{Editorial note: During the trial I appeared on a local Chicago National Public Radio talk program to discuss the case. The next day I was misquoted in an Internet publication by a columnist who was part of the radio discussion. A manager at Microsoft saw an on-line version of the quote and emailed me to protest the quote. I learned about the misquote and emailed him back. He then told me his side of the story. This essay was inspired by the series of emails we exchanged. He sincerely could not understand why his firm was being subjected to scrutiny. It was one of
many times that I encountered employees who were well-meaning and who thought that it was enough to be “smart, clever and dogged.” It started me down this path of trying to find ways to describe the issues of the case in plain language. In addition, that encounter is symptomatic of the central contradiction of Microsoft, which has both admirable entrepreneurial features and a habit of stepping over implicit lines for appropriate behavior.}
39 Aggressive Business Tactics: Are There Limits?
Most people don’t really care whether Microsoft or the US Department of Justice comes out on top. It’s no different to them than a Sunday football game between the Washington Redskins and the Seattle Seahawks. Sure, the outcome matters to the participants and their fans everywhere, but that is quite a provincial concern. Most people are happy to read about the highlights in the paper the next day. Does the Microsoft-DOJ case matter in any less provincial sense? Is it about anything more than a civil disagreement between the richest firm on the planet and its home government? The answer is potentially yes. There’s a chance that issues of competition in the PC software industry — particularly those that arose in this trial — will influence competitive activities in other fast-moving industries such as the Internet. The Microsoft case doesn’t raise a sui generis set of issues, as the lawyers would say. (Translation: The legal issues are not unique to just this market.) On the contrary, the case is taking the first steps toward applying old antitrust norms to the fast-moving information economy. Or to put it simply, this decision will change competitive tactics at many large firms. Like it or not, that’s one fallout from this trial. Look at Judge Jackson’s 200-page “Findings of Fact” the document he issued last November. To be sure, there’s a risk that these findings will become obsolete, overturned, reinterpreted, and otherwise made obsolete Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, February 2000. 228
(There is also a risk that this column will be obsolete by the time it's printed.) That said, it's possible to summarize what tactics upset the judge, tactics that I believe will become a benchmark for many commentators in the future.

Before doing this, I must come clean on one thing. Many industry participants have strong opinions about the good and bad features of the findings. Some of these opinions are bought, some are passionate, and some are simply thoughtless. In my own case, my opinions are not bought, they're passionately held, and I hope they don't come across as thoughtless. That said, I am not an extremist in either direction, mostly because I believe strongly in the inevitability of unforeseen consequences. Any evaluation is guesswork at best.

That precaution aside, I do regularly teach MBAs about competitive tactics in the computer industry. If nothing else, this case will change business tactics in this industry. With certainty, I predict that soon every business school professor and consultant will add a few PowerPoint slides to their lecture on "competitive tactics for large firms." What will those slides say? Here's my take on it.
Traditional antitrust norms

These norms apply to innovative sectors: the judge's findings signal that all large high-tech firms should expect close scrutiny from now on. They signal the end of the era in which high tech was the only major US industry in which big firms were so free of antitrust scrutiny.

Judge Jackson put forward behavioral norms for platform providers. He accepted the basic notion that application developers depend on platform providers and are, thus, beholden to them. From there he applied several antitrust norms about how interfirm relations should be conducted when this dependence is great, largely using a standard line of reasoning in antitrust.

For example, seemingly consistent with a narrow reading of this case, Judge Jackson identifies several specific tactics that resulted in the anticompetitive reduction of consumer choice. That is, Microsoft's tactics foreclosed markets to new entrants while bringing no obvious efficiency gains to society. These tactics include using exclusive contracts to restrict a rival's growth, placing unnecessary barriers in front of potential rivals by withholding information, and punishing distributors that carry a rival's product.

The specific tactics are not interesting, but the judge's general approach is. One would have to discard decades of legal precedents of
antitrust law not to hold these specific acts as illegal at a firm with market power. In other words, Judge Jackson is close to concluding that Microsoft took actions that are otherwise forbidden to any other monopolist in any other industry.

Few commentators have noticed how remarkable this finding is, especially coming from a Reagan appointee. It is a foot in the door to wider change. It affirms that the basic rules of antitrust law apply to the development of new software. This is a basic legal point that has been unresolved ever since the government ended its botched antitrust trial against IBM almost twenty years ago. More to the point, if those laws apply here, they also apply elsewhere. The same rules apply to the Internet, an obvious conceptual leap that should be getting more attention.

If the wider consequences are still not apparent, think about it from a tactical level. Certain tactics, such as exclusive contracts, would become forbidden for platform providers and firms that acquire positions of dominance. That's a big deal. Many growing Internet firms employ such tactics and will still want to use them when they become big.
Dominant firms

Such firms should not withdraw routine support too often. For example, Microsoft is also accused of making it unnecessarily difficult for another firm to bring something to market, even when it required a small amount of routine effort. Judge Jackson notes that Sun Microsystems and Real Networks have similar complaints.

Courts typically eschew these types of disputes because both sides can usually make a plausible argument in their favor. Resolving them requires some speculation about what is routine and what is not. In this case, that speculation looks like this. On one hand, Sun and Real Networks could not diffuse their innovations without Microsoft's cooperation, and it's in society's interest to let Sun and Real Networks try. On the other hand, such cooperation might be expensive to offer; Sun and Real Networks don't have rights to receive help any time they ask for it.

Once again, a narrow reading is deceptive. Judge Jackson's findings emphasize the restrictions to consumer choice that resulted from Microsoft's actions, and the implausibility of its excuses for these restrictions. If anything, Judge Jackson is close to using a principle like this: A platform provider must do for everyone what it does for almost anyone.

In court, Microsoft's lawyers offered logical explanations for why the company withheld support in some cases, but Judge Jackson did not find their explanations credible or plausible. (Note the distinction: logical is not
the same as plausible.) This seems to indicate that a platform provider can avoid suspicion as long as it plays clean elsewhere. But bad behavior on narrow issues, such as those highlighted earlier, leads to a presumption of suspicion on these more ambiguous questions. A threat to selectively alter a relationship with another firm is a commonly used competitive tactic. Any restraint on a dominant firm’s ability to be selective represents a big change.
Twisting arms

This is risky if it freezes out a competitor: Judge Jackson highlighted how Microsoft galvanized its business partners into taking actions. Specifically, he describes how Microsoft did much to kill Netscape, enlisting other firms, distributors, and business partners in the pursuit of this priority, whether the partner liked it or not. The story was well known, so that is not novel. However, the judge's evaluation of arm-twisting tactics was relatively novel.

If anything, it doesn't appear that he cares who wins a fight. However, he does not want dominant incumbents to have an excessive ability to decide when new entrants can bring goods to market, a standard concern of antitrust law once again. Indeed, Judge Jackson was very eloquent on this issue, especially in the last part of his findings. Law students will quote these passages for years.

I'm not as eloquent, but I can briefly explain the issue. In a nutshell, it's one thing when a firm fights head to head with a competitor, but it's very troubling when a platform provider enlists other mercenaries in that fight. This turns a one-on-one case into one against many, potentially with many reluctant conscripts. In many plausible situations, this is not in society's interest. Judge Jackson seems to have bought into the view that this is one such situation.

For all his eloquence, the judge did not resolve this issue in a general way. In other words, how do you know when the situation is good or bad? For example, are there clear legal principles for governing the chaotic alliances of the Internet today? If a dominant firm participates in an alliance, when do such alliances turn from permissible to illegal? The basic principle seems to be that it's okay to twist arms for narrow needs other than foreclosure, but if the arm-twisting has only defensive purposes, such as foreclosing entry of a rival product, it risks antitrust scrutiny. That said, many legal insiders think the judge defined this issue too vaguely and that it is leading to a legal ruling that may not survive appeal. Higher courts usually require firmer legal guidance than is found in a few eloquent words.
More to the point, the competitive issues raised in this case arise for very general reasons. They will arise again with any dominant platform provider in the next decade. By highlighting the most troubling facts and questions, Judge Jackson has begun to define the scope of antitrust law for the fast-moving economy. Are these the right principles and the right norms? This is the debate that needs to take place and which Judge Jackson’s decisions open.
{Editorial note: For reasons that seem extremely imprudent in retrospect, Judge Jackson allowed reporters to keep a record of his thoughts during the trial. These were published after the trial, but before the appeals court had ruled. This greatly upset the judges on the appeals court. For this and other reasons, they requested another judge for further hearings. Most of Jackson's reasoning and remedies were remanded to the lower court for reconsideration. Interestingly, the appeals court did not dismiss all the findings, nor did it overturn the basic ruling that Microsoft possessed monopoly power and used it inappropriately, i.e., that Microsoft took actions that benefited the firm without benefiting its customers. In short, Microsoft was guilty of monopoly maintenance.}
40 Hung up on AT&T
The courts will eventually decide whether to accept the US Department of Justice's recommendation to break up Microsoft. That gives all of us plenty of time to decide whether this is a good or bad idea. Here are a few points to think about.

Many pundits believe AT&T's divestiture offers lessons for how to break up Microsoft. This must stop. Whatever else you might think of the trial, its veracity, or its aftermath, don't look to the phone company for lessons. Don't even consider using the phrase "baby Bills." It all points in the wrong direction.

The confusion is partly understandable. If we don't think too hard, AT&T's experience looks relevant. After all, it was a large firm before its breakup (and it still is). It had one of the highest market capitalizations in the world (and it still does). It too lost a major antitrust trial (we will get to that in a minute) and found itself ordered to divest into separate corporate entities. It sounds like the present setting, doesn't it? From such simple beginnings are bad comparisons born.

A number of so-called lessons have now found their way into general discussion. The top three seem to be these:
AT&T’s breakup wasn’t a big deal, so Microsoft’s shouldn’t be. AT&T’s stock recovered after the breakup, so the stock value of two Microsofts will also recover.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, August 2000.
None of these conclusions is particularly solid. All need significant qualification.
There is precedent

Breaking up a company for violating antitrust law is a very big deal. But not for the reasons most people think. Some commentators think the proposed Microsoft breakup is a big deal because it will be complicated. These commentators are just naive. Breaking up is common and almost routine for American business. More to the point, high-technology companies do this frequently. For example, Hewlett-Packard just finished a relatively large divestiture of its Medical Equipment and Systems Division. This action created two firms out of thousands of patents, several laboratories, and many sales forces. Nothing being discussed about Microsoft is any more complicated than what Hewlett-Packard has done. In some dimensions it's much easier.

There is, however, one key difference. Hewlett-Packard split itself voluntarily. That is, some upper-level executives decided how to organize the breakup. They weren't compelled to do this by force of law, as with Microsoft's breakup. By the way, that's also a big difference with AT&T's case.

To see why this matters, we need to get some facts straight about the AT&T case. To wit, AT&T agreed to divest itself "voluntarily" only after receiving a big nudge from Judge Harold Greene. More specifically, AT&T filed a motion for dismissal just after the prosecution finished its case. Judge Greene answered the dismissal request with a lengthy set of questions, providing AT&T with a road map of what legal issues it had to address during its defense. AT&T's lawyers looked at the road map and knew they were going to lose.

AT&T's lawyers and management then made a very deliberate calculation. They decided to settle with the government and stop the trial. Why? Because they were trying to avoid a federal ruling that AT&T was a monopolist that violated antitrust law. Such a ruling would've made AT&T vulnerable to years of private antitrust lawsuits, where the federal ruling would become automatic evidence in the private suits. This would've been very ugly. So AT&T accepted the government's proposal for a breakup as a way to forestall the federal ruling. The deal was ugly too, but it was better than the more open-ended and much uglier alternative.
Microsoft faced a similar calculation in Judge Richard Posner's chambers, where negotiations took place prior to Judge Jackson's ruling. In sharp contrast, no breakup was under discussion with Posner. Microsoft could have settled for a few "conduct remedies" (which, incidentally, were lighter than those Judge Jackson just imposed on the company in addition to the breakup). To top it off, the newspapers published veiled threats that a breakup would be proposed if negotiations failed.

In other words, Microsoft could have settled with the government, avoiding a federal finding against it, avoiding the appeals process, forestalling some private suits, and avoiding a discussion about a breakup. To be frank, I thought Microsoft would settle. So did the market. Indeed, I'm not going out on a limb by saying that 97 out of 100 CEOs would've settled simply to avoid the hassle — not to mention the expense — of what's about to happen to Microsoft over the next year or two. Reportedly, negotiations got close to settling but failed. Go figure.

Anyway, note the key difference. AT&T negotiated its breakup as a way to avoid further litigation. Microsoft is having a breakup foisted on it because negotiations failed. In one case the firm made its peace with the government by breaking up. In the other case the firm could've avoided the breakup but didn't, and is now kicking and screaming about it.
Stock value in the long run

If Microsoft executes the breakup well, stockholders should do just fine. After all, Microsoft is loaded with many products and assets that won't lose their value overnight. Nothing's going to displace Windows on the desktop. Nothing's going to displace Office. They both generate an enormous amount of revenue. It doesn't matter whether those assets are divided between two companies or owned by one. More importantly, Microsoft's mid-level positions are stacked with an abundance of talented and ambitious employees. And let's just say what everyone knows, but no employee will say in a newspaper: Many of these talented people would be willing/happy/eager/able to run their own software company without Steve Ballmer and Bill Gates.

That said, everyone is expecting Microsoft's upper management to resist a breakup with all the petulance for which it is infamous. That's where the possibility for disaster lies. If a breakup is executed reluctantly, all comparisons with AT&T are irrelevant. More to the point, a divestiture isn't fun. It can't be. It requires coordinating consultants, accountants, and lawyers over many months. That requires patience, financial acumen, and a keen eye for legal detail.
Aside from needing a manager with these talents, a good divestiture requires a manager with a certain kind of temperament, someone who knows how to calm nervous employees. That is, defections are reduced during restructuring if the managers have less volatile temperaments. It’s no insult to say that these requirements don’t play to the comparative advantages of Bill and Steve. Said another way, both of them are demonstrably good at building commercial empires, but there’s little evidence of their ability to divide an empire. I bet both could manage it if they cared, but I also bet that neither of them cares passionately about a breakup, no matter what shape it takes. So here’s the point. AT&T’s stockholders eventually did well after its breakup, but so what? That doesn’t tell us anything about this case, where much rides on whether management executes the breakup well.
Competition in the long run

The telecommunications industry did become more competitive after AT&T's breakup. Prices declined in most locations. Equipment markets and long-distance markets exploded with new entrants. In addition, AT&T was a big, bloated firm prior to divestiture, and divestiture helped bring the spur of competitive discipline to the firm's equipment and long-distance business. Most observers also believe the Internet revolution was enhanced by the decentralization of power in the industry — a decidedly good thing. Yet, for all these benefits, none of these circumstances resonates with any comparable detail in the Microsoft case.

Well, then, what will a breakup do to the computer market? That is, if the conduct remedies are accepted, what additional effect will the breakup produce? At best, a breakup seems to enhance two potential pathways for more competitiveness.

First, it's in society's interest to have two firms in Seattle fighting each other over the next revolutionary technology. To be sure, the browser wars are over, but there's bound to be something else soon that influences both applications and operating systems. With this new competitive structure, we'll see two versions of that technology instead of one. Is that a good thing? Yes, it is, especially if the new technology is sufficiently disruptive, like the browser. Maybe it'll be voice recognition; maybe it'll be some sort of XML application. Who knows? It just has to be big enough to get new competition going, to induce multiple designs and lots of activity. Two choices are better than one in that kind of situation.
Second, a breakup will be good if it partly reduces the power of Redmond decision-makers to control the fates of others. Of course, this comes at a cost. A breakup reduces the power of those same decision-makers to coordinate actions between the applications and operating systems divisions.

Said another way: The biggest supporters of this proposal are my Silicon Valley friends who believe that there are many small software firms that have difficulty commercializing their software. Because Redmond is partly restrained by a breakup, so this view goes, it gives those small firms a chance at Microsoft's expense. The biggest detractors of a breakup, on the other hand, are my friends at Microsoft who believe that most of the Valley's small firms have an inflated sense of self-importance and unrealistic optimism about their commercial prospects. In this view, the breakup reduces Microsoft's power and only hurts its ability to coordinate change.

Unfortunately — and this is my larger point — none of this has anything to do with AT&T's experience. Plenty of good things did happen after AT&T's breakup, to be sure, but little of it is comparable to what Microsoft may someday encounter.
{Editorial note: When the break-up of Microsoft was first announced as a potential remedy, it was the only thing people could talk about. In retrospect, this essay is academic and the point is moot. Microsoft was saved from any hard choices by an odd combination of events. First, the appeals court explicitly asked for a remedy that matched the severity of the crime and did not think Jackson had weighed all matters in balance. The court sent this remedy back to the lower court for reconsideration. Second, the 2000 election brought in a new administration. At heart the Bush administration had little interest in pursuing the case further. Third, and probably most significantly, after the September 11, 2001, terrorist attacks on the World Trade Center buildings in New York the DOJ began redirecting all efforts towards the newly perceived threat, closing open cases if they could. In this spirit, Charles James, then Assistant Attorney General for Antitrust, came to a deal with Microsoft's attorneys. Enough of the state attorneys general signed on to the deal to provide political cover. With some slight modification, this consent decree was approved by a new judge.}
41 Falling Through the Cracks at Microsoft
Cases like Microsoft’s federal antitrust fight come along once a decade. However, the national press has reported the results, interviewed a few experts, and already moved on. It’s as if nothing happened on a grander scale. Why the silence about the bigger issues? For one reason, the trial was genuinely complicated, so bumper sticker debates on “Night Line” — not to mention MSNBC — shed little light on the core issues. Also, the whole thing has lost its immediacy. It’s stuck in limbo, waiting for appeals courts to rule. In addition, as with political events, the trial reporters focused on public relations blunders and daily changes in competitive tactics; that emphasis devalued the bigger questions. Finally, and even more disturbing, reporting devolved into brain candy. For example, in the middle of the trial The New Yorker published a sympathetic article on why Bill Gates felt misunderstood. Despite the depth of the psychological analysis, the article left the impression that this trial was just another personality conflict between Gates and the universe. It’s partly that, of course, but that impression also trivializes the bigger issues. In truth, this trial inter-mixed two separate stories: one about policy and the other about managerial decision making. The trial highlighted a fissure in public policy about high-tech markets in which two contrasting frameworks for examining issues exist. Also, this case arose because
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, October 2000.
Microsoft’s managers seem to misunderstand how the marketplace and the courtroom view this fissure. Most reporters still don’t understand this fissure. Since Microsoft fell through this crack, it’s worth explaining.
Vertical contracting

We might call the first conceptual framework the vertical contracting framework. It's one of the oldest stories in antitrust. In fact, some of the oldest examples come from the era of railroad growth, which predates the US Sherman Antitrust Act. This framework focuses on the use of discriminatory tactics. Why? Because a dominant firm can play one business partner off another, using discriminatory methods as a way to achieve de facto exclusive terms with downstream partners. That's a policy issue because society has an interest in eliminating exclusive arrangements. These may interfere with new entry by other firms or with the emergence of a wider variety of designs by existing firms.

This framework worked with Judge Jackson because, to be frank, Microsoft left an incriminating trail of evidence about its discriminatory tactics. Microsoft executives weren't particularly subtle with Dell, Compaq, and Gateway about any OEM carrying another firm's browser. Nor was there much nuance to Microsoft's contract restrictions with OEMs about their ability to tailor their designs, a contract provision known as first screen restrictions. The prosecution also had all sorts of high-powered e-mail evidence in which the boys of Redmond conducted their business dealings sans social graces. More on this later.

Anyway, and more to the point, the vertical contracting framework was a near certain winner if the arguments were presented coherently. The Department of Justice lawyers — like lawyers everywhere — wanted a victory, and this framework offered an expedient way to get it. It was used because it could win in court. That's how trials work in America.
Another framework

The second conceptual framework might be called the platform framework. It begins with the premise that Microsoft has assumed/acquired/grabbed a central role as a platform provider. That is, Microsoft makes it its business to act as a firm that coordinates the direction of change. It acts as a focal point, coaxing others into using specific technical implementations and moving things forward as best it can.
This isn’t easy, of course. Successful coordination of technical change requires control over many things. While it isn’t usually in society’s interest to let one firm control the rate and direction of innovation too much, it can be in society’s interest to let somebody coordinate many firms, at least to some degree, if that coordination/control moves society’s technology forward. That excuses some of Microsoft’s behavior, to be sure, but not all of it. Accordingly, proponents of this framework worry about whether a dominant firm is exerting control as a selfish end in itself or with the purpose of moving the platform forward. The latter goal benefits consumers, while the former arguably doesn’t. Yes, it also provides Microsoft with a partial defense. Since there can only be a few platform providers at a time, do any other potential platform providers seem particularly better? After all, would anyone trust Larry Ellison any more than Bill Gates? And was the IBM era any better for people outside IBM? No, no, and not really. In other words, didn’t Microsoft get to this position by being comparatively good at its work? At the same time, this view raises several troubling questions. Why would a dominant platform provider help a complementary firm that has become large enough to act as another platform provider? Why would a dominant platform provider not use access to technical information as a bargaining tool? Why would a dominant platform provider let middleware firms have enough design freedom to grow into a major force? Let’s face it: A platform provider wouldn’t do any of these things, but it’s in society’s interest if they do. To be clear, the courts didn’t address either side of these questions. For tactical reasons Gates’ legal team seemed to worry about conceding that Microsoft was a platform provider at all. To do so would concede that Microsoft had some control, which, in turn, makes it especially vulnerable to an antitrust violation under the vertical contracting approach. Also, there was another difficulty. No antitrust law really exists for platform providers. It would’ve been rather risky to base a defense on it. Conservative judges usually don’t like new theories.
Two frameworks and a specific firm

To summarize: The Microsoft antitrust trial was largely fought on the basis of the vertical contracting framework. The defense never really offered a full view of platforms, the day-to-day strategic concerns at Microsoft. Neither did the prosecution. This expediency deferred the really big question: What legal limits, if any, should the government place on platform providers for innovative markets?
To put it more starkly, the government lawyers figured out how to use a very old framework to win a legal case in a very new industry. Winning overrode other considerations. In contrast, Microsoft executives were appalled that a vertical contracting framework was being used to examine their behavior. They lost because they didn't take care to guard against violating these old and established legal norms.

So this gets us to the last question: If this vertical contracting framework is so old and so well known, how could Microsoft's executives (and in-house lawyers) leave such an incriminating trail? Or to put it another way, if this legal stuff is really antiquated and doesn't matter much to Microsoft's core business, why did the company do so many things that invited scrutiny? Why didn't it play clean under the old rules if it could still get what it wanted by playing clean?

Consider a few examples. Didn't Microsoft's managers know that sending a threatening letter to Compaq to withdraw the operating system license would come back to haunt them? Sure, it was explained in the trial that this letter was a bureaucratic mistake. But a good in-house lawyer should have known that the letter should never have been sent in the first place since it's an action that's easily misinterpreted. It makes the company look out of control, brazenly willing to abuse its market power.

Similarly, didn't Microsoft's lawyers know that exclusive contracts with ISPs would be interpreted as a potential tying arrangement, a clear antitrust violation? Sure, these provisions had little real competitive benefit in practice, but that's all the more reason not to use them. They don't look innocent on the surface because such contract provisions are absolutely illegal at dominant firms. Their use here made Microsoft look unwilling to regulate its own behavior.

Finally, didn't Microsoft's managers understand that threatening to withhold technical information from Netscape could be interpreted as discriminatory behavior? Sure, there are limits to what Netscape had rights to know, but so what? It was obvious that Microsoft would face a lawsuit unless it justified its decisions with very precise reasoning. Any quid pro quo for technical information is always suspicious when dominant firms are involved.

Where was Microsoft's Antitrust Compliance Program? Such a program could've stopped all these actions or at least muted them to some extent. At best, it infuses a company with respect for the legal responsibilities associated with having a dominant market position. At a bare minimum, it limits the amount of incriminating evidence. Antitrust compliance is routine among Fortune 500 companies because, if nothing else, these programs are an ounce of prevention for keeping companies out of court. So the absence of such a program at a
firm with the world's highest market capitalization represents something extraordinary. This can happen only when somebody at the top is asleep or nobody at the top cares. Since this firm is usually awake, it's not hard to reach the obvious conclusion. But that conclusion is puzzling. Why does a company's management that's otherwise strategically brilliant and admirable in many respects remain so opposed to meeting minimal legal requirements from the federal government? If these particular requirements really make little difference to its overall business, Microsoft should just meet them and move on. But if there's resistance to surrendering power for the simple reason that surrendering anything reduces Redmond's control over others, maybe the government's case was using archaic legal rules to regulate exactly the right thing after all. What do you think?
{Editorial note: This essay was inspired by a wide array of musings. It was fascinating to watch Intel settle its case with the FTC in the late 90s, while the Microsoft case dragged on. Intel staved off a huge amount of bad publicity and did not have to accept a particularly binding consent decree. Yet, Microsoft seemed not to take any lesson from this example. I still think, as do many others, that Microsoft made a strategic error in going to trial against the DOJ instead of settling for the best consent decree they could get prior to trial. I still think that many of the basic things the government asked for — like open distribution channels — would not have hurt Microsoft's business much (and if it had, then they did not deserve what they were earning by keeping the channel closed to others). I still think that Microsoft paid a huge price for all the bad publicity about their hard-ball tactics and retaliatory behavior. It engendered much ill-will in the industry and it hurts the firm's ability to work with others going forward. For a lengthy time it also distracted the top executives quite a bit, which surely hurt their ability to do many of the things they do so well, like bring out new products. So one had to wonder what was going on in the executive suites in Redmond. It has become quite clear from later books, particularly Breaking Windows by David Bank, that Bill Gates personally approved of the hard negotiating tactics with the judge and the DOJ. Did Bill understand or forecast how dangerous it would be to push the fight to its limit?}
Part VIII
Platforms and Standards
42 Markets, Standards and Information Infrastructure
Today’s worldwide information infrastructure encompasses a broad spectrum of activities. Telephones, local area networks, wide area networks, and supermarket scanners all play a part. That infrastructure involves both simple and sophisticated equipment-everything from wireless communication devices, to microprocessors, to thousands of miles of copper cables. This information infrastructure did not arise overnight, nor did any single policy vision guide it. A host of legal, economic, and historical factors shaped its development over most of this century. The processes were decentralized, usually market oriented, and seemingly too chaotic for any organization to control. Of course, some have tried. AT&T, IBM, the Federal Communications Commission, and the US Department of Defense, among others, have briefly, and sometimes successfully, coordinated the development of one component of the whole. No single policy vision coordinates infrastructure development today. Nor could any centralized decision process possibly guide such a complex engineering network. By default, decentralized market mechanisms, private firms, and standards development organizations take responsibility for many technical standards within the information infrastructure.
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, December 1993.
Approaching the standardization process

Standardization activity plays a dual role when decentralized, market-oriented decision-makers make technical decisions. These dual roles, as coordinator and constraint, show up in both short- and long-run analysis. Failure to account for both roles has frequently obscured our understanding of the standardization process.

In the short run, when market structure is relatively stable, standards coordinate contemporary and anticipated market behavior. The costs of using a network of components usually decline because standardization reduces the interconnection expense. Standards let component designers anticipate interconnection requirements so they can improve their part of the system. With standards, system users can invest in assets and rest assured that loss of connectivity will not depreciate the asset's value. Yet this coordination benefit is not free. Standards limit the choices of users and vendors alike. Both become locked in to a set of technical constraints that they may change only at a high cost. Moreover, vendors recognize the strategic importance of locking users to a standard and spend, or waste, considerable resources on manipulating its development.

In the long run, the ultimate importance of standardization arises from its impact on technical change. Because many parts of the information infrastructure have not reached the stasis associated with mature product markets, standards shape technology-based decisions. Standardization issues lie at the core of developments in digital cellular telephones, high-definition television, and large LAN communication protocols. In this setting standardization also plays a dual role as constraint and as coordinator. Stable, functional, predictable standards coordinate technical development. Yet standards also lock in users and suppliers over the long haul. Lock-in is especially costly when technical possibilities change rapidly, removing previous technical constraints and imposing costly new ones. Though standards will constrain technical improvement, that improvement will occur sooner and will involve more development among more components.
Definitions and distinctions

The difficulty with any analysis is that today's information infrastructure is a "network of networks." It consists of a hodgepodge of public and private telephone networks, private local and wide area networks, mainframe- and mini-computing centers, and numerous communication
bridges between various subnetworks. Telephone companies, computer hardware and software companies, satellite operators, governments, and virtually every user play some role in this network.

Analysis breaks through this hodgepodge by focusing on one "economic network" at a time. All buyers and suppliers who have economic incentive to care about a system's technical features comprise such an economic network. Either all users desire to communicate with one another, as in a traditional telephone network, or all users need electronic components to work with one another, as when an industry-wide network of buyers uses the same standard bundle, or the minimal set of components necessary to ensure system performance.

Notice that the use of "network" here is not conventional. Economists view telecommunications networks as more than just their physical linkages and electronic signals, more than just the physical equipment extant at any given time. Economic relationships extend beyond physical boundaries of equipment. Though many buyers and sellers of the same information technology may not buy equipment or services from the exact same supplier, they still may be a subset of the same economic network if they use compatible equipment.

All activity in an economic network centers around interoperability: whether a component may serve as a subsystem within a larger arrangement of components. In the simplest case, compatibility standards define the physical fit of two components. Familiar examples are modular phone jacks on telephone cords and handsets, and compatible telephone switches. More complex are the standards that determine electronic communication channels. The need for these standards is obvious, since successfully filtering, transmitting, and translating signals across telecommunication networks requires precise engineering. Similar needs arise in the design of circuitry linking computers, their operating systems, and application software programs.

More generally, compatibility solves but one issue in a wider array of coordination problems. Most on-line commercial networks (Prodigy, Compuserve, America Online, or the private networks of thousands of commercial organizations and private firms) are sophisticated electronic networks. These often involve on-line transaction processing, employ a mix of sophisticated telecommunications and computing equipment, and must operate reliably on a daily basis. Accomplishing these various functions involves all the coordination activities associated with the successful management of a business enterprise. Products and services must be defined and tied to billing. Output must be controlled and its quality assured. Electronic signals must be routed without hesitation. An organization must also develop capital capacity and plan the requisite staffing to meet long-run service needs.
Sometimes these decisions involve coordinating actions within a single organization. Often they involve coordinating decisions across divisions within the same company, or among upstream and downstream vendors, or between a vendor and a governmental regulator. Economic research to date focuses primarily on the factors influencing the development of compatibility standards. This focus on the nexus of economics and technology is a bit narrow, since it virtually ignores the important organizational costs just mentioned. Nonetheless, since interoperability is necessary for coordination on any level, this restricted view does not invalidate the merits of the analysis of compatibility. It simply means that typical analysis ignores lots of the messy details of coordinating organizations in practice. As I will point out, sometimes this hole matters and sometimes it does not. Another key is the economist’s taxonomy of processes that develop standards. Unfettered market processes may develop standards as a de facto result of either a sponsored or an unsponsored market process. In a sponsored process, one or more entities, suppliers, or cooperative ventures create inducements for other economic decision-makers to adopt a particular set of technical specifications and become part of an economic network (such as pre-divestiture AT&T-sponsored telecommunication standards). An unsponsored process has no identified originator with a proprietary interest, yet follows well-documented specifications (the QWERTY keyboard). Voluntary industry self-regulation may also play a role when economic networks arise out of the deliberations of standards development organizations. Of course, government bodies may also shape the development of economic networks (such as the FCC). Government organizations have no compelling reason to involve themselves in the development of every network. They often do so because important public policy issues are at stake, as when domestic and foreign firms use standardization as a competitive weapon. They often do not do so because external forces, such as dramatic technical change, outstrip the ability of any administrative process to guide events, making it easier to leave decisions to market participants. When to rely on a market process instead of on government decision-making is an open and active topic of debate, one that usually hinges on trade-offs between imperfect market processes and imperfect government intervention.
Short-run analysis

Short-run and long-run analyses require different approaches. Short-run analysis presumes that the number of key decision-makers, such as firms
or potential users, is virtually fixed. This is not bad if many rigidities (a firm's technical expertise, economies of scale, and various other competitive advantages associated with incumbency) limit how many firms can feasibly produce for a market in the short run. By implication, short-run analysis is not appropriate for investigating how technical innovation influences the adoption of standards and the number of suppliers, and vice versa. Also, since rigidities differ in importance in different markets, the appropriateness of this type of analysis will also differ by industry.

For short-run analysis, we conveniently distinguish between networks in which many suppliers provide related services, a few do, or only one dominates. Table 1 summarizes some of the differences that arise between such systems. These distinctions help organize insights about patterns of outcomes and the factors that produce them. Of course, determining which markets belong in which categories is not always obvious in practice. Indeed, much controversy is essentially argument over which type of analysis applies to which specific market.
Table 1. Short-run analysis: trade-offs between different market structures.

Decision-making
  Unsponsored standard: diffused to many firms.
  Dueling sponsors: concentrated in a few firms.
  Single-sponsored standard: concentrated in a single firm.

Severity of coordination problem
  Unsponsored standard: difficult to reach agreement among all interested vendors and users.
  Dueling sponsors: depends on the willingness of vendors to design components that mix and match.
  Single-sponsored standard: all decisions internalized by a single firm; depends on the management of the firm.

Pricing
  Unsponsored standard: typically very competitive; pricing close to cost.
  Dueling sponsors: oligopolistic pricing; typically some markup over cost.
  Single-sponsored standard: monopolistic pricing; high markup over cost.

Primary distortion
  Unsponsored standard: decisions subject to bandwagons; society will likely not get the optimal technology.
  Dueling sponsors: vendors' strategies determine networks; vendors will lock in users and lock out rivals.
  Single-sponsored standard: the monopolist will manipulate technology to its own advantage and blockade as much entry as possible.

Many decision-makers and too many cooks

Standardization may not easily arise when decision-making in a market is diffuse, that is, when a market has many buyers and many sellers, none of whom
is responsible for a large percentage of economic activity. This trend is disturbing since diffuse market structures are typically very competitive and tend to allocate scarce resources efficiently through price mechanisms. Many policy issues would be simplified if diffuse market structures gave rise to desirable standards.

Coordination problems and the lack of sponsorship. When decision-making is diffuse, coordination problems sometimes arise. Such terminology is not a statement about whether an economic enterprise coordinates its own employees around a single objective. Rather, it means that all potential users and suppliers could benefit from as much technical interoperability as possible, but instead go off on their own. The sheer number of decision-makers hinders adequate communication that would solve the coordination problems. Even if all firms could communicate, differences of opinion make consensus unlikely. Moreover, standards that serve as focal points are unlikely to arise very easily, because every potential supplier and user of a standard is a small part of the whole. Each decision-maker has too little incentive to make the investments that will coordinate the design decisions of other users and lead to general interoperability. Market structure alone may hinder network growth because standardization does not arise, or it arises too late. The slightly different Unix systems proliferating in the 1970s and 80s serve as a good example.

This observation immediately leads to one disturbing prediction for the growth of private telecommunications networks: if standards are unsponsored, different firms' networks will likely not work with one another without considerable adjustment. Private networks often develop according to internal imperatives. When these networks grow larger and brush up against one another, they may be unable to work together simply because no sponsor ensured that they were initially developed in a technically compatible manner. For example, after the introduction of supermarket scanners, suppliers took years to coordinate their deliveries with the inventory management of grocery stores, if they coordinated them at all. Similar factors have slowed the introduction of scanners into the retail clothing sector. Table 2 summarizes some of the differences between sponsored and unsponsored standards structures.

Table 2. Standardization and technical change: trade-offs between different market structures.

Systematic innovation
  Single-sponsored standard: all decisions internalized within a single firm; likely to be accomplished as fast as technically feasible.
  Unsponsored standard: firms must coordinate changes within SDOs; likely to be administratively difficult and slow.

Component innovation
  Single-sponsored standard: the sponsor tends to resist cannibalizing rents on existing products; component innovation is slow.
  Unsponsored standard: component vendors must frequently innovate to stay ahead of the competition; component innovation is fast.

Coordination of technical change
  Single-sponsored standard: the firm's administrative process coordinates changes in the design of its own products.
  Unsponsored standard: no one is responsible for technical change; it is uncoordinated and uncertain.

Degree of lock-in in the long run
  Single-sponsored standard: likely to be high because users have no alternative.
  Unsponsored standard: lock-in is as low as possible because competing vendors will try to keep lock-in low.

When unsponsored economic networks develop and build capacity, they often swell and shrink for many reasons that may have only a tenuous connection to the long-term economic welfare of market participants. In a characteristic bandwagoning effect, networks may be slow to start when they are small. Many potential adopters will sit on the fence, waiting to make expensive and unrecoverable investments until a large fraction of other users choose a clear technical standard.
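To make the fence-sitting dynamic concrete, here is a minimal threshold-adoption sketch in Python. It is purely illustrative: the thresholds, seed sizes, and population are invented numbers, not estimates drawn from any market discussed here. Each simulated user adopts the standard only once the share of earlier adopters reaches that user's personal threshold, so a network seeded below the critical mass stalls while one seeded just above it tips.

```python
# Toy threshold-adoption model of a bandwagon (all figures hypothetical).

def adoption_path(thresholds, seed_share, rounds=20):
    """Track the adopter share when each user joins only after the current
    share of adopters reaches that user's personal threshold."""
    n = len(thresholds)
    share = seed_share
    history = [share]
    for _ in range(rounds):
        adopters = sum(1 for t in thresholds if t <= share)
        share = max(share, adopters / n)   # adoption is irreversible
        history.append(share)
    return history

# 20 early adopters who move once 10% of the market has committed, plus
# 80 mainstream users whose thresholds are spread between 15% and 50%.
users = [0.10] * 20 + [0.15 + 0.35 * i / 79 for i in range(80)]

print(adoption_path(users, seed_share=0.03)[-1])  # 0.03: below critical mass, the network stalls
print(adoption_path(users, seed_share=0.12)[-1])  # 1.0: past critical mass, the bandwagon tips
```

The same population ends up at either extreme depending on early history, which is the sensitivity to small events that makes these markets hard to predict.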
Networks may not develop at all if most participants are lukewarm about a new standard due to technical uncertainty, for example, even though all would collectively benefit from it. Alternatively, bandwagons may also gather speed (remarkably!) quickly once a network becomes large enough to justify investments by potential adopters who, in the early phase of development, had delayed making commitments. The lack of communication between all the potentially affected decision-makers exacerbates such bandwagons, though professional organizations can often provide communication channels to bridge some of the troubles.

Lock-in. A costly problem arises if most vendor and user capacity for a network becomes locked in to a technical alternative, making it costly for users and suppliers to change fundamental technical specifications. Either hardware or software embodies difficult-to-change technical features, or humans cannot be retrained easily to work with a different technology. These costs are especially high when a network must change (be upgraded, expanded, or replaced) and the network serves as an essential part of an organization's day-to-day operations. Change risks significant downtime that arises from the costs of fixing the almost inevitable mistakes any change produces. In the 1980s, the Federal Aviation Administration updated its air traffic control systems across the country; the small margin for error made the upgrade especially difficult.
Lock-in produces two related problems:
• A network may not become as large or as valuable as possible because users lock in to a disparate variety of formats and each finds it costly to change later.
• If many potential adopters wait for a "shake-out," early adopters may make crucial choices between technologies, thus bearing a disproportionate influence over standards. Technical designs may not permit easy alteration to accommodate the different needs of the later decision-makers.
Perhaps the disproportionate influence of early users is justified because these same users bear a high risk for being intrepid: their investments in a network can become obsolete or "orphaned." However, this argument sidesteps the question of whether society gets an optimal technology or not, which is the central policy concern. The issue is not solely that past investment influences future technical choices; that happens quite often, complicates choices, and is a sober fact of life. For example, the installed base of color television sets in the US today uses a set of standards that is incompatible with many of the new HDTV standards. Many observers think that abandoning this installed base is too costly and, thus, recommend using a high-definition standard that is backward compatible with the installed base, even if doing so sacrifices some of the pictorial quality possible with HDTV technologies or raises costs.

More importantly, society can be locked in to the wrong technology after the fact. When viewed with hindsight, "society" could regret previous decisions. Even though past choices constrain future choices, future decision-makers never have an opportunity to persuade previous decision-makers about that choice. Hence, past choices will likely be short-sighted.

Can a small number of cooks do any better?

Diffuse decision-making leads to situations where (1) communication and sponsorship are unlikely, and (2) coordination problems are likely. Therefore, market structures with few vendors should not suffer as much from coordination problems. However, such a conclusion is hasty if not qualified properly. In markets with few vendors, the proprietary interests of the vendors lead them to take strategic actions designed to produce outcomes they favor. While this reduces the severity of some types of coordination problems, it also induces other types of distortions.

The "dueling sponsors" arrangement best illustrates these concerns. Here each sponsor has proprietary interests in an array of components that perform similar functions, but competitors employ different technical standards. The VHS/Betamax duel in the VCR markets is a well-known
case. Such battles are common today in high-tech industries (IBM vs. DEC in minicomputers, MS Word vs. Word Perfect in word processing, FDDI vs. ATM in network communications). These duels may start as multifirm contests but quickly reduce to a handful of dominant participants. Sometimes a fringe of niche market suppliers follows the leaders, leaving two or three technical standards to dominate all choices. Network duels also commonly arise as subplots to related larger product market duels. Various banks may belong to incompatible ATM networks, and United Airlines and American Airlines sponsor competing airline reservation systems. If recent experience is any guide, this type of market structure will likely characterize many, if not most, private economic networks in the future information infrastructure.
Dueling locks-in users

Economists are of two minds about dueling. On the one hand, dueling may prevent the economic network from becoming as large as it possibly could, even if all users would benefit from a larger network. Unlike an unsponsored network, dueling encourages a vendor to lock in buyers. Dueling sponsors have incentives to design incompatible systems if incompatibility raises the costs to users of switching to a rival sponsor's system. Similarly, the sponsor of a system would like nothing better than to raise the costs to the experienced user of switching vendors, since it makes a user reluctant to change networks. Vendors like to be the exclusive provider of a technology to a locked-in buyer for several reasons:
• It provides the sponsor with market power during any repeat system purchase.
• It guarantees a stream of related business. In computing networks, for example, locked-in buyers will purchase CPU upgrades from their system sponsors, and often a majority of their peripherals and software.
• Locked-in users can be manipulated for competitive advantage. In the case of computer reservation systems, the sponsoring airlines were accused of locking in travel agents and then manipulating the screen to favor the flights of the system sponsor.
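The switching-cost logic behind this list can be illustrated with a small back-of-the-envelope sketch. The prices and conversion costs below are hypothetical numbers chosen only for the example; the point is how lock-in translates into pricing headroom for the incumbent sponsor.

```python
# Toy switching-cost comparison (all figures hypothetical).

def cheaper_to_switch(incumbent_price, rival_price, switching_cost):
    """Return True if defecting to the rival is cheaper over one purchase cycle."""
    return rival_price + switching_cost < incumbent_price

incumbent_price = 100_000   # what the current sponsor charges for the next upgrade
rival_price = 80_000        # a rival undercuts the incumbent by 20%
switching_cost = 35_000     # retraining, data conversion, downtime from incompatibility

print(cheaper_to_switch(incumbent_price, rival_price, switching_cost))  # False: the buyer stays put

# The incumbent can price all the way up to the rival's price plus the
# switching cost before the buyer defects.
print(rival_price + switching_cost)  # 115000: the incumbent's pricing headroom
```

Raising that conversion cost is exactly why a sponsor invests in incompatibility, and, as the next paragraphs note, why a rival works equally hard to lower it.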
Similar factors, as well as several pricing issues, prevented ATM networks from working together as one large network for many years. Notice that a vendor may desire to lock in its buyers, but a vendor’s competitors will desire the opposite. While a vendor may try to raise the
cost of switching, a rival may be working equally hard to lower those costs. From society's standpoint much of this activity is wasteful. Wouldn't society be better off if all competitors ignored lock-in and directed all their energy at making better products? Yes, but this will rarely occur because of the strategic importance of standards in a competitive duel. As with unsponsored economic networks, the market's choice between dueling systems retains the sensitivity to small events, which is some cause for concern.

Dueling has its good points. Despite the foregoing, economists are not uniformly pessimistic about dueling, which is where some confusion arises. Sometimes dueling sponsors will not design incompatible systems. When rival sponsors provide components that perform different or complementary functions, compatibility permits many mix-and-match possibilities among the components of rival systems. In turn, the profitability of producing compatible components (despite increases in competition) rises. The market for stereo equipment is a familiar example, as is the market for personal computer hardware clones and software applications under the DOS standard. Thus, dueling sponsors are likely to find it worthwhile to make investments to reduce interoperability costs when they do not produce every type of component. They might as well if each has comparative advantage in the design and production of some but not all components, a common occurrence when market participants have different technical capabilities. This is probably a good explanation for the willingness of many firms, AT&T and IBM increasingly so, to participate in markets with nonproprietary standards.

Dueling standards may also be economically efficient if a variety of standards is appropriate for a variety of potential problems. The crucial question is whether the market will permit entry of a new standard suited to a minority of users. This may depend on the strength of lock-in effects or the success of actions of system sponsors to foreclose or induce entry of complementary products, such as software. Specific conclusions depend on careful analysis of particular industries.

Competition and innovation also counterbalance some of the distortions from lock-in, giving rise to another cause for optimism. Monopoly profits may be dissipated through competitive bidding between the rival system sponsors. Since many buyers anticipate that their vendors will later gain monopoly benefits from exclusive sales of complementary products, they will demand compensation before committing to investment in network capacity with proprietary features. Such demands can elicit "promotional pricing" from sponsors. The good news is that the networks with long-run economic advantages are likely to provide bigger price discounts. Also, competitive bidding for
new customers may spur incumbent system vendors to innovate. Some observers argue that intersystem competition was a primary driver of computer system innovation in the 1960s and 1970s. The bad news is that this benefit sometimes accrues only to new users and not necessarily to users with an installed base of equipment, who are already locked-in. Dueling may also induce actions that ultimately lead to the success of one economic network but also the loss of the sponsor’s control over it. A firm may broadly license a technology to establish it as a standard, but in so doing, sacrifice its control over the standard and much of the monopoly profits associated with that control. Sun Microsystems’ liberal licensing strategy with the Sparc architecture exhibits some of these features. Another variant of this phenomenon arises when a firm designs a product that does not contain proprietary technology. A nonproprietary system induces entry of more peripheral and software suppliers and hardware clones. This makes the hardware conforming to the standard more valuable to users, while the entry of more clones reduces the price. The development of software and peripherals for the IBM-compatible PC followed this pattern. Once the standard was widely accepted (partly as a result of all this entry), IBM no longer garnered much of the rents from being the original sponsor of the standard. Indeed, today IBM and a consortium of private firms are battling to determine the direction of the next generations of “IBM-compatible” machines. Perhaps the greatest weakness of the economic analysis of dueling systems is also its strength: the long list of possible outcomes. Prediction is quite difficult, particularly in view of the multiplicity of pricing and promotional strategies typically available to firms in information technology markets. Translating economic analysis into useful managerial advice pertinent to a specific market can be difficult.
A single chef makes a menu of favorite recipes

Placing a single sponsor in charge of a standard is a natural solution to coordination problems. The structure of a single firm internalizes all design decisions and upgrading and maintenance problems. Unifying control within a single firm generally eliminates competing designers, providing users with certainty about who controls the evolution of standards and their ultimate compatibility. We cannot overemphasize this potential benefit from single-firm sponsorship, especially in markets subject to uncertain and rapid changes in technology.

Many readers will recognize this as the traditional model of telephone networks under AT&T's pre-divestiture leadership and as IBM's
vision for integrating computers and telecommunications under the System Network Architecture model. Many other firms have also tried to adopt this model, though competition often forces them into duels.

The problems with dominance. Unfortunately, single-firm sponsorship by a supplier also brings much baggage with it. Generally, large firms have disproportionate influence upon market processes, which they manipulate to their advantage, to the detriment of society's long-term interests. Most of these concerns fall under the realm of anti-trust economics or traditional regulatory economics. Anti-trust and regulatory issues arise whenever a dominant sponsor competes with small plug-compatible component suppliers in some or all component markets. IBM battled plug-compatible component suppliers from the late 1960s onward. Similarly, from the mid-1950s on (and growing thereafter), AT&T faced competition in customer-premises equipment markets and long-distance. Today the Regional Bell Operating Companies are beginning to face competitive bypass of their services from nonregulated suppliers of fiber-optics.

Anti-trust concerns arise because the dominant firm always wishes to prevent the component firms from gaining market share (and may even want to drive them out of business), while society may benefit from the added competition. Controlling and manipulating technical features of a product, or effectively raising the costs of interconnection, may enhance a dominant firm's strategies aimed at gaining competitive advantage. Essentially, a large system sponsor and small component supplier do not possess the same incentives to be interoperable: a small firm usually wants interoperability and a large firm does not. The benefits to vendors from accessing a rival network's users are counterbalanced by the loss of market power from facing competition from a rival vendor. Vendors with larger markets are less likely to desire compatibility with smaller rivals (than the smaller rival does with them) because larger firms gain less from selling to a few more customers and could lose more from facing more competition. IBM's role in blocking the development of ASCII standards for mainframe computers, and allegedly in plug-compatible equipment markets as well, exemplifies this behavior.

Dominance and policy issues. Competitive behavior presents two difficult issues:
• Under what conditions will a dominant firm manipulate a technology to its advantage and to the detriment of potential entrants and consumers?
• Can and should such behavior be regulated? That is, do the benefits from preventing inappropriate market conduct outweigh the side-effects from imposing an imperfect legal or regulatory rule?
Most observers stumble on the first question, and even if observers clearly describe (in nonpolemic tones) a sponsor's strategies that are inappropriate for society, they may fail on the second set of issues. Policy rules that prevent inappropriate behavior will almost always deter perfectly acceptable behavior as well. As a result, many relevant debates remain unresolved.

Such debate, for example, surrounds any analysis of leveraging, that is, using monopoly power in one component market to gain competitive advantage in another. Most economists agree that courts have carelessly applied this concept, though few agree on an appropriate definition. Definitions aside, a network sponsor surely can delay entry of complementary component suppliers, and possibly foreclose entry altogether. AT&T's resistance to designing modular telecommunication connections delayed entry of competition for customer-premises equipment. However, the unresolved policy question is whether such behavior should or can be regulated to any good end. One big problem, though not the only one, comes when courts get in the business of second-guessing every innovation, especially those with exclusionary features. Such unwarranted interference may have a chilling effect on many firms' willingness to introduce any innovation, which normally is not in society's long-term interest.

The legacy of the IBM antitrust victories has left firms considerable latitude in the use of standardization for strategic purposes. However, future cases will probably further test key legal rulings. The recent Federal Trade Commission investigation of Microsoft and the recent anti-trust suits against Nintendo foreshadow such a trend. Also, important legal standards are likely to come from several ongoing trials that raise issues in intellectual property rights in computer software standards, and also in trials that attempt to modify Judge Greene's restrictions on the Regional Bell Operating Companies.

In sum, there are biases inherent in having a dominant firm. There are also gains from coordinating product characteristics and standards. No consensus on these issues is likely to emerge soon in telecommunications or any other network industry. Issues regarding sponsorship are likely to remain controversial as long as there is no consensus regarding the proper role for monopolies in nascent industries.
Long-run analysis: changing the basic recipe

The discussion until now has treated the growth of standards as the byproduct of initial market conditions. Such an approach is obviously incomplete for long-run analysis: as network industries mature, standardization
alters a market's structure. While this feedback is easy to recognize, it is not well understood. Usually, several factors may be at work at once in the long run.

Converters: To bridge or not to bridge. Perhaps the most unsatisfying feature of the short-run analysis of economic networks is its use of a strict concept of lock-in. Are some features of a technology more immutable than others? Are there degrees of lock-in? Economic analysis has yet to fully explain situations where interoperability is in constant flux, where the "standard bundle" changes frequently as suppliers update and revise products. The analysis of converters partially addresses this issue.

Converters (or translators or emulators) bridge the gap between otherwise incompatible networks. These products, whether supplied by a system sponsor or third parties, reduce the costs of interoperability. A number of third-party vendors today supply programs that enable Apple Macintosh computers to use IBM software. Also, many software programs now come with simple utilities for translating text or databases from the format of one software package to another. These bridges clearly have value to buyers, so they arise in virtually every economic network.

The interesting feature of converters is that vendors unequally share the costs and benefits from their introduction and refinement. At the very least, the incentives to introduce a converter then will probably not match society's. Most of the benefits of the IBM/Macintosh converter accrue to users of a Macintosh system, so IBM has little incentive to help in its development. Indeed, a vendor sometimes may actively seek to prevent the entry of gateways and sometimes not, depending on the costs and benefits. Observers often accuse IBM of discouraging global compatibility between all computer languages. Because converters may lead to large, discrete changes in the boundaries of competition, conditions of competition can shift suddenly and asymmetrically due to their availability. Studies show that the introduction of the dynamo greatly influenced the AC/DC battle at the turn of the century, tipping the balance irreversibly towards AC.

The economics of converters defies easy analysis because these products are always changing shape and their impact depends on temporary windows of opportunity. One year a converter may work only at great cost, and in the next the technology may work cheaply. One year, users invest in "anticipatory converters" to reduce the costs of a future switch between incompatible standards. In the next, a third party may enter with a new product that de facto standardizes switching. One year, a system supplier may resist the entry of all converters. In the next, de facto standards for conversion may be so well defined that a converter is no longer needed. Thus, analysis tends to depend greatly on the context.
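As a concrete, if simplified, picture of what a converter does, the sketch below translates a record between two invented formats. Both formats and the sample data are hypothetical; no real product's file layout is implied.

```python
# Toy converter between two incompatible record formats (both invented).

def parse_format_a(line: str) -> dict:
    """Format A: semicolon-separated 'name;city;balance'."""
    name, city, balance = line.split(";")
    return {"name": name, "city": city, "balance": float(balance)}

def emit_format_b(record: dict) -> str:
    """Format B: comma-separated, balance first and stored in cents."""
    cents = round(record["balance"] * 100)
    return f"{cents},{record['name']},{record['city']}"

def convert_a_to_b(line: str) -> str:
    """The converter: read a Format A record and re-emit it as Format B."""
    return emit_format_b(parse_format_a(line))

print(convert_a_to_b("Ada Lovelace;London;1204.50"))
# -> 120450,Ada Lovelace,London
```

The economic question in the text is not whether such code can be written but who pays to write and maintain it: most of the benefit here flows to whoever holds data in Format A, so the sponsor of Format B may have little reason to help.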
Technological innovation and industry evolution

Since so few parts of the information infrastructure have reached the stasis associated with mature product markets, standardization lies at the heart of technical change. However, because standardization may both encourage and discourage innovation in the types of products and organization of the industry, unambiguous conclusions are difficult to reach.

Does standardization encourage innovation? Because well-defined technical standards may provide component suppliers a more secure set of interfaces around which to design a product, they may encourage research and development into the design of new components for a network. Secure telecommunication transmission standards were important in hastening innovation in customer-premises markets, such as facsimile machines and modems. Indeed, the success of a communications network sponsor, such as AT&T, comes from developing and standardizing the technology of its network. Ironically, the sponsor's success sows the seeds for later third-party component competition.

On the other hand, an installed base of users may also create an unintended hindrance for innovation on a mature network. An existing substitute network may hinder the growth of a new network, because the technology embedded in much existing equipment may be inappropriate for a new application, raising its cost. In addition, minority interests may be burdened with higher costs on an existing network, but may not be large enough to justify setting up a new network. For example, the existing AM network hindered the post-WWII growth of the FM network.

Whether or not a network is sponsored, network capacity investment decisions determine the ultimate capability of the network. Since vendors often do not have sufficient incentives to embed interoperable technology in their equipment, one can make a case for limited government intervention aimed at guaranteeing a minimal amount of interoperability, at least to induce technical change and capacity investment. This is a frequently used argument for government regulation of electronic protocols on the Internet, where fears of widespread technical chaos in the absence of minimal standardization arise.

Does standardization encourage industry concentration? Economists are equally ambivalent about the influence of standardization and technical change on a network's market structure. As noted, the factors producing less concentration are strong: network sponsors may have incentives to license their standard as a means to induce development of new components. Standards may also encourage product innovation and new entry by reducing technical uncertainty. The establishment of nonproprietary standards within the PC industry hastened the entry of multitudes of
hardware, component, and software suppliers, which makes the industry incredibly dynamic and competitive today. However, the factors leading to greater concentration are equally strong: buyers often have strong incentives to use a single economic network. If a firm has a proprietary right over the technically superior network technology, through appropriate strategic actions (and a little luck) the sponsor can perhaps mushroom its advantages into dominant control of several technically related market niches. We can interpret IBM's early success in the mainframe market with the System 360 this way. Similarly, some observers claim that Microsoft uses its control of MS-DOS and Windows for advantages in related markets (though the US Government has not quite made up its mind whether to believe these claims).
Standardization and the evolution of the information infrastructure

Economists have studied the long-run evolution of standards in a few industries rather intensively. Microprocessor markets, computing markets, VCRs, and broadcasting have sufficiently long and well-documented histories to point toward the following relationships between standardization and industry evolution.

First, different types of sponsorship are appropriate for different types of innovation. If a dominant firm sponsors a technology, the sponsor is more likely to innovate on a systemic level, one that influences many components at once. Typically, systemic innovations are technically complex and more easily coordinated within a single organization. RCA's shepherding of the introduction of color television (through its ownership of NBC) is one example of sponsorship working well. Sponsors of networks, however, tend to resist too much innovation, because sponsors do not want to quickly cannibalize their own products, which embody old designs. AT&T's steady, but undramatic, introduction of digital switching equipment is an often-cited example, perhaps rightly or wrongly, for both the good and bad. In contrast, economic networks with diffuse ownership, where competitive dueling is more common, militate for greater innovation from suppliers of component parts. Component suppliers must cannibalize their own products with innovative designs just to keep ahead of the competition. One need only examine many information technology markets today to observe this trend at work. However, diffuse ownership, even combined with established producer or standards-writing groups, does not easily lead to systemic innovations, because of the difficulties of coordinating
complex technical change across many organizations. Consider Unix standards development.

Second, there is a tension between the role of sponsorship in bringing about coordination and in leading to market power. When networks compete in the long run, they often become less sponsored, because many users resist the market power inherent in such sponsorship. Users choose products with wider supplier bases whenever possible, taking actions to reduce the degree of lock-in. Many users also strongly desire that at least one market institution take on a central coordination role, which leads them to a dominant firm because a single sponsor can often do a better job at coordinating a network than producer or standards-writing groups. The best example of both these tensions comes from the last 30 years of platform competition in the computing market, where users have gradually moved from sponsored networks, such as those based on the IBM 360/370 mainframe platform or the DEC VAX platform, to nonproprietary PC networks, such as those based on the Intel x86 chip and MS-DOS operating system. Intel and Microsoft have recently taken on more and more of the functions typically associated with a system sponsor, while so much of the standardization in the peripheral and software markets remains nonproprietary.

In any event, prediction about the long-run evolution of an economic network is almost impossible because the success of an economic network is so closely tied to the success of the underlying technology, which is inherently uncertain (which, of course, does not prevent futuristic technologists from making predictions). Some highly touted technologies gain wide acceptance and some do not, but pinpointing the causes of success or failure is often difficult. In product markets that regularly undergo radical product innovation, it will not be clear at the outset how valuable a product or service will be, nor what costs each technical alternative may impose on later technical developments, nor how large the network will grow as new applications develop. As a result, it is difficult to predict a market's dynamics after standardization. For example, none of the important firms in the VCR industry in the late 1970s anticipated either the consequences for hardware competition from the development of the rental movie market, or the power of the economic links between geographically separate markets. In a more current case, technical uncertainty makes it difficult to predict whether the technical requirements implicit in ISDN (Integrated Services Digital Network) will limit or enhance competition. After all, ISDN will influence product design and network growth, which in turn may influence other factors such as tariff structures, network controls, plant investment, and other regulatory decisions.
The only predictable feature of many information technology networks is that they change. It is not surprising if two snapshots of any particular market niche taken sufficiently far apart in time may reveal different firms, radically different products and applications, and even different buyers. From an individual supplier’s or user’s perspective, this uncertainty complicates decisions with long-run consequences, since investment in physical equipment and personnel training is expensive.
Lock-in and control of technical options

Most buyers and sellers in an evolving industry know that change will come and that its character will be unpredictable. Most product designers and users of compatibility standards thus associate potential problems with being locked in to a narrow technical choice. One of the most interesting and least understood aspects of standardization processes is how attempts to avoid lock-in influence design decisions and market outcomes in dynamic settings.

One approach to understanding standardization activity emphasizes the value decision-makers place on having strategic flexibility, that is, retaining a choice among many future technical options. Its starting premise is that much technology choice involves discontinuous choices among alternatives, and an important determinant of an investment is the uncertain revenue stream associated with future technical alternatives. Product designers and technology users will expend resources today so as not to foreclose technical alternatives associated with potentially large revenue streams. The greater the uncertainty at one time, the greater the value placed on keeping technical choices open over time. The value of strategic flexibility may far outweigh the value of any other determinant of technology choice.

Standards may influence a firm's decisions on whether to design a new product for a given product line, delay introducing a new product, or invest in capacity for an existing product line. A firm may choose to expend extra resources to become part of the largest possible network (by designing a standardized technical platform) because it cannot be certain which of many future designs will best suit its customers. A firm may also expend extra resources to make its products compatible with a mix-and-match network to give buyers assurance that many applications may be available in the future. A firm may hedge its bets by simultaneously employing different technical standards that permit it to reverse its commitment to a technical alternative.
Buyers will also expend resources to leave open options affected by technical uncertainties. Buyers require evidence that their technical options will remain open. The existence of many peripheral component suppliers assures buyers that an economic network caters to a variety of needs. Alternatively, users may purchase general-purpose technologies rather than an application-specific technology as a means of leaving open their options for future expansion. Elsewhere I discuss how federal mainframe computer users in the 1970s telescoped future lock-in problems into the present and made investments in "modular" programming as a result.

Some, but not all, of this anticipatory activity is in society's interest. Much of it can be a nuisance and possibly wasteful. From any viewpoint, expending resources on anticipated events that do not necessarily occur is quite frustrating. This is the aspect of today's standardization processes that many firms see day to day, which may be one reason that the absence of standardization is so maligned in trade publications.
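The value of strategic flexibility can be made concrete with a small expected-value sketch. The payoffs, probabilities, and the cost of dual compatibility below are invented for illustration; the point is only that the premium for keeping options open is worth paying when uncertainty about the winning standard is high.

```python
# Toy option-value calculation for strategic flexibility (all figures hypothetical).

def commit_now(p_a_wins, payoff_right=100.0, payoff_wrong=20.0):
    """Expected payoff from committing to standard A before the winner is known."""
    return p_a_wins * payoff_right + (1 - p_a_wins) * payoff_wrong

def stay_flexible(payoff_right=100.0, flexibility_cost=25.0):
    """Payoff from supporting both standards until the winner is clear.
    The result does not depend on which standard wins; that independence
    is exactly what the flexibility cost buys."""
    return payoff_right - flexibility_cost

for p in (0.9, 0.5):
    print(f"P(A wins)={p}: commit={commit_now(p):.0f}, stay flexible={stay_flexible():.0f}")

# P(A wins)=0.9: commit=92, stay flexible=75  -> with little uncertainty, commit early
# P(A wins)=0.5: commit=60, stay flexible=75  -> with high uncertainty, pay for flexibility
```

Under these made-up numbers, flexibility pays once the chance of betting wrong is large enough that the expected loss exceeds the cost of dual compatibility, which is the sense in which greater uncertainty raises the value of keeping choices open.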
Organizational innovation or innovation by organizations

As noted earlier, there are many situations in which all component suppliers have an interest in seeing the emergence and the growth of an economic network. Yet structural impediments may produce coordination problems. The strong mutual interest all firms have in the emergence of an economic network can lead firms to forego market processes and attempt to develop standards in organizations that combine representation from many firms. How do these groups work and do they work well? Do these groups ameliorate problems identified in the short- and long-run analysis of unfettered market processes?

Consortia, mergers, and other forms of cooking by consensus. Every other day it seems that the newspaper announces the formation of another alliance or the merger of erstwhile competitors, where the participants come together to play a large role in the next phase of development of standards for some aspect of the information infrastructure. Though consortia do not have a long and well-documented history, a few examples point out some of the economic strengths and pitfalls of developing standards through these groups. Many of the same factors influence the merger of two firms, but this discussion will narrow its focus to consortia. The reader will easily see the implications for mergers. Table 3 summarizes the main features of consortia, and compares them to standards development organizations.
Table 3. The economic role of organizations: a comparison.

Motivation for formation
  Consortia: strategic alliances or outgrowth of joint research ventures.
  Standards organizations: professional societies.

Primary benefits
  Consortia: may accelerate development of complementary components.
  Standards organizations: forum for discussion of issues surrounding anticipatory standards.

Main hindrance to success
  Consortia: strategic interests of vendors override greater interests of the organization or society.
  Standards organizations: strategic interests of vendors override greater interests of the organization or society.

Coordination of technical change
  Consortia: will coordinate change only among the subset of cooperating firms.
  Standards organizations: administrative processes tend to be slow relative to the pace of technical change.

Consortia are becoming increasingly popular in information technology industries, partially as an outgrowth of joint research ventures. Most of these groups seem to be concerned with the future of technologies and anticipated changes in related standards. The consortium jointly operates an organization responsible for designing, upgrading, and testing a compatibility standard. The consortium lets firms legally discuss technical issues of joint interest, while ostensibly avoiding antitrust problems, but retaining considerable independence in unrelated facets of their business.

Good points to consortia. The greatest economic benefit of these groups comes from accelerated development of complementary components. Success is more likely when all the companies (who may directly compete in a particular component market) find a common interest in developing products that complement their competitive offering. Each company may offer different types of engineering expertise, whose full value cannot be realized unless combined with another firm's talents. Each firm may anticipate specializing in one part of the system that the consortium sponsors. In this respect, the incentives for firms to come together in a consortium and jointly design a standardized bundle of components resemble the incentives for several firms to independently produce a mix-and-match set of components.

However, consortia retain an interesting dynamic element. The consortium helps induce other firms to produce complementary components because the consortium's existence acts as a guarantee that a standard's future integrity will be maintained. Of course, there may still be
insufficient investment in complementary products since no producer internalized the entire interest of the network, but some investment is often better than nothing, which is enough to begin development.

Problems with consortia. Consortia are not a perfect solution to coordination problems. They can easily fall prey to some of the same structural impediments that prevented network development in their absence. The experience with the development of Unix standards in the 1980s amply illustrates these weaknesses. Many firms perceived strategic alliances as tools to further their own economic interests and block unfavorable outcomes. As a result, two different consortia, OSF and Unix International, originally sponsored two different Unix standards. Industry participants lined up behind one or another based on economic self-interest. In the early 90s different consortia (and firms) have sponsored slightly different forms of Unix, confusing the marketplace once again. While having two standards (or a few) surely is better than the multiplicity that existed before, there does not seem to be sufficient heterogeneity in user needs to merit two or more standards. Society would probably be better off with one standard, which supplier self-interest will prevent.

The other potential danger with consortia, as when any group of competing firms cooperates, is that such organizations may further the interests of existing firms, possibly to the detriment of potential entrants or users. Consortia may aid collusive activities through joint pricing decisions, or may serve as vehicles to raise entry barriers, chiefly by stifling the development of technology that accommodates development of products that compete with the products of firms inside the consortia. We will need more understanding of consortia before it is clear whether this is a common practical problem, or an unfounded fear. After all, credibly inviting development of complementary components and simultaneously deterring development of competing components may prove difficult.
Standards development organizations

One of the reasons private consortia are often unnecessary is that other well-established professional organizations serve similar functions. Many large umbrella groups that cut across many industries, such as CCITT, IEEE, and ASTM, have long histories of involvement in the development of technical standards. These groups serve as a forum for discussion, development, and dissemination of information about standards. In the past, such groups largely codified standards determined by market processes. Today a whole alphabet soup of groups is involved with anticipating technical change in network industries and guiding their design. Their role in designing
anticipatory standards takes on special urgency in economic networks in danger of locking-in to technical standards. Do SDOs work? Standards development organizations play many useful roles in solving network coordination problems, especially those related to lack of communication. They can serve as a forum for affected parties to educate each other about the common perception of the problems they face. They can also serve as a legal means to discuss and plan the development of a network of compatible components, as well as document agreements about the technical specification of a standard and disseminate this information to interested parties. And perhaps most importantly, their standards can serve as a focal point to designers who must choose among many technical solutions when embedding a standard in a component design. These groups then are most likely to succeed when market participants mutually desire interoperability, need to establish a mechanism for communication, and need a mechanism to develop or choose one of many technical alternatives. Witness the involvement of grocers groups in the development of bar-codes for retail products. Note that most of these organizations are “voluntary.” Participating firms and individuals have discretion over the degree of their involvement. Though most firms or individuals belong to the relevant umbrella groups, their contribution of resources (and time) to development can wax or wane for a variety of technical and strategic reasons. This can lead to either extraordinary investment in the process to influence outcomes or to “freeriding” off the activities of the organization. These biases are well known, and can only be held in check by the professional ethics of the engineers who design standards. Problems with SDOs. Voluntary standards groups are no panacea for the structural impediments to network development. They will fail to produce useful standards when the self-interest of participants prevents it in any event. Designers thus must have some economic incentives for embedding a technical standard in their product, since use is optional. A dominant firm need not follow the recommendations of a voluntary standardization group. Moreover, it is not likely to do so if it believes that it can block entry and successfully market its products without the standard. IBM’s marketing of systems using EBCDIC rather than ASCII serves as one such example. Similar impasses may occur in a market with dueling technologies, although a voluntary group can play an important role in a duel. If it chooses a particular standard, it could swing the competitive balance in favor of one standard rather than another. However, each sponsoring firm may try to block the endorsement of its rival’s standard as a means to prevent this result, which may effectively prevent adoption of any standard by the voluntary group. The strategies employed in such committee battles
can become quite complex, ranging from full cooperation to selective compromise to stonewalling. In addition, probably no administrative process can guide the development of a network when a slow administrative process cannot keep up with new technical developments. When events become too technically complex and fluid, a focal point easily gets lost. This problem is already arising as private telecommunications grow and private groups attempt to coordinate interconnection of their networks based on the ISDN model. One objection to ISDN is that the value from anticipating developments (on such an ambitious scale) is reduced if, as parts of the ISDN standard are written, the character of technology has changed enough to make the standard inadequate. The standard thus does not serve as a guide to component designers if the standards organization must frequently append the standard. Since no government administrative process could obviously do any better, market processes will usually predominate, coordination problems and all. Since the decisions of voluntary groups can influence economic outcomes, any interested and organized party will make investments so as to manipulate the process to its advantage. User interests tend then to be systematically unrepresented, since users tend to be diffuse and not technically sophisticated enough to master many issues. In addition, large firms have an advantage in volunteering resources that influence the outcome, such as volunteering trained engineers who will write standards that reflect their employers’ interests. Finally, insiders have the advantage in manipulating procedural rules, shopping between relevant committees, and lobbying for their long-term interests. Committees have their own focus, momentum, and inertia, which will necessarily shape the networks that arise. As a general rule, the consensus rules governing most groups tend to favor backwards-looking designs of standards using existing technology. As with consortia, standards may serve as vehicles to raise entry barriers by stifling the development of components from new entrants. The suppliers that dominate standards writing will want to further the interests of existing firms, not potential entrants or users. These biases are also well known, and are often held in check by the presence of anti-trust lawyers and, once again, the professional ethics of the engineers who design standards. Voluntary standards organizations thus can improve outcomes for participants and society, particularly when they make up for the inadequate communication of a diffuse market structure. They provide one more avenue through which a system may develop and one more channel through which firms may communicate. They are, however, just committees, with no power to compel followers. In highly concentrated markets,
their functions can be influenced by the narrow self-interest of dueling or dominant firms.
Appropriate standards?

Do decentralized mechanisms lead to appropriate standards? It is difficult to know. Neither blind faith in market processes nor undue pessimism is warranted. Because standards can act as both a coordinator and a constraint, many outcomes are possible. Decentralized market mechanisms may produce desirable outcomes or distort them, depending on the market structure, chance historical events, and changes in the costs of technical alternatives. Diffuse market structures produce coordination problems and communication difficulties, but also much innovation. More concentrated market structures will alleviate some of the communication problems, but strategic interests will distort incentives away from optimal outcomes. Administrative processes may ameliorate communication problems, but internal political battles will distort outcomes in other ways. In my view, the progressive decentralization of decision-making in information technologies away from a few sponsors, such as AT&T and IBM, has to be good in the long run. This decentralization has unleashed an unmanageable variety of entrepreneurial activity. There is a natural (and sometimes legitimate) desire to manage and slow down the massive changes that accompany such entrepreneurial activity. However, such desires should not dictate the pace of change. Dynamism leads to economic growth and development and fantastic technical possibilities. The problems associated with standardization are an unfortunate, but bearable and necessary, cost associated with such change. If the dynamism of the last few decades is any guide to the future, no one should lament the much-maligned present state of affairs too loudly. Recent history makes me cautiously optimistic about the role of decentralized market mechanisms in guiding the development of standards within economic networks today, provided that it is properly modified by a professional standard-setting process. I look forward to analyzing future developments; they will be as important as they are interesting.
{Editorial note: This paper bridges between papers published in 1990 and 1999. See e.g., Paul David and Shane Greenstein, Fall, 1990, “The
Economics of Compatibility of Standards: A Survey.” Economics of Innovation and New Technology, Vol. 1, pp. 3–41, and Tim Bresnahan and Shane Greenstein, 1999, “Technological Competition and the Structure of the Computing Industry,” Journal of Industrial Economics, March, pp. 1–40. In that light, I am embarrassed to say that this article, looking back, has one big omission: it does not describe Microsoft’s successful pattern of making non-proprietary standards serve proprietary ends. I simply missed it. My only excuse is that this was written prior to the release of Windows 95, which made the logic of the strategy apparent, though that is a feeble excuse, since there was plenty around to work with by 1993. That is the way it goes sometimes. It would not be the first time that Bill Gates saw something that I did not. And, alas, it gives me more to write about in the future.}
43 Industrial Economics and Strategy: Computing Platforms
To the uninitiated, and even the old hand, the computer industry is an intimidating agglomeration of firms, markets, and buyers, all changing quickly in response to the latest innovation or recently invented applications. Technological opportunities arise rapidly, altering the technical landscape more quickly than in any other industry. Established firms feel perpetually under siege, particularly when they compare their lot in life to that of other firms in other industries. The computer industry’s structure seems caught between forces of inertia and change, with the latter having an upper hand. Change occurs in two broad places: technical frontiers and market relationships. As is often remarked, the menu changes quickly and often, but market relationships change less often. Why, in the face of a rapidly changing menu of choices, do buyers continue to make many of the same choices year after year? Why do the same firms and products seem to reappear in computing in spite of technical change? When changes to market relationships occur, what does it tell us about forces for stasis or change? One cannot hope to develop a comprehensive understanding of the long-run forces for either change or stasis in one article. Nevertheless, to help in that endeavor, I outline a few key concepts. Instead of examining what firms should do, I explain why things happen, the former being
Source: © 2003 IEEE. Reprinted, with permission, from IEEE Micro, June 1998.
much more strategy-oriented. This will be accomplished by examining both historical and contemporary events. This review necessarily skims over several books' worth of important detail and theory. I'll also liberally draw from recent collaborative research with Tim Bresnahan, as well as other related work done by economists and market analysts.
Forces for inertia

Why do buyers continue to make many of the same choices year after year? Why does new technology turn over faster than most firms do?
Platforms

A platform is a cluster of technically standardized components that buyers use together with other components to make applications. Components include not only computer hardware but also software and trained labor. Computing has involved components sold in markets (hardware and some software) and components made by buyers (training and, mostly, software). Many of these components are long-lived assets. Thus, new technology can most easily find its way into computing when new components enhance and preserve the value of previous investments in computing platforms. Hardware components may advance, for example, without the need to change software or human capital. Thus, platforms tend to persist, whether for cost-minimizing, coordination-failure, or strategic reasons. Vendors tend to sell groups of compatible product offerings under umbrella product strategies for platforms. Important computing platforms today include the IBM 3090, IBM AS/400, DEC VAX, Sun SPARC, Intel/Windows PC, and client-server platforms linked together with well-known computing and communications hardware and software. Even though these labels may have proprietary names associated with them, such a label may stand in for a complex, often unintegrated and decentralized, market structure. Many of the largest and most popular platforms for client-servers today and historically include many different computing, communications, and peripheral equipment firms, software tool developers, application software writers, consultants, system integrators, distributors, user groups, weekly news publications, and third-party service providers.
computer applications demanded by users. For most of the last 30 years, most segments were distinguished by the size of tasks to be undertaken and by the technical sophistication of the typical user. A decade ago it was widely agreed that segments were distinguished by size, from big to small in the order of mainframes, minicomputers, workstations, and personal computers. Users were either technical (trained engineers or programmers) or commercial (secretaries or administrative assistants), which closely corresponded with the degree of sophistication and other features of the software or hardware. The most interesting development today is the blurring of these old distinctions, an event that might be called the competitive crash. The networking revolution is primarily responsible for blurring these oncefamiliar distinctions, making it feasible to build client-server systems for virtually any type of user in any size category, by building platforms out of subplatforms. Whether it is cost-effective to do so in every situation is an open question. Judging from many recent reports, in many situations it appears to be. One cannot hope to fully explain this blurring in a single stroke, as it resulted from complex and varied forces. More important for strategic thinking, these changes probably set the stage for the issues that will face this industry in the next few years. A well-developed body of competitive analysis has arisen to explain how platforms and segments operate. These forces are particularly strong in segments serving commercial rather than technical users. Most of the strategic and policy issues regarding platforms also arise in the commercial segments of computing.
Concentration

Segments in which there are endogenous sunk costs (ESCs) will have a concentrated structure even with a great deal of demand. This pattern does not depend on how sellers interact strategically. The key to the analysis is the definition of ESCs: expenditures undertaken by sellers, vendors, and sometimes users to make their products better. ESCs are irreversible and raise the value of a platform with no potential bound, appealing to a large fraction of potential customers. A fragmented segment structure is impossible when ESCs are important. If the segment structure were fragmented, a firm could invest in ESCs, thereby attracting many customers. Any other competing platform would have to do the same or lose its relative attractiveness. Thus, there is a tendency for all surviving platforms to have a high ESC; low-ESC
platforms are relegated to niche status or death. What happens if market demand grows? Instead of simply supporting more firms and thus more competition, a larger market will have higher ESCs, but still only a few firms. For better or worse, more ESC goes into high product quality and other things valued by buyers, not less concentrated segments. Notice these observations are principally about platforms, not firms. Expenditures to make a platform more appealing raise the demand for all of its components regardless of who actually makes these expenditures. If cloning firms, peripheral makers, or user groups succeed in enhancing the value of the platform, demand will rise. As a result, ESC theory can only be deployed to explain the concentration of platforms, not necessarily of firms. We can learn two general lessons about computer industry structure from this. First, concentrated segments arise due to broadly compatible platforms combined with marketing organizations to support customers who use them. The second general lesson is closely related but very frequently overlooked by observers of this industry. The creation of a platform is not merely an engineering feat; it also involves commercialization.
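The escalation logic in the two preceding paragraphs can be made concrete with a small numerical sketch. The model below is purely illustrative and is not drawn from the original article: it assumes a symmetric segment in which each platform's share of a market worth S is proportional to its sunk quality spending raised to a power a, with a greater than 1 standing in for the idea that ESC raises a platform's appeal without bound. Under that assumption, the number of platforms that can break even is capped at roughly a/(a-1) no matter how large the market grows; bigger markets raise the quality spending of the survivors rather than the number of survivors.

    # Illustrative sketch only; the share rule, market size S, and escalation
    # parameter `a` are assumptions for this toy model, not figures from
    # the chapter.

    def equilibrium_profit(n, market_size, a):
        """Per-platform profit in a symmetric equilibrium of the toy model.

        Each platform sinks quality spending F, and platform i's share of
        the market's gross margin is F_i**a / sum_j F_j**a. The symmetric
        first-order condition gives F* = market_size * a * (n - 1) / n**2,
        so per-platform profit is market_size / n - F*.
        """
        sunk_spending = market_size * a * (n - 1) / n ** 2
        return market_size / n - sunk_spending

    def max_viable_platforms(market_size, a, n_max=50):
        """Largest n for which every platform still breaks even."""
        viable = 1
        for n in range(2, n_max + 1):
            if equilibrium_profit(n, market_size, a) >= 0:
                viable = n
        return viable

    if __name__ == "__main__":
        for market_size in (100, 1_000, 10_000, 100_000):
            for a in (1.25, 1.5, 2.0):
                print(f"market={market_size:>7}  a={a:4.2f}  "
                      f"viable platforms={max_viable_platforms(market_size, a)}")

Running the sketch across market sizes spanning three orders of magnitude prints the same small count of viable platforms for each value of a, which is the point of the ESC argument: growth feeds escalation, not fragmentation.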
Standardization

Computing platforms must adopt technical standards to make systems work well. In general, buyers and sellers of technology make platform-specific investments in which these standards are taken for granted. Mutually reinforcing behavior arises when standards coordinate behavior at any point in time, and also when they coordinate technical change and investment activity over time. Buyers, sellers, designers, or third-party software vendors make long-lived, platform-specific investments in a platform, and this tends to keep platforms in operation for long periods. For example, many of the standards in the IBM 360 (introduced in the mid-1960s) survived in the IBM 370 and its descendants. Many firms, vendors, users, programmers, and other important players have a stake in continuing the use of the standards used within this platform. This theory has implications for the origins and endings of platforms. Just as platform standards are hard to stop, they are equally hard to start. A platform needs a critical mass of adopters, complementary software, and sometimes other components. Positive feedback both underlies the survival of existing standards and helps new standards get over the hump of acceptance. If a new standard does get over the hump, positive feedback forces quickly favor it. This problem so conditions a firm's behavior that it will take great pains to overcome its limitations and avoid producing an operating system
wannabe — a rather distinguished group of products including IBM’s OS\2, DR-DOS, multiple variations on Unix, and so on. For example, when IBM introduced the AS/400, it arranged well ahead of time for the development of thousands of application programs. Firms will coordinate their marketing campaigns with technical goals, too. For example, when Microsoft unveiled Windows 95, it did so with fanfare designed to avoid the possibility that the marketing for the operating system would crash on launch. The literature on standards persistence has hotly debated whether, in an uncertain environment of new technology, it may be difficult to coordinate platform-specific investments. Theories differ about exactly which agents have difficulty coordinating which decisions. These differences are not trivial, because each type of assumption leads to very different policy conclusions about the desirability of persistence. Sellers’ roles in persistence are variously interpreted as efficient coordination or exploitative preservation of a monopoly position.
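A second toy simulation, again illustrative rather than anything taken from the original column, shows the critical-mass dynamic described above. The functional forms are assumptions: a non-adopter's willingness to join rises with the square of the installed base (more adopters attract more complements, which attract more adopters), while a constant fraction of the installed base churns away each period. The result is a tipping point; a platform that starts below it withers, and one that starts above it rides positive feedback over the hump of acceptance.

    # Illustrative sketch only; the quadratic appeal term and churn rate are
    # assumptions chosen to produce a tipping point, not estimates.

    def simulate(initial_base, appeal=0.5, churn=0.08, periods=200):
        """Return the installed-base share after `periods` steps.

        Each period, non-users adopt at a rate that rises with the square of
        the current base, while a constant fraction of users churns away.
        """
        base = initial_base
        for _ in range(periods):
            adoption = appeal * base ** 2 * (1.0 - base)
            base = min(1.0, max(0.0, base + adoption - churn * base))
        return base

    if __name__ == "__main__":
        for start in (0.05, 0.15, 0.25, 0.40):
            end = simulate(start)
            fate = "gets over the hump" if end > 0.5 else "withers"
            print(f"initial base {start:4.2f} -> long-run base {end:4.2f} ({fate})")

With the parameters chosen here the tipping point sits at an installed base of about 20 percent of potential users; the qualitative lesson, not the number, is what matters.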
Focus on the platform, not the firm

The overriding message is that industry structure can be understood if one focuses on platforms, not necessarily firms. Consider the insights this yields. During the early years of this industry, the firm and its platform were nearly synonymous. For example, IBM kept tight control over its proprietary technology embedded in the System 360/370 and its descendants. With time, a rather substantial third-party software and peripheral market grew up around the platform, though IBM managed to retain a large degree of control over the standards. In more recent experience, the dominant platforms became less centrally controlled. The most popular platform in the late 1980s was a descendant of the IBM PC, often called Wintel in the 1990s (Windows and Intel). From the beginning, this platform involved thousands of large and small software developers, third-party peripheral equipment and card developers, and a few major players in the hardware (IBM, Compaq, Dell, Intel) and software industries (Microsoft, Lotus, and so on). Control over the standard has completely passed from IBM to Intel and Microsoft, though neither firm can yet unilaterally dictate standards to developers (though both are trying). Indeed, the fight between software, subcomponent, and peripheral vendors and Intel and Microsoft continues today, lying at the heart of ongoing antitrust investigations. The complex and changing sponsorship structure of the PC tells us a lot about the theoretical robustness of the positive theory of platform concentration and persistence. Rapid and coordinated technical progress has
kept this platform in a dominant position for a decade and a half, which covers many generations of products. This is a rather remarkable outcome in light of the rate of technical change. More affirmatively, the equilibrium supply of platforms is concentrated even when the rate of technical change is extremely fast. In equilibrium, existing platforms tend to persist. These outcomes occur whenever buyers and sellers jointly value compatibility in a wide variety of industry structures, even those in which the equilibrium supply of firms is not concentrated or when firm persistence is in doubt. How do these ideas apply today? The emerging client-server platform has not yet standardized around a few key components, leaving many unsure about who controls what. Amidst this confusion, vendors fight for control over pieces of the emerging standard. Intuit would like to determine standards in home banking, Netscape or Microsoft in Web browsers, Oracle in network design and databases, Sun in network software, Microsoft in virtually every aspect of client-server software but particularly core client-software and server-operating systems functions, IBM/Lotus in shareware and electronic commerce, SAP in enterprise computing, and so on. By definition, the strategies of these firms overlap and conflict over the platform. All firms want to sell products, use competitive success to achieve control of the emerging platform, and be in a position to control technology design for years to come. These conflicts shape important parts of every firm’s product design and distribution strategies.
Forces for change

Most disruption to market relationships comes from outside a segment. Yet, if historical patterns are any guide, such events infrequently go very far. More to the point, attempts at disruptive entry were somewhat common, but success was not. The interesting and somewhat puzzling question is this: since the menu of technical options changes so often and so rapidly, why is it so hard to radically disrupt market relationships? Why don't the names of the leading firms change each year? What does it take to successfully disrupt existing market relationships?
New platforms

Consider the founding of a whole new class of computer platforms, like mini- or microcomputers. From a narrow, technical perspective, these
were radical developments, taking advantage of expanding technological opportunity to introduce whole new classes of products. From a competitive perspective, however, there has been very little disruption associated with these events. Successful foundings have tended to avoid competition with existing computer platforms, instead creating new segments for previously unserved classes of demanders. Established commercial platforms are very difficult to dislodge due to the strong ESC and backward-compatibility forces in commercial computing. Since commercial computing platforms need expensive marketing campaigns, it is cheaper to start a new platform in the non-commercial (or technical) arena. Of unserved users, technical users’ needs tend to be fewest, particularly since this class of users does not demand that platform components work well right away. Hence, new platforms have typically served the technical demands of users such as scientists and engineers. The historical record will not necessarily demonstrate that firms understood this strategic argument and deliberately avoided established platforms. It is probably a better theory that industry equilibrium initially selected those firms that found cost- and competition-avoiding strategies. Later, by the early 1970s, new entrants into the minicomputer industry understood this strategic logic very well. For example, Hewlett-Packard, upon entering the technical minicomputer business, was very cognizant of the costs of competing for commercial customers and the marketing strengths HP already had with technical customers. It also accounts for the formidable competition HP might expect from established commercial providers .
Mobility of platforms Some entry costs are lower when platforms move from an old customer base to a new one. Examples of mobility in computer platforms include the creation of the commercial super-minicomputer from the technical minicomputer and creation of the office PC from the hobbyist microcomputer. Adapting an existing platform to a new kind of use costs less than creating a new, fully capable platform. Simply put, many of the existing components of the platform can be reused. In general, the reused components are usually technologies; firms must make new investments in marketing connections, service networks, and so on. Even with these lower costs, any entrant platform must sell to an existing platform’s main customer base. Thus, it confronts the ESC and standardization forces leading toward persistence of the opposing platform. Often a platform moves to a new use that was previously badly served,
rather than continuing to serve the main body of customers in a pre-existing segment. Commercial superminicomputers, for example, serve customers much like traditional mainframe customers, but in smaller departments or firms. The conflict between the entrant’s lower costs and the incumbent’s advantages can be resolved in a variety of ways, and neither factor dominates. Platform mobility can lead to competition and expansion of the range of commercial computing uses. As with foundings, mobility has brought the industry closer to disruptive competition, but it tends to avoid direct competition with incumbent platforms. Unlike the analysis of founding, potential entry of mobile platforms necessarily comes from existing platforms; hence, the structure of industrywide supply is a key determinant of outcomes. In historical experience, mobility is rarely competitively disruptive to established dominant platforms. As an example of a less-disruptive platform, consider the entry of the commercial minicomputer. The development of distinct minicomputer and mainframe segments left a gap among small commercial sites, such as medium-size firms or departments in larger firms. Mainframes were too expensive and minicomputers lacked software and other support services. The invention by entrant Digital Equipment Corp. of the commercial superminicomputer was the breakthrough. From a hardware engineering perspective, DEC’s supermini platform, called the VAX series, was very similar to its technical ancestor, the PDP series. However, from a marketing standpoint, the supermini category was new. DEC coordinated development of the components of a commercial platform, including software, services, and support. Over the next few years, many different firms became part of the VAX network, providing software, services, and so on. The virtue of the superminicomputer over a mainframe was its combination of convenience, capacity, reliability, and low cost for small applications. This initially appealed to experienced users who were dissatisfied with the 360/370 platform, suggesting that users would be willing to pay for considerable service and customized support. It also appealed to geographically remote divisions in large organizations that did not want to contact a centrally managed mainframe through low-grade communication links. After an initial period of innovation in the components, superminicomputers began to be adopted for simpler commercial uses left behind by mainframes. These systems also began to compete at the margin for mainframe sites. Over time, the supermini segment took on increasing ESC and backward-compatibility equilibrium features, with corresponding tendencies toward concentration and persistence.
This entry into commercial computing was cheaper than the creation of a whole new platform because it involved the mobility rather than the creation of platform components like hardware. It was more competitive than a founding because there was less product differentiation between existing and entrant platforms. Superminicomputing impinged on the mainframe’s traditional commercial customer body, but did not retain all the size and capabilities of the mainframe platform.
Potential entry and the future All participants in existing platforms, both buyers and sellers, approach new opportunities after long periods of investment in platform-specific components. It is very costly for a new platform to re-create the same investments found with old platforms. After some investment in components, a platform has sufficient capabilities to move toward somewhat more contested bodies of demand. If a new platform succeeds, it can eventually grow strong enough to move into another platform’s main market. As the computer industry has matured, so too has the possibility for mobility of platforms. From the perspective of any given segment, there has been an increase in potential competition from outside firms and possibly mobile platforms. From the perspective of any established platform, there are more opportunities to someday expand into another’s segment. Indeed, this trend lies behind the most unique feature of today’s market (by historical standards)-the division of technical leadership among so many firms. This is not like the industry of two decades ago in which a small number of firms such as IBM, DEC, and (arguably) Univac, Burroughs, CDC, NCR, Honeywell, and Wang led the pack. This is an industry in which dozens of firms lead, and any of them could potentially move ahead or fall back. This list includes IBM, Compaq, Dell, Intel, HP, Microsoft, Oracle, Sun, Cisco, SAP, 3Com, EDS, Computer Associates, Novell, and on and on. Nobody owns the dominant platform today. Instead it is more accurate to say that many firms shepherd development of important components within the commonly used platform.
Vertical disintegration

As noted, it is rare in the computing industry of today for any firm to fully control all elements used on a dominant platform. This raises many difficult questions about how to interpret firm behavior in segments that have experienced vertical disintegration.
When different firms possess roughly equivalent technical skills and supply complementary components, technical leadership is easily divided among them. It is quite difficult to maintain leadership over a platform under such conditions. In this type of world, firms seem to take one of several strategies. One strategy is to act as coordinator for a whole array of third-party vendors, all of whom commit resources to a new platform. For example, Sun Microsystems has a good reputation for coordinating standards due to its good standing within the developer community. This strategy hopes to retain the advantages of speedy product introductions without sacrificing too much control over a product. Most vendors, however, rarely have the opportunity (or the resources) to introduce and coordinate an entirely new platform. Instead they must make peripheral components or software for an existing platform. Long-term strategic issues for these vendors involve the frequency of their upgrade cycles and the extent to which they can use proprietary standards. Vendors must decide whether they ought to respond to rivals that alter their complementary products, whether they ought to merge with other firms to cover a wider set of products within a platform, or whether they ought to build products for more than one platform. These are the central strategic issues in virtually all companies. The interesting feature of all of these decisions is that they inevitably have important platform-specific elements to them, involving issues of control and platform development. Hence, even in a vertically disintegrated industry, the day-to-day existence of most firms becomes intertwined with platformwide issues of platform development. Thus, as every paranoid CEO knows, the wrong strategic choice can have massive consequences. For example, in the mid-1980s, Lotus CEO Jim Manzi decided to fully support IBM's OS/2 and delay support for Microsoft Windows. This was part of an attempt to slow down the acceptance of Windows, and it hinged on the incorrect assumption that Lotus had sufficient market power to sink or float any operating system platform. This decision delayed the release of 1-2-3 for Windows 3.0, leaving the market entirely to Excel at an early crucial stage, and it (along with a few other related errors) effectively doomed Lotus spreadsheets to second-rate status forever.
Competitive crashes

Many have inferred from the experience of the PC market that competitive crashes are frequent and easy to accomplish. Coincidences of circumstances, like those underlying the entry of the PC, do not arise with any great frequency in computing.
By 1980 the PC market had already grown, largely to satisfy a hobbyist demand. The existing 8-bit architectures, which had not been developed by IBM, had aged technically more rapidly than expected and needed to be replaced with a 16-bit hardware architecture that would permit larger and more powerful programs. This need could have been met by a variety of responses from new or existing firms. IBM’s strategy combined two elements in an open architecture. Abandoning the vertically integrated strategy it had used in other segments, IBM used other firms’ technology in key areas such as the microprocessor, the operating system, and many applications. The architecture was open in a second, distinct sense. Any firm could add hardware or software components to an IBM-compatible PC, and eventually any firm could make an IBM-compatible computer. With the passage of time, history tends to highlight only the extraordinary events in light of today’s market structure. So analysts tend to focus on IBM’s later failures, misunderstanding its earlier success. The strategy of IBM’s Boca Raton-based PC team led to a quick introduction, a marketing splash, and spectacularly large hardware sales for many years. IBM’s marketing capability and reputation helped overcome the advantages of the incumbent platforms. Growth was rapid and sales were enormous. By the mid-1980s, the hardware dollar sales in PC platforms equaled sales in mainframe platforms (and exceeded them by the end of the decade). The competitive effect was also substantial. After IBM introduced the PC, the number of platforms available to buyers decreased in a short time. This was the first competitive replacement of an established computing platform by another. The rarity of such an event illustrates the remarkable coincidence of circumstances in this instance. First, there was an entrant (IBM) with a strong market and marketing position from outside the segment but within the industry. Second, the entrant came in without undercutting its position in its own segments, so IBM’s managers did not initially delay their action due to concerns about cannibalizing existing product lines. Third, the incumbent platforms were facing an abrupt and technically uncertain transition in their architecture, in this case from 8- to 16-bit computing. IBM was prescient and picked the forward-looking, faster chip. Fourth, the entering platform’s open architecture and vertically disintegrated market structure met the market’s need for rapid technical advance. More generally, changes in vertical industry structure have underlined entrant success in recent client-server platform experience. Quickly executed, mobility-based entry is easier in a vertically disintegrated market. Individual platform components rather than whole platforms can move to
new segments. The client-server platform takes the best of existing microcomputer platforms and the cheapest of powerful computer platforms. A wide variety of firms compete to steer the direction of a newly assembled platform, leading to a new kind of competition in which a very new kind of computer firm succeeds. From the 1960s through the early 1980s, foundings and mobility expanded the range of computer market segments, ultimately offering capable computers for both technical and commercial uses. Through the 1980s, all these different market segments advanced in parallel, with different kinds of customers, technologies, and supplying firms. In the 1990s, components that had been built up in different segments combined into new platforms and moved into competition with some of the longeststanding commercial successes. The recent transition in the industry was the inevitable, but very difficult to foresee, consequence of long-term trends. First, consider inevitability. The supply of potential entrants grew as platforms serving different kinds of uses grew up in distinct segments, out of competition with one another, and developed distinct technical and market capabilities. Vertical disintegration arose because small firms initially could take advantage of economies of specialization, especially related to rapidly changing technical possibilities in component markets. Second, think about the difficulty to foresee. Could IBM, the traditional dominant firm, have headed off the competitive threat? Probably not. The key damage was done long before IBM knew there was a threat, which was also long before the potential entrants themselves knew they were a threat. Thus, the demise of IBM’s position occurred due to the changing nature of competition and competitive equilibrium in the industry.
Diffusion of client-server platforms

It is impossible to understand the competitive crash without understanding the pattern of diffusion of client-server technology. To do that, we must understand that users also invent to make the new technology useful. Tim Bresnahan, a frequent coauthor, and I have coined the term co-invention — the amount of invention users must perform after they adopt a system. All in all, co-invention costs depend on three factors: the complexity of computing at a user enterprise, the idiosyncrasy of computing demands at establishments, and the depth of vendor markets for software tools. The first two are features of buyers that change very slowly. The last one, a feature of the client-server industry in its early years, has gotten better over time.
It is a cliché to say that the markets for software tools have been getting thicker and better. Capabilities have been growing everywhere. Lessons are being shared (through vendors) across enterprises. It seems only a matter of time before this market standardizes on a few key software tools, protocols, and design doctrines. These will bring down the costs of customizing a client-server to user needs. One of the hottest topics today is associated with standards on new platforms. Standards come from many directions, using parts of Windows NT, Java applets, Unix tools, Novell NetWare, TCP/IP, and dozens of other products from dozens of other vendors. In general, standards do not remain proprietary when buyers have choices among a variety of technical solutions, because buyers can force vendors to be open or risk losing their business. This is because, if they can help it, buyers do not desire proprietary solutions to their co-invention problems. That said, there is a growing realization that many firms are placing proprietary technology on servers all over the world. The Java/NT fight is only the most recent and highest-profile example. The appropriate business model has changed in client-servers over the last few years. In the early years, the sales were made from computer engineer to computer engineer. Many engineering firms and software start-ups thrived in this situation with a less than perfect product. Many of these firms, from SAP, Cisco, and 3Com to Oracle and Sun, are now trying to make the transition to providing products for commercial users. Commercial users prefer the reassuring handshake of a large firm and a salesman in a suit, which plays to the comparative strengths of traditional firms such as IBM, EDS, and Accenture. These traditional vendors could get into this game late and still do well, thanks to the built-in ESC advantages of older firms. So too will many third-party consultants and vendors who translate lessons of the past into tools that users need today. Again, the eventual structure of the market is an open question. Remember the role of co-invention: vendors invent and sell, buyers co-invent and customize. The vendors and technologies that succeed are the ones that find ways to lower the buyer's co-invention expenses. The segmentation that arises, if it does arise, will follow the ease with which buyers co-invent and vendors meet their needs.
Formulating strategy when platforms are fluid

Despite the size, complexity, and technical turbulence of the electronics industry, market relationships in the computer industry tend to follow a few important patterns. Market segments tend to be organized around platforms. A small number of platforms tend to dominate any segment at
one time and tend to dominate it for a long time. New platforms tend to be founded within engineering markets first before moving, if at all, to commercial customers. Once established, dominant platforms are not easily dislodged, and if they face competition from anywhere, it is from previously distant platforms. Disruptive crashes of market relationships tend to be rare because so many forces prevent established platforms from losing their preeminence. While the main thrust of this investigation has been positive in focus, there are many implications for policy formulation. Compare common arguments for and against what firms ought to do against what tends to happen in practice.
Emergence of dominance after the competitive crash

Today's competitive crash seems to be one of those rare disruptions to market relationships. Eventually, there should arise only a few dominant platforms that persist over long periods. For now, however, market boundaries will blur until the diffusion of client-server systems determines how the platform will segment users, if at all. It is not yet apparent which early standards will persist, unify, and coordinate different firms and users, or which will fade away.
Persistence of incumbents As one thinks about formulating strategies, either as an investor or as a technology watcher, during (what now appears to be) this era of transition to client-servers, historical patterns provide some guidance. First, there should be a presumption that existing market relationships will persist into the future. This is not the same as saying the industry will remain technically static or that all relationships will persist. Incremental advances will be incorporated into existing platforms by existing vendors, if at all possible. IBM, Microsoft, Intel, Sun, and several other leading firms that potentially control key components in future client-server platforms claim they are under perpetual competitive siege. This must be taken with appropriate measure. None of these firms would stay profitable if they failed to innovate, but none of them would lose market share immediately if they made a minor incremental technical mistake. One should interpret most of today’s competition in terms of attempts to expand capabilities with the purpose of becoming the dominant platform in client-servers.
On predicting the direction of technical change

Any particular guess about technical direction is bound to be wrong. Yet, making such guesses is central to investment strategies. The menu of technical options represents an enormous superset of the potential commercial options under consideration by users and vendors. At best, several patterns can frame analysis. First, there are strong biases toward adding incremental technical advances into existing platforms and away from radical new platforms. Second, the list of potential entrants from a technical standpoint is always large, but the list of feasible entrants from a commercial standpoint may be quite small. Thus, to all but the analyst with the most finely tuned crystal ball, it will be difficult to distinguish between what is really under siege and what is not, who really has commercial momentum, and who is losing it. Additionally, what may have been appropriate in one era and with one set of platforms can become outdated quickly, as the next era's platforms come with different users, firms, or technical foundations.
Communications in vertical relationships

In a market where vendors invent while buyers co-invent and customize, close contact between buyer and seller facilitates communication to aid in finding solutions to bottleneck problems. While the concentrated structure of the typical platform seems to work against sharing information widely, a heedless rush to abandon contact between buyer and seller where no commercial relationship presently exists seems counterproductive for co-invention activity.
Firm contact in vertically disintegrated markets

The fight for control of technical standards is an important facet of competition between vertically disintegrated firms producing for the same platform. This fight is simply a fact of life. It pervades virtually every phase of market development: institutionalized standardization committees and related decision-making bodies, joint product development and marketing agreements, and contractual arrangements between ostensible upstream and downstream suppliers. This fight cannot be wished away or ignored. The principal social expense of these fights seems to be a reduction in technology coordination within a single platform. The principal benefit is the check it places on the
ability of any single firm to determine the direction of technical change exclusively for its own benefit.
Competition check on vertical relationships

Does competition among platforms limit the scope for potential abuse by a firm that dominates vertical relationships within a platform? This question goes back to the IBM antitrust trial in the United States, but has many antecedents. The historical record gives mixed indicators. On one hand, there seems to be a reasonable presumption that platform competition does not limit vertical relationships, at least in the short run. Established platforms cannot be easily dislodged, except by unexpectedly mobile platforms in rare circumstances. Indeed, there seems little reason for a broad presumption that the threat of potential entry from another new platform will alone restrain behavior. On the other hand, the entry of client-server platforms seems different from the past. First, they are not yet platforms largely defined around proprietary standards, so the lessons from the era of proprietary platforms may be inappropriate. Second, a client-server's development takes advantage of vertical disintegration in the PC market. Until such time as standards are clearly defined, threats by buyers to use different combinations of components to form client-server arrangements may provide a competitive threat to existing combinations.
Use of structural reform as a policy instrument

Governments have been known to attempt to deconcentrate markets where firms appear to have monopolized or attempted to monopolize a product market. The popularity of calls to break up Microsoft today echoes this antitrust tradition. These actions may arise from concerns about the inherently large size of a firm or its behavior. There is no point in reducing the degree of concentration of a market (as an end in itself) if concentration naturally arises due to the strong tendency of a single platform to dominate. Similarly, policies aimed at restricting established firms (rightly or wrongly accused of monopoly behavior) may only give rise to another firm that achieves similar ends. This would suggest that policy-makers should take a cautious approach to structural reform if there is any reason to doubt its need. If
structural change is strongly desired by the political process, a policy encouraging radical change should not necessarily break up an incumbent. Instead, it could favor policy initiatives with a broad presumption in favor of encouraging platforms from potential entrants, even entrants not yet on the horizon.
Growing new platforms for today's systems

As a result of commercial initiatives from Sun, Oracle, and others, it is popular today to speculate about a future in which a reconfigured client-server system consists of thin clients and fat servers. This contrasts with today's networks, which are often a hodge-podge of components from a variety of vendors for a variety of applications. Historical patterns suggest that a new platform of this type, like any new platform, will succeed if (1) it is first diffused to technical and sophisticated users, (2) it eventually offers a radical new functionality that the old platform cannot imitate, and (3) it develops into a platform that uses or reuses the capabilities of existing platform components already in place. This is a tall order for any new entrant, especially in today's market, where competitive forces are so unforgiving. Yet, it also strongly suggests that some types of commercialization strategies will succeed, while others will almost certainly fail, which should help guide marketing and development.
Parting words

What does the future hold? It is a cliché that high-technology markets change frequently. Despite such change, this article tries to show that the same concepts provide insights about market behavior this year, last year, and the year before that. These concepts also represent my best guess about what will likely remain relevant in the future. That said, almost by definition something in this article will become incorrect by the time it is published. As much as one can make a sure bet in this industry, I am willing to make this one: for the foreseeable future, the market economics and strategies of the dominant firms will be interlocked with the economics and strategies of the dominant platforms in use. Understanding how platforms evolve is essential to understanding how this industry will evolve.
{Editorial note: This article was inspired by classroom conversations. It summarizes a line of research published in three articles, written jointly by Tim Bresnahan and myself. See "The Competitive Crash in Large Scale Computing," in The Mosaic of Economic Growth, Landau, Taylor and Wright (eds.), Stanford University Press, Stanford, CA, 1996, "Technical Progress and Co-Invention in Computing and in the Use of Computers," in Brookings Papers on Economic Activity: Microeconomics, Brookings, Washington, D.C., 1996, and "Technological Competition and the Structure of the Computing Industry," Journal of Industrial Economics, March 1999. Looking back, this article has one big omission: it does not describe the full range of strategies for platform formation and defense, as practiced by Microsoft and Netscape during the Browser Wars, or as practiced by AOL, Intel and Cisco in the late 1990s. With these actions fading into the haze of the recent past, it is possible to assess the strengths and weaknesses of particular choices. Once again, it gives Tim and me more to write about in the future.}
Index
3Com 222, 278, 282 @ Home 102, 103 Accenture 118, 282 adjustment cost 182, 184 adoption 196, 197, 203 Advanced Micro Devices (AMD) 10 Akamai 140 Amazon 82, 96, 123, 162 America Online (AOL) 25, 26, 81, 90, 94, 96, 98, 101–103, 160, 162–164, 174, 247, 287 American Graffiti xi American National Standards Institute (ANSI) 26 American Standard Code for Information Interchange (ASCII) 256, 266 Andersen Consulting 94, 195 Andreeson, Marc 91 AOL-Time Warner 102, 103, 106, 108 Apple Corporation 47, 48, 210, 222 Apple I 215 Apple II 49, 215 Apple III 215 Apple Newton 49 archival 62 archivist 64 Arpanet 89 AS/400 274 Aspray, Bill xiv AT&T 10, 59, 86, 98, 101–103, 106, 136, 141, 143, 162, 163, 233–237, 245, 254–257, 259, 260, 268 World Net 94, 96, 100 Atlanta 141, 142 Austin 142 Automatic Data Processing 62 average practice 188
Baan 94 Babbage, Charles 210 Institute xiv, 70, 76 backbone 139, 176 backward-compatible 19 Balmer, Steve 235 Bank, David 242 Bardeen 8 Beanie Babies 127 Bell Labs 8 Bergman, Ingrid 222 best-practice technology 187–189 biological metaphor 17, 21 Blodgett, Henry 130, 132 Blue-Mountain Arts 24 Boca Raton 280 Bogart, Humphrey 222 Boston 10, 117, 142 Bouchard, Tony 74 Brattain 8 Bresnahan, Tim xi, 3, 60, 182, 191, 269, 271, 281, 287 Bricklin, Dan 49 Brookings 195, 287 browser 48, 90, 130, 175, 236, 275, 287 Burroughs 278 Bush administration 237 Business 2.0 98 business model 57, 99, 130, 194 Business Week 96, 98 Cable News Network (CNN) 148 calculator 4 carpal tunnel syndrome 38 Center for Disease Control (CDC) 278 Chicago 6, 80, 83, 86, 135, 141, 142, 201, 204 289
290 Index Chicago Cubs 148 Chicago Tribune 127 Cisco 25, 26, 94, 97, 98, 102, 103, 118, 126, 162, 195, 278, 282, 287 Clarke, Arthur C. 12 Cleveland/Akron 204 client-server 6, 18, 21, 150, 181, 183, 192, 271, 272, 280 platform 283 system 16 Clinton, Bill 83 Cockburn, Iain 34 co-invention 9, 191, 193–195, 281, 282, 284, 287 common-carrier regulation 106 Compaq 94, 239, 241, 274, 278 competitive local exchange company 134 complement 26 CompuServe 90, 163, 247 Computed Tomography (CT) scanner 52, 170 Computer Associates 94, 278 Computer Intelligence Infocorp 182, 192 consortia 263, 264, 265 converge in complement 51 in substitute 51 converter 258 Corel 224 Cravath, Swain & Moore 72 Cringely, Bob X. 220 critical mass 273 Dallas 141, 142 David, Paul xi, 73, 74, 220, 268 Defense Advanced Research Projects Agency (DARPA) US 89 Defigueiredo, John 122 Dell 94, 96, 118, 239, 274, 278 Detroit 142 Diamond, Stephen xiv diffusion 5, 30, 37, 45, 58, 198, 205 of the Internet 129 digital divide 107 Digital Equipment Corporation (DEC) 10, 70, 261, 271, 277, 278 Digital Island 140
Digital Subscriber Line (DSL) 134, 137, 138, 176 Dilbert 43 Disk Operating System (DOS) 254 disruption 46, 49 DoCoMo 31 Domino’s Pizza 80 dot-bomb 121, 124 dot-com 103, 112–114, 118, 121, 125, 126, 131 bubble 115 crash 118 Downes, Tom 33 Dr. Koop 162 Draper, Fisher & Jurvetson 23 Draper, Tim 23 Dunlop, Al 222 EarthLink 174 e-Bay 96, 102, 159, 162, 176 e-commerce 99, 112, 122 ecosystem 18 Electronic Data Interchange (EDI) 90 Electronic Data Systems Corporation (EDS) 94, 118, 170, 278, 282 Ellison, Larry 224, 240 e-mail 29, 30, 123 diffusion 31 extra 177 emoticon 31 endogenous sunk cost 272 enhancement 197, 198, 200, 202, 203, 206 enthusiast 17 entrepreneur 119, 124, 125, 130, 268 Enterprise Resource Planning (ERP) 58 E-Rate 107 E-Schwab 96, 154 establishment 202 E-Stats 167, 168, 172 e-Toys 96, 119, 124, 162 e-trade 154 Evite 24 Excel 279 Extended Binary Coded Decimal Interchange Code (EBCDIC) 266 Faggin 8 Fast Company 98
Federal Aviation Administration (FAA) 251
Federal Communications Commission (FCC) 59, 85, 86, 108, 167, 245
Federal Express (FedEx) 80, 82
Federal Reserve 152
Federal Trade Commission (FTC) 242, 257
first mover 113
    advantage 113
Fisher, Frank 46, 73, 74
flat rate 163
    pricing 163, 164
Forman, Chris 196, 202
Frankston, Bob 49
Furby 99
Gates, Bill 13, 15, 48, 83, 88, 111, 213, 214, 219, 220, 235, 238, 240, 242, 269
Gateway 239
General Services Administration 65
Global Crossing 143
global village 201–203, 205, 206
Go.com 119
Goldberg, Rube 13, 14
Goldfarb, Avi 196, 202
Google 175
Gore, Al 83, 87
Gray, Martha Mulford 68
Greenspan, Alan 116, 118, 152, 156, 157
Griliches, Zvi 29, 32
Grove, Andy 213
Grubman, Jack 132
HAL 13
Harte Hanks Market Intelligence 197, 202
Hatch, Orrin 221
Hazlett, Lenis 3
Helmsley, Leona 222
Hewlett-Packard (HP) 94, 234, 276, 278
hobbyist 15
Hoff 7
Home Brew Club 13
Honeywell 278
Hotmail 23, 26
hybrid corn seed 29, 30, 33
I Seek You (ICQ) 25–27, 102
information superhighway 87
Inktomi 140
instant messaging 31, 160
Institute of Electrical and Electronics Engineers (IEEE) xiv, 26
insurance 37
Integrated Services Digital Network (ISDN) 261, 267
Intel 8, 9, 10, 50, 94, 98, 118, 126, 128, 210, 213, 242, 261, 271, 274, 278, 283, 287
International Business Machines Corporation (IBM) x, 8, 10, 26, 43, 48, 50, 59, 70, 71, 74, 81, 86, 89, 94, 97, 98, 101, 126, 128, 140, 141, 183, 195, 211, 213, 240, 245, 254–257, 261, 266, 268, 271, 274, 278, 280, 282, 283, 285
    Lotus 275
    OS/2 274, 279
International Data Corporation (IDC) 66, 70
Internet 15, 99
    access 33, 43, 101, 105, 154, 173, 177, 178
    access provider 33
    backbone 85, 142
    diffusion 105
    service provider (ISP) 33, 85, 139, 174
    shopping 82
Intuit 275
irrational exuberance 125
Jaffe, Adam 166
James, Charles 237
Java 225, 226, 282
JDS Uniphase 126
Jobs, Steve 13, 215, 219, 220
Judge Harold Greene 234, 257
Judge Jackson 228–232, 235, 239
Judge Richard Posner 235
Kubrick, Stanley 12, 13
L.L. Bean 79, 124
Lands' End 79
Landau, Ralph 185
laptop 6
last mile 95
lead user 45
Lehman Brothers 157
Lerner, Josh 166
Level3 120, 143
Lloyd's of London 38
Loftus, Joan 64, 65
Los Angeles 141, 142, 204
Lotus 274, 279
Lucent Technologies 102, 103, 126
Macintosh 47, 215
Mansfield, Ed xiii
Manzi, Jim 279
McNealy, Scott 224
MediaOne 102, 103
Meeker, Mary 123, 130, 132
Mercata 25
Metcalfe, Bob 222
Micro xiv, xv
Microsoft x, 15, 23, 25, 26, 48, 81, 91, 94, 95, 98, 101–104, 106, 126, 213, 220–226, 228, 230, 232–235, 237–242, 257, 261, 269, 274, 275, 278, 283, 285, 287
Mindspring 101, 103
Mindspring/Earthlink 160, 163
mix-and-match 254
Moore's law 56, 111, 112, 210
Morgan Stanley 123, 157
Mosaic 90, 91
Motorola 8, 10
Microsoft Network (MSN) 96, 106
MSNBC 106, 238
Napster 28
National Broadcasting Company (NBC) 260
National Bureau of Economic Research (NBER) 117, 120, 190
National Cash Register (NCR) 278
Nelson, Richard xiii
Netcom 101
Netscape 88, 91, 100–102, 122, 123, 231, 241, 275, 287
network of networks 246
NetZero 119, 162
New York 141, 142, 201
Northern California 117
Novell 94, 278
Novell NetWare 282
Noyce 7
NSFnet 140
NTT DoCoMo 106
Office of Technology Assessment 67
on-line shopping 79
open standard 216
Oracle 50, 94, 97, 118, 195, 275, 278, 282, 286
Parley, Nan 71
participation 197, 200, 202, 203
PDP 277
Peapod 79–82
peer-to-peer 28
PeopleSoft 94
Perot Systems 170
Pets.com 119
Philadelphia 142
platform xi, 239, 271, 272, 273, 276, 278, 281, 285
    provider 240
price 4, 6
Priceline 96
Prodigy 90, 247
product
    improvement 5, 6
    life cycle 17–20
productivity
    paradox 147, 148, 150, 151
    slowdown 148
prototype 47
PSINet 120, 143
QWERTY keyboard 248
Qwest 120, 143
Radio Corporation of America (RCA) 260
Real 224, 230
Reback, Gary 220, 223
recession 116, 117, 120
Red Herring 96, 98, 103, 139
Redmond 219, 237, 239, 242
Regional Bell Operating Companies 256, 257
Reno, Janet 213
repetitive stress injury 37
Rosenberg, Nathan xiii
Route 128 12
Royal Ahold 82
San Francisco 135, 142, 201
    Silicon Valley 141
San Jose 203
SAP 195, 275, 278, 282
Scott-Heron, Gil 175
Seattle 142, 223, 236
selection bias 25
September 11th terrorist attack 143, 237
serendipity 48
shareware 48
Sherman Antitrust Act 239
Shockley 8
Silicon Valley 3, 10, 12, 13, 81, 111, 209, 237
Solow, Robert 161
spammer 177
sponsorship 250, 255, 259, 260
Spring, Michael xiv
Sprint 141, 143
Spyglass 91
standard xi, 246–248, 250, 259, 262, 273, 285
standardization 246, 250, 257, 259, 276, 284
Stanford University 3, 10, 62–64, 68, 69, 74, 75
Stern, Scott 166
Stewart, Janet Kidd 127
sticky application 25
stock option 20
Sun Microsystems 94, 97, 98, 126, 128, 195, 224–226, 230, 255, 271, 275, 278, 279, 282, 283, 286
Takahashi, Eugene 74
Taylor, Timothy 185
Transmission Control Protocol/Internet Protocol (TCP/IP) 58, 59, 88
techno-have 42, 43
technological determinism 54
technologist 17
technology enthusiast 17
Telecommunications Act of 1996 86–88, 129, 133
Tele-Communications Incorporated (TCI) 102, 106
Texas Instruments 8
The Brooks Act 69
The Industry Standard 22, 96, 98, 103
The New York Times 96
The New Yorker 238
The Wall Street Journal 96, 98, 117
Tinkertoys 13
Toshiba 3, 6
Trajtenberg, Manuel 6
transistor 8
Travelocity 96
tree of Zvi 34
Trump, Donald 222
Turner, Ted 222
tying arrangement 241
ultrasound 51, 52
uncertainty 46, 56, 113
United Parcel Service (UPS) 80–82
Univac 278
universal service 104, 107, 108
University of Illinois 12, 90, 111
Unix 225, 265, 274, 282
upgrade x, 18, 19, 20, 130, 187, 191
US Bureau of Economic Analysis 167, 168, 178, 199
US Bureau of Labor Statistics 163, 167, 173
US Census Bureau 167, 168, 170, 172, 199
US Department of Commerce 168
US Department of Defense 70, 85, 89, 245
US Department of Justice 102, 228, 233, 239
US National Bureau of Standards 68
US National Science Foundation 53, 88, 89, 140
US National Telecommunications and Information Administration 167
US versus IBM trial 72, 73
UUNET 94, 141, 143
VAX 277
vertical chain 93, 94
Victoria's Secret 124
viral marketing 22–27
VisiCalc 49
voice recognition software 40
Wal-Mart 124
Wang 278
Washington, DC 141, 142
Watson Jr., Thomas 211, 213
Watson Sr., Thomas 5
Weaver, Sigourney 54
WebTV 106
Webvan 82, 119, 124
Williams Communications 143
Windows 235, 274, 279
Windows 95 209–213, 269, 274
Windows NT 282
Wired 96, 98
word-of-mouth 22
WordPerfect 224
WordStar 48
workman's compensation 40
World Wide Web 48, 53, 58, 155
WorldCom 136, 143
Microwave Communications Incorporated (MCI) 89, 94, 140, 141, 143
Wozniak, Steve 14, 222
Wright, Gavin 185
Xerox Laboratories 210
Y2K 130
Yahoo 126, 140, 162