Tracking a Transformation: E-commerce and the Terms of Competition in Industries
BRIE-IGCC E-conomy Project
BROOKINGS INSTITUTION PRESS
The Brookings Institution is a private nonprofit organization devoted to research, education, and publication on important issues of domestic and foreign policy. Its principal purpose is to bring knowledge to bear on current and emerging policy problems. The Institution maintains a position of neutrality on issues of public policy. Interpretations or conclusions in Brookings publications should be understood to be solely those of the authors.

Copyright © 2001
1775 Massachusetts Avenue, N.W., Washington, D.C. 20036
www.brookings.edu
All rights reserved

Library of Congress Cataloging-in-Publication data
Tracking a transformation : e-commerce and the terms of competition in industries / BRIE-IGCC E-conomy Project.
p. cm.
Includes bibliographical references and index.
ISBN 0-8157-0067-9 (pbk. : alk. paper)
1. Electronic commerce—United States—Case studies. 2. Competition—United States—Case studies. 3. Industrial relations—Effect of technological innovations on—United States—Case studies. 4. Electronic commerce—Europe—Case studies. 5. Competition—Europe—Case studies. 6. Industrial relations—Effect of technological innovations on—Europe—Case studies. I. BRIE-IGCC E-conomy Project. II. Berkeley Roundtable on the International Economy. III. University of California Institute on Global Conflict and Cooperation.
HF5548.325.U6 T73 2001
381'.1—dc21
2001005817

9 8 7 6 5 4 3 2 1

The paper used in this publication meets minimum requirements of the American National Standard for Information Sciences—Permanence of Paper for Printed Library Materials: ANSI Z39.48-1992.

Typeset in Adobe Garamond
Composition by R. Lynn Rivenbark, Macon, Georgia
Printed by R. R. Donnelley and Sons, Harrisonburg, Virginia
Contents
Acknowledgments
Acronyms

The Enablers: Tools and Markets
1  Tools: The Drivers of E-Commerce
   Stephen S. Cohen, J. Bradford DeLong, Steven Weber, and John Zysman
2  The Construction of Marketplace Architecture
   François Bar

E-Commerce: A View from the Sectors
The Boundary Conditions of Services
3  E-Finance: Recent Developments and Policy Implications
   Setsuya Sato, John Hawkins, and Aleksander Berentsen
4  The Future of Retail Financial Services: Transparency, Bypass, and Differential Pricing
   Eric K. Clemons, Lorin M. Hitt, and David C. Croson
5  Web Impact on the Air Travel Industry
   Stefan Klein and Claudia Loebbecke
6  Confronting the Digital Era: Thoughts on the Music Sector
   Jonathan Potter

Standard Modules and Market Flexibility
7  The Internet and the Personal Computer Value Chain
   Martin Kenney and James Curry
8  E-volving the Auto Industry: E-Business Effects on Consumer and Supplier Relationships
   Susan Helper and John Paul MacDuffie
9  E-Commerce and the Changing Terms of Competition in the Semiconductor Industry
   Robert C. Leachman and Chien H. Leachman
10  The Old Economy Listening to the New: E-Commerce in Hearing Instruments
   Peter Lotz

Making and Moving Stuff
11  Electronic Systems in the Food Industry: Entropy, Speed, and Sales
   Jean Kinsey
12  Lean Information and the Role of the Internet in Food Retailing in the United Kingdom
   Jennifer Frances and Elizabeth Garnsey
13  E-Commerce in the Textile and Apparel Industries
   Jan Hammond and Kristin Kohler
14  E-Commerce and Competitive Change in the Trucking Industry
   Anuradha Nagarajan, Enrique Canessa, Will Mitchell, and C. C. White III

What Comes Next? The Evolving Infrastructure
What Will the Next Generation of Tools, Networks, and Marketplaces Look Like?
15  The Mobile Internet Market: Lessons from Japan's i-Mode System
   Jeffrey L. Funk
16  E-Commerce and Network Architecture: New Perspectives
   Michael J. Kleeman with David Bach
17  The Political Economy of Open Source Software
   Steven Weber
18  The Next-Generation Internet: Promoting Innovation and User-Experimentation
   François Bar, Stephen S. Cohen, Peter Cowhey, J. Bradford DeLong, Michael J. Kleeman, and John Zysman

Contributors
Index
Acknowledgments
The BRIE-IGCC E-conomy Project, led by BRIE codirectors Stephen S. Cohen and John Zysman (UC Berkeley) and IGCC director Peter Cowhey (UC San Diego), includes professors François Bar (Stanford), J. Bradford DeLong (UC Berkeley), Martin Kenney (UC Davis), and Steven Weber (UC Berkeley). Special thanks are due the BRIE graduate students who provided significant substantive and editorial contributions to this work: Benjamin Ansell, David Bach, John Cioffi, Gary Fields, Brodi Kemp, John Leslie, and Abe Newman. Newman's work on the media chapter was of distinct importance. David Bach made significant contributions to the overall effort and helped draft important elements. The value of John Cioffi's contributions to this book, and to the several conferences that led up to it, cannot be overstated. He has been an indispensable partner in the effort. Thanks are also due to Patricia Johnson and Susan Jong for substantive contributions and to Michelle Clark and Noriko Katagiri as well; all four provided elements of the complex coordination required in research and production stages. Thanks, too, to the many readers who contributed comments, and in particular to Mary Clare Fitzgerald, Brian Kahin, Elliot Maxwell, Andy Pincus, and Lee Price, who helped organize the conference at which some of these papers were presented. Peter Harter, for his help in the formative stages of the E-conomy Project, also deserves thanks, as does Ann Mine for help with everything from logistics to rewriting.

The generosity of the German Marshall Fund of the United States and the Alfred P. Sloan Foundation supported work for this book. There was also significant support for this project from IGCC. A companion volume, a result of the work of the Brookings Task Force on the Internet, The Economic Payoff from the Internet Revolution, has also been published by Brookings Institution Press.
Acronyms
3G  Third Generation Wireless Technology
ACM  Association for Computing Machinery
AOL-TW  America Online–Time Warner
API  Application Program Interface
APR  Annual Percentage Rate
ARPA  Advanced Research and Projects Administration
ARPANET  ARPA Network
ART  Advanced Radio Telecom
ASCAP  American Society of Composers, Authors and Publishers
ASIC  Application-Specific Integrated Circuit
ASP  Application Service Provider
AST  Advanced Systems Technology
ASTA  American Society of Travel Agents
ATM  Automated Teller Machine
B2B  Business-to-Business
B2B2C  Business-to-Business-to-Consumer
B2C  Business-to-Consumer
B2V  Business-to-Vehicle
BIOS  Basic Input Output System
BMI  Broadcast Music, Inc.
BSD  Berkeley Software Distribution
BTE  Behind-the-Ear
BTO  Build to Order
CAD  Computer-Aided Design
CATV  Cable Television
CD-ROM  Compact Disk Read Only Memory
c-HTML  compact HTML
CLEC  Competitive Local Exchange Carrier
CM  Category Management
CMOS  Complementary Metal-Oxide Semiconductor
CNPS  Cross-National Production System
CPFR  Collaborative Planning, Forecasting, and Replenishment
CPI  Consumer Price Index
CRS  Computerized Reservation Systems
CRTC  Canadian Radio-Television and Telecommunications Commission
CSM  Competitive Semiconductor Manufacturing
DC  DaimlerChrysler
DLC  Digital Loop Carrier
DMO  Destination Management Organization
DNS  Domain Name System
DOJ  U.S. Department of Justice
DOS  Disk Operating System
DP  Data Processing
DRAM  Dynamic Random Access Memory
DRI  Defense Research Institute
DSD  Direct Store Delivery
DSL  Digital Subscriber Line
DVD  Digital Versatile Disk
DWDM  Dense Wave Division Multiplexing
EBPP  Electronic Bill Presentment and Payment
ECN  Electronic Communications Network
ECR  Efficient Consumer Response
EDA  Electronic Design Automation
EDI  Electronic Data Interchange
EFS  Electronic Financial Services
EMACS  Editor MACROS
EPOS  Electronic Point of Sale
EU  European Union
FCC  Federal Communications Commission
FinCEN  Financial Crimes Enforcement Network
FSF  Free Software Foundation
FTC  Federal Trade Commission
FTP  File Transfer Protocol
FTTH  Fiber to the Home
G2C  Government-to-Consumer
GCC  GNU Compiler Collection
GDB  GNU Debugger
GDP  Gross Domestic Product
GDS  Global Distribution Systems
GEMA  Gesellschaft für musikalische Aufführungs- und mechanische Vervielfältigungsrechte
GM  General Motors
GNU  GNU Is Not Unix
GPL  General Public License
GPRS  General Packet Radio Service
GPS  Global Positioning System
GSM  Global System for Mobile Communications
GUI  Graphical User Interface
HDD  Hard Disk Drive
HDR  High Data Rate
HDTV  High Definition Television
HHS  U.S. Department of Health and Human Services
HKMA  Hong Kong Monetary Authority
HMO  Health Maintenance Organization
HP  Hewlett Packard
HTML  Hyper Text Markup Language
HTTP  Hyper Text Transfer Protocol
IC3D  Interactive Custom Clothes Company Design
ICT  Information and Communication Technologies
IDE  Integrated Drive Electronics
I-EDI  Internet-Enabled Electronic Data Interchange
ILECS  Incumbent Local Exchange Carriers
IM  Instant Messenger
IOR  Interorganizational Relations
IP  Internet Protocol
IPO  Initial Public Offering
IPv6  Internet Protocol Version 6
ISDN  Integrated Services Digital Network
ISP  Internet Service Provider
IT  Information Technology
ITE  In-the-Ear
ITN  InterTAN
ITS  Incompatible Time Sharing System
ITS  Intelligent Transportation System
JCI  Johnson Controls, Inc.
JCP  J. C. Penney
JIT  Just in Time
LAN  Local Area Network
LMDS  Local Multipoint Distribution Services
LRIC  Long-Run Incremental Cost
LTL  Less Than Truckload
LTO  Local Tourist Organization
M&S  Marks and Spencer
MFN  Most Favored Nation
MIS  Management Information System
MITI  Ministry of International Trade and Industry (Japan)
MMDS  Multichannel Multipoint Distribution Services
MML  Mobile Markup Language
MP3  MPEG (Moving Pictures Experts Group) Audio Layer 3
MPU  Microprocessor Unit
MRO  Maintenance, Repair, General Plant Operations
MSDW  Morgan Stanley Dean Witter
N2K  N2K, Inc., now CDnow
NC  Network Computer
NCTA  National Cable Television Association
NPD  Network Presence Database
NRA  National Regulatory Authority
NTO  National Tourist Organization
NTT  Nippon Telegraph and Telephone
NVH  Noise, Vibration, Harshness
OAC  Open Access Coalition
OECD  Organization for Economic Cooperation and Development
OEM  Original Equipment Manufacturer
OEO  Optical-Electronic-Optical
OFTEL  U.K. Office of Telecommunications
OOO  Optical-Optical-Optical
OSI  Open Source Initiative
OSS  Open Source software
OTC  Over the Counter
OVPN  Optical Virtual Private Network
PARC  Palo Alto Research Center
PC  Personal Computer
PDA  Personal Digital Assistant
PDC  Personal Digital Standard
POS  Point of Sale
PUD  Package Express (PX) Pickup and Delivery Vehicle
PX  Package Express
QRP  Quick Response Partnershipping
QVC  Quality, Value, Convenience
RBOC  Regional Bell Operating Company
RDC  Regional Distribution Center
REIT  Real Estate Investment Trust
ROM-BIOS  Read Only Memory—Basic Input Output System
RTO  Regional Tourist Organization
SACEM  Société des Auteurs Compositeurs Éditeurs de Musique
SAE  Society of Automotive Engineers
SAGE  Semi-Automatic Ground Environment
SBC  Southwestern Bell Corporation
SEMI  Semiconductor Equipment and Materials International
SIAE  Società Italiana degli Autori ed Editori
SIM/USIM  Subscriber Identity Module/Universal Subscriber Identity Module
SKU  Stock Keeping Unit
SMS  Short Messaging Service
STP  Straight through Processing
SUV  Sports Utility Vehicle
TCP/IP  Transmission Control Protocol/Internet Protocol
TDMA  Time Division Multiple Access
TL  Truckload
TSMC  Taiwan Semiconductor Manufacturing Company
UAW  United Auto Workers
UCC  Uniform Code Council
UCCNet  UCC Open Format Internet Platform
UHL  Ultra Long Haul
UMC  United Microelectronics Company
UMTIP  University of Michigan Trucking Industry Program
UNIVAC  Universal Automatic Computer
URL  Universal Resource Locator
VAN  Value-Added Network
VAR  Value-Added Reseller
W3C  World Wide Web Consortium
WAP  Wireless Application Protocol
W-CDMA  Wideband Code Division Multiple Access
WDM  Wavelength Division Multiplexing
WML  Wireless Markup Language
WWW  World Wide Web
XML  eXtensible Markup Language
I
The Enablers: Tools and Markets
1
Stephen S. Cohen, J. Bradford DeLong, Steven Weber, and John Zysman
Tools: The Drivers of E-Commerce
This book is built on the proposition that the late-twentieth-century information technology (IT) revolution marks the beginning of a fundamental economic transformation. IT is producing one of those very rare eras in which advancing technology and changing organizations do not revolutionize just one leading economic sector but transform the entire economy and ultimately the rest of society as well. Information technology builds tools to manipulate, organize, transmit, and store information in digital form. It amplifies brainpower in a way analogous to that in which the nineteenth-century industrial revolution amplified muscle power.

Rapid change by itself is not "revolutionary," at least not as we are using the term. Rapid economic and technological change is normal: it has been a standard part of the economic history of every era since the beginning of the industrial revolution. Productivity explosions happen regularly as invention and innovation remake particular "leading sectors"—like air transport in the 1960s, television in the 1950s, automobiles in the 1920s, organic chemicals in the 1890s, and so on back to the original invention of the steam engine to automate the pumping of water out of coal mines. Each of these innovations massively boosted productivity in its particular slice of the economy. Each had diffusion effects that changed economic processes in many other parts of the economy. Each set off its own "long boom." But information technology may well be different.
Information technology is creating tools for thought. The first generations of these tools have certainly spawned a leading sector that has brought enhanced productivity growth and rapid innovation to a particular slice of the economy. These first generations contributed significantly to the end of the post-1973 era of relative stagnation and to the long boom of the 1990s. (As these tools diffuse, they look very likely to do the same for other advanced and advancing economies in the near future.) But that is not the whole story. The revolutionary potential lies within the tools that information technology provides to all economic sectors. These tools will affect every economic activity in which organization, information processing, or communication is important—in short, everything. They open new possibilities for economic organization across the board. They change what can be done and how it can be done across a wide range of industries. Most important, they may well require changes in ideas about ownership, property, and control—the way in which governments regulate economies in the broadest sense of that term. The dynamic, encompassing nature of this transformation creates pitfalls for traditional research strategies. To understand relationships, specify causal links, and design valid measures of complex economic processes is difficult enough when dealing with limited, specific, and fairly conventional arguments and issues. On the other hand, to weave a broad pattern of relationships without precise measures too often produces not great insight but banal superficiality. The more profound the transformation, the sharper the dilemma becomes. As context itself changes, things that were treated (for better or for worse) as parameters become variables. Research problems cannot be so easily isolated nor causal relationships cleanly specified. The logic of this book is to work with these constraints, not to fight them. Building on the proposition that an information revolution is in the process of fundamentally transforming our entire economy, we take a “bottom-up,” inductive approach. Claims of a discrete “Internet economy” or “information economy” are momentary, transitional characterizations. They will soon seem as meaningless as a claim that there is a “fax economy” or a “telephone economy.” Information technologies fade into the background as they become a set of tools for the economy as a whole. The “e” in “e-commerce” will disappear as all commerce becomes organized and integrated into electronic networks. This process happens unevenly, at different rates, and in different ways across the many sectors of an economy. The core of our research strategy is to track that process by examining what is happening in these different sectors.
The application of these tools for thought creates literally millions and millions of micro changes, which vary from industry to industry but sum to revolutionary potential. Their impact is, therefore, disruptive. The information technology revolution is a story about structural change; it is not primarily a macroeconomic or cyclical phenomenon. There is no promise, and little likelihood, of smooth growth, rising stock prices, and government surpluses stretching out to the horizon, nor of permanently low rates of unemployment, interest, and inflation. A different god governs the macroeconomy, who will decide whether and how these productivity-enhancing changes—changes that provide far more than a trivial amount of growth potential—will be well used or wasted.
The New Economy: A Transformative Era

The information technology revolution story has three intertwined themes. The first is technology development. The second is innovations in organization and practice. The third is speed and extent—the rate at which the first two stories are unfolding and the global reach of their implications.

The technology theme is most familiar. In the 1960s Intel Corporation cofounder Gordon Moore projected that the density of transistors on a silicon chip would double every eighteen months. What came to be called Moore's Law has been continually wrong—but on the upside. Computing power has more than doubled, and its price has fallen by more than half in the eighteen-month cycle. Consumers now routinely expect that the $1,000 personal computer they buy in a department store will have the processing power of what, five years ago, was a $20,000 workstation. What was once called supercomputing is now packaged in a run-of-the-mill desktop PC. The past forty years have seen something like a billionfold increase in the world's installed computing power base. There simply is no historical precedent for a technology whose raw measures of capability progress at anything like this rate. And despite repeated roadblocks in semiconductor manufacturing technologies that seem to threaten an imminent slowdown in the cycle, innovation has (until now, at least) successfully overcome the impediments. There is no compelling reason to believe that we have come anywhere near the endgame for Moore's Law. It is a safe bet that raw processing power will continue to grow at a rate faster than we can figure out what to do with it.
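The compounding implied by an eighteen-month doubling is easy to underestimate. The following back-of-the-envelope sketch is ours, not the authors'; it takes the forty-year horizon and the eighteen-month period from the paragraph above, and the split between per-device growth and growth in the number of machines in use is an illustrative assumption.

```python
# Back-of-the-envelope illustration of an eighteen-month doubling cycle.
# The 40-year horizon and the doubling period come from the text above;
# the split between per-device growth and growth in the number of
# machines is an illustrative assumption, not a figure from the book.

YEARS = 40
DOUBLING_PERIOD_YEARS = 1.5  # "double every eighteen months"

doublings = YEARS / DOUBLING_PERIOD_YEARS       # about 26.7 doublings
per_device_growth = 2 ** doublings              # roughly 10^8-fold

print(f"doublings in {YEARS} years: {doublings:.1f}")
print(f"per-device capability growth: about {per_device_growth:.1e}x")

# A billionfold rise in the world's installed computing base is consistent
# with this once the installed number of machines also grows roughly
# tenfold over the same period (again, an assumption for illustration).
implied_machine_growth = 1e9 / per_device_growth
print(f"implied growth in machine count: about {implied_machine_growth:.0f}x")
```

On this arithmetic, a strict eighteen-month doubling multiplies per-device capability roughly a hundred-million-fold over forty years; the billionfold figure for the installed base is plausible once the growing number of machines in use is counted as well.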
This points to the more fundamental rate-limiting factor in economic transformation—the human systems of organization and innovation. An enormous increase in raw processing power generated by semiconductors is simply an economic potential. It becomes important only if this potential is utilized. Thus the key question as the semiconductor revolution has proceeded has always been, "what is computer power useful for?" The technological determinants of the answer to that question are changing and will continue to change steadily as the price of computing drops, the size of a computer shrinks, and the possibilities for useful applications expand. The organizational determinants of the answer are harder to theorize about simply because there is no equivalent of Moore's Law for human systems.

At each point in the past forty years, the critical step in the transformation of technical potential into economic productivity has been the discovery by IT users of how to employ their ever-greater and ever-cheaper computing power to do the previously impossible. In a real sense the leading-edge users and the innovative applications that they have developed have been the drivers or at least the shapers of technological change, because they are the creators of the meaningful demand for better, faster, and cheaper computers. And it is this core demand created by user-side innovation that has sustained and rewarded technological development.

At first computers were used as powerful calculators to perform complicated and lengthy sets of arithmetic operations. The first leading-edge applications of large-scale electronic computing power were military.1 The burst of innovation during World War II that produced the first hand-tooled electronic computers was funded and driven by the demands of war. The Korean War won IBM its first contract to actually deliver a computer: the million-dollar Defense Calculator. The military demand in the 1950s and the 1960s by projects such as Whirlwind and SAGE—a strategic air defense system—both filled the assembly lines of computer manufacturers and trained a generation of engineers.2

1. Even before then the lead user had been the government. Charles Babbage's difference engine was a British government-funded research and development project. The earliest application of large-scale electronic tabulating technology was by the government, specifically the Census Bureau. The national census of 1880 required 1,500 clerks employed as human computers to analyze the data—and it took them seven years to do so. See Anderson (1988). By 1890 the Census Bureau was a test bed for Herman Hollerith's mechanical calculator.
2. Campbell-Kelly and Aspray (1996) quote from Thomas Watson Jr.'s autobiography: "it was the Cold War that helped IBM make itself king of the computer business." SAGE accounted for one-fifth of IBM's work force at its peak. See Watson and Petre (1990). Relying on Flamm, Campbell-Kelly and Aspray state that 2,000 programmer-years of effort went into the SAGE system in the 1950s and early 1960s. Thus "the chances [were] reasonably high that on a large data-processing job in the 1970s you would find at least one person who had worked with the SAGE system." See Flamm (1987); Flamm (1988).

The first leading-edge civilian economic applications of large computing power came from government agencies and from industries like insurance and finance that performed lengthy sets of calculations as they processed large amounts of paper. The U.S. Census Bureau bought the first UNIVAC computer. The second and third orders came from A. C. Nielsen Market Research and the Prudential Insurance Company. The Census Bureau used computers to replace electromechanical tabulating machines. Businesses originally used computers to do the payroll, report generating, and record analyzing tasks that electromechanical calculators had previously performed. But it soon became clear that the computer was good for much more than performing repetitive calculations at high speed. The computer was much more than a calculator, however large and however fast.

The point is that innovative users—in the course of automating existing processes—began to discover how they could employ the computer in new ways. An early innovation was stuffing information into and pulling information out of large databases. American Airlines used computers to create its SABRE automated reservations system—which cost as much as ten airplanes.3 SABRE made it possible to understand in a much more precise way the fine-grained characteristics of demand for air travel. The insurance industry first automated its traditional processes—its back office applications of sorting and classifying. But insurance companies then began to create customized insurance products using newly accessible databases that could be organized, reorganized, queried, and analyzed for data patterns.4 The user cycle became one of first learning about the capabilities of computers in the course of automating established processes, and then applying that learning to generate innovative applications.5

User-driven innovation is aided by rapidly advancing raw technological capability. The growth of computing power has enabled the development of computer-aided design—from airplanes built without wind tunnels6 to pharmaceuticals designed at the molecular level for particular applications. In this area, the computer's major function is neither as a calculator-tabulator nor a database manager but is instead a "what-if machine." The computer creates models of what-if: what would happen if the airplane, the molecule, the business, or the document were to be built up in a particular way. It thus enables an amount and a degree of experimentation in the virtual world that would be prohibitively expensive in resources and time in the real world. The value of this use as a what-if machine took most computer scientists and computer manufacturers by surprise: before Dan Bricklin programmed Visicalc, who had any idea of the utility of a spreadsheet program? The invention of the spreadsheet marked the entry of computers into the third domain of utility as a what-if machine—an area that today seems equally important as the computer as a manipulator of numbers or a sorter of records.

3. SABRE was the first large-scale real-time information processing system. See McKenny (1995).
4. See Baran (1986).
5. The literature on this topic of the lead role played by users in generating innovation is vast. See Lundvall (1985); Lundvall (1988, pp. 349–69); Nooteboom (1999, pp. 127–50); Slaughter (1993, pp. 81–95); Hatch and Mowery (1998, pp. 1461–77).
6. Boeing's 777 is the best-known example, but computer-assisted engineering, design, and manufacture are transforming the entire aerospace industry—not just a single firm or a single product.

User-driven innovation has a particularly interesting implication for information technology. "What-if" machines can be used, of course, in a self-reflexive way by turning the tools of experimentation back on themselves. In simpler words, computers have become the key design tools for innovations in computing. Today's complex designs for new semiconductors would be simply impossible without automated design tools. The process has come full circle and will continue to chase itself. Progress in computing depends on Moore's Law; and the progress in semiconductors that makes possible the continued march of Moore's Law depends on progress in computers and software. Systems theorists refer to this kind of process as an autocatalytic process. In early 2000 Bill Joy wrote a compelling piece to describe the potential dangers and downsides of technological autocatalysis.7 What may have been lost or inappropriately deemphasized in the discussion surrounding Joy's manifesto was the incredible upside potential of the same processes. Autocatalytic change can be extraordinarily fast, and it can self-accelerate in surprising ways.

7. Joy (2000).

Consider the potential for computing with DNA. This lies in the fact that DNA makes possible base 4 computing, which would be significantly more efficient than base 2 (digital) computers. Right now DNA can be manipulated to process information, but only at a very slow speed. Digital computers helped to unravel the structure and function of biological molecules—indeed, one of the most demanding information processing tasks is to determine how a protein strand will fold in on itself to create a molecule in three dimensions. Faster computers and processing algorithms that grew out of this demand created a next generation that helped us to manipulate the biological molecule; and new generations of computers will likely help reconfigure it in ways that permit much faster processing of information in base 4. Just as DNA pushes silicon forward, silicon will push forward DNA.

The speed and extent to which computing of this magnitude can set off autocatalytic innovation and transform economies and societies is ultimately a function of the degree to which processing power is integrated into economic and social processes. This has been happening recently in two quite critical and mutually reinforcing ways. First, computers have burrowed inside conventional products to become embedded systems. This is the notion of the "smart" car, house, toaster, whatever you choose. Second, computers have connected outside according to a set of open standards to create what we call the World Wide Web: a distributed global database of information all accessible through the single global network.
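A brief aside on the base-4 remark a few paragraphs above: in purely informational terms each DNA base (A, C, G, T) carries log2(4) = 2 bits, so a strand of n bases corresponds to a 2n-bit binary word; the efficiency gains usually claimed for DNA computing tend to rest on storage density and massive parallelism rather than on the radix itself. The sketch below is ours and only illustrates the encoding arithmetic, with generic strand lengths chosen for the example.

```python
import math

# Illustrative arithmetic only: information content of a DNA strand
# compared with a binary register. Generic numbers, not from the chapter.

BITS_PER_BASE = math.log2(4)  # each base (A, C, G, T) encodes 2 bits

def strand_bits(num_bases: int) -> float:
    """Bits representable by a strand of `num_bases` DNA bases."""
    return num_bases * BITS_PER_BASE

for n in (8, 64, 1_000):
    print(f"{n:>5} bases ~ {strand_bits(n):>6.0f} bits "
          f"(equivalent to a {int(strand_bits(n))}-bit binary word)")
```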
Pervasive Computing: The Microprocessor Becomes Embedded

What does it mean to say that computing is becoming pervasive? The new production and distribution processes that pervasive computing makes possible are visible every day at the checkout counter, at the gas pump, and in the delivery truck. At the checkout counter and the gas station, computers scan, price, inventory, discount, and reorder before the groceries enter the bag or the nozzle is rehung. In the delivery truck, handheld computers determine the next stop and record the paperless "paperwork." But these are actually quite primitive applications precisely because they remain visible.

The most important part of pervasive computing is the computers that we do not see. They become embedded in traditional products and alter the way such products "operate" in the broadest sense of the term. In automobiles, antilock brakes, air bags, and engine self-diagnosis and adjustment are performed by embedded microprocessors that sense, compute, and adjust. The level of automotive performance in systems from brakes to emissions control is vastly greater today than it was a generation ago because of embedded microprocessors.8 Today's automobile is already quite smart. Tomorrow's "smart car" will integrate additional functions that will soon become as invisible as the controllers for antilock brakes. At some point the automobile itself becomes more like an information processing system with an engine attached than a several-ton steel mass that can tell the driver about how fast it is going and how much gas it has left.

In toys, embedded intelligence rests on very simple computing products. From cash registers and cell phones to hotel doors, elevators, and pacemakers, embedded microprocessors are transforming our world from the inside by adding features of intelligent behavior to potentially all engineered products.9 As product reliability and the trust level of users improve over the next decade, we are certain to see an explosion of intelligence in medical devices. This is far more dramatic than a "wearable" computer. At some point, what is being "worn" or "implanted" becomes as central to the definition of the person as anything that is biological in origin.

8. Microprocessors in cars today control windows, door locks, cruise control, braking systems, fuel mix, emissions control, and more. The number of microprocessors in a typical automobile has passed thirty. The hardware cost of these semiconductors was then some $1,500. The software cost of programming and debugging them was perhaps the same. See Mowery and Rosenberg (1998). Note that $3,000 of today's computing power would have cost $90,000—more than four times the entire price of the automobile—at 1990's levels of semiconductor, computer, and software productivity. See James Carbone, "Safety Features Mean More Chips in Cars," Purchasing Online, September 18, 1998 (www.manufacturing.net/magazine/purchasing/archives/1998/pur0915.98/092enews.htm [February 2000]); "High Tech Industry Positively Impacts Economies, Globally and Locally," the Kilby Center, September 9, 1997 (www.ti.com/corp/docs/kilbyctr/hightech.shtml [February 24, 2000]).
9. It is difficult to produce reliable estimates of the scope of the embedded microprocessor business. It is, however, possible to see the imprint and importance of this segment in computing in the decisions made by the producers of microprocessors. For example, IBM is ceasing production of PowerPC microprocessors for mass-market microcomputers in order to concentrate on production for high-end embedded sales in automotive applications, communications devices, consumer electronics, and Internet hardware. See "The PowerPC 440 Core: A High-Performance Superscalar Processor Core for Embedded Applications," IBM Microelectronics Division, Research Triangle Park, N.C. (www.chips.ibm.com:80/news/1999/990923/pdf/440_wp.pdf [February 2000]). Motorola continues to produce PowerPC microprocessors for use in Apple mass-market microcomputers but has also worked closely with purchasers who pursue applications unrelated to personal computers: a "PowerPC-based microcontroller for both engine and transmission control of next-generation, electronics-intensive automobiles due in 2000" that can handle "the highly rugged automotive environment," for example. See Bernard Cole, "Motorola Tunes PowerPC for Auto Applications," EE Times, April 21, 1998 (www.techweb.com/wire/story/TWB19980421S0011 [February 2000]). Intel as well has put a considerable share of its mammoth venture capital funding toward enhancing its competitiveness in the market for embedded chips. See Crista Souza, Mark Hachman, and Mark LaPedus, "Intel Weaves Plan to Dominate Embedded Market," EBN Online (www.ebnonline.com/digest/story/OEG19990604S0024 [February 2000]).

Computers Become Linked: The Spread of Networks

As the cost of communications bandwidth dropped, it became not only possible but natural to link together individual sensing, computing, and storage units. The key point is not that rapid transmission has become technically feasible,10 but that the costs of data communication are dropping so far and fast as to make the wide use of the network for data transmission economically feasible for nearly every use we can think of. At the asymptote, the marginal cost of sending a piece of information around the world in real time approaches zero. Sun Microsystems uses the advertising slogan "the network is the computer" to describe the scale and scope of rethinking of traditional processes that this may entail.

Leading-edge users took advantage of early network systems to create new applications in their pursuit of competitive advantage. The origins of today's Internet in the experimental ARPANET funded and built by the Defense Department's Advanced Research and Projects Administration (ARPA) is well known. Networking began primarily as private corporate networks (or, in the case of the French Minitel, a public network with defined and limited services). Business experimentation began. And data communications networks started down a road of exponential expansion as experimenting users found new applications and configurations.11

10. It was technically feasible, after all, to send bits across 4,000 miles at lightspeed during the reign of Queen Victoria—by telegraph. But it was very costly. See Standage (1998); Stephenson (1996); and Yates and Benjamin (1991).
11. On the role of users in promoting the trajectory of innovation in the telecommunications and data networking industries, see Borrus and Bar (1994); and Bar and others (1999).
Computers Become Hyper-Linked: The Coming of the Internet

But few saw the next iteration of the potential of high-speed data networking until the http protocol and the image-displaying browser—the components of the World Wide Web—revealed the potential benefits of linking networks to networks. Every PC suddenly became a window onto the world's data store. And as the network grew, it became more and more clear that the value of the network to everyone grew as well. For the more people there are on a network, the greater is the value of a network to each user—a principle that is now well known as Metcalfe's Law.12

The build-out of the Internet has been extraordinarily rapid in part because of this network effect. It was also so rapid because the Internet was initially run as a set of protocols over the existing voice telecommunications infrastructure. This was not anything like an optimal foundation for packet-switched data traffic, but it worked nonetheless. Even before the new technologies designed from the ground up to manage data communications emerged—and they will replace data-over-voice—the global Internet had already established its incredible reach.13 More than 60 million different computers were accessible over the Internet by late 1999, up from less than 1 million in 1993 and less than 10 million in 1996. Figure 1-1 shows the rapid speed of Internet diffusion around the world.

12. After Ethernet inventor and 3Com founder Bob Metcalfe, who said that the value of a network is proportional to the square of the number of nodes on the network. See Shapiro and Varian (1999, pp. 173–225).
13. Ever since 1987, the Internet Software Consortium has run a semiannual survey to count the number of "hosts" on the Internet. By July 2000 their count exceeded 93 million computers, all accessible one to another through the Internet. In July 1999 there were 56 million. In October 1990 there were only 300,000 computers on the Internet. In August of 1981 there were only 213. "Internet Growth (1981–1991)" (www.isc.org/ds/rfc1296.txt [January 2001]).

[Figure 1-1. Households with Internet Access (millions of households), 1995 and 2000, by region: North America, Europe, Asia-Pacific, and others. Source: Jupiter Communications.]

Some elements of the next generation of data networks are already evident. First, for consumers and small business, one dramatic advance will be broadband to the home to create high-bandwidth and low-latency connections. The problem of the "last mile" is being solved by cable, digital subscriber line technology (DSL), satellite, and other technologies that either work with or simply bypass the fact that most houses were built to maximize privacy, not connectivity, and have only a small copper fiber information pipe leading out to the world. To download a data file or follow hyperlinks will take a fraction of the time previously required.14 The acceleration in speed will change the kinds of tasks that can be accomplished over the Internet. The increase in bandwidth and decrease in latency will mean not only a faster Internet; it will also mean a different Internet, with much more sophisticated applications. We can see this process at work in the sudden explosion of demand for products like Napster that allow users to assemble in transitory, ad hoc networks to trade large data files—in this case, music. This development was unexpected, and it poses a huge challenge—and also a huge opportunity—to the recorded entertainment industry.

Second, wireless voice networks will soon be as extensively deployed as the wired phone network. Widely diffused wireless data networks will set off another round of experimentation and learning, a round that is already visible (for example, in Finland) in the form of something called "m-commerce" (the "m" stands for mobile). This round of network deployment already brings new applications, challenges to established equipment and software players, and struggles over standards complicated by the fact that wireless providers do not yet know which wireless applications will prove to be truly useful.

Third, the capacity and cost of the very backbone of the network will evolve dramatically over the next years, bringing new architectures, lower costs, ongoing experimentation, and new applications. Current experiments with Internet 2 suggest that we will soon be searching for applications to fill available bandwidth rather than the other way around. There will likely be a veritable tsunami of new capacity that brings technically advanced applications and dropping costs.15

14. Whether the first generation of high-bandwidth low-latency connections will be cable modem, DSL, or wireless connections will be a matter of market competition heavily influenced by policy choices. But the connections will arrive quickly. And there are subsequent generations of still higher bandwidth connections on the horizon. Kim Maxwell forecasts video-on-demand beginning in 2003 and fiber optic cable to the home starting around 2015. See Maxwell (1999). Note, however, that as of the end of 1999, fewer than 2.5 million people worldwide had broadband connections to the Internet. See "Internet Access Technology Moving to the Masses, Reports Cahners In-Stat Group" (www.instat.com/pr/2000/mm9914bw_pr.htm [January 2000]). For an analysis of the importance of low-latency and high-speed connections in making the Internet useful, see Jakob Nielsen, "Usable Information Technology" (www.useit.com/ [January 2000]).
15. Vinod Khosla, "The Terabit Tsunami" slide presentation, Kleiner Perkins Caufield & Byers, Baltimore, December 17, 1999 ([email protected]).
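Two quantitative points in this section can be made concrete with a little arithmetic: the host counts quoted from the Internet Software Consortium survey in note 13, and the value-grows-as-n-squared rule attributed to Metcalfe above. The sketch below is ours; the n-squared rule is treated as the stylized heuristic the text describes rather than a measured relationship, and the growth rates are approximate because the survey dates fall in different months of the years shown.

```python
# Rough growth arithmetic using the host counts cited in note 13, plus the
# stylized "value ~ n^2" rule attributed to Metcalfe in the text. The
# calculations are ours; treat them as illustration, not measurement.

host_counts = {
    1981: 213,          # August 1981
    1990: 300_000,      # October 1990
    1999: 56_000_000,   # July 1999
    2000: 93_000_000,   # July 2000
}

def annual_growth(n0: int, n1: int, years: int) -> float:
    """Compound annual growth rate between two host counts."""
    return (n1 / n0) ** (1 / years) - 1

years = sorted(host_counts)
for y0, y1 in zip(years, years[1:]):
    rate = annual_growth(host_counts[y0], host_counts[y1], y1 - y0)
    print(f"{y0}-{y1}: roughly {rate:.0%} growth per year in hosts")

# Under the n^2 heuristic, the "value" of the network rises far faster
# than its size: a 187x rise in hosts implies a rise of roughly 35,000x
# in the stylized value measure.
n0, n1 = host_counts[1990], host_counts[1999]
print(f"hosts up {n1 / n0:.0f}x from 1990 to 1999; "
      f"n^2 value measure up {(n1 / n0) ** 2:,.0f}x")
```

The point of the exercise is only that host counts compounding at well over 50 percent a year, squared under the stylized rule, produce the kind of explosive change in network value the chapter describes.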
Networks Transform Organizations

But the full story of the information transformation cannot be told just by recounting the sequence of technologies. A focus on the numbers that describe technological advance and diffusion hides much of the real story: how the growth of an information network will transform organizations and the dynamics of competition. It is not just imprecise but fundamentally misleading to measure this transformation through estimates of "e-commerce" or the "Internet economy." One set of numbers places the Internet economy at $300 billion in 1998 and $400 billion in 1999, accounting for 1.2 million jobs.16 Another set of numbers reports an Internet economy only one-third that size.17 Of course, much of the difference springs from where different analysts draw the line between "Internet" and "non-Internet." But to our minds, the major lesson is that it is already becoming impossible to talk about an "Internet economy" per se. There soon will be no slice of the economy that can be carved out of the rest and assigned to the "Internet," if there is such a thing today. Instead, all of the economy will be linked to the Internet.

16. These are the results from a series of Cisco-sponsored studies of the "Internet economy." See the University of Texas's Center for Research in Electronic Commerce, "Measuring the Internet Economy" (www.Internetindicators.com/indicators.html [February 2000]).
17. See Robert Atkinson and Randolph Court, 1999, "The New Economy Index" (Washington: Progressive Policy Institute) (www.neweconomyindex.org/).

Every business organization and consumer marketplace can make use of the information processing and communications tools that constitute this current wave of technological advance. The question then becomes: how will the entire economy be linked into information processing and data communications? The short answer is that we do not yet know (although there is a huge industry in business books that try to make the case that we do). There are several broad analytics that are suggestive. Nicholas Negroponte in 1996 stressed that, as the costs of transporting and transforming physical goods can only come down so far while the costs of transporting and transforming information can approach zero, there are powerful incentives to convert as much of the economy as possible from "atoms" to "bits." Graciela Chichilnisky makes a related argument about how knowledge-intensive growth can replace resource-intensive growth.18

But we know that information (or knowledge) is not uniformly communicable. Markets for knowledge are no more self-organizing than are markets for goods. Information does not "want to be free" any more than it "wants" to be anything else—it responds (or more precisely, those who create or control information respond) to incentives that are set in markets and in policy. Some human and economic processes clearly benefit from making a transition away from physical space into information space. For example, it will be much easier to design pharmaceuticals in the realm of information—that is, with advanced knowledge of the human genome at hand, allowing us to "build" custom chemical interventions tailored to the genetic locus of a disorder—than it is to test a random assortment of chemicals in a test tube to see which cells they kill and which they do not. Music does not suffer from transmission in a digital form (so long as it can be "reassembled" perfectly at the other end). But can the same be said of emotion? Of the unique experience of a perfect meal at a fabulous restaurant? Or even of the economist's stock example of a local service, the simple haircut?

The last few years have produced lots of anecdotes and some systematic evidence of a small portion of the kinds of changes we should expect. Traditional businesses that act as intermediaries—like stockbrokers and travel agents—will be irrevocably altered. Traditional products like automobiles will be marketed and serviced in new ways. Stores will not likely disappear, but the mix of stores and what stores do will change. New ways of reaching customers in both time and space will in turn drive new ways of organizing production and delivering goods to consumers. Today we can see a range of strategic experiments, in the form of new companies trying to exploit the web and established companies trying to defend their positions.19 But we simply do not know which of these experiments in corporate information and network strategy will be successful. All business plans are predictions, and all these predictions will be wrong.

18. Negroponte (1996); Chichilnisky (1998).
19. For a brief survey of some of these experiments and their consequences, see Froomkin and DeLong (2000).
The Future: The Emergence of the E-conomy

In the real world, technology uptake and utilization by businesses, governments, and consumers is nearly unpredictable. Uses emerge within a process of search and experimentation—and may well be something that we do not now expect.20 Economic historian Paul David points out that it took nearly half a century for business users to figure out the possibilities for increased efficiency through factory reorganization opened up by the electric motor.21 Finding the most valued uses for the next wave of computer and communications technology may not take quite as long, but it will take time and probably a longer time than many expect. An era of profound experimentation is a natural and desirable thing. Changes in the powers and capabilities made available by modern information technologies are redefining efficient business practices and sustainable market structures. They are redefining which activities belong inside a firm and which can be purchased from outside. They are changing business models and market structures. Those changes are only beginning. It is anyone's guess and any player's bet what the final outcome will be. From a market ecology perspective, the more broadly we experiment and allow failures to emerge, the faster we will learn. Of course, as in any competitive ecology, there is sure to be significant roadkill along the way.

In the mid-1990s proprietary online information and communication services were said (with great certainty) to be the killer application. In 1998 selling things like pet food over the web to individual consumers was said, with equal certainty, to be "it." In 2000 it was business-to-business (B2B) auctions. At each starting moment there were compelling arguments about why this particular application was the "right" one. A year later there were equally compelling arguments about why it was totally "wrong." Part of this intellectual churn can be written down to media hype and the herd psychology of venture capital. But the more important part stems from a more profound cause. The uncertainty is fundamentally real, not just a function of faulty or hasty thought. What we know for sure is simply that at almost every stage up to today, the killer application of each wave of technological innovation has been a surprise.

20. In part because these elements of economic destiny are not an equilibrium position predictable in advance but are path-dependent. See David (1993); Rosenberg (1996); and Dosi and others (1992).
21. David (1991, pp. 315–47).
The E-conomy Unfolds: Innovations in Organization and Business Practice

Technology and innovations in business organization and practice are yoked together—each pulls the other forward. Just as technology usually advances through experimental trial and error, innovations in business practice evolve out of day-to-day efforts to resolve real problems or take advantage of perceived opportunities. Organizations have their own ecology. Out of the swirl of fads, frustrations, tactics, and strategies—like just-in-time, total quality, downsizings, knowledge management, outsourcings, strategic alliances, mergers, demergers, spin-offs and start-ups—has emerged a new reality. The ecology as a whole constitutes a rapidly entrepreneurial environment that is able to innovate and commercialize at much faster speeds than before. This rests on at least two important and interrelated changes in business practice: new responses to the "innovation dilemma" and to the "production challenge."
Resolving the Innovation Dilemma

It is often the case that large established firms are not very good at fully developing and commercializing technologies that disrupt their existing markets and procedures. The reasons are endemic to large organizations. Parts of a large company, often the biggest and most powerful parts, are not eager to contemplate the risky development of a new technology that could end up cannibalizing their market and destroying their division. Typically, that group will doubt the feasibility, the reliability, and the marketability of the potential technology. New markets are hard to imagine and harder even to assess quantitatively. Ironically, the more effectively a company is tied into its network of customers and suppliers, the more likely it is to sustain a course of innovation that maintains its position within existing markets and technologies. Thus the less likely it will be to undertake radical innovation. This often looks like a winning strategy. After all, substantial enhancements to existing product lines can generate considerable returns. This creates an innovation dilemma. Companies that are responsive to their customers actually risk getting locked into a set of arrangements that precludes them from grasping the competitive advantages of innovation.22

22. See Christensen (1997).
The examples are legion. AT&T asserted that an Internet-style communications system was impractical. Motorola, the leader in analog mobile phones, missed the step in the shift to digital. IBM missed Internet routers. Microsoft came late to the web browser, web server, and web development tools.23 The dilemma is particularly poignant when an established company generates the technology but is unable to capture its value. The creation at Xerox PARC of the functioning Graphical User Interface, the page description language, the Ethernet—and their commercial exploitation by others (Apple and Microsoft, Adobe, 3Com)—is simply one of many examples of breakthrough technology lost inside of excellent established companies.24

This organizational dilemma is a major reason why start-ups and entrepreneurial companies have been the drivers of much of the radical innovation in the transition to an e-conomy. These companies have defined and developed new industries. They are the major source of Schumpeterian competition, which simply bypasses price competition in existing markets to build a business through radical innovation.

Entrepreneurial start-up companies, however, face substantial obstacles. They require money, help developing business plans and strategies, supplier contacts, access to clients, legal advice, production and logistics services, and so on. The list of things that start-ups need but cannot generate easily from their own resources is very long. America in the 1980s and 1990s built up a business environment that made it not only possible but in many cases easy and straightforward to establish an entrepreneurial start-up. Early venture money paved the way in making available the funds to start and develop a company. Changes in the prudent-man rule allowed institutional money to enter the venture business and so greatly enlarged its scale.25 The scale of investment changed, and funds were suddenly available for the venture world to move from niche to centerpiece. In a similar fashion, the growth of compensation through stock options that reward success with stunning wealth allowed founders to share a significant portion of the risk and rewards of a new company with like-minded employees. The institution of stock options meant that a cut in pay and a move across country could suddenly represent an opportunity, not a failing—if the reward were a share in value of a venture start-up. And large established firms followed by seeking ways to encourage and to participate in spin-outs, start-ups, and venture funds.26

These elements make up part of a "Silicon Valley System" (that is no longer geographically limited to Silicon Valley, of course). It is a set of social institutions (such as research universities, venture capitalists, and specialized law firms) and market institutions (such as an extremely flexible labor market, incentive compensation, financial capital, and ultra-high-skilled people from the entire world)—institutions that together make it possible for an entrepreneurial company to bring innovations to market quickly and at scale. This new industrial-economic system has become a critical growth engine for the world and a strong source of comparative advantage for America—and will be until it is successfully imitated elsewhere.27

23. See Ferguson (1999).
24. See Hafner (1996); Hiltzik (1999); and Smith and Alexander (1988).
25. See Lerner (1999).
26. The United States has been particularly successful in facilitating the activities of venture capitalists. In 1999 venture capital investments reached record levels in the United States, amounting to $48.3 billion, which represents a 152 percent increase over the $19.2 billion figure for 1998. Internet-related firms attracted two-thirds of this sum for 1999 with $31.9 billion. Northern California, with $16.9 billion in venture funds, was by far the largest regional recipient of venture largess, almost double the next largest region, the Northeast. See Venture Economics News, February 8, 2000 (www.securitiesdata.com/news/news_ve/1999VEpress/VEpress02_08_00.html [February 2000]); and xent.ics.uci.edu/FoRK-archive/august97/0400.html (February 2000).
27. On the institutional ecology of the Silicon Valley system, see Kenney and von Burg (1999); Saxenian (1994); and Cohen and Fields (1999).
The Production Challenge Unexpectedly and abruptly in the 1980s, Japanese consumer durable and electronics products surged into American markets. Previous import surges in labor-intensive products such as shoes, apparel, and low-end assembled goods such as toys had forced significant reorganization in American industry. But they did not challenge the sense that American producers and production methods defined advanced manufacturing and advanced industry. The Japanese challenge was fundamentally different. Japanese competitive strength (particularly in autos and electronics) was the result of fundamental innovations in a “lean production system” that simultaneously eliminated inventories and their costs, permitted constant quality improvement, and reduced cost. The shock of a basic challenge to position in the symbol of the industrial age, the auto, and the symbol of the emerging electronic age, the basic memory chip, was considerable. It forced American and European 26. The United States has been particularly successful in facilitating the activities of venture capitalists. In 1999 venture capital investments reached record levels in the United States, amounting to $48.3 billion, which represents a 152 percent increase over the $19.2 billion figure for 1998. Internetrelated firms attracted two-thirds of this sum for 1999 with $31.9 billion. Northern California, with $16.9 billion in venture funds, was by far the largest regional recipient of venture largess, almost double the next largest region, the Northeast. See Venture Economics News, February 8, 2000 (www. securitiesdata.com/news/news_ve/1999VEpress/VEpress02_08_00.html [February 2000]); and xent. ics.uci.edu/FoRK-archive/august97/0400.html (February 2000). 27. On the institutional ecology of the Silicon Valley system, see Kenney and von Burg (1999); Saxenian (1994); and Cohen and Fields (1999).
, , ,
producers to fundamentally reorganize their production and business practices.28 This was a messy business, made more difficult by a severely overvalued dollar in the mid-1980s. The short-term result was the hollowing-out of large chunks of American manufacturing capacity—and in the process the destruction of a lot of valuable human- and firm-specific capital.29 Nevertheless, in the medium term American companies proved remarkably successful at adopting their own version of “lean production” innovations. The Japanese manufacturers may have taught American producers a painful lesson, but the American producers really learned. By the mid-1990s—with a stronger yen and reconfigured American manufacturing processes—the balance of manufacturing advantage in hightechnology industries appeared much more even. The eclipse of the Japanese challenge came about partly because the leading edge of consumer electronics shifted from broadcast-entertainment— TVs, VCRs, radios and related products—to wireless- and computer-based products where U.S.–based producers had set standards. Partly it came about because companies such as Hewlett Packard (HP) now understood the long-run benefits from learning by doing and how large the benefits were that came from controlling the low end of a market through highquality volume production, even if cost accountants told top managers that low-end margins were low. With the inkjet printer, HP dominated the market by systematically defending the bottom end of the market as it introduced new low-cost products.30 But a larger part of the change came with a finer division of labor. Producers discovered that they could lower their costs by concentrating on what they did best and contracting to buy the rest from those with a firmspecific advantage in productivity or a nation-specific factor-cost-based comparative advantage. Outsourcing across borders, a cross-national pro-
28. The reorganization of manufacturing techniques and the principles of lean production are described in detail in Womack and others (1990). For a somewhat more critical view of the lean production system, see Kenney and Florida (1988). 29. See Lawrence (1984); “The Hollow Corporation,” Business Week, March 3, 1986, pp. 57–85; Harrison and Bluestone (1982); and Piore and Sabel (1984). 30. HP introduced the inkjet printer to maintain market share in less-expensive printers. Maintaining inkjet market share increased HP’s bargaining position vis-à-vis Canon, the supplier of the laser printing engine itself. (The situation is further complicated by the fact that the inkjet printer was HP-developed technology while the laser printer was not, and HP had a bias toward invented-here technology.) Nevertheless, this strategy contrasts with the classic strategy of defending the high end of the market. See Cohen and Zysman (1987).
duction system, and the emergence of contract manufacturing have been at the heart of the solution of the production dilemma. Better communications have enabled firms to implement this “outsourcing” strategy. The ability to use modern data communications networks to transmit information allows client firms to specify in great detail what, exactly, they want their contractors to do. In a previous generation, with information flow limited to telephone, fax, mail, and air couriers, a lot of tacit knowledge was necessary in order for work to be distributed. This included knowledge, for example, about how the client branch of the organization would use the output and what the client organization’s default operating procedures were. Such tacit knowledge could best be gained through long experience. Hence large multidivisional enterprises that allowed the building within the enterprise of this tacit knowledge were an attractive organizational form. The increase in bandwidth has allowed explicit directions and thick presentation of the overall project to substitute in considerable measure for tacit knowledge and experience. It has allowed for a much finer division of labor and the creation of what we now call contract manufacturing. Because the world’s nations are so highly differentiated in terms of labor skills and labor costs, the greatest benefits to producers from the finer division of labor may well come from the possibility of extending the firm’s division of labor across nations. The development of a truly innovative production system took place in several stages.31 First came the shift from a market dominated by integrated producers to one in which firms located anywhere in the disintegrated value chain can potentially control the evolution of key standards and in that way define the terms of competition—not just of their particular segment, but critically in final product markets as well. Market power shifted from the assemblers (such as Compaq, Gateway, IBM, or Toshiba) to key producers of components (such as Intel); operating systems (such as Microsoft); applications (such as SAP, Adobe); interfaces (such as Netscape); languages (such as Sun with Java); and to pure product definition companies (like Cisco Systems and 3COM). What all of these firms have in common is that, from quite different vantage points in the value chain, they all own key technical specifications that have been accepted as de facto product standards in the market. This was a key signal of how
31. Sturgeon (1999); Sturgeon (1997a); and Sturgeon (1997b).
disruptive start-up companies began to define the direction and fate of the industry.32

Second, companies that had found production a weakness began to outsource both component production and assembly. New highly flexible and adaptable production systems emerged out of this process. Cross-National Production Systems (CNPS) is a convenient label to apply to the consequent disintegration of the industrial value chain into constituent functions that can be contracted out to independent producers wherever those companies are located in the global economy. And such independent producers can locate wherever factor costs and local levels of technological development provide a comparative advantage.33 CNPSs take advantage of an increasingly fine division of labor both between firms and between nations. The networks permit firms to weave together the constituent elements of the value chain into competitively effective new production systems while facilitating diverse points of innovation. They are not principally about lower wages as such, nor about access to markets and natural resources—although these objectives often motivated initial investments. Rather they are about the emergence of locations that can deliver different mixes of technology and production at different cost-performance points.

Third, and perhaps most important, CNPSs imbued supply chain management with a strategic meaning. This set the stage for companies such as Dell to integrate marketing and production and convert themselves into service businesses tying the design, production, and delivery of the product directly to the customer.34 But there is still a physical product at stake,

32. On the theoretical and historical process of standard setting, see David (1987); and David and Greenstein (1990). David makes a distinction between “standards agreements” that are negotiated and “unsponsored standards” that arise more generally, even spontaneously, in competitive environments. While many of these unsponsored standards may emerge as optimal solutions to specific technological problems, it is sometimes the case that standards result from initial first-mover advantage, that is, from initial specifications of a new technology established by start-up firms. Once established, standards create positive feedbacks, lock-in, and path dependence owing to high switching costs that ensue as standards diffuse. David (1993); see also Shapiro and Varian (1999). On the role of users in standard setting, see Borrus and Zysman (1997). 33. On cross-national production systems and networks of companies, see Stephen S. Cohen and Michael Borrus, “Networks of Companies in Asia,” Berkeley Roundtable on the International Economy, 1996 (brie.berkeley.edu/courses/sc/cp221/pascal.pdf [April 13, 2001]); Gereffi and Korzeniewicz (1994); Reich (1994); and Cohen and Guerrieri (1994). 34. On the reconfiguration of value chains and the new links emerging within firms between production, procurement, and sales, see Kenney and Curry (1999b); “Business to Business E-Commerce” (1999); “The Net Imperative: Business and the Internet,” Economist, June 26, 1999, pp. 5–40; U.S. Department of Commerce (1998), app. 3; and Kenney and Curry (1999a).
and much of the service that a company like Dell provides lies in organizing the production, marketing, delivery, and customization of that product for a particular need. To recognize how manufacturing and production still matter in the world of e-commerce means grasping the evolving place that production has in a system set up to deliver either a product or a service to a consumer.
Tracking the Transformation

The e-commerce transformation represents a series of remarkable opportunities for businesses, governments, and other organizations to remake themselves, re-create what it is that they can do, and reconstruct their relationships with customers, citizens, and constituents. It is also a remarkable opportunity for social scientists. This is not a separate research domain for a small and specialized group of observers interested in business evolution and the politics of technological change. It is not simply a productivity phenomenon (of greater or lesser magnitude). It is a social, economic, organizational, legal, and political phenomenon all at once—and may yet be more than that, extending to a phenomenon of consciousness as well.

It is clear that a book like this is only a start in what will and should be a much broader process of understanding the transformation. Our analytic approach at this stage is middle of the road. Between relatively esoteric debates over the precise macroeconomic measures of productivity and hugely speculative and vague arguments about radical changes in society and consciousness, there lies a more grounded analytic approach based in empirical research about ongoing and foreseeable changes in business practices within sectors. It is possible to extract from that data a set of general themes. It is also possible to set some contours for a research strategy moving forward.

Technological tools are developing much more quickly than are the human and organizational systems that make use of them. Governance policies follow, typically, yet another step behind. Yet expectations about policy as well as real changes in policy set parameters around experimentation with business models. Since business model transformation is the central organizational driver of the transformation, these policy choices are key to the way in which the revolution unfolds.

Libertarian fantasies of cyberspace as a policy-free zone are (thankfully) a thing of the past. The question now is what kind of governance and from
where? Clearly a set of existing rules is not easily adapted to this new environment. To govern the e-conomy will mean updating old understandings, rules, and bargains all at once. And much of this will have to be done globally, or at least internationally, as well as domestically. It is a tough agenda. We hope the sectoral studies in this book serve to clarify some of the issues that will be dealt with along the way.
References Anderson, Margo. 1988. The American Census. Yale University Press. Bar, François, and others. 1999. “Defending the Internet Revolution: When Doing Nothing Is Doing Harm.” Working Paper 12. Berkeley Roundtable on the International Economy (August). Baran, Barbara E. 1986. “The Technological Transformation of White Collar Work: A Case Study of the Insurance Industry.” Ph.D. dissertation, University of California, Berkeley. Borrus, Michael, and François Bar. 1994. “The Future of Networking.” Research Paper. Berkeley Roundtable on the International Economy. Borrus, Michael, and John Zysman. 1997. “Globalization with Borders: The Rise of Wintelism as the Future of Global Competition.” Industry and Innovation 4 (2): 141–66. “Business to Business E-Commerce.” 1999. Business 2.0 (September): 84–124. Campbell-Kelly, Martin, and William Aspray. 1996. Computer: A History of the Information Machine. Basic Books. Chichilnisky, Graciela. 1998. “The Knowledge Revolution.” Journal of Trade and Economic Development 7 (1): 39–54. Christensen, Clayton M. 1997. The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail. Harvard Business School Press. Cohen, Stephen S., and Gary Fields. 1999. “Social Capital and Capital Gains in Silicon Valley.” California Management Review 41 (2): 108–30. Cohen, Stephen S., and Paolo Guerrieri. 1994. “The Variable Geometry of Asian Trade.” Working Paper 70. Berkeley Roundtable on the International Economy. Cohen, Stephen S., and John Zysman. 1987. Manufacturing Matters: The Myth of the PostIndustrial Society. Basic Books. David, Paul A. 1987. “Some New Standards for the Economics of Standardization in the Information Age.” In The Economic Theory of Technology Policy, edited by Partha Dasgupta and P. L. Stoneman. Cambridge University Press. ———. 1991. “Computer and Dynamo: The Productivity Paradox in a Not-Too-Distant Mirror.” In Technology and Productivity: The Challenge for Economic Policy, 315–47. Paris: OECD. ———. 1993. “Historical Economics in the Long Run: Some Implications for Path Dependence.” In Historical Analysis in Economics, edited by Graeme Donald Snooks, 29–40. London: Routledge. David, Paul A., and Shane Greenstein. 1990. “The Economics of Compatibility Standards: An Introduction to Recent Research.” Economic Innovation and New Technology 1 (1): 3–41.
Dosi, Giovanni, and others, eds. 1992. Technology and Enterprise in Historical Perspective. Oxford: Clarendon Press. Ferguson, Charles. 1999. High Stakes, No Prisoners. Times Books. Flamm, Kenneth. 1987. Targeting the Computer: Government Support and International Competition. Brookings. ———. 1988. Creating the Computer: Government, Industry, and High Technology. Brookings. Froomkin, A. Michael, and J. Bradford DeLong. 2000. “Some Speculative Microeconomics for Tomorrow’s Economy.” First Monday 5 (2). Gereffi, Gary, and Miguel Korzeniewicz, eds. 1994. Commodity Chains and Global Capitalism. London: Praeger. Hafner, Katie. 1996. Where Wizards Stay up Late: The Origins of the Internet. Simon and Schuster. Harrison, Bennett, and Barry Bluestone. 1982. The Deindustrialization of America: Plant Closings, Community Abandonment, and the Dismantling of Basic Industry. Basic Books. Hatch, Nile W., and David C. Mowery. 1998. “Process Innovation and Learning by Doing in Semiconductor Manufacturing,” part 1. Management Science 44 (11): 1461–77. Hiltzik, Michael. 1999. Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age. New York: HarperBusiness. Joy, Bill. 2000. “Why the Future Doesn’t Need Us.” Wired 8 (4): 238–62. Kenney, Martin, and James Curry. 1999a. “Beating the Clock: Corporate Responses to Rapid Change in the PC Industry.” California Management Review 42 (1): 8–36. ———. 1999b. “E-Commerce: Implications for Firm Strategy and Industry Configuration.” Working Paper 2. Berkeley Roundtable on the International Economy. Kenney, Martin, and Richard Florida. 1988. “Beyond Mass Production: Production and the Labor Process in Japan.” Politics and Society 16 (1): 121–58. Kenney, Martin, and Urs von Burg. 1999. “Technology, Entrepreneurship and Path Dependence: Industrial Clustering in Silicon Valley and Route 128.” Industrial and Corporate Change 8 (1): 67–103. Lawrence, Robert Z. 1984. Can America Compete? Brookings. Lerner, Josh. 1999. The Venture Capital Cycle. MIT Press. Lundvall, B. A. 1985. Product Innovation and User-Producer Interaction. Aalborg University Press. ———. 1988. “Innovation as an Interactive Process: From User-Producer Interaction to the National System of Innovation.” In Technical Change and Economic Theory, edited by Giovanni Dosi and others, 349–69. London: Pinter. Maxwell, Kim. 1999. Residential Broadband: An Insider’s Guide to the Battle for the Last Mile. John Wiley and Sons. McKenny, James. 1995. Waves of Change: Business Evolution through Information Technology. Harvard Business School Press. Mowery, David, and Nathan Rosenberg. 1998. Paths of Innovation. Cambridge University Press. Negroponte, Nicholas. 1996. Being Digital. Vintage Books. Nooteboom, Bart. 1999. “Innovation, Learning and Industrial Organization.” Cambridge Journal of Economics 23 (2): 127–50. Piore, Michael, and Charles Sabel. 1984. The Second Industial Divide. Basic Books.
Reich, Robert. 1994. The Work of Nations. Basic Books. Rosenberg, Nathan. 1996. “Uncertainty and Technological Change.” In Technology and Growth, edited by Jeffrey C. Fuhrer and Jane Sneddon Little, 91–110. Federal Reserve Bank of Boston. Saxenian, AnnaLee. 1994. Regional Advantage: Culture and Competition in Silicon Valley and Route 128. Harvard University Press. Shapiro, Carl, and Hal Varian. 1999. Information Rules: A Strategic Guide to the Network Economy. Harvard Business School Press. Slaughter, Sarah. 1993. “Innovation and Learning during Implementation: A Comparison of User and Manufacturer Innovations.” Research Policy 22 (1): 81–95. Smith, Douglas, and Robert Alexander. 1988. Fumbling the Future. Morrow. Standage, Tom. 1998. The Victorian Internet. New York: Berkley Books. Stephenson, Neal. 1996. “Mother Earth, Mother Board.” Wired 4 (12): 98–160. Sturgeon, Timothy J. 1997a. “Does Manufacturing Still Matter? The Organizational Delinking of Production from Innovation.” Working Paper 92B. Berkeley Roundtable on the International Economy. ———. 1997b. “Turnkey Production Networks: A New American Model of Industrial Organization?” Working Paper 92A. Berkeley Roundtable on the International Economy. ———. 1999. “Turn-Key Production Networks: The Organizational Delinking of Production from Innovation.” In New Product Development and Production Networks. Global Industrial Experience, edited by Ulrich Juergens. Berlin: Springer Verlag. U.S. Department of Commerce. 1998. The Emerging Digital Economy. Watson, Thomas Jr., and Peter Petre. 1990. Father and Son and Company. London: Bantam Press. Womack, James P., and others. 1990. The Machine That Changed the World. Simon and Schuster. Yates, JoAnne, and Robert I. Benjamin. 1991. “The Past and Present as a Window on the Future.” In The Corporation of the 1990s: Information Technology and Organizational Transformation, edited by Michael S. Scott Morton, 61–92. Oxford University Press.
2
The Construction of Marketplace Architecture
An agent is at hand to bring everything into harmonious cooperation, triumphing over space and time, to subdue prejudice and unite every part of our land in rapid and friendly communication . . . and that great motive agent is steam.
At the end of 1994, the commercial release of the first-version Netscape browser and Netsite server signaled the transformation of the Internet from an elite network reserved for advanced research and academics into a mass medium. As the number of dot-com sites quickly outpaced the dot-mil and dot-edu pioneers, users began to see the Internet’s tremendous potential for transforming commercial interactions. Sweeping predictions accompanied this transition. The Internet was going to transform economic activity, yielding significant increases in efficiency and productivity. Fueling this confident optimism was the expectation that the Internet would usher in perfect markets that would in turn replace traditional, inefficient corporate hierarchies and supply chains. Internet-based
market processes would yield flatter organizations, disintermediate economic relationships, and give equal power to all market participants. To a large extent, these hopes were rooted in the characteristics of Internet technology and of the innovation process that saw its emergence. The Internet’s development was user-driven and bottom-up. This resulted in a decentralized network, where any node can become part of the internetwork as soon as it speaks “Internet protocol” (IP), the common language. Governance of the Internet was itself decentralized, largely outside the hands of traditional government institutions, and carefully watched by a wide array of private individuals and institutions. Unlike previous communication networks, the Internet seemed largely self-governing and allowed any individual or organization to participate on an equal footing. The hopes were that these characteristics of the technology would simply carry over to the economic processes that make use of the Internet—that decentralized technology would naturally lead to decentralized outcomes in the use of the technology. The Internet’s liberating technology would drive market structure: the low cost of adoption and the endless range of new applications would lower barriers to entry, decentralize economic power, and thereby democratize society and empower individuals.1 With the benefit of hindsight, we know today that previous infrastructures, such as the railroad, fundamentally transformed the structure and efficiency of our economies. Indeed, “steam-commerce,” as the economic transformation brought on by the railroads might have been called, saw profound reorganization of productive activities within integrated multidivisional corporations. It led to sweeping restructuring of supply chains, markets for raw materials, and finished products. It allowed firms to draw on vastly broader labor markets and sources of inputs. This resulted in the reorganization of marketplaces around the possibilities created by the railroad.2 The real dimensions of such change are always difficult to gauge while the transformation is under way. Several years after the onset of the commercial Internet, as some dot-com pioneers stumble, we began to see how some early predictions missed the mark. The true economic impact of the
1. This tendency to assume that a decentralized technology naturally leads to decentralized and democratic uses is what we have described as “the Jeffersonian Syndrome.” François Bar, John Richards, and Christian Sandvig, “The Jeffersonian Syndrome: The Predictable Misperception of the Internet’s Boon to Commerce, Politics, and Community” (www.stanford.edu/~fbar/Publications/jeffersoniansyndrome.PDF [March 2000]). 2. Chandler (1977).
Internet still remains to be seen, but, with some initial Internet history to draw from, it is worth critically revisiting the early hopes.
Promises and Reality

Viewing optimistically the inherently decentralized and democratic characteristics of early Internet technology, analysts predicted that, applied to commercial endeavors, it would transform marketplace communication and bring us closer to the ideal of a “perfect market”: multiple buyers, multiple sellers, many interchangeable products, all smoothly and swiftly converging toward equilibrium thanks to perfect information. With the Internet, it was argued, market participants can know everything there is to know about the prices, characteristics, and quantities of goods in the market and make instantaneous, perfect, rational decisions. The result was to be “a new world of low-friction low-overhead capitalism, in which market information will be plentiful and transaction costs low.”3 Easy entry and easy exit afforded by cheap, flexible Internet technologies would keep incumbent players constantly on their toes, in the best interest of economic efficiency. “Any product that resembles a commodity—and most do—will be driven down in price by the efficiency of the Internet as a marketplace.”4 At the end of the day, analysts promised that we could now get very close to Adam Smith’s ideal, perfect market. “There is a fundamental shift in power, and it’s shifting to the consumer.”5

Today, however, an economic reality has emerged that diverges substantially from these predictions. Far from a multitude of interchangeable participants, concentration seems the rule in many segments of e-commerce, where only the largest actors seem able to succeed. While the Internet has reduced overhead and transaction costs, e-commerce players have required considerable investment to survive. Far from a “friction-free” environment, e-commerce sites strive to create “stickiness” that will keep their customers from clicking on to their competitors. Overall, while the Internet has had significant impact on commerce, the resulting economic landscape reveals
3. Gates (1995, p. 158). 4. Bill Gates, “Friction-free Capitalism and the Price of the Future,” May 20, 1998 (www.microsoft.com/billgates/columns/1998Essay/5-20col.asp [December 1999]). 5. Ferguson, cited in R. Quick, “The Attack of the Robots: Comparison-Shopping Technology Is Here—Whether Retailers Like It or Not,” Wall Street Journal, December 7, 1998, p. R14.
many unanticipated features. The early expectations rested on three key assumptions: low entry barriers, decreased roles for intermediaries, and lower transaction costs. It is worth examining where they stopped short before moving on to an analysis of the new landscape. First, the Internet was expected to shatter entry barriers. New players, able to marshal virtual resources in place of real ones, were expected to compete on par with incumbents. With the Internet, no need to build stores or hire a sales force—a cleverly designed website would suffice. No need to keep costly inventory—orders would simply be passed on to suppliers for just-in-time delivery. Technology was to abolish entrenched positions as competitive advantage. Small players could be as powerful as large ones. In many sectors, however, the barriers to successful and credible entry remained high. It quickly became apparent that if entering was easy, staying would be more difficult. Anyone could open a bookstore on the Internet. Yet only the best-capitalized bookstores would manage to survive (and even for these, survival still remains an open question). Entrants have discovered that it takes significant resources and skills to maintain an effective web presence, to guarantee that orders received will be filled. Traditional businesses still have the relationships and the marketing expertise that create substantial obstacles to new players’ entry. Entrants had their chance, but over time it turned out that experience, long-standing business relationships, and domain expertise still matter. In fact, concentration seems pervasive throughout the Internet world. Whether you look at portals, ISPs, exchanges, or the makers of the underlying network infrastructure, tremendous economies of scale, scope, and network seem to favor the largest players and reinforce concentration. Because of network externalities and economics (large fixed costs, low marginal costs), we are seeing increasing concentration and large players, rather than a multitude of small players. Second, the Internet was to bring disintermediation.6 In the old economy, intermediaries of all kinds performed important functions as information brokers. They aggregated demand for suppliers, giving them a better sense of what the market wanted, and offered buyers a convenient one-stop picture of supply. In the Internet economy, however, a world
6. Malone, Yates, and Benjamin (1987).
where everyone supposedly has access to complete information, intermediaries simply add cost and delay. They could be eliminated now that technology allowed direct connections between buyers and sellers, without need for brokers, market makers, consolidators, and other middlemen. In fact, new intermediaries emerged and old intermediaries adapted. Homebuyers and sellers did not simply bypass realtors but began to use the services of online brokers, some new, but mostly traditional realtors with a new web presence.7 Stockbrokers did not disappear: new brokers like etrade emerged while traditional brokers adapted. There may even be more intermediaries today than before: business-to-business transactions once directly negotiated between two parties now often take place within online marketplaces. Rather than disintermediation, we are witnessing the transformation of intermediation.8

Third, the Internet economy was to be friction-free. In the old economy, communication activities account for a large portion of the costs of transactions. They reflect the trouble and expense of searching for products, identifying the right buyers and sellers, negotiating contracts, invoicing purchasers, billing and collecting. All create market friction and make it burdensome to switch from established commercial relationships. Hierarchical relationships among commercial partners, embodied in long-term contracts or organizational integration, were the old economy’s way to reduce transaction costs.9 The Internet, offering cheap and efficient ways to set up and execute transactions, was expected to reduce that friction, making markets more perfect. In turn, this would lead to greater reliance on markets than hierarchies for the organization of economic activities.

But if the Internet could reduce friction, the same technology can also be deployed to create more of it. Start-ups discovered that friction (or “stickiness,” as their business plans prefer to call it) often is the key to profits.10 Friction, and the resulting market imperfections, creates seller or buyer advantage as well as arbitrage opportunities for traders. Embedding friction within their web offerings, they were able to create switching costs for their customers, either through standards (for example, incompatible instant messaging) or through particular implementations (for example, web-based

7. Buxmann and Gebauer (1998). 8. Sarkar, Butler, and Steinfield (1995). 9. Williamson (1975). 10. Smith, Bailey, and Brynjolfsson (2000).
e-mail that cannot be forwarded).11 Friction-free may be a macroeconomic ideal but makes less sense from the point of view of individual business players. Indeed, friction is where business opportunities are to be found. Instead of friction-free commerce, what emerged was the design of exchange spaces with differential friction. Through this all, a common theme starts to emerge. At the core of the transition toward e-commerce is the emergence of multiple virtual spaces for exchange. These are not trivial to build and to back up with real-world ability to deliver on the agreements they help to negotiate. As a result, there are real barriers to credible and sustainable entry. Far from disintermediation, they constitute the emergence of new intermediaries or the reinvention of old ones. And far from providing friction-free interaction, they represent the careful arrangement of intentional friction. As firms deploy electronic technology to create commercial advantage for themselves, they are striving to alter the terms and dynamics of competition. They are, in the process, re-creating the marketplace.
Mapping a Way through the Transformation

The Internet-based reinvention of markets comes in varied degrees and flavors. As a result, the single label of “e-commerce” covers a wide variety of ways to organize production and exchange activities. Commercial interactions are organized differently in different sectors, often reflecting the preexisting ways of doing business and the position of incumbents. To discern what is really new and analyze the implications, we need to start with a map (see figure 2-1). Commercial activities, whether conventional or electronic, involve four basic levels.12

Communication. Commerce requires communication: buyers and sellers must exchange information about the characteristics of goods and services, about quantities, availability, and prices; firms must coordinate their activities with those of partners and subcontractors. In the most primitive markets, such as farmers’ markets on the central square of medieval towns, communication was interpersonal and unmediated as buyers and sellers negotiated directly with each other. All communication technologies and media have influenced commercial

11. Shapiro and Varian (1998). 12. Bar and Murase (1999). See also Picot and others (1997).
Figure 2-1. Mapping E-Commerce
[Figure: a grid crossing the four levels of commercial activity (deliverable, marketplace, transaction and payment, infrastructure) with four categories of commerce (conventional commerce, net-aided commerce, indirect e-commerce, direct e-commerce), marking which elements at each level are conventional and which are electronic.]
activities. Carrier pigeons and the mails have increased the reach of old marketplaces; telegraph and telephone have helped accelerate the pace of exchange. New communication media will continue to influence the conduct of economic activities as they transform the way in which various economic actors communicate.

Marketplace. Commercial communications do not take place in a vacuum, but in the context of structured coordination environments within which buyers interact with sellers, negotiate, and agree on the terms of a transaction. These marketplaces come in many shapes and forms, from the vast network of fairs in medieval Europe13 to the modern NASDAQ. They share the fact that they are embedded within communication infrastructures that shape and constrain their mechanisms: the characteristics of what is traded and the process for matching demand and supply depend on the communication infrastructure.
13. Braudel (1979, pp. 63–74).
Transaction and payment. These come into play to send, execute, and settle orders (including payments) that have been agreed to in the marketplace. They rely on the features of the communication infrastructure, either to physically transfer a payment or to transmit information about credits and debits to the accounts of buyers and sellers.

Deliverables. Finally, the system’s goal is to supply deliverables, the service or merchandise being exchanged. Here again, the underlying communication infrastructure constrains what goods can be shipped (or transmitted), what services require proximity or can be performed at a distance.

Different degrees of reliance on electronic technologies at these four levels result in four broad categories of commerce (figure 2-1). In pure conventional commerce, nothing electronic is involved. Buyers and sellers physically meet in a market, communicate face-to-face, conduct transactions directly, and settle them with physical currency. The buyer physically takes delivery of the good or service. Since their inception, however, electronic technologies have assisted in the entire range of commercial activities.

A first level of electronic commerce, network-aided commerce, relies on electronic communication technologies to assist traditional commercial activities. The telephone made it possible to conduct traditional transactions at a distance rather than in person, and electronic data interchange (EDI) allowed companies to automate the exchange of orders and invoices. These, however, do not fundamentally change the commercial process; they simply make existing processes faster, cheaper, and more efficient. When a company lets customers pick a product from a website rather than a printed catalog, or takes their orders over e-mail, it uses the Internet as an aid to existing commerce rather than to transform it, continuing that trend.

The next level, indirect e-commerce,14 corresponds to the creation of an electronic marketplace on the network, within which demand and supply are matched, even though the goods and services traded are ultimately delivered physically to a customer. This matching process often differs significantly from what goes on in the physical marketplaces it replaces (or competes with). Airlines’ computerized reservation systems (CRS), such as American Airlines’ SABRE, effectively created an electronic marketplace for airplane trips and ancillary travel services that functioned quite differently from the network of travel agents interacting with airlines over the
14. Committee on Economic and Monetary Affairs and Industrial Policy (1998).
phone it replaced.15 Similarly, E-Bay’s electronic marketplace is much more than the automation of newspaper classified ads and provides a new process for pairing up buyers and sellers.

Finally, direct e-commerce is purely electronic: the goods or services traded are themselves electronic and delivered over a network. This includes the commerce of software in its many forms, from music to computer programs, as well as online stock exchanges. Insurance and services such as aircraft engine maintenance are now traded that way as well. In this purest form, the electronic infrastructure supports the marketplace and the transaction and payment mechanisms as well as the transmission of the traded objects themselves.

This mapping of e-commerce makes two important points. First, the broad category of electronic commerce in fact covers a wide diversity of commercial arrangements, with different degrees of “electronic-ness.” As a consequence, the impact can range from a mere enhancement of traditional commercial activities to fundamentally new ways to structure and implement them. The nature and magnitude of the economic implications will be equally diverse. This makes it crucial to look at real cases in individual sectors in order to understand the diverse implications of the transition to electronic commerce.

Second, the map highlights what is the most transformative aspect of this transition: the emergence of electronic marketplaces. While communication networks have always been an important aid to the market and to market activities, the network itself is now increasingly becoming a marketplace, that is, the place where buyers meet sellers, negotiate prices and quantities, agree on delivery terms, and exchange goods and payments. Thus it is useful to distinguish two categories of economic implications of e-commerce, broadly described by the headings of efficiency and structure. The quest for enhanced efficiency within existing commercial practices has important economic benefits but is not fundamentally new. It was the goal in the application of previous communication innovations to commercial practices: the world has seen previous rounds of mail-commerce, telegraph-commerce, phone-commerce. Each entailed substantial economic benefits, making existing market processes faster and cheaper. But this is still network-aided commerce. The second category is less obvious but more fundamental. The implementation of electronic
15. Hopper (1990). See also chapter 5 by Klein and Loebbecke in this volume.
marketplaces within the network infrastructure shapes the structure of economic relationships between companies and the operation of market processes. The next sections look at these two in turn.
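To make this mapping concrete, the short Python sketch below encodes one plausible reading of figure 2-1; it is an illustration added here rather than part of the original analysis, and the class name, field names, and classification rule are assumptions. A transaction is categorized by how far up the four levels electronic means reach, from mere communication aid to delivery of the good itself.

    from dataclasses import dataclass

    @dataclass
    class Transaction:
        # Which of the four levels of the exchange rely on electronic means.
        electronic_communication: bool   # information exchanged over a network
        electronic_marketplace: bool     # matching of supply and demand done in software
        electronic_settlement: bool      # orders and payments executed electronically
        electronic_delivery: bool        # the deliverable itself transmitted over the network

    def classify(t: Transaction) -> str:
        # The ordering mirrors the chapter's argument: what matters is how far
        # "up" the levels the network reaches.
        if t.electronic_delivery:
            return "direct e-commerce"        # software, music, online stock trades
        if t.electronic_marketplace:
            return "indirect e-commerce"      # matched online, delivered physically
        if t.electronic_communication or t.electronic_settlement:
            return "network-aided commerce"   # EDI, phone or web-catalog ordering
        return "conventional commerce"        # face-to-face exchange, physical payment and delivery

    # A web bookstore: communication, marketplace, and payment are electronic; the book ships physically.
    print(classify(Transaction(True, True, True, False)))   # -> indirect e-commerce

On this reading, a telephone order from a printed catalog falls under network-aided commerce, while a downloaded song or an online stock trade counts as direct e-commerce.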
Not So New: Pursuing Efficiency through Network-Aided Commerce

A first dimension of electronic commerce, and the most visible, simply constitutes the continuation of existing trends: it is the application of electronic communication technologies to existing commercial practices and marketplaces. Like the diffusion of previous communication technologies through commercial activities of the past, it does not in itself represent a fundamental transformation but rather incremental improvement of existing processes.

Communication technologies are key to market processes because market mechanisms are information-processing activities. Markets are structured information exchange environments, where actors convey information about the characteristics of goods and services, their prices and availability. Market mechanisms such as negotiation, matching, and agreement similarly are communication processes. Naturally, every time a new communication technology comes along, it is typically applied to the automation of existing market communication, further enhancing the flow of market-related information.16 Thus, before the Internet, couriers, the telegraph, the telephone, and electronic data interchange (EDI) served to enhance market-related communication—to allow faster, better, broader, cheaper matching of buyers and sellers and the settlement of transactions and payments.17 Time and again, the first step in the application of new technologies has led to further automation of existing marketplaces—designed to improve their operation along existing processes rather than transform these processes.

This is also the case with the Internet. Exchanging information about products and services becomes faster and cheaper. The Internet allows sellers to reach new potential customers, increasing their range, and conversely enables buyers to compare the offerings of a greater set of suppliers. As a result, sellers have developed better ways to gauge demand for their products, to adjust prices accordingly, and to relay these adjustments to the

16. Beniger (1986). 17. Brousseau (1994).
marketplace, leading to more dynamic pricing mechanisms. Internet technologies permit faster and more cost-effective matching of demand and supply, yielding improved market clearing mechanisms. They support better negotiation mechanisms and faster transaction settlement. Sellers have harnessed the features of the network to offer more responsive and more personalized customer service.18 These are all significant improvements. However, similar claims can also be made for any of the previous communication technologies, from the postal network to the telephone. The result, in this round as in the previous rounds, is network-aided commerce, a move toward more efficiency in market processes rather than fundamentally new market processes.

Two characteristics of this transition deserve particular notice. First, these improvements need be neither uniform nor symmetrical. In fact, they are typically implemented strategically. Individual market participants hope to get a leg up on their competitors by deploying information systems that give them faster, better market information or that help them close a transaction faster. Sellers try to make it less attractive for their customers to switch to competing suppliers because of the superior service they can provide thanks to improved communication technology. Buyers try to exert greater pressure on their suppliers by deploying network systems that give them a more accurate vision of their alternatives. In all cases, as with earlier information systems, the purpose is precisely to create advantage over competitors, greater leverage over buyers or suppliers.19 As a result, especially in the early stages, communication technology deployment may improve the efficiency of certain market operations, but this does not in itself make the market more perfect. Improvements will, more likely than not, be unevenly distributed and asymmetrical in the benefits they yield. This obviously calls for a response from those who lost out in the first round of technology deployment. As time passes and the technology matures, marketplace improvements will diffuse and technology-based advantages will tend to cancel out, leaving the benefits of greater efficiency to be shared by all market participants.20 However, we should not expect that transition to be instantaneous.

Second, the unfolding of these improvements represents the first step in a cyclical, evolutionary pattern. This initial implementation of market

18. Hanson (2000). 19. Clemons (1986). 20. Brynjolfsson and Hitt (1996).
automation technology requires the deployment of a new network infrastructure, sometimes within individual organizations, sometimes between market players. Initially, the motivation for this deployment may be strictly aligned with its automation goals and the investment justified on that basis. However, once this new infrastructure is in place, it allows market participants to experiment with possibilities beyond this initial intent, tinkering with other ways to use this network and its applications. This in turn will suggest and enable deeper transformation of market processes, beyond the strict automation that motivated the technology’s initial deployment.21 History has shown how this process ushers in a virtuous innovation cycle, leading to fundamental economic change, and there is no reason to expect this round to be any different. The next section explores the emergence of new market structures, an important step in that direction.
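The dynamic pricing mechanisms mentioned above can be illustrated with a deliberately simple feedback rule, sketched below in Python; the function and its parameters are hypothetical and stand in for systems that would also weigh inventory, competitors’ prices, and demand elasticity.

    def adjust_price(price: float, units_sold: int, units_target: int, step: float = 0.05) -> float:
        # Nudge the price up when recent sales outpace the target, down when they lag.
        if units_sold > units_target:
            return price * (1 + step)
        if units_sold < units_target:
            return price * (1 - step)
        return price

    price = 100.0
    for sold in (120, 130, 80):                  # observed demand in three successive periods
        price = adjust_price(price, sold, units_target=100)
        print(round(price, 2))                   # 105.0, 110.25, 104.74

The point is not the rule itself but the loop it closes: the network supplies the demand signal quickly enough for the seller to relay price adjustments back to the marketplace within the same selling cycle.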
What, Then, Is Really New? Structuring the Electronic Marketplace

The most fundamental transformation of commercial activities through the application of electronic technologies is not primarily about efficiency; it has to do with market and industry structure. It is about architecture.

The architecture of conventional marketplaces, the physical arrangement of their “bricks and mortar,” is never neutral. Ludovic Piette’s depiction of Pontoise’s place du marché in 1876 shows this clearly. The buildings surrounding the square limit the area available for trading, and therefore the number of sellers and buyers who are allowed to take part in market activities. The physical arrangement of stalls constrains the discovery paths buyers can follow, and thus has an impact on what they buy. Sellers occupy different physical positions, display their wares on the ground or in carts, thus affecting their negotiating situation. A physical barrier stops buyers from entering the marketplace until the market is officially open for business.

The specific constraints resulting from this architectural arrangement were somewhat arbitrary. Pontoise’s place du marché could have been organized differently, and indeed, one could find many different physical marketplace designs around the world, from the covered markets of Covent
21. Bar, Kane, and Simard (2000).
Ludovic Piette, Le marché de la place de l’Hôtel de Ville, à Pontoise (1876). Musées de Pontoise.
Garden to the souk of Istanbul. However, while different architectural choices could be made, none was neutral. Each entailed physical constraints that structured the market activities harbored in these spaces. Architecture shaped commerce. Set against the old brick and mortar marketplaces, communication technologies promised freedom from physical constraints. Telegraph and telephone began to allow distant buyers and sellers to participate in markets without being physically present. The Internet promised to liberate marketplaces from the constraints of physical space. There would be no limit to how many buyers and sellers could “fit” in the marketplace, since it was no longer physically bound. Sellers could dream up all kinds of ways to display their wares and design imaginative virtual stands within the software of the electronic marketplace. The market would no longer be held in a specific time zone and nothing demanded that it shut down after dark. With the Internet, the network is the marketplace.22 Not simply a lubricant for the wheels of traditional commerce, the Internet becomes the very place where buyers and sellers meet and transact business. The network, or more precisely the combination of network-based applications and network
22. Gordon (1989).
control software, is the environment within which the various stages of commercial exchange unfold. The network determines market access, since only those who are connected can participate in the market process. It supports discovery, as market players use the network to learn what goods and services are available, at what price, with what characteristics. Buyers and sellers also use the network to find out more about each other, from reputation to creditworthiness and service follow-through. Network-based software carries out the matching of demand and supply, connecting buyers with sellers. Once paired, buyers and sellers use the network to negotiate the precise terms of the transaction they wish to enter into. The network then supports the closing of a transaction and transfer of payment. For electronic products, the network also serves as delivery channel, completing the chain.

For this reason, the transition to e-commerce is more profoundly transformative of economic processes than past transitions such as “steam-commerce”: the information infrastructure and the software that determines its configuration become the foundation for a full set of basic market processes. Market mechanisms then become embedded in the network’s software, and network configuration defines market operations. When the network becomes the marketplace, the information communicated over the infrastructure, the market mechanisms, and the functioning of transaction settlement and payments are all embedded in the software infrastructure that supports the network marketplace and determines its operating parameters. As software, they can adapt flexibly to changing market processes and coevolve with changing economic relationships and organizational forms. Furthermore, control over the configuration of digital networks—the software definition of who can communicate with whom, under which conditions, to do what—is separable from ownership of the underlying physical infrastructure.23 As a result, electronic marketplaces can potentially be designed and modified by a variety of parties, ranging from infrastructure owners to providers of marketplace environments to market players—buyers, sellers, or third parties. The ability to control network configuration then becomes the key to defining the marketplace’s architecture.

The architecture of the network marketplace and the software that supports it are the domain of the private actors that provide the network marketplaces—whether these are online auction or spot markets, online retail
23. Bar (1990).
sales sites, portals for e-commerce transactions, or business-to-business supply chain transactions.

The promise of this newfound freedom led to a pervasive myth about the Internet: no longer bound by real-world constraints, the virtual marketplace would become a “perfect market” where all the inherent biases of the physical world could be overcome. In fact, architectural bias returned with a vengeance. Because the architecture of virtual marketplaces is defined in software, network configuration determines marketplace architecture.24 This means that those who have the power to set network configuration can decide who can participate in the market and what will be the rules of engagement within that marketplace. They can architect a virtual space with open or restricted access, decide to let in select buyers, sellers, or third parties. They can give them equal or differential access to market information. They can decide whether the market will function like an exchange with bid-ask mechanisms, an auction, a brokerage, or simply a catalog. They can structure it so all parties get an equal shot at a transaction or embed preferential treatment for select market players within the software code that governs market clearing.

Control can reside in a variety of places within the network. The “end-to-end” principle,25 which has guided Internet design, argued for implementing software functions, to the extent possible, at the edge of the network (that is, in the computers connected to the network) and in the topmost software layers26 (that is, as independently as possible from the underlying network hardware). According to that model, the communication network is a neutral conduit and all the intelligence resides at the edge, in the servers controlled by the network users. In the pure end-to-end network, control over marketplace configuration therefore belongs to those who control the applications that run on these end devices and the virtual network they constitute. As the Internet matures, it experiences increasing departures from the purest end-to-end principle. Owners of various subnetworks (for example, backbone providers, broadband access providers, or Internet service providers) find it in their competitive interest to embed certain functions (such as security, caching, and mirroring) within the piece of the network they control.27 They do so in part to improve network

24. Just as “Code is Law” (Lessig, 1999a), we might say that “Code is Economics.” 25. Saltzer, Reed, and Clark (1984). 26. See chapter 16 by Michael Kleeman in this volume, describing the network’s layered model. 27. Clark and Blumenthal (2000).
performance, but also because these functions can serve as the building blocks of electronic marketplaces, allowing them to leverage network control into market power. Thus in this emerging network environment, software-based control over network configuration can be found both at the edge and within the network, potentially exercised by multiple parties, jointly or independently.
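A minimal sketch, in Python and entirely hypothetical, makes the preceding point concrete: the same market-clearing routine behaves neutrally or with embedded preferential treatment depending on a single configuration flag, which is precisely the sense in which network configuration determines marketplace architecture.

    from typing import NamedTuple, Optional, List

    class Offer(NamedTuple):
        seller: str
        price: float
        preferred: bool = False    # flag set by whoever configures the marketplace

    def clear_market(bid: float, offers: List[Offer], favor_preferred: bool) -> Optional[Offer]:
        # A "neutral" configuration picks the cheapest acceptable offer; a biased one
        # considers the operator's preferred sellers first, even when a cheaper
        # outside offer exists. The bias lives entirely in the sort key.
        acceptable = [o for o in offers if o.price <= bid]
        if not acceptable:
            return None
        key = (lambda o: (not o.preferred, o.price)) if favor_preferred else (lambda o: o.price)
        return min(acceptable, key=key)

    offers = [Offer("outsider", 9.0), Offer("partner", 9.5, preferred=True)]
    print(clear_market(10.0, offers, favor_preferred=False))   # the outsider wins at 9.0
    print(clear_market(10.0, offers, favor_preferred=True))    # the partner wins despite a higher price

Because the preference is buried in an ordering rule rather than announced in a tariff, a participant on the losing side of such a configuration may never observe the bias directly.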
Modern Marketplace Architecture

The resulting combinations of software-based control open up a virtually limitless array of possible arrangements, far greater than what exists in traditional markets. They allow the builders of electronic marketplaces substantial latitude to follow modern architecture’s dictum—“Form follows function”28—now that software building tools and materials are flexible enough to create marketplaces where design can be subordinated to the pursuit of specific market mechanisms and outcomes. This means that while Internet technology can be used to design perfect markets (or near-perfect markets), it can just as well serve to construct biased ones. And because the exact exchange mechanisms are buried deep within the code of complex network and application software, the true characteristics of these marketplaces may be much harder to divine than those of the farmers’ markets of old. A simple glance at Pontoise’s central square was enough to gauge the strength of a particular market position. Modern travelers would find it much harder to assess their relative position in the various possible e-marketplaces for airplane tickets.29 In this new world, network control is the key to market control.

Consider the marketplaces described throughout the case studies in this volume. They show how Internet exchanges can be configured in very different ways to create many kinds of marketplaces, each aiming at distinct competitive outcomes. The vast majority of electronic commerce today occurs in e-marketplaces deployed by their main players. The research firm Emarketer estimates that

28. Sullivan (1947). 29. In its October 2000 issue, Consumer Reports compared the main online travel sites and concluded: “The results: The ‘lowest fare’ online rates for the same destination were all over the map—sometimes hundreds of dollars apart. Rates also differed from the baseline prices on Apollo, the computer reservation system used by many brick-and-mortar travel agencies. In many cases online rates were higher; in other cases, lower.”
over 93 percent of business-to-business e-commerce today takes place in what it calls “private or proprietary exchanges”—that is, marketplaces controlled by the market’s dominant player.30 These marketplaces, such as those set up by Dell Computer or Wal-Mart, are primarily ways to automate these companies’ existing supply chains.31 While they create competitive pressures among the various nondominant players, for example between competing electronics parts suppliers to Dell, the reverse is not true. They are configured as proprietary exchanges that do not allow these suppliers to compare Dell or Wal-Mart with other potential buyers.

Related to these are marketplaces sponsored by industry consortia rather than a single company. One of the most visible is Covisint, the automotive online exchange created by DaimlerChrysler, Ford Motor Co., and General Motors.32 The presence of competing buyers suggests that this marketplace might be less one-sided than Dell’s supply chain automation. The declared goal of Covisint is to cut the production cost of an average car by 10 percent. Some of these savings are expected from greater efficiency in transactions. Nevertheless, chances are that the architecture of the Covisint marketplace will lend itself to helping its owners, the automakers, drive down the cost of components, rather than to helping component makers set automakers against one another to bid up the price of their products. Consortia-controlled marketplaces are emerging not only on the buyer side. For example, twenty-eight airlines gathered around United, Delta, Continental, Northwest, and American Airlines have announced the creation of Orbitz, a jointly controlled e-marketplace for travel services.

A large number of marketplaces are controlled and operated by third parties—that is, by entities that do not trade within the exchange. Examples include Chemconnect (a marketplace for chemicals, plastics, and industrial gas) and Freemarkets (online auctions for industrial parts, raw materials, commodities, and services). The economic basis for these is different in the sense that their selling point is precisely to provide an unbiased trading environment for their customers and to charge a fee on the transactions they facilitate. Network control is then put to the service of creating a neutral marketplace architecture for traders.

Third-party control does not guarantee absence of bias, however. In a number of cases, marketplace operators might have economic incentives to

30. Emarketer (2000). 31. For a discussion of the Dell model, see chapter 7 by Martin Kenney and James Curry in this volume. 32. See Fine and Raff (2001).
create particular market slants, even though they do not themselves trade in the marketplace. Examples include placement fees that marketplaces like amazon.com, ebay, or yahoo can charge to list some sellers or some goods more prominently than others. These incentives relate to the marketplace owner’s ability to allocate some scarce resource, like screen “real estate” in the case of placement. Screen space is at an even greater premium on the small screens of cell phones and PDAs, suggesting profitable strategies for those who control the order in which choices are displayed in their menus. In Japan, NTT DoCoMo has implemented a particular kind of bias in the marketplace it controls for mobile services, dividing service providers into different classes: those within the “walled garden” and those outside.33 Those inside get not only premium placement but also better access to the infrastructure’s technical resources and to NTT’s marketing might. They also share a greater portion of their revenues with NTT, the operator of the marketplace linking them to their customers.

Other kinds of bias may be buried deep in the network’s code and even harder to discern. For example, the broadband network provider Excite@Home has struck partnerships with certain content and service providers that agree to share their revenues in exchange for strategic caching and replication of their content within the network’s servers. While @Home’s network provides greater bandwidth for all services, it makes access to these privileged partners even better, presumably helping them along.

Yet another kind of software-defined marketplace comes with the deployment of peer-to-peer technologies. There the transmission network purely serves as a neutral conduit, and the market mechanisms—discovery, matching, negotiation, and transaction—are implemented “at the edge.” Napster, and the corresponding marketplace for music built around this technology, represents the most visible deployment of such a marketplace.34 Companies like Kinecta and iSyndicate have implemented a very similar system for the exchange of digital content around a concept inspired by media syndication.35 The resulting marketplaces come perhaps

33. The French Minitel had pioneered a similar idea in the 1980s with the kiosque system, a tiered structure of partnerships with different service providers corresponding to different pricing structures and different levels of collaboration between the carrier and the service providers. 34. Napster’s original intent was nonprofit, to facilitate the exchange of free music, and it may seem strange to describe it as a marketplace. Recent developments, in particular Napster’s alliance with media giant Bertelsmann, indicate how a profitable marketplace can be built on that model. 35. Werbach (2000).
closest to a neutral marketplace. In these models, each of the peers (the end-nodes at the edge of the network) publishes a list of what it wishes to sell or acquire, at what price. The network connecting them serves as a neutral conduit for that information. These examples demonstrate that electronic technologies make it possible to design marketplaces with a wide variety of architectures, each serving the interests of different groups of market participants. While these technologies can indeed serve to reduce friction, level the playing field, or give all players equal access to market information, they can just as well be deployed to embed architectural features in the network marketplace in order to create strategic advantage for certain players. Ultimately, it is not technology that dictates marketplace architecture, but those who control how technology is deployed and configured.
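To make concrete how such a design choice works at the level of code, consider a minimal, hypothetical sketch in which the same listing engine can be configured either as a neutral architecture or as one that favors sellers who pay a placement fee; the sellers, prices, and fees are invented for illustration and are not drawn from any marketplace discussed in this chapter.

```python
# Hypothetical sketch: the same listing engine, configured two ways.
# "Neutral" ranks offers purely by price; "biased" quietly promotes
# sellers who pay a placement fee to the marketplace operator.

from dataclasses import dataclass

@dataclass
class Offer:
    seller: str
    price: float          # ask price for an identical good
    placement_fee: float  # fee paid to the marketplace operator (0 if none)

def rank_offers(offers, biased=False):
    if biased:
        # Sponsored sellers float to the top; price only breaks ties.
        return sorted(offers, key=lambda o: (-o.placement_fee, o.price))
    # Neutral architecture: lowest price wins, regardless of fees.
    return sorted(offers, key=lambda o: o.price)

offers = [
    Offer("supplier_a", 9.80, placement_fee=0.00),
    Offer("supplier_b", 10.10, placement_fee=0.50),
    Offer("supplier_c", 9.95, placement_fee=0.00),
]

print([o.seller for o in rank_offers(offers)])               # ['supplier_a', 'supplier_c', 'supplier_b']
print([o.seller for o in rank_offers(offers, biased=True)])  # ['supplier_b', 'supplier_a', 'supplier_c']
```

Nothing on the buyer’s screen distinguishes the two configurations, which is precisely why biases of this kind are hard to detect from outside the marketplace.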
Rearchitecting Commerce
How will the deployment of these various kinds of marketplaces transform commercial activities? The answer varies across sectors, and the studies in this volume begin to paint a series of pictures. Three themes stand out: marketplace efficiency, structure, and adaptability. First, evidence from the various sector studies indicates that e-commerce yields significant savings in the setup and execution of transactions. These savings, however, must be balanced against new expenses. The development of electronic exchanges has proven more expensive and time-consuming than initially thought. The sector studies also suggest that those who control the new marketplaces primarily capture these savings. A second dimension relates to the impact e-marketplaces will have on the distribution of market power. Will they broadly follow existing patterns or challenge them? We have seen that, contrary to early expectations, Internet technologies did not necessarily create level markets but could also serve to design biases within the e-market’s architecture. These biases come in three main categories. First are information asymmetries, where the marketplace is structured so as to give some participants better or earlier market-relevant information. Second are matching asymmetries, when the market-clearing algorithms are programmed to favor some of the participants. Third are access asymmetries, when different market players have differential access to the telecommunication infrastructure.
This should not come as a surprise. Indeed, with rare exceptions, markets usually are asymmetric.36 Because these asymmetries reflect the relative market power of the participants and constitute further sources of market advantage, we should expect powerful players to leverage new technologies to further their advantage, to reinforce rather than eliminate these asymmetries. In some cases, however, the technology can create opportunities for traditionally weaker market participants to challenge the dominance of incumbents. The case studies of book or music distribution offer such examples. Perfect markets can exist only for commodities, where a product is fully described by three characteristics: identity, standardized quality (grade), and price. This is where we find the most successful electronic trading networks so far. In these cases, they have been easily justified by cost savings and information optimization in the supply chain. Most real-life transactions between businesses involve much more complex objects. They are not arm’s-length dealings where competition primarily revolves around price but multifaceted interactions including exchanges of expertise, joint learning, or collaboration in product design. In the emerging production networks, firms favor longer-term relationships with fewer suppliers who become partners in generating shared innovation. Such relationships are better supported by collaborative networks than by auction and electronic trading markets. In addition, Internet commerce appears to be penetrating business processes least where there is the greatest sunk investment in legacy information systems. Within the formal boundaries of the firm, there is resistance to displacing the legacy information systems that effectively manage mission-critical functions, often at low cost per transaction. The same appears to hold for business commerce that crosses firm boundaries where there is sunk investment in legacy systems that already similarly deliver very low transaction costs with broad reach—business-to-business payment systems, for example. This is not to say that Internet commerce will never make headway in these tougher applications, just that further progress awaits innovations that can deliver sufficient benefits to justify the replacement of existing systems. 36. On information asymmetries in particular, Scitovsky notes: “The root cause of the unequal distribution of knowledge between buyers and sellers is the division of labor, which causes everybody to know more than others about their own specialty and less about other people’s specialties than others know about them. The farther the division of labor proceeds, the wider becomes the gulf between the specialist’s knowledge and the nonspecialist’s ignorance of each specialty.” Scitovsky (1990, p. 137). Evolution to the “knowledge economy” exacerbates these asymmetries.
Outside of the obvious cost advantages of value chain trading networks, electronic marketplaces cannot yet support real innovation in areas such as collaborative product development and cooperative cross-firm work processes. They will eventually, as they spread further, but at the moment these network-based innovation processes remain at a very early stage indeed. This raises a third dimension of the impact of e-commerce—relatively unexplored and also more interesting. Because network configuration can be reprogrammed, the corresponding marketplace architecture is adaptable. This process is neither costless nor instantaneous but nonetheless allows faster, cheaper, and more flexible marketplace adaptation than in the preelectronic world. Marketplace architecture can then change to reflect evolving business practices or to take into account new ways to organize work processes within or between firms. In turn, changes in corporate form and new patterns of interfirm collaborations create user-driven experimentation with networking technologies and foster the development of new networking technologies and applications. The resulting process of coevolution may very well be the most significant characteristic of the transition to e-commerce. In the past the relative rigidity of the underlying communication infrastructure impeded rapid reorganization of work processes and interfirm relationships. By contrast, today e-commerce technologies allow for the joint adaptation of communication infrastructure and economic superstructure. In the knowledge-based economy, this becomes a powerful innovation engine.
Conclusion: Policy Implications
As if to confirm that Internet technologies do not automatically generate perfect competition, electronic marketplaces—business-to-business marketplaces in particular—have already attracted significant antitrust scrutiny. This concern began before the current wave of Internet exchanges, most notably with the Department of Justice’s 1992 investigation of airline computer reservation systems (CRS), one of the first and largest e-marketplaces.37 As was the case with CRS, the exchanges attracting most 37. Department of Justice (DOJ) Antitrust Division, “United States v. Airline Tariff Publishing Company, et al., Proposed Final Judgment and Competitive Impact Statement,” Federal Register 59, no. 62 (March 31, 1994).
attention tend to be those owned by a few major participants in a given marketplace, and the main concern is collusion.38 So far, policymakers do not view the anticompetitive risks associated with electronic marketplaces (and B2B marketplaces in particular) differently from those related to nonelectronic markets. They identify a number of potential antitrust issues, including information-sharing agreements, joint purchasing, exclusionary practices, and exclusive access. In their view, however, these are not fundamentally new and can be addressed with traditional antitrust analysis.39 The above analysis suggests that there may be more to this story. Competitive biases can be built into the architecture of e-marketplaces in rather subtle ways. The danger is not so much the obvious one that online markets can be blatantly rigged to favor the market’s owner—indeed, an early example of such abuse, CRS, was effectively handled by antitrust. The real danger is that much more subtle manipulations of consumer choice and market outcome become possible and are likely to escape detection because they are embedded in the network’s very architecture.40 This issue of embedded, indirect market manipulation is one with which existing systems of commercial law and policy are ill-prepared to cope effectively.41 This points to a new link between communication policy and competition policy: when network control yields market control, policies for network access have implications that go beyond the strict domain of telecom policy to affect broader economic issues. When network code curtails fair market access, it becomes crucial to guarantee open access to networks, so that competing marketplaces can be created.
References
Bar, François. 1990. “Configuring the Telecommunications Infrastructure for the Computer Age: The Economics of Network Control.” Ph.D. dissertation, University of California, Berkeley.
38. “A Market for Monopoly?” Economist, June 17, 2000, pp. 59–60. 39. “Entering the 21st Century: Competition Policy in the World of B2B Electronic Marketplaces,” Report by the Federal Trade Commission Staff, October 2000 (www.ftc.gov/os/2000/10/ b2breport.pdf ). 40. For a description of how such biases can occur in broadband cable networks, see chapter 18 in this volume. 41. Lessig (1999b).
Bar, François, N. Kane, and C. Simard. 2000. “Digital Networks and Organizational Change: The Evolutionary Deployment of Corporate Information Infrastructure.” Paper prepared for the International Sunbelt Social Network Conference. Vancouver, British Columbia, April 13–16.
Bar, François, and E. Murase. 1999. “Charting Cyberspace: A U.S.-European-Japanese Blueprint for Electronic Commerce.” In Partners or Competitors? The Prospects for U.S.-European Cooperation on Asian Trade, edited by Richard Steinberg and Bruce Stokes, 39–64. Lanham, Md.: Rowman & Littlefield.
Beniger, J. 1986. The Control Revolution. Harvard University Press.
Braudel, Fernand. 1979. Civilisation Matérielle, Économie et Capitalisme XVe–XVIIIe Siècle: Les Jeux de l’Échange. Paris: Armand Colin.
Brousseau, E. 1994. “EDI and Inter-Firm Relationships: Toward a Standardization of Coordination Processes?” Information Economics and Policy 6 (3–4): 319–47.
Brynjolfsson, Eric, and Lorin Hitt. 1996. “Paradox Lost? Firm-Level Evidence on the Returns to Information Systems Spending.” Management Science 42 (4): 541–58.
Buxmann, P., and J. Gebauer. 1998. “Internet Based Intermediaries: The Case of the Real Estate Market.” CMIT Working Paper 98-WP-1027. Berkeley: University of California.
Chandler, Alfred D., Jr. 1977. The Visible Hand: The Managerial Revolution in American Business. Cambridge: Belknap.
Clark, David, and Marjory Blumenthal. 2000. “Rethinking the Design of the Internet: The End-to-End Arguments vs. the Brave New World.” Paper prepared for TPRC. Alexandria, Va., September.
Clemons, Eric K. 1986. “Information Systems for Sustainable Competitive Advantage.” Information and Management 11: 131–36.
Committee on Economic and Monetary Affairs and Industrial Policy. 1998. Report on the Communication from the Commission to the Council, the European Parliament, the Economic and Social Committee and the Committee of the Regions on a European Initiative in Electronic Commerce. COM(97)0157–C4-0297/97.
Emarketer. 2000. The eCommerce: B2B Report. New York.
Fine, Charles M., and Daniel M. G. Raff. 2001. “Automotive Industry: Innovation and Economic Performance.” In The Economic Payoff from the Internet Revolution, edited by Robert E. Litan and Alice M. Rivlin, 62–86. Internet Policy Institute and Brookings.
Gates, Bill. 1995. The Road Ahead. Viking Penguin.
Gordon, P. 1989. La place du marché. Paris: La Documentation Française.
Hanson, W. 2000. Principles of Internet Marketing. Cincinnati, Ohio: Southwestern College Publishing.
Hopper, M. D. 1990. “Rattling SABRE—New Ways to Compete on Information.” Harvard Business Review (May–June): 118–25.
Lessig, Lawrence. 1999a. Code and Other Laws of Cyberspace. Basic Books.
———. 1999b. “The Law of the Horse: What Cyberlaw Might Teach.” Harvard Law Review 113: 501–46.
Malone, T., J. Yates, and R. Benjamin. 1987. “Electronic Markets and Electronic Hierarchies.” Communications of the ACM 6: 485–97.
Picot, A., and others. 1997. “Organization of Electronic Markets: Contributions from the New Institutional Economics.” Information Society 13: 107–23.
Saltzer, Jerome H., David P. Reed, and David D. Clark. 1984. “End-to-End Arguments in System Design.” ACM Transactions on Computer Systems 2 (4): 277–88.
Sarkar, M. B., B. Butler, and C. Steinfield. 1995. “Intermediaries and Cybermediaries: A Continuing Role for Mediating Players in the Electronic Marketplace.” Journal of Computer Mediated Communication 1 (3).
Scitovsky, T. 1990. “The Benefits of Asymmetric Markets.” Journal of Economic Perspectives 4 (1): 135–48.
Shapiro, C., and Hal Varian. 1998. Information Rules: A Strategic Guide to the Networked Economy. Harvard Business School Press.
Smith, M., J. Bailey, and Eric Brynjolfsson. 2000. “Understanding Digital Markets: Review and Assessment.” In Understanding the Digital Economy, edited by Eric Brynjolfsson and B. Kahin, 99–136. MIT Press.
Sullivan, Louis. 1947. “The Tall Office Building Artistically Considered.” Reprinted in Kindergarten Chats (Revised 1918) and Other Writings. New York: Wittenborn, Schultz.
Werbach, K. 2000. “Syndication: The Emerging Model for Business in the Internet Era.” Harvard Business Review (May–June): 86–93.
Williamson, Oliver E. 1975. Markets and Hierarchies, Analysis and Antitrust Implications: A Study in the Economics of Internal Organization. Free Press.
II
E-Commerce: A View from the Sectors
The Boundary Conditions of Services
E-commerce is making the blurry boundaries between goods and services as useless and frustrating as streaming video through ordinary phone lines. It is chopping the production process into bits and pieces—into discrete tasks like design, purchasing, and logistics—and reassembling them in new ways and in new places, for which old labels correspond poorly. It is, rather, to push things a bit, the way packet switching treats a message by breaking it into bits and bytes, routing it separately through many different places, and reassembling it at the final destination. The lion’s share of our economy consists of services, but what precisely is a service? Enumeration is simple, though exhausting: a legal consultation, a crop dusting, a facelift, a rock concert. But the category blurs and flips. The rock concert on a purchased CD or DVD is a good; on TV it is a service. Then again, if the DVD was rented and not purchased, it is a service once again. The economic accounts category “services” has always been a messy, residual category. It exists in implicit distinction to the category “goods.” The construction parallels such interdependent categorizations as supply-demand, male-female, and debit-credit. But under midnight oil at lightning glance it sometimes seems closer to dancer-dance. Services are said to share defining, abstract properties: nontangibility being the most important. This separates them from goods, which are defined as being tangible. But not all nontangibles in our economic accounts are “really”
services: interest payments, for instance. And some services, like crop dusting and product design, take their finality in goods; they are part of the production of tangible things. Nonstorageability is another attribute that is often used to define services. But online data banks do just that: they store intangible information; and so do the keepers of the “files” in bureaucracies, whether by computer or by hand. George Stigler concluded, almost fifty years ago: “There exists no authoritative consensus on either the boundaries or the classification of service industries.”1 Since then, boundaries have softened considerably. The rearticulation of production chains into more finely defined tasks, or service activities, and their extension across more and more companies further complicates classification. The design for a new IBM semiconductor when produced by IBM in its semiconductor facility is part of a good; when produced by a semiconductor design house and sold to IBM, it is a service—usually, though its classification may change depending on the modes of delivery and payment. E-commerce is adding additional difficulties to the categorization scheme. That rock CD, downloaded and paid for on a microcharge website, is a service; downloaded on gnutella, it escapes all accounting categories. As they go online, books, magazines, and newspapers, traditionally classified as manufactured goods, become services. And promising new developments in cell therapy will be accompanied by a shift in many pharmaceuticals from goods to services, as bottles of mass-produced pills are replaced by drug treatments that are created and administered a patient at a time according to individual DNA specifications, protein cultures, and financial immunities. The category “services,” bursting with busily employed occupants, is empty of meaning, though thought to be fraught with significance. Our purposes will be better served by analyzing discrete “tasks” or “service activities” such as design, logistics, procurement, assembly, billing, and paying independently of the final product designation of the industry. These activities are getting more separate and more servicelike. They constitute the stepping stones by which e-commerce is introduced, through which efficiency gains are realized and reorganization triggered. This task-by-task approach, which comes straight out of our case studies, permits us better to see how e-commerce, and more broadly information technology, is rearticulating those interdependent activities, redefining many of
1. Stigler (1956, p. 47).
them—and, in many informative cases, failing to do so. In brief, the case studies reveal significant differences among industries and industry segments—differences in particular “service activities” or tasks—that are obscured by sectoral aggregation.
E-Commerce and the Rearticulation of an Industrial Structure An excellent example of e-commerce enabling and accelerating the transformation of an industrial structure into separated activities is provided by the semiconductor industry. The emerging structure of an important segment of that industry is best understood as chains of different “services.” Each such transformation seems to enable yet another. New design tools (software, most often leased online, not bought like wrenches and screwdrivers) are a primary enabler of substantial structural change in the industry. In a large and rapidly growing segment of the industry, “fabless” (no factory) semiconductor firms design chips on these standard tools (as do integrated firms such as Intel and NEC). Specialized foundry firms, such as Taiwan Semiconductor, manufacture the chips. We have now entered the oxymoronic world of manufacturing services. But its linguistic oddity ought not to trivialize its importance. The foundry specifies design parameters dictated by the technical capabilities and cost structures of its manufacturing process. These are expressed by the design tools as design rules understandable to the remote circuit designer not expert in the process technology of the fab. The fabless firm sends its design, verifies it, and follows the progress of its chips online, in real time, through various stages of adjustment and then on through the manufacturing process. (This includes sending the design, through a similar calibrated process via the design tools online to a third-party “mask” manufacturer.) These successes are breeding second- and third-generation changes in the industry’s structures and practices. The leading design tool firm (Cadence) is spinning off a stand-alone service company to do, not enable, detailed design work. New foundries are adopting innovative ownership and organizational forms. As a semiconductor manufacturing facility (a fab) can now cost upwards of $2 billion, for several firms to share a fab is a way for each to reduce its risk resulting from an unsuccessful design. Along with benefits in time saved, this is, of course, a good part of the drive toward foundry services. New ownership and lease arrangements are converting fabs into a kind of time-share condo, where time slots on the
line can be owned, leased, and even traded as business conditions vary, differentially, among the designer firms.
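How a foundry’s process constraints reach a remote designer through the tools can be pictured with a toy sketch; the rule values and the layout description below are invented for illustration and do not come from any actual design kit or foundry.

```python
# Toy illustration of a foundry-supplied design rule check (DRC).
# The rule values and the layout format are invented for this sketch;
# real design kits are far richer, but the principle is the same:
# the tool, not a fab engineer, tells the remote designer whether
# a layout can be manufactured on that process.

FOUNDRY_RULES = {
    "min_width_um": 0.18,    # narrowest printable feature (hypothetical value)
    "min_spacing_um": 0.24,  # closest allowed gap between features (hypothetical value)
}

def check_layout(features, rules=FOUNDRY_RULES):
    """Return rule violations for a list of (name, width, spacing) tuples."""
    violations = []
    for name, width, spacing in features:
        if width < rules["min_width_um"]:
            violations.append(f"{name}: width {width} below {rules['min_width_um']}")
        if spacing < rules["min_spacing_um"]:
            violations.append(f"{name}: spacing {spacing} below {rules['min_spacing_um']}")
    return violations

layout = [("wire_1", 0.20, 0.30), ("wire_2", 0.15, 0.25)]
for problem in check_layout(layout):
    print(problem)   # wire_2: width 0.15 below 0.18
```

A design that passes such checks can move on to the fab without the designer ever mastering the process technology, which is what allows the design and manufacturing "services" to be separated in the first place.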
The Enabler: Standardized Formats, Protocols, and Authentication The semiconductor industry’s ability to standardize rules and formats for setting and communicating designs and other transactions is the critical enabler for the extensive implementation of e-commerce in that sector. This standardization is embodied in the design tools and enforced, fundamentally, by the providers of manufacturing services. They are able to do it because their manufacturing processes are optimized to specific technical parameters. Designs that do not adhere to those specifications and protocols encounter real and serious time and price consequences. In this case, real-world constraints at one point of the production chain can enforce design rules and formats that are necessary for web-enabled transfer of design and monitoring of production. Critically, individual design firms, the fabs’ customers, are not losers in this standardization process. The studies in this book show great variation in the ability of different industrial segments to adopt commonly accepted formats. The basis for success among sectors that succeed in adopting common formats also varies. In some cases, such as semiconductors, all designer firms can gain, and none necessarily loses to any others, if the foundry-friendly format is adopted. Other “successful” cases seem to reflect the overweening power of one player (a powerful final assembler or retailer). Others, perhaps a majority, have thus far been disappointing in their advances toward adopting common formats and, therefore, toward realizing major potential gains of e-commerce, largely because of unresolved questions of differential gains and an inability to solve the distributional problem. The forthcoming migration of much of e-commerce to an XML language will attenuate some of the problems with common formats by introducing new dimensions of flexibility and variation. Authentication is a related problem: it is important to know for sure just who is entering an order or a payment. New, sophisticated authentication software is now being introduced. These new technologies will help to propel greater usage and gains, but they will not resolve fundamental distributional conflicts. In some industries, consortia of dominant firms are launching their own e-commerce marketplaces. In autos, for example, GM, Ford, Daimler, and Renault teamed up
to create an industry marketplace—Covisint—to muffled choruses of supplier anxiety. In many such cases, for example bond trading, antitrust questions will likely have to be resolved. Dell, the most admired model of e-commerce, is able to impose formats in its own supply chain—from the final consumer right back through the component suppliers and logistics providers. It is aided in this, of course, by its huge volume of purchases and its rather small but still substantial number of suppliers (75 percent of its purchases come from about twenty suppliers, almost all of which, except Microsoft, have competitors).2 In retail, the overwhelming power of a few giants, such as Wal-Mart, Kmart, Target, and Sears, permits them to snap their own supply chains and impose their own norms and forms. These giant companies have long been leaders in automating logistics. All of them long ago migrated from paper-based procurement systems to electronic data interchange (EDI) systems at considerable gain. They anticipate additional major gains from migrating to web-based systems. Sears predicts that procurement costs will fall from $100 an order with current EDI systems to about $10 an order with web-based systems. As Sears handles about 100 million purchasing orders annually, savings should total roughly $9 billion per year.3 These savings result mostly from improving visibility in the supply chain, which slashes communications and tracking costs, coupled with enhanced flexibility. This estimate includes the considerable win-win savings resulting from what Dell and UPS discovered when they went to Internet-based, real-time tracking: cheap, online tracking inquiries soared, but costly phone calls from customers dried up, resulting in substantial saving on both sides. Multiplying Sears’s projected gains by Wal-Mart, Target, and Kmart generates big numbers. Cautiously halving those numbers, and even halving them a second time, still leaves very big, net-gain numbers. But as the case studies show, in many major sectors—ranging from health care through liquor and automobile sales to government—these kinds of gains are not being achieved. In many cases the obstacle is legislated: as in online sales of wine or even autos, where local dealers, as a result of local legislation, must consummate sales. Most typically, the stumbling block no longer seems to be technology, or managerial awareness, or even the availability of competent people to execute new processes. The key problem reported in these studies centers around formats, standards, 2. See chapter 7 by Kenney and Curry in this volume. 3. See chapter 13 by Hammond and Kohler in this volume.
forms. If e-commerce is to realize its promise, the enabling condition is agreement on a common interface, a format or way to present information. In many giant sectors—retail banking and government, to mention just two—an additional problem stands in the way of realizing the full cost savings of e-commerce. Costly paper check processing and shipping is a perfect candidate for replacement by electronic payment systems. And many major banks are eagerly promoting the shift. But it will be a very long time indeed before all customers are willing to forgo paper checks and shift to electronic systems. In the very long interim, the banking system will be obliged to shoulder the costs of maintaining both payments systems, each with very high fixed costs. The same is true—even truer, if we may—for government. Governments at all levels—federal, state, and local—are experimenting with what is beginning to be called government-to-consumer (G2C) web-based services such as the provision online of tax forms or access to Social Security information and benefit calculation assistance. Almost half the states provide online income tax filings and use outsourced solutions for this service. Many states provide online services to order vital records such as birth, death, and marriage certificates. Governments are only beginning their move into online service provision. Some municipalities are experimenting with online filing for minor construction permits in an effort to eliminate the need to come to city hall and stand in line with arms full of blueprints. The enormous volume of government-to-consumer transactions means that, eventually, substantial cost savings (both monetized and nonmonetized) will be realized. But government must maintain nonelectronic systems for dealing with its “consumer-citizens” whatever the cost. Government does not and cannot obey the simple market logic that drives corporate decisions. Government is also the largest customer in many markets, spending over $500 billion a year on procurement. The complex restrictions placed on government procurement, which have been the stumbling block for preelectronic efforts to “streamline the process,” should give pause to eager expectations of fast and major efficiency gains through electronic procurement systems of the kind used in business.4 Analyses of the health care sector estimate that e-commerce would generate savings of about $30 billion over a ten-year period just in billings and
4. See Fountain (2001).
payments transactions.5 This is a very conservative estimate (discordant if compared to Sears, above). But even this depends on the adoption of common standards for financial and administrative transactions. This estimated gain does not include gains resulting from the electronic handling and communication of medical records, where much larger gains in efficiency, not to mention effectiveness, would be realized. For medical records, substantive problems of airtight privacy remain to be resolved (even though they have not been resolved for incumbent systems). So do significant technical problems: doctors are unlikely to abandon current costly and error-prone hand scribbling of medical records and prescriptions until reliable and convenient voice recognition entry devices are available. Private standards-generating initiatives in the health sector were last year’s “new new thing.” Various efforts, some by outsiders, some by major insurers and health groups, compete. But the critical element remains the lack of a sufficiently powerful and determined player who can legitimately oblige the acceptance of common standards. The obvious candidate is, of course, the federal government, which pays so substantial a part of the national medical bill. Prudence means thoughtful balance, not just watered-down expectations. The economic gains resulting from moving billing and payments online will generate very substantial dollar savings, though they will be achieved only slowly and with difficulty. But they represent what is, ultimately, a relatively minor application. The health sector is one of many realms where information technology (IT) is likely to play out on a grand scale, surging far beyond the substantial gains attributable to only one form of IT, e-commerce. Moving prescriptions and medical records online is an emblematic second-round effect. It will reduce a major cause of medical mistakes—and, therefore, deaths, increased hospitalizations, and human misery—as well as enormous economic cost.6 It is the third round, however, where the great gains await: and they are in the substance, not the administration, of health care. As was stressed in part I of this volume, information technology consists of tools for thought, the most powerful and all-purpose tools ever. For starters, we should consider advanced imaging, 5. U.S. Department of HHS. See the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which calls for HHS to adopt standards for financial and administrative transactions in health care. See Danzon and Furukawa (2001). 6. On the banality and ubiquity of medical mistakes, see Kohn, Corrigan, and Donaldson (2000).
micro surgery, rational drug discovery, and stem cell therapy, all well along in the pipeline, all rather revolutionary, and all just beginning. A review of the service sector case studies highlights the potential paths and problems associated with the transformations at hand.
The Financial Services Sector
As Clemons, Hitt, and Croson reveal in chapter 4, “The Future of Retail Financial Services: Transparency, Bypass, and Differential Pricing,” it is nearly impossible to generalize about the impact of the Internet on the American financial services sector. Although most of its subsectors are dominated by large credit agencies, their similarities end there. The dynamics of the market for credit card provision, for instance, differs wildly from that of retail banking, stockbroking, mortgage lending, or term insurance. Not only are these services processed differently, but each focuses on distinct customer segments. Whereas credit card firms target low-income clients likely to pay recurring service fees, stockbrokers scrap to attract high-income clients with lucrative portfolios. Simply put, each subsector in the financial services industry is characterized by significantly different degrees of price transparency, differential pricing, and customer disintermediation. Combined with market-specific government regulations, these differences create diverging growth prospects for financial service firms in the New Economy. Internet-enabled price transparency is throttling corporate profits in markets characterized by simple transactions. Since 1995, Internet brokers such as E*Trade have seized control of over 30 percent of all retail stock trades, offering order fulfillment at rates ten to twenty times below those promised by full-service brokerages like Merrill Lynch. Online comparison shopping is driving down margins in mortgage and insurance provision, just as competition between firms like Amazon.com and Barnes & Noble drove down the cost of many consumer goods. Already, 10 and 20 percent of all customers now use the Internet to research their mortgage and insurance needs, respectively. Despite a lowly market penetration of less than 1 percent, Internet-only banks are also placing upward pressure on consumer expectations of what constitutes a “reasonable” rate of return on uninvested capital. Consumer empowerment is only part of the story. Data-mining tools and mass customization allow firms to tailor their services to individual
market niches, wringing profits from previously untapped consumer needs. Just as Capital One Financial revolutionized the credit card industry with its decision to focus on the high-profit, low-income market segment, online transaction records and data-mining tools offer firms increasingly valuable knowledge about their customer base—knowledge that allows firms to extract profits from even the highest-risk segments. As Clemons, Hitt, and Croson take pains to point out, however, the degree of customer disintermediation across various sectors frustrates simple analysis. Despite the downward pressure online brokers have placed on the going price of market orders, the ability of traditional firms to offer vertically differentiated services such as complex financial planning, access to coveted IPOs, and valuable market research provides a bulwark against the loss of their most lucrative customers. Disintermediation poses less of a threat to retail banks. With few exceptions, consumer-to-consumer (C2C) lending, borrowing, and payment systems have failed to materialize. As with credit cards, there appear to be few functional alternatives for most consumers other than to continue to rely on retail bank services. Aiding this intransigence, many financial firms are protected against the encroachment of upstart competitors by their enormous economies of scale. Long-standing investments in innovative systems including automated teller machines (ATMs), centralized telephone call centers, and PC banking tools have institutionalized competitive advantage and created incredibly efficient information channels. Thus, turning their eyes to the growth of Internet banking, Clemons, Hitt, and Croson insightfully suggest that banks are shifting into web-centric systems out of competitive necessity, not out of an internally generated commitment to overhaul their existing business systems. Attempting to synthesize the lessons of their remarkably different case studies, Clemons, Hitt, and Croson propose a theory of “newly vulnerable markets.” They project that financial services that are easy to enter, attractive to attack, and difficult to defend will experience the most rapid upheaval in response to business process innovation and continued technological advances. Government policies and regulations are crucial insofar as they delimit competitive boundaries and restrict firm flexibility. Looking to the future, Clemons, Hitt, and Croson point to significant long-term changes in the structure of the financial systems of the advanced Western economies. Crucially, however, while technological efficiencies may be occurring rapidly, their analysis suggests that business model innovation will be largely pioneered by firms specializing in niche markets at the
margin of profitability and will only diffuse slowly into the broader financial community.
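The kind of data-driven differential pricing described above can be pictured with a deliberately simple sketch; the segments, scores, and rate spreads below are invented for illustration and do not describe any actual lender’s model.

```python
# Illustrative only: a toy version of risk-based differential pricing.
# The thresholds and rate spreads are invented; the point is simply that a
# predicted risk score lets a lender charge different segments different prices.

def annual_rate(risk_score, base_rate=0.12):
    """Return a card APR that rises with the predicted risk of default."""
    if risk_score < 0.05:
        return base_rate                 # prime segment: base rate only
    if risk_score < 0.15:
        return base_rate + 0.06          # mid-risk segment
    return base_rate + 0.14              # high-risk, high-margin segment

customers = {"cust_1": 0.02, "cust_2": 0.10, "cust_3": 0.22}  # hypothetical predicted risk
for name, score in customers.items():
    print(name, f"{annual_rate(score):.0%}")
# cust_1 12%, cust_2 18%, cust_3 26%
```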
The Airline Industry
In chapter 5, “Web Impact on the Air Travel Industry,” Klein and Loebbecke focus on a particular service task of the air travel industry—pricing—but develop the analysis in a way that makes their findings relevant for many other service sectors. Specifically, the authors investigate how ubiquitous digital networks enable the introduction of entirely new pricing models or pricing models previously unknown in the industry. Traditionally, consumers purchased flight tickets from travel agencies that acted as sales intermediaries for the airlines. While airlines moved toward differential pricing even before the advent of the Internet, end consumers were largely passive participants in the price determination process. The spread of the Internet and the availability of new electronic tools for marketing and sales have led to widespread experimentation in the area of pricing within the industry. Far from simply enabling disintermediation and direct sales from airlines to consumers, the Internet has given rise to web-based intermediaries, new roles for existing suppliers and intermediaries, and significantly increased consumer participation in the price determination process. Loebbecke and Klein trace and classify several kinds of innovative pricing models and conclude that the web will give rise to models of negotiated pricing that can go all the way down to the end consumer, a model familiar from bazaars but long absent in many industrial mass markets. The authors classify differential pricing models according to whether pricing is based on customer characteristics, product features, sales volume, or customer utility. In the process, they discuss innovative pricing (and business) models by Lufthansa (sales auctions), TravelBids (reverse auction), Rosenbluth (value-based pricing), and Priceline (demand collection). Each model assigns suppliers, intermediaries, and end customers specific roles in the price negotiation process with asymmetric distribution of influence over price across the chain. Both Lufthansa’s sales auctions and Priceline’s demand collection, for example, give end customers comparatively high influence over the final price. TravelBids’ reverse auctions and Accompany’s demand pooling, by contrast, locate highest influence over price with the supplier.
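The distribution of influence in a demand-collection model can be illustrated with a stripped-down sketch; the matching rule and the carrier reserve prices below are invented for illustration and are a simplification, not a description of Priceline’s actual system.

```python
# Simplified sketch of a demand-collection ("name your own price") match.
# The buyer commits to a price; the system accepts the bid if any supplier's
# undisclosed reserve price is at or below it. Reserves stay hidden, which is
# one way influence over the final price is distributed in this kind of model.

def match_bid(bid_price, supplier_reserves):
    """Return (supplier, price paid) for the lowest reserve <= bid, else None."""
    eligible = {s: r for s, r in supplier_reserves.items() if r <= bid_price}
    if not eligible:
        return None
    supplier = min(eligible, key=eligible.get)
    return supplier, bid_price   # buyer pays what was bid, not the reserve

reserves = {"carrier_a": 210.0, "carrier_b": 185.0, "carrier_c": 240.0}  # hypothetical
print(match_bid(200.0, reserves))  # ('carrier_b', 200.0)
print(match_bid(150.0, reserves))  # None -- bid below every reserve
```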
The survey of innovative web-based pricing models reveals that the Internet and e-commerce have not led to a unidirectional shift in the balance of market power in the sector. Rather, each model empowers actors at various points in the value chain according to its own particular logic. Just as innovative pricing models are diverse, so are the business strategies that have brought them forth. New entrants such as TravelBids, Priceline, and Accompany have developed business models that are qualitatively different in order to break into an existing market and transform it. In contrast, Lufthansa’s monthly auction of surplus capacity is a means to enhance overall ticket sales efficiency. However, only a small number of tickets are auctioned this way, and the airline’s business model as well as its overall sales model is virtually unaffected by small online auctions. In sum, the Internet has changed the air travel industry in that it has enabled models of negotiated pricing that were hitherto impractical. Suppliers, existing intermediaries, new intermediaries, and end consumers each play roles in the price determination process with the specific task and degree of influence over price varying from model to model. Much experimentation is still under way, and it is too early to assess whether a single pricing model will prevail or how big the overall impact of these models will be for this or other sectors.
References
Danzon, Patricia M., and Michael Furukawa. 2001. “E-Health: Effects of the Internet on Competition and Productivity in Health Care.” In The Economic Payoff from the Internet Revolution, Brookings Task Force on the Internet. Brookings.
Fountain, Jane. 2001. “The Economic Impact of the Internet on the Government Sector.” In The Economic Payoff from the Internet Revolution, Brookings Task Force on the Internet. Brookings.
Kohn, Linda T., Janet Corrigan, and Molla S. Donaldson, eds. 2000. To Err Is Human: Building a Safer Health System. Washington: National Academy Press.
Stigler, George. 1956. Trends in Employment in the Service Industries. Princeton University Press.
3
E-Finance: Recent Developments and Policy Implications
The Internet has begun to profoundly affect how financial services are delivered. On the one hand, the Internet affords convenience, price transparency, broader access to information, and lower cost; on the other hand, financial services are data-intensive and generally require no physical delivery. Combining the two should give the perfect environment for new entrants to build substantial businesses, go up the value chain, and compete on price.1 “E-finance” has been defined in different ways.2 In this chapter, we use the term rather broadly to mean the provision of financial services—banking and deposit-taking, brokerage, payment, mortgage and other lending, insurance and related services—over the Internet or via other open public networks. E-finance is expected to continue growing strongly. However, the fast pace of developments generates considerable uncertainty about the current situation and future implications. This uncertainty is shared by bankers and supervisors and is confined not just to how much stress will be placed
1. However, as discussed later, only a small portion of these revenues will be earned in an Internet-only environment. 2. Sometimes the terms “online finance,” “Internet finance,” “virtual finance,” and “cyber finance” are also used interchangeably.
on parts of the financial system, but to where such stress will be felt. It is likely that the cross-border, fast, and less regulated nature of e-finance will accentuate the manifestation of familiar challenges and risks, if not create new ones. This chapter provides a global overview of recent developments in e-finance and examines its policy implications. The first section briefly surveys various manifestations of e-finance. The second section presents a conceptual framework to understand the e-finance structure and reviews changes taking place in individual institutions, exchanges, and trading systems. The third section reviews possible implications for financial stability, and the fourth section reviews implications for monetary stability. The fifth section discusses the role of central banks and supervisors in monitoring and assessing e-finance developments.
Overview of E-Finance Developments
The sheer size of traditional financial services markets implies strong growth potential for the global e-finance market to exceed a trillion dollars (see table 3-1). One study estimates that e-finance revenues will more than double in three years.3 Although these growth statistics are impressive, they only partially indicate the impact e-finance will ultimately have on the financial services industry. While the Internet may appear to be just another technology wave adding a new delivery channel, its scope and potential impact on financial stability are much larger.4 Not only is the Internet taking business away from traditional “bricks and mortar” financial institutions, it is also introducing new business models, changing financial structures, and driving industry consolidation. Many believe consumer acceptance of online financial transactions will follow a “hockey stick” path as new technology increases convenience and 3. Morgan Stanley Dean Witter (1999). It should be noted that it is very difficult to distinguish between the new revenues (that is, fees, interest income) that directly result from an e-business solution the Internet offers and a replacement of what would have been originated in traditional channels. Different e-finance sectors have different revenue models to charge for their services, which makes the measurement even more complicated. 4. It should be noted, though, that much of the computing and communication-enabled transformation in relationships among financial institutions and their wholesale consumers was already occurring before the Internet was commercialized.
Table 3-1. Estimated Future Size of E-Finance Market in the United States and Europe, 2003

                                              United States                                          Europe
                                   Billions of      Penetration rate (percent)a          Billions       Penetration rate
Sector                             U.S. dollars     MSDW 2003     JP Morgan 2000–04      of eurob       (percent)a
Savings/banking                        235              20            15 → 35               158              33
Mutual funds                           ...              ...            2 → 30               192              19
Brokerage                               32              38            35 → 55               ...              ...
Credit card                              4              30             3 → 30                 4              19
Car insurance                           18              15            <1 → 30                11              13
Personal loans                         ...              ...           <1 → 25                24              11
Mortgages                              147              15            <1 → 10                37               6
Life and pensions                        1              15             <1 → 5                 9               2
Home and other general insurance       ...              ...           <1 → 25                 8               7
Total                                 >500              ...               ...               442              15
Sources: Fassnacht and Archibold (2000); Morgan Stanley Dean Witter (MSDW) (1999); Van Steenis (2000). a. Ratio of online sales to total sales. Estimates by JP Morgan indicate the growth of the penetration ratio from 2000 to 2004. b. Based on forecasts for France, Germany, the Netherlands, Italy, Spain, Sweden, Switzerland, and the United Kingdom.
reduces search costs (see figure 3-1). Observers also suggest that online product adoption will reflect the complexity, frequency, and time-criticality of the product or service and come in the following sequence: brokerage, banking and deposit-taking, bill payments, credit cards and simple loans, mortgages, insurance.5 Online brokerage has thus far shown the fastest growth. By dramatically cutting charges for a “market order” (an order to buy or sell a security at the prevailing price), online trading now accounts for about one-third of all stock trades in the United States. The declining cost of a trade order has pressured brokers’ margins downward, and they may further erode as alternative electronic trading systems—potentially allowing direct customer 5. Fassnacht and Archibold (2000); Morgan Stanley Dean Witter (1999).
Figure 3-1. Growth in Online U.S. Brokerage and Banking, 1996–2002 (vertical axis in millions; separate series for brokerage and for banking, plotted for 1997 through 2002, with 2000 estimated and 2001–02 forecast)
Source: McKinsey & Co., cited in Posner and Meehan (1999).
access—continue to replace established exchanges. Intermediaries therefore hope the growth in volume may offset much of the erosion in margins. Because of greater security concerns, online banking has taken off more slowly than broking. Most banks moved from PC banking (via proprietary dial-up networks) to Internet banking to achieve cost reduction. This allowed banks to provide free basic inquiry-only service and charge nominal fees for online bill payment. In the United States, the shift to Internet banking rapidly increased the adoption of online banking from 3 percent to greater than 10 percent, with expectations that one-third of online households will move to e-banking within the next few years. Alternative payments methods (or e-payments) such as stored value cards (smart cards) or e-cash appear to be lagging even farther behind; consumers still favor credit cards.6 The primary obstacle e-money faces is the chicken-and-egg quandary; merchants will not take e-money if few
6. L. Van Hove, “Electronic Purses: (Which) Way to Go?” First Monday, June 2000 (www.FirstMonday.org/issues.issue5.7/hove/index.html).
consumers use it, and consumers will not use e-money if few merchants take it.7 Another difficulty e-payments have encountered is online auctions’ continued use of the existing clearing and settlement facilities of credit card systems.8 EBPP (electronic bill presentment and payment) is the latest innovation and might become the most influential development in the retail payment market.9 If payment systems can make more use of Internet technology to allow for more efficient and secure real-time processing, e-payments may then cease to lag behind other e-finance developments. For credit products, it is predicted that credit card issuance is poised to move significantly onto the Internet,10 with mortgages, student loans, and car loans and leases following over the next few years. While mortgage finance is a potentially attractive area for disintermediation because of relatively high fees, online mortgage brokers have had trouble converting price shoppers into actual buyers, as consumers prefer to deal with real people for an infrequent purchase of such a complex and expensive product.11 Although life insurers were among the first to go online, the sale of life insurance on the Internet has been very limited to date (less than 1 percent of the total term life market), presumably because of the complexity of products and the need for personalized advice. Insurers’ need to assess risk and examine default histories also favors existing institutions. Car insurance may become a popular insurance product sold online, mainly due to the size of the market and the relative simplicity of the product. Travel insurance, home equity, and compulsory third-party insurance could soon follow. Conversely, many investment banking services, such as mergers and acquisitions and corporate advice, have been virtually unaffected by e-finance developments. Examining the expansion of e-finance by region reveals that the e-finance market in the United States is more diverse than in many other 7. MasterCard and Visa have developed complex systems for coordinating transactions among their thousands of bank members and millions of cardholders and accepting merchants to solve this chicken-and-egg problem. See Evans and Schmalensee (1999). The Octopus card in Hong Kong is an interesting case that may be suggesting a possible solution to this dilemma. It was introduced as a means of paying for travel on the underground railway, leading to its wide adoption, and it is now being proposed to extend its application to other purchases. 8. PayPal in the United States is an example of a personal online payment system. 9. Committee on Payment and Settlement Systems (2000). 10. While only 2–4 percent of new credit cards were approved online in the United States in 1999, market analysts estimate it could reach 11–16 percent by 2002. 11. Fewer than 1 percent of all mortgages were generated online in 1999. Consumers so far seem to favor “surf and dial” (Baghai and Cobert, 2000).
countries due to the higher penetration of PC ownership (and therefore Internet usage) and the more advanced equity culture among consumers. Hence the U.S. experience in e-finance is providing some evidence of the relative robustness of new financial business models. We have also observed rapid growth in the Pacific basin, particularly in Australia, Hong Kong, Singapore, and South Korea.12 In Singapore, with strong government backing, financial institutions have embraced the Internet to introduce new products and services in banking, securities, and insurance. Among industrial countries, France, Italy, and Japan are currently lagging behind. This may change, however, as several Japanese nonbanks— such as the biggest chain of convenience stores, an Internet investor, and a branded electronic firm—move into e-banking. In addition, the large and sophisticated U.K. financial market has attracted new entrants and Germany is seeing the growth of an equity culture where e-brokerage is gaining popularity. While online banking in the United States is mostly PC-based, e-finance in Europe—as well as in Japan13 and other parts of Asia—may become mobile phone–based due to the greater penetration rate of nomadic devices in these countries (see figure 3-2).14 Nordic countries, for example, are global leaders in mobile phone and Internet usage, and large banks in Sweden and Finland are at the forefront in adopting WAP (wireless application protocol) technology, allowing Internet banking and brokerage via mobile phones. Latin America has started to see a rapid expansion of Internet platforms. With Internet penetration still low compared with other regions and wealth highly concentrated, large Latin American banks are focusing on top-tier clients.
Conceptual Framework and Preliminary Observations
E-finance involves six basic levels: (1) online products, the services and products being exchanged online; (2) intermediaries, the entities that produce financial services and products or deliver them; (3) exchanges and 12. P. Montagnon, “Asia Warms to Online Services,” Financial Times, September 6, 2000. 13. Japan is the first country in the world to experience rapid growth in the mobile Internet market. See chapter 15 by Funk in this volume. 14. Birch (1999); Maude and others (2000).
Figure 3-2. Mobile Phone and Internet Penetration (scatter plot of mobile phone penetration, in percent, against Internet access, in percent, for selected countries: Finland, Sweden, Norway, Japan, Singapore, Korea, Switzerland, Australia, the United Kingdom, the Netherlands, France, Germany, Spain, the United States, Brazil, Mexico, Russia, China, and India)
Sources: Economist, June 24, 2000; World Bank (2000).
a. Mobile phones per 100 people, 1998; percentage of people with Internet access at home, March 2000.
trading systems, the market coordination environment within which buyers meet sellers and negotiate over prices; (4) clearing and settlement systems, a mechanism to send, execute, and settle orders; (5) legal and regulatory frameworks, a nexus of rules governing rights and obligations of parties to transactions, and a supervisory framework, a set of mechanisms ensuring the implementation of the legal and regulatory frameworks; and (6) a communication platform, carrying messages about prices, quantities, service, or product characteristics (see figure 3-3). It is clear that the value of Internet technology cannot be fully realized unless all the market and supporting infrastructure is sufficiently developed. Overseeing and maintaining the integrity of this layered structure is entrusted to the central banks and other financial supervisory authorities, in close cooperation with the private sector.
Changes in Individual Institutions
To appropriately consider the policy implications of e-finance developments, it is necessary to look first at changes taking place in individual institutions. Disintermediation (or at least the threat of disintermediation) will be significant in any industry that relies on a high-cost, agency-based distribution channel, whether agents are employees or independent contractors.
Figure 3-3. E-Finance’s Six-Layer Structure. Financial activity: layer 1, online products (savings, payments, insurance, lending, banking, brokerage, mortgages, credit card payments); layer 2, intermediaries. Market infrastructure: layer 3, exchanges and trading services; layer 4, clearing and settlement systems. Supporting infrastructure: layer 5, legal and regulatory frameworks; layer 6, communication platform. Central banks and financial supervisory authorities oversee the whole structure.
Source: Authors’ assessments.
It will be especially significant when agents lack any special advantage over firms seeking to engage in direct distribution, when agents have limited influence over the purchasing behavior of clients, or when the threat of new entrants to the overall viability of a firm is high.15 Several observations from recent experience warrant special attention. First, brand marketing and customer acquisition are disproportionately expensive, as a trusted brand and existing relationships are critically important, particularly in banking.16 As a result, the retail deposit business shows relative stickiness, and there is little sign of “first mover” advantage, at least in the United States.17 One survey indicates that even the most promising demographic groups (young, educated, wealthy, active consumers of financial services) are unlikely to switch to technology companies for financial products. Therefore, while technology allows reduced operating costs and lowers some barriers to entry—as services can be provided remotely from a country with lower labor costs and minimal regulatory requirements— new entrants are finding it hard to challenge existing banks. The costs of banking are examined in detail in table 3-2. Second, the vast majority of e-banks are offshoots of incumbent banks rather than new entrants.18 Some banks that started pure Internet operations opened physical branches as relationship enhancers or acquired automated teller machine (ATM) networks for consumers’ convenience. Both bank executives and analysts19 now agree that the pure-play model is not sustainable and that the “clicks and mortar” (or multichannel distribution) alternative offers greater advantage.20 If these banks, however, are not able to rationalize their physical presence, their total costs may actually increase. There is also a managerial challenge in incorporating IT-savvy staff within more traditional banking hierarchies. In some cases, banks have formed 15. See chapter 4 by Clemons, Hitt, and Croson in this volume. 16. KPMG, “Awakening Giants: How Europe’s Big Banks Will Win in the E-Commerce Revolution,” May 2000 (www.kpmg.co.uk/kpmg/uk/services/finsect/publics/pubs/ebanking.pdf ). 17. O’Connell (2000); Furst, Lang, and Nolle (2000). 18. This is true even in the Nordic region, where the penetration of Internet banking is the highest in the world. See Vincent (2000). The growth of Internet banking has not had a dramatic impact on price competition, nor has it helped the banks to attract customers from other banks in Nordic countries. See C. Brown-Humes, “Architect of Clicks and Mortar Strategy,” Financial Times, May 11, 2000. 19. Bekier, Flur, and Singham (2000). Douglas A Warner III (2000), chairman of JP Morgan, argues that “bricks-and-mortar companies, the so-called dinosaurs, actually have a huge advantage in the race to profitability and market leadership over the dot-coms if they can get it right.” 20. As well as branches and the Internet, this encompasses call centers, ATMs, concessions within large stores, and so on. See Riviera (2000).
Table 3-2. Costs of Banking

Cost per transaction (U.S. dollars)
Type of banking              Booz, Allen and Hamilton    Goldman Sachs and Boston Consulting Group
Physical branch              1.07                        1.06
Telephone                    0.54                        0.55
Automated teller machine     0.27                        0.32
PC-based dial up             0.02                        0.14
Internet                     0.01                        0.02

Average net interest margins (percent)
Online banks                 1.0
Traditional banks            4.1

Sources: Costs per transaction are from Claessens and others (2000); interest margins are from Morgan Stanley Dean Witter (1999).
alliances with IT companies to develop an Internet operation rather than attempt to go it alone. The preference for multichannel distribution also applies to insurance, where agents still play a significant role; brokerage firms, by contrast, generally do not require any physical presence. Third, the Internet has decoupled manufacturing of financial products from their distribution. Although it is an open question how much further integration will progress, blurring the boundaries between banks, brokers, and insurers has already affected the financial system. Because of increasing consumer demand for personalized money management services, banks have begun to offer hybrid financial services or products that cross financial boundaries.21 This, of course, is nothing new. What is different is the increased complexity of the design and delivery of personalized services, made possible by the advent of technologies such as fixed and mobile Internet and digital television. Furthermore, while no ratings agencies foresee immediate ratings changes resulting from the impact of the Internet, they would expect to focus more on an institution’s online franchise (such as its intellectual and knowledge capital) than on traditional financial ratios.22 21. Examples of hybrids include products that combine retirement and health insurance, mortgages and personal loans, or investment and retirement funds. See L. Hirst, “The Second Wave of Financial Services,” IBM, 2000 (www.ibm.com/solutions/financialservices). 22. Theodore (2000).
Fourth, the Internet may fundamentally change business models in the financial services industry. New entities include vertical portals (which allow individuals to compare prices for financial services from various websites), smart agents (which automate this comparison process to choose the intermediary offering the best deal), and aggregators (which allow individuals to obtain horizontally consolidated information about their financial and nonfinancial accounts across institutions). Such new business models pose a threat to banks’ direct links to customers. In addition, banks may fear being blamed if confidential customer data are mishandled by aggregators. Fifth, financial service companies are increasingly forming alliances with information technology vendors and telecommunication companies. Some believe this will speed up the process of financial industry consolidation, both cross-border and cross-industry, particularly in Europe.23 Most banks in Europe, whether explicitly or covertly, now see international expansion as one of the key sources of growth. However, cross-border mergers and acquisitions are still viewed with a heavy dose of suspicion both by investors and governments. Cost synergies from mergers are unlikely to be material, and revenue synergies are exceedingly difficult to forecast accurately. E-banking, however, minimizes the cost and risk of cross-border expansion. The resources devoted to foreign e-banking are often situated in the home country so that the same resources can be switched from one foreign market to another. It is thus much easier to retrench quickly from a virtual offering than a branch-based one. Though it is extremely difficult to predict the future course of e-finance development, it is useful to recognize conflicting forces, some leading to greater competition and others leading to greater concentration. There are three key aspects of e-finance that could foster new entry and increase competition (and disintermediation):
—low physical set-up costs for e-financial institutions compared to traditional ones;
—low marginal operating costs (see table 3-2);
—irrelevance of physical location, facilitating cross-border provision of services.
On the other hand, there exist factors that could inhibit new entry and lead to increased concentration (and less disintermediation):
23. Theodore (2000).
—economies of scale arise because some sectors of financial services are characterized by large fixed costs but very small variable costs;
—network externalities arise because the benefit of participating in a network increases with the amount of existing participation;
—switching costs arise because users become familiar with their current system, making them reluctant to move to a new supplier that is only marginally cheaper or better;
—trusted, or at least familiar, brand names are important; it is very hard for a new firm to convince potential customers to trust them, although a well-known nonfinance company may be able to enter the market.
Table 3-3 provides a preliminary assessment of these factors.
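The first two of these concentration factors lend themselves to a small numerical illustration. The sketch below uses invented figures, not estimates from this chapter: a hypothetical e-finance platform with a large fixed cost and a near-zero marginal cost per transaction, and a network whose benefit is assumed to grow with the number of possible pairings among participants.

```python
# Illustrative only: the fixed cost, marginal cost, and value-per-link figures
# below are invented, not estimates from this chapter.

FIXED_COST = 50_000_000       # annual fixed cost of running the platform (USD)
MARGINAL_COST = 0.02          # cost of processing one additional transaction (USD)

def average_cost(transactions: int) -> float:
    """Economies of scale: fixed costs spread over transaction volume."""
    return FIXED_COST / transactions + MARGINAL_COST

def network_benefit(participants: int, value_per_link: float = 0.01) -> float:
    """Stylized network externality: total benefit grows with the number of
    possible pairings among participants, roughly n * (n - 1)."""
    return value_per_link * participants * (participants - 1)

if __name__ == "__main__":
    for volume in (1_000_000, 10_000_000, 100_000_000):
        print(f"{volume:>11,} transactions -> average cost ${average_cost(volume):.2f}")
    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7,} participants  -> total network benefit ${network_benefit(n):,.0f}")
```

On these assumptions, a hundredfold increase in volume cuts average cost from about $50 to about $0.52 per transaction, while the network benefit grows faster than the number of participants; both effects favor large incumbents over marginal entrants.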
Changes at Exchanges and Trading Systems Internet technology is affecting not only individual institutions but also the structure and functioning of exchanges and trading systems. In equity markets, automated trading systems known as ECNs (electronic communications networks)—innovative stock-trading systems that rely on computer software to match buy and sell orders—have developed rapidly, resulting in greater cost efficiency, accelerated trade execution, and expanded price information available to investors.24 Cost savings will further be enhanced as standard clearing and settlement procedures facilitate straight-through processing (STP). These factors pressure dealers’ margins, to which dealers may respond in different ways. The larger ones may try to compensate for lower margins by chasing more volume. Others may unbundle their services and concentrate on certain niches. Some may just withdraw totally from trading and concentrate on advice and research. By decreasing transaction costs, electronic trading leads to tighter pricing (that is, lower bid-ask spreads) in equity and securities markets. One dimension of liquidity may therefore be improved. But there are two other factors that may reduce liquidity: markets may become more fragmented or shallower, with prices adjusting more abruptly to changes in expectations. Will markets become more fragmented? In many OTC-type markets, electronic trading has been a centralizing force. However, in some currently centralized markets, lower barriers to entry may mean that new trading
24. McAndrews and Stefanadis (2000).
Table 3-3. Barriers to Entry

Product                     Set-up cost(a)   Physical location   Scale economies   Network externalities   Switching costs   Brand name   Total
Deposits/banking            **               ****(b)             ***               *                       ***               *****        High
Payments                    **               *                   ***               ****                    **                **           Low
Credit cards/small loans    **               ****                ***               *                       **                ***          Medium
Mortgages                   **               ***                 **                *                       **                ***          Medium
Insurance                   **               ***                 ***               *                       **                ****         Rather high
Brokerage                   **               *                   **                *                       **                ***          Very low
E-money                     ****             **                  *****             *****                   ***               ***          Very high

Source: Authors’ assessments; Claessens and others (2000).
* none or minimal; ** low; *** medium; **** high; ***** very high.
a. That is, technical costs; marketing costs are included under “brand name.”
b. May take the form of ATMs.
systems proliferate, none of which is individually particularly liquid. This is generally expected to be a merely transitory phenomenon, as there are inherent centripetal tendencies in these markets, due to the economies of scale and network effects discussed above.25 Traders move to the system offering the most liquidity, making large systems larger. Spreads should also narrow as markets combine and participants become better informed, making already larger systems cheaper and reinforcing this tendency. Once a system becomes established, there may be “tipping” effects; competition may be keen between rival trading systems when none accounts for a majority of transactions, but once one achieves a market share of, say, 70 percent, it may then rapidly take over the whole market. The resultant dominant system, however, may not necessarily be the most efficient. The first system introduced, or that backed by some main market participants, or that introduced by an institution with the financial resources to take losses in the early stages, may achieve a dominant position from which even a more efficient system cannot dislodge it. Will markets have less depth? By reducing bid-offer spreads, electronic trading will tend to reduce the profitability of active market making, causing financial institutions to scale back this activity. Although statistics are sketchy, there appear to have been reductions in the size that dealers in some markets are prepared to transact at quoted prices. If so, there may have been a deterioration of one important dimension of market liquidity—market depth. This is not necessarily bad; it may simply reflect the better pricing of liquidity risk. By serving to undermine the liquidity illusion (the belief that large positions can be reversed without significant adverse movements in prices), this development may actually make markets more robust. Furthermore, by enabling cheaper transactions, electronic trading may encourage more end-users to enter the market, and it is often argued they contribute more to liquidity in crucial periods. The spread of e-finance and the greater use of the Internet could increase the speed with which new information affects asset prices. Even if this increases short-term volatility, a sharp price adjustment quickly followed by renewed trading at near-normal levels may be better than a sluggish adjustment. Whether the greater volume of transactions reduces price volatility in markets may depend on the nature of the additional participants and the 25. See, for example, Committee on the Global Financial System (2001) and chapter 7 of Shapiro and Varian (1999). Market analysts predict more consolidation and mergers of electronic trading systems in the next couple of years, attaining a critical mass of order flows (Nerby, 2000).
particular micro market structure. Markets are increasingly dominated by institutional investors seeking slightly better returns than their rivals. As the managers of these funds are often evaluated in comparison with the short-term performances obtained by their peers, managers tend to behave like herd animals, rushing in and out of markets together. Increasing the size of such herds may not provide liquidity; it may just mean more people jamming the exits. In this way a vicious circle may arise, amplifying price volatility. Encouraging more participants, however, may also mean there are more contrarian investors able to buy at lows and sustain temporary losses if need be, which will act to stabilize prices.
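The “tipping” dynamic described above for trading systems can be made concrete with a deliberately crude simulation. The market shares, the switching rule, and the 5 percent re-evaluation rate below are assumptions chosen for illustration, not a model used in this chapter.

```python
# Illustrative assumptions: three competing trading systems, and each period a
# small fraction of traders on the lagging systems moves to whichever system
# currently holds the most volume (the most liquid one).

shares = [0.40, 0.35, 0.25]   # initial market shares of systems 0, 1, and 2
SWITCH_RATE = 0.05            # fraction of a lagging system's traders who move per period

for period in range(1, 61):
    leader = shares.index(max(shares))
    inflow = sum(SWITCH_RATE * s for i, s in enumerate(shares) if i != leader)
    shares = [s + inflow if i == leader else s * (1 - SWITCH_RATE)
              for i, s in enumerate(shares)]
    if period % 20 == 0:
        print(f"period {period}: shares = {[round(s, 3) for s in shares]}")
```

A modest initial lead compounds: the most liquid system attracts the marginal trader, becomes more liquid still, and after a few dozen periods holds nearly the whole market, which is consistent with the observation that the eventual winner need not be the most efficient system.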
Implications for Financial Stability E-finance may erode the profitability of traditional financial institutions. Virtual banks may have to be very aggressive in their pricing to attract funds away quickly from established banks. This, in turn, could make traditional institutions riskier by increasing their incentive to take on riskier business. At the same time, e-finance is changing the nature of the payments system. These developments have implications for both supervisors of individual institutions and those charged with the overall stability of the financial system.
Implications for Financial Supervision Regulators face a trade-off between stifling innovation and limiting systemic risk. Financial regulation should be “technology neutral” (or “e-neutral”); that is, it should not unnecessarily impede innovation. However, the existence of licensing and supervision procedures is likely a primary reason why e-finance was largely excluded from the dot-com bubble. In dealing with the regulation and supervision of e-finance providers, it is important that they be treated in a comparable manner to traditional firms. This includes assuring consumers that they have the same protection (cooling off periods, complaints and compensation arrangements) as with traditional products. Under most countries’ banking legislation, a domestic e-bank would be required to meet the same prudential requirements as traditional banks. It is arguable whether e-banks are inherently riskier, but some aspects of e-banks require much more careful examination. For example, e-banks are
generally more reliant on higher interest deposits—which can be very rapidly transferred to another bank—and more prone to operational failures and security breaches such as breakdowns and “hacking.” If e-banks continue to be judged riskier, they could be subject to stricter supervision, higher capital requirements, and higher deposit insurance premiums. In Hong Kong and Singapore, for example, e-banks cannot be established except through conversion of existing local banks.26 Other countries require a physical presence of e-banks within their national jurisdiction, but this is difficult to enforce. A related issue is whether e-banks in emerging economies should be subject to tighter supervision than in fully developed economies since the impact on their own financial markets is expected to be larger. Since e-finance is likely to increase the incidence of cross-border transactions, it is important that the division of responsibilities between home and host supervisors be clarified, including any requirement for licensing in the “targeted” host country. Furthermore, because it is harder to determine the “location” of an e-finance firm, it is also desirable that common standards be set and enforced across countries to avoid financial firms moving their notional headquarters to laxer regimes. Not just intermediaries but markets may be increasingly subject to cross-border mergers. The most prominent example of this is the London Stock Exchange being subject to rival offers from German and Swedish interests. The practice of outsourcing core technologies and processing operations raises questions about how far, and by what mechanisms, a bank’s management should oversee the operations of what may be a complex chain of various service providers. A subsidiary question concerns oversight by supervisors: should (and could) supervisors monitor the quality of service providers or would this raise issues of moral hazard?27 Increasing reliance on a new sophisticated technology and a greater need to have a “name” that the public knows and trusts may lead to new forms of conglomeration that bring together financial and nonfinancial companies. For example, Sony, a highly trusted brand, has already established an e-brokerage firm that is attracting increasing numbers of online trading accounts and recently announced its intention to establish an e-banking 26. Hong Kong Monetary Authority (2000). The Monetary Authority of Singapore has recently released a comparable guideline. 27. A systematic on-site inspection of these service providers has been proposed in the United States.
arm. The implications for traditional supervisory oversight of new entries (fit and proper tests, the clarification of business plans, and policies on mergers and acquisitions) are already challenging supervisors. Even the very meaning of consolidated supervision in that context becomes more complex. While new technology allows nonbanks to enter traditional banking, it also allows banks to compete outside their usual activities by capitalizing on their brand names and public trust. For example, banks may expand into services such as certification, digital signature, and secure communication. An important issue is evenhandedness in treating financial and nonfinancial firms, as supervisors are usually reluctant to allow banks to expand into nonfinancial commercial business. Supervisors also need to consider how they will treat new types of entities such as vertical portals, aggregators, and smart agents. Privacy issues arise as e-banks and e-insurers can more readily compile a database of information that could be of use to other retailers. Even if the e-finance firm intends to protect confidential information, there may be unauthorized access to it. The economies of scale and network effects referred to above may lead to a single or small number of institutions or trading systems dominating some markets. Competition authorities may want to ensure there are not abuses of monopoly power from these developments or from the formation of strategic alliances. But they face the problem that it is becoming harder to define markets as geographical bases are less relevant. E-finance may also increase risk of fraud, resulting from e-financiers making untrue or misleading claims. Websites using the names of (or similar names to) reputable financial institutions could be established, but with funds then diverted to other accounts. Minimizing the risk of money laundering and other financial crimes facilitated by the development of efinance and e-payments—and balancing enforcement interests with those of consumer privacy and market independence—is a daunting task.28 Securities regulators will want to ensure that disclosure standards are not avoided by marketing new stock issues directly on the Internet. Although investment advisers are often required both to be licensed and to
28. Such an effort in the United States is well documented in Financial Crimes Enforcement Network (FinCEN), “A Survey of Electronic Cash, Electronic Banking, and Internet Gaming,” U.S. Department of the Treasury, 2000 (www.ustreas.gov/fincen/e-cash.pdf ).
operate under “suitability obligations,” it is not clear how these requirements would apply to advice offered on a website. A particular challenge for insurance supervisors is to ensure e-insurance contracts are fair to customers.29 A customer buying insurance over the Internet cannot discuss the purchase with an experienced salesperson and may not understand the conditions of the policy. One response is to design the electronic forms in such a way that a purchase cannot be made unless the customer proves to have understood the conditions applying to the policy. Another problem is ensuring customers receive important information, regardless of the type of equipment and software the customer is using.30 In particular, when insurance policies are sold over the Internet, it should be mandatory to provide the customer with the means to verify the contract’s content and when it becomes effective and the ability to correct any errors. For very small, undiversified economies, greater electronic access to global financial markets may raise the question of whether they need to have domestic equity and debt markets or even banks.31 However, small business finance is often provided only by local banks that can assess the credit risks involved through familiarity with the customer.32
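The suggestion above, that an e-insurance purchase be blocked until the customer demonstrates understanding of the policy conditions, can be read as a simple gating check in the sales application. The sketch below is only an illustration under hypothetical field names; it is not drawn from any supervisor's guideline or any insurer's actual system.

```python
# Minimal sketch with hypothetical field names: the policy is issued only if the
# customer has explicitly acknowledged each key condition and has confirmed the
# date on which cover becomes effective.

KEY_CONDITIONS = ("exclusions", "cancellation_terms", "premium_schedule")

def can_issue_policy(request: dict) -> tuple[bool, str]:
    acknowledged = set(request.get("acknowledged_conditions", ()))
    missing = [c for c in KEY_CONDITIONS if c not in acknowledged]
    if missing:
        return False, "conditions not acknowledged: " + ", ".join(missing)
    if not request.get("effective_date_confirmed", False):
        return False, "customer has not confirmed when cover becomes effective"
    return True, "ok"

if __name__ == "__main__":
    print(can_issue_policy({"acknowledged_conditions": ["exclusions"]}))
    print(can_issue_policy({"acknowledged_conditions": KEY_CONDITIONS,
                            "effective_date_confirmed": True}))
```

The same check also addresses the verification point above: the record of what was acknowledged, and when, gives both parties a way to establish the contract's content after the fact.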
Implications for Payment System Risks Authorities have thus far been reasonably relaxed about e-payments, in part because of their limited success to date. Typically, the view is that most Internet payment methods are essentially traditional instruments cleared in traditional ways but using a new communications method between bank and customer. Taking this view, few if any policy issues arise. This is even true of the unique challenge for Internet payments: namely, the need for micro-payments, as physical cash cannot conveniently be used between remote parties, especially when the parties are in different countries or use
29. International Association of Insurance Supervisors, “Principles of the Supervision of Insurance Activities on the Internet,” October 2000 (www.iaisweb.org/framesets/pas.html). 30. For example, the insurer may have designed a web page to be viewed at a minimum resolution of 800x600 pixels and set to nonscrolling, but the customer’s display is only 640x480 pixels. In this case, the customer will not see the lower part of the picture, which might have an essential announcement or warning. 31. Claessens, Glaessner, and Klingebiel (2000). 32. In India, some banks are promoting e-banking as a means to reach smaller firms in remote rural areas.
different currencies. Even here the potential solutions are generally variations of e-money, such as prepaid balances held remotely. If e-payments become more prevalent, however, concerns about their use for money laundering may arise. So long as online payments are cleared and settled through the existing infrastructure that complies with “best practice” (such as Core Principles),33 their development would have only a limited impact on payment system risks. E-finance may continue to increase the demand for more efficient and robust back-office operations to achieve straight-through processing, including across national borders. If so, this will not only reduce operational risks by minimizing errors caused by human intervention, but also reduce settlement risk by shortening the settlement cycle.34 On the other hand, if e-finance significantly promotes financial consolidation, interbank (or on-others) clearing will eventually shift to intrabank (or on-us) clearing. The possible creation of a huge correspondent bank (or bankers’ bank) under this scenario would lead to a substantial amount of settlements taking place on its book, not on the book of the central bank. This may undermine settlement certainty and hence increase settlement risk. If nonbank financial institutions develop competitive Internet-based clearing and settlement service networks that bypass existing (bank-dominant) payment systems, systemic risk could increase due to the increased difficulty of monitoring the links between various actors and assessing the risks to which they are exposed. However, banks should not lose sight of the importance of payment services to their near-term revenues and long-term competitive viability.35
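The settlement-risk point can be illustrated with one line of arithmetic. Using an invented figure for the daily value of trades struck, shortening the cycle from three days after the transaction to one day shrinks the stock of unsettled obligations, and the exposure it represents, by roughly two-thirds.

```python
# Hypothetical figure, for illustration only: value of securities trades struck
# per day that must later be settled.
DAILY_TRADED_VALUE = 500e9   # USD

for cycle_days in (3, 1, 0):  # T+3, T+1, and simultaneous settlement
    outstanding = DAILY_TRADED_VALUE * cycle_days
    print(f"T+{cycle_days}: roughly ${outstanding / 1e12:.1f} trillion of trades awaiting settlement")
```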
Implications for Systemic Stability Operational stability, including maintaining the integrity of the IT infrastructure supporting the settlement of financial market transactions, is essential to the safe and orderly operation of banking and financial systems and ensuring the public’s trust.36 One such risk is that, because financial 33. Committee on Payment and Settlement Systems (2001). 34. The Federal Reserve Board chairman, Alan Greenspan, in a recent speech (2000) strongly welcomed a wider adoption of STP from the viewpoint of facilitating the shortening of the securities settlement cycle from three days after the transaction to one day after (and ultimately simultaneously). See also Leinonen (2000). 35. Stewart (2000). 36. Summers (1999). This is also available online at www.rich.frb.org/media/speeches/other.html.
institutions use similar software programs, a common shock could adversely affect many large institutions. The deeper involvement of greater numbers of new and different firms—including nonfinancial firms—in financial markets may make it much more difficult to monitor the links between the various actors and to assess the risks to which they are exposed. As the links between financial and nonfinancial markets become more blurred, the sources of possible systemic threats are likely to become harder to track. An important element of this is operational risk, and in particular the vulnerability of computer systems to disgruntled employees or “hackers.” For example, insufficient or inadequate segregation between internal systems for retail and large-value payments could allow the breach of the lighter security around a lower-value system such as a bank’s retail website, which would in turn allow entry to a high-value system via the bank’s internal network.37 A failure of a large Internet service provider could also do serious damage.38 So far incidents have been relatively minor, but the risk remains.
Rethinking the Safety Nets Most regulatory systems primarily focus on eliminating the risk of loss to small depositors from bank failure. This is because it is impossible for depositors to distinguish between a bank failure caused by idiosyncratic problems at an individual bank and one caused by a systemic problem. There is a risk that problems at a single bank—or even unfounded rumors of a problem—will lead to a rush to withdraw deposits, which, given the illiquidity of bank assets, could bring down healthy banks. Because depositors are less familiar with e-banks and fear that deposits can be withdrawn more quickly, e-banks may be more susceptible to runs than traditional banks. Thus while the challenges for banks, supervisors, and central banks will remain the same, the nature of cyberspace may leave them far less time for crisis management and resolution. A common means of reassuring depositors is to arrange deposit insurance. However, such measures often exclude foreign (or foreign-currency) deposits, and some providers of e-banking services would be excluded from insuring deposits, as they are not banks. This does not matter for financial 37. C. Sergeant, “E-Banking: Risks and Responses,” UK Financial Services Authority, March 29, 2000 (www.fsa.gov.uk/pubs/speeches/sp46.html). 38. Basel Committee on Banking Supervision (2000).
stability so long as the public does not lose confidence in established domestic banks if an offshore e-bank fails.39 (If depositors responded by withdrawing funds from other e-banks, this could even strengthen domestic banks.) Another way bank depositors are reassured is by the central bank standing behind banks as a lender of last resort. Domestic e-banks that are licensed in the usual way could have access to such support, but possibly on harsher conditions to reflect their riskier nature. E-finance is also likely to continue the move toward financial conglomeration and blur the distinction between different financial products. This will pose coordination problems regardless of whether supervision is organized along product or institutional lines. Increasing financial conglomeration may mean that the reasons for applying a safety net to banks are less compelling.40 Some might infer a need for the current safety net to be expanded to cover new institutions, involve greater transparency and disclosure, or focus on products rather than institutions.
Legal Issues While most of the issues raised by e-finance do not significantly differ from classical conflict of law questions raised by cross-border finance, some are indeed new. This is particularly true in the case where the geographical location of parties contracting over the Internet cannot be determined. A fundamental issue is whose courts have jurisdiction over these transactions and whose laws apply to institutions offering e-services. Indeed, jurisdictional predictability, if not certainty, is critical in allowing participants to consummate electronic financial transactions without undue concern over the legal risks, means of enforcement, and rules of dispute resolution. New sets of principles—covering banking, payment services, and securities businesses—are being developed to address these concerns.41 One such concept is “targeting,” in which the language, graphics, and software 39. There is also the customer protection issue of ensuring depositors at such banks are aware of the additional risk they are taking. 40. See, for example, Claessens, Glaessner, and Klingebiel (2000). 41. A different, though related, issue is whether the location of financial intermediaries in the payment systems chain should be used to impose jurisdictional standards and rules simply because they are at “systemic choke points” in the financial flows. This would invoke wider issues and may not be specific to e-finance. See American Bar Association, “Achieving Legal and Business Order in Cyberspace: A Report on Global Jurisdiction Issues Created by the Internet,” Jurisdiction in Cyberspace Project, July 2000 (www.kentlaw.edu); and Vartanian (2000).
of a specific website could be used as a basis for judging whether a website is targeted to a particular jurisdiction and whether the service provider in question is aware that contracts may be subject to the rules of that jurisdiction. A related issue is whether advanced technology should be employed to monitor the jurisdictional issues created by e-finance.42
Interrelationship of Various Issues The above discussion might read more like a laundry list of issues than a coherent framework articulating the relationship among the various building blocks of e-finance. In order to prioritize these issues, however, it is important to understand what the distinct elements are and how they relate to one another. This approach also allows central banks and financial supervisors to map the relevant implications and challenges and to evaluate the potential for cross-country and cross-sector cooperation. Figure 3-4 illustrates the interrelationship of the various issues.
Implications for Monetary Stability E-finance also has potential implications for monetary stability; it could affect how central banks operate their instruments or the data on which they set their instruments, or changes in these settings could affect the economy.
Effect on Operating Procedures Central banks require an instrument with which to implement monetary policy; generally this is the interest rate in the overnight interbank market. In order to affect this rate, the central bank typically influences balances held by commercial banks. Thus central banks may lose control of monetary policy if e-finance were to eliminate the demand for settlement balances altogether, a scenario that could occur if private cybercash becomes the preferred medium for payments. Authorities, however, could maintain 42. For example, the use of intelligent electronic agents would enable consumers to monitor their journey through cyberspace and warn them when they are entering a site whose privacy policies do not match their preferences.
Figure 3-4. E-Finance: Interrelationships of Various Issues
[Figure: a schematic placing e-finance at the center and linking it to cheaper and faster execution, cross-border reach, lower financial boundaries/new business models, disintermediation, and reliance on technology (outsourcing); these in turn connect to changes in market structure and dynamics (fragmentation, liquidity, price discovery, volatility), jurisdiction, fraud/money laundering, customer protection (security, privacy), operational integrity/settlement risks, financial supervision, and monetary stability.]
Source: Authors’ assessments.
control by making it a legal requirement for settlement to take place on the books of the central bank, as in Australia and Canada.43 But even if settlement does not take place on their books, central banks could still influence short-term interest rates as they can intervene in financial markets without concern for profitability.44
Effect on the Transmission Mechanism There are many possible effects of e-finance on aspects of the monetary transmission mechanism, but assessing the size (or even sign) of the net impact is very difficult. By increasing the extent of price information available and the speed with which it can be accessed, e-commerce and e-finance may reduce lags in the monetary transmission mechanism. If structural changes increase competition in the banking sector, banks may adjust their retail interest rates more rapidly in response to changes in central bank and money market rates. On the other hand, the credit channel may be weakened if previously constrained firms can access a wider range of potential lenders. If e-finance makes hedging against exchange rate and interest rate fluctuations easier and cheaper, this could reduce the responsiveness of activity and prices to monetary policy. If e-trading, or just faster and wider distribution of rumors via the Internet, makes financial markets more volatile, this could also have an influence. With markedly reduced transactions costs and more direct access, a greater number of small investors are now able to invest in equity markets. This may mean that future collapses in asset prices may have a more marked impact on economic activity than was observed after the 1987 stock market crash.
Effect on Macroeconomic Indicators E-commerce and e-finance may distort key macroeconomic variables, complicating the task of correctly setting policy instruments. For example, if e-money displaces a significant amount of currency, monetary base and 43. Even without legal compulsion, there are advantages to banks continuing to use the central bank for final settlement. 44. See the debate between Friedman (1999, 2000); Woodford (2000); Goodhart (2000); and Freedman (2000).
M1 growth rates will be misleadingly low.45 If offshore e-banks attract a significant share of deposits or provide a significant share of loans, broader monetary and credit aggregates may be similarly misleading. Central banks that place weight on such indicators in monetary policy deliberations may need to begin planning data collections on e-finance and incorporating the findings into their measures. Also likely to be affected are macroeconomic data used, but not compiled, by central banks. For example, private consumption data is often compiled from data on sales from retail shops. Sales made over the Internet, as well as products delivered electronically, are likely to be missed from the collection. Policymakers may then mistakenly interpret the apparent fall in consumption as a drop in domestic demand and set policy too loosely. Inflation measures may also be affected. Studies of biases in published inflation indices refer to “outlet substitution bias,” where customers switch their purchases away from large stores monitored by consumer price index (CPI) compilers to cheaper stores that are not monitored. This bias may cause the CPI to overstate true inflation by 0.1–0.4 percentage points. E-commerce could raise the extent of this bias.
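The outlet substitution bias can be illustrated with assumed numbers; the 0.1–0.4 point range above comes from the studies cited, while every figure below is hypothetical. Suppose prices at the monitored stores rise 2.5 percent, online outlets sell the same basket 10 percent cheaper, and households shift 3 percent of their spending online during the year while the index continues to sample only the monitored stores.

```python
# Hypothetical numbers illustrating outlet substitution bias in a price index.

store_inflation = 0.025    # price rise at the stores the CPI samples
online_discount = 0.10     # online prices assumed 10% below store prices
share_shifted = 0.03       # share of household spending moving online this year

# The measured index sees only the monitored stores.
measured_inflation = store_inflation

# The true change in the cost of the household's basket reflects the fact that
# the shifted spending buys the same goods at a 10% discount.
true_inflation = ((1 - share_shifted) * (1 + store_inflation)
                  + share_shifted * (1 + store_inflation) * (1 - online_discount)) - 1

bias = measured_inflation - true_inflation
print(f"measured inflation: {measured_inflation * 100:.2f} percent")
print(f"true inflation:     {true_inflation * 100:.2f} percent")
print(f"overstatement:      {bias * 100:.2f} percentage points")
```

With these assumptions the index overstates the true rise in the cost of the basket by about 0.3 percentage points, within the range cited above; a faster shift online, or a deeper online discount, widens the gap.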
Role of Central Banks and Supervisors in Monitoring and Assessing E-Finance Developments In the not so distant future, the transformations engendered by the spread of e-finance may raise systemic concerns for central banks, particularly given the greater importance of unregulated entities outside the present reach of supervisors. Not only are reliable data on the current situation hard to find,46 growth is extremely hard to predict; while e-broking, some types of e-banking, and some e-trading have spread much more rapidly than generally predicted, e-money developments have failed to meet expectations. Even after the trends are identified, it is necessary to distinguish between developments that raise familiar prudential oversight problems in a new guise and those that give rise to entirely new challenges. This will necessarily be difficult, as it is hard to isolate the contribution of the 45. A separate issue is the central bank seigniorage loss much discussed several years ago. So far this remains very marginal due to very limited replacement of currency with e-money. 46. Data and measurement problems apply to all areas of e-commerce. See OECD (1999).
Internet from the effects of other complementary innovations and to distinguish Internet effects from other long-term industry trends and exogenous factors. The impact of e-finance developments on financial and monetary stability may appear very quickly. While it is retail financial services that have been most radically transformed by the Internet to date, the possible impact on the business-to-business (B2B) segment should not be overlooked in the medium term. Furthermore, business models will continue evolving, with the enabling technologies bringing changes in the nature of financial services. How the players with new business models will behave under stress, as well as in normal market conditions, is difficult to predict. Periodic reappraisal of the global e-finance landscape and its relevant policy implications remains vital. Thus international forums such as the Basel Committee on Banking Supervision, the Committee on the Global Financial System, the Committee on Payment and Settlement Systems, and the Financial Stability Forum will continue to play an important role in facilitating these discussions.47
References
Baghai, P., and B. F. Cobert. 2000. “The Virtual Reality of Mortgages.” McKinsey Quarterly 3: 60–69.
Basel Committee on Banking Supervision. 2000. “Electronic Banking Group Initiatives and White Papers.” Basel: Bank for International Settlements (October).
Bekier, M. M., D. K. Flur, and S. J. Singham. 2000. “A Future for Bricks and Mortar.” McKinsey Quarterly 3: 78–85.
Birch, D. G. 1999. “Mobile Financial Services: The Internet Isn’t the Only Digital Channel to Consumers.” Journal of Internet Banking and Commerce 4 (October).
Claessens, S., T. Glaessner, and D. Klingebiel. 2000. “Electronic Finance: Reshaping the Financial Landscape around the World.” Financial Sector Discussion Paper 4. Washington: World Bank (September).
Committee on Payment and Settlement Systems. 2000. “Clearing and Settlement Arrangements for Retail Payments in Selected Countries.” Basel: Bank for International Settlements (September).
———. 2001. “Core Principles for Systemically Important Payment Systems.” Basel: Bank for International Settlements (January).
Committee on the Global Financial System. 2001. “The Implications of Electronic Trading in Financial Markets.” Basel: Bank for International Settlements (January).
47. Crockett (2001).
Crockett, A. D. 2001. “Financial Stability in the Light of the Increasing Importance of Online-Banking and E-Commerce.” Basel: Bank for International Settlements (January).
Evans, D., and R. Schmalensee. 1999. Paying with Plastic: The Digital Revolution in Buying and Borrowing. MIT Press.
Fassnacht, M., and R. Archibold. 2000. “Architecting the Open E-Finance Network: Built to Ride the Internet Wave.” New York: JP Morgan Securities (July 27).
Freedman, C. 2000. “Monetary Policy Implementation: Past, Present, and Future—Will the Advent of Electronic Money Lead to the Demise of Central Banking?” International Finance 3 (July): 211–27.
Friedman, B. 1999. “The Future of Monetary Policy: The Central Bank as an Army with Only a Signal Corps.” International Finance 2 (November): 321–38.
———. 2000. “Decoupling at the Margin: The Threat to Monetary Policy from the Electronic Revolution in Banking.” International Finance 3 (July): 261–72.
Furst, K., W. Lang, and D. Nolle. 2000. “Internet Banking: Development and Prospects.” Economic and Policy Analysis Working Paper 2000-9. Washington: Office of the Comptroller of the Currency (September).
Goodhart, C. 2000. “Can Central Banking Survive the IT Revolution?” International Finance 3 (July): 189–209.
Greenspan, Alan. 2000. “Electronic Finance.” Remarks prepared for the Financial Markets Conference sponsored by the Federal Reserve Bank of Atlanta. Sea Island, Ga., October 16.
Hong Kong Monetary Authority. 2000. “Guideline on the Authorisation of Virtual Banks.” HKMA Quarterly Bulletin 23 (May): 46–51.
Leinonen, H. 2000. “Re-engineering Payments Systems for the E-World.” Discussion Paper 17/2000. Helsinki: Bank of Finland (November).
Maude, D., and others. 2000. “Banking on the Device.” McKinsey Quarterly 3: 86–97.
McAndrews, J., and C. Stefanadis. 2000. “The Emergence of Electronic Communications Networks in the U.S. Equity Markets.” Current Issues in Economics and Finance 6 (October): 1–6. Federal Reserve Bank of New York.
Morgan Stanley Dean Witter. 1999. “The Internet and Financial Services.” New York (August).
Nerby, P. E. 2000. “The Impact of Alternative Trading Systems on Market-Making at Securities Firms.” New York: Moody’s Investors Service (July).
O’Connell, R. 2000. “The Internet and U.S. Banks.” New York: Moody’s Investors Service (January).
Organisation for Economic Co-operation and Development (OECD). 1999. The Economic and Social Impacts of Electronic Commerce: Preliminary Findings and Research Agenda. Paris (February).
Posner, K., and A. Meehan. 1999. “The Internet Credit Card Report.” New York: Morgan Stanley Dean Witter (July 20).
Riviera, P. 2000. “Getting Physical: The Need for a ‘Real World’ Presence.” Internet and Financial Services (July). Morgan Stanley Dean Witter.
Shapiro, C., and Hal R. Varian. 1999. Information Rules. Harvard Business School Press.
Stewart, J. B., Jr. 2000. “Changing Technology and the Payment System.” Current Issues in Economics and Finance 6 (October): 1–6. Federal Reserve Bank of New York.
Summers, B. 1999. “Integrity and Trust in Electronic Banking.” Remarks prepared for the Software Engineering Symposium. Pittsburgh: Carnegie Mellon Software Engineering Institute, September 1.
Theodore, S. 2000. “Online Winds of Change: European Banks Enter the Age of the Internet.” London: Moody’s Investors Service (February).
Van Steenis, H. 2000. “Online Finance Europe: Invasion of Customer Snatchers.” London: JP Morgan Securities (June 2).
Vartanian, T. P. 2000. “A Global Approach to the Laws of Jurisdiction in Cyberspace.” Testimony to the House Subcommittee on Courts and Intellectual Property of the Committee on the Judiciary, June 29.
Vincent, E. 2000. “Nordic Internet Banking.” London: Moody’s Investors Service (April).
Warner, D. A. 2000. “The Challenge for Intermediaries in the Digital Age.” Speech prepared for Asia Society meeting. Hong Kong, August 28.
Woodford, M. 2000. “Monetary Policy in a World without Money.” International Finance 3 (July): 229–60.
World Bank. 2000. World Development Indicators. Washington.
4
The Future of Retail Financial Services: Transparency, Bypass, and Differential Pricing
Eric K. Clemons, Lorin M. Hitt, and David C. Croson

The growth of net-based applications has profoundly transformed the marketing, sales, and delivery of services, and in no sector has this occurred more rapidly than in financial services. This chapter principally addresses issues in the transformation of business-to-consumer (retail) financial services. There are three principal issues that will determine the transformation of retail financial services:
—transparency, or the ability of all market participants to determine the available range of prices for financial instruments and financial services;
—disintermediation or bypass, in which net-based direct interaction eliminates the role previously enjoyed by retailers, financial advisors, retail stock brokers, and insurance agents;
—differential pricing, in which pricing is increasingly tailored to the characteristics of individuals or groups of consumers, with prices based on the revenue streams they generate and the costs to serve them.
Each of these will affect the roles to be played by financial service providers, the sources of profits available to them, and the strategies they may choose to earn those profits. Significantly, these same three trends have been present in commercial financial services for some time; the stakes were high enough that imple-
mentation did not have to wait for the arrival of the net and the lower costs that resulted. Transparency of pricing via real-time feeds from stock exchanges and bond traders was available to market professionals in the 1980s, although individual retail investors did not have comparable information until the late 1990s. Likewise, disintermediation of commercial block trading via Instinet was widespread in the mid-1980s, although disintermediation of retail trading was not prevalent until 1999. Completing the list, commercial fleet insurance and most forms of corporate liability insurance have been priced individually for some time; the underwriter’s exposure was high enough to justify using individual company data rather than less precise actuarial data.1 Moreover, the same three trends have been apparent in nonfinancial services. Airlines have to a large extent disintermediated the agency system. Pricing has become more complex, with some systems such as Priceline offering true differential pricing. Competition and transparency have reduced airline margins. Similar trends have been observed in industries as diverse as grocery retailing and book sales.
Transparency The net changes the customer’s information endowment, which alters the balance of power between the customer and financial professionals. The game has surely changed. Before London’s Big Bang in 1986, “jobbers” standing on the floor bought shares from investors who wished to sell and sold shares to investors who wished to buy; no transactions between customers could occur without their intervention. Buying at the bid and selling at the offer, they earned the “touch” (or bid-ask spread) on every share traded. With limited pretrade transparency, there was little competition among jobbers to reduce the spread; better yet, with limited post-trade transparency, there was little pressure on the jobbers to trade out of their position at prices uncomfortably close to the prices at which those positions were acquired. Jobbers bought low, sold high, flipped their positions, and earned tuppence on virtually every share traded in London. It is no wonder they loved their jobs. 1. Indeed, the theory of differential pricing and price discrimination is not new, and it is not dependent upon the net (see Tirole, 1988). The principal impact of the net is to enable sophisticated pricing strategies to be applied to individual consumer transactions.
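A worked example with invented prices shows what the "touch" was worth. Assume a jobber quoting a stock at a bid of 100 pence and an offer of 102 pence who turns over half a million shares in a day.

```python
# Invented illustrative quote and volume; not actual London Stock Exchange data.

bid_pence, offer_pence = 100, 102   # jobber buys at 100p, sells at 102p
shares_traded = 500_000             # shares turned over in the day

touch = offer_pence - bid_pence                   # 2p kept on every share
gross_profit_pounds = touch * shares_traded / 100
print(f"touch: {touch}p per share; gross profit on the day: £{gross_profit_pounds:,.0f}")
```

Pretrade transparency attacks exactly this number: if competing on-screen quotes force the touch from tuppence toward a penny, the same turnover yields half the gross profit, before any losses on positions that must be unwound at unfavorable prices.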
However, with the coming of Big Bang, screen-based trading, and instant trade publication, the lives of market makers (the term used for jobbers after the Big Bang) were dramatically altered. Pretrade transparency forced greater competition, reducing spreads. Market makers found that they were buying only slightly lower and selling only slightly higher; this still enabled them to “buy low, sell high” but was not nearly as attractive as before screen-based transparency. Worse yet, post-trade transparency enabled investors to know how many shares were in the position of market makers who were seeking to unload their positions; this enabled customers to adjust the prices they expected to pay and to discount slightly, knowing the pressures market makers were facing.2 Often market makers were forced to sell their shares or cover their short positions at approximately the price at which they had acquired them; thus “buy low, sell high” was transformed into “buy low, sell low.” The pressure facing market makers had become intolerable. This appears to be a fundamental property of transparent, screen-based markets. Similar pressures are faced in many other financial services sectors. The ability to search for best prices on the net has brought increased competition (“pretrade transparency”) to term life insurance, credit cards, and mortgages; other sectors are sure to join the list. This suggests that in an ever-increasing array of products and services, price-based competition will increase. Customers will find what they want at the price they want;3 it will be hard to confuse them and even harder to deceive them. As customers find the best prices, margins will shrink. Significantly, pricing errors— offering too favorable a premium on insurance, too favorable an interest rate on a credit card, or too favorable a price on block trade—will become more serious, since the margins on other transactions will be too thin to subsidize significant errors. 2. Unlike investors, who buy shares to keep them for some period of time, market makers buy shares to facilitate investors’ trading and generally hope to trade out of their positions relatively quickly. Knowing that a market maker was holding 500,000 shares that he or she did not want to hold, an investor could reasonably expect to pay a lower price than otherwise for the same shares. 3. Essential (but implicit) is the fact that price transparency must be accompanied by transparency in the attribute bundle associated with any individual offering. Consequently, price transparency does not result in a single level of quality or a single set of prices. First class on an airline with sleeper seats is preferable to first class on an airline without sleeper seats, which is preferable to business class. First class with sleeper seats may be more expensive than first class on an airline without such seats, which will almost certainly be more expensive than business class. Similar price differences can exist, and indeed almost certainly will continue to exist, in all goods and services, even in the presence of theoretically perfect information.
But while pressures may have increased, the game is not over yet. Service providers own the product. They still know more than the customer,4 and they still determine the rules of engagement, especially in organized exchanges and securities markets. And, while formulating a strategy may be more difficult, the basic idea remains: they need a strategy. As any institutional trader and any manager of a block trading desk can attest, customers do vary. A block trader in Amsterdam described getting a phone call at dawn one morning from an institutional fund manager eager to sell him a massive position in a NYSE listing: “Wanna buy 100,000 Union Carbide, down twelve points from the New York close?” Although the trader had not yet heard about the disaster at Union Carbide’s facility in Bhopal, he wisely declined the offer. The insight behind his response generalizes to other settings. You can tell customers apart by the way they behave. In this instance he followed a simple rule: any time anyone offers to sell you anything down twelve points from the New York close, do not do it. That is, when someone demonstrably knows more than you and is eager to trade with you, you had better either rethink your price or hang up and walk away from the trade. Of course, screen-based trading has provided the customer with transparency while stripping away many of the cues that financial intermediaries used to use when pricing their services. An institutional block trader has learned to ask many questions before offering the customer a price: Am I the first person you called? Do you have any more behind? Do you need a price now or can I shop it for a while? These questions allow the block trader to determine how much information has already been leaked to the street, whether he or she will be competing to place his or her position with others trying to place the remainder, and the urgency that the person calling feels about getting a firm price quickly. Each of these questions—and many others—helps the trader get a sense of the effort and risk associated with working off the position, which in turn helps him or her determine a price.
4. This should be self-evident. Insurance companies have actuarial tables. Credit card companies have demographic information on applicants, easily purchased, and large-scale data-mining programs that enable them to predict, with much greater accuracy than the customer, the expected long-term risk-adjusted annuity value of each applicant’s account. As important, the company can design the product, altering annual fees, APRs, and grace periods, or altering deductibles, or designing each product’s bundle of services and prices, to maximize the value from each customer while limiting direct comparison with competitors’ offerings.
While these measures have long been employed in corporate and commercial financial services, ever-finer differentiation and ever-finer pricing distinctions are becoming the norm in retail services as well. Where once all credit card customers were offered the same rate structure (for example, AT&T Universal originally offered all customers 14.9 percent annual interest rate and Citi offered all 19.8 percent), profitability-based pricing has now captured the industry. Led by Capital One’s pioneering efforts, major issuers now offer their customers several thousand different price points, based on their evolving understanding of the customers’ profitability.
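The shift from a single posted rate to thousands of price points can be sketched as a scoring rule. Every attribute, formula, threshold, and APR tier below is invented for illustration; this is not Capital One's or any issuer's actual model.

```python
# Illustrative sketch of profitability-based credit card pricing; all numbers
# and formulas are invented for illustration.

from dataclasses import dataclass

@dataclass
class Applicant:
    expected_annual_spend: float       # drives interchange revenue
    expected_revolving_balance: float  # drives interest revenue and losses
    predicted_default_prob: float      # annual probability of charge-off

def expected_annual_profit(a: Applicant, apr: float) -> float:
    interchange = 0.015 * a.expected_annual_spend
    interest = apr * a.expected_revolving_balance
    funding_and_servicing = 0.05 * a.expected_revolving_balance + 40.0
    expected_losses = a.predicted_default_prob * a.expected_revolving_balance
    return interchange + interest - funding_and_servicing - expected_losses

def offer_apr(a: Applicant) -> float:
    """Offer the lowest APR on the menu that still clears a profit hurdle."""
    for apr in (0.099, 0.129, 0.149, 0.179, 0.198):
        if expected_annual_profit(a, apr) >= 50.0:
            return apr
    return 0.198  # highest tier (in practice the issuer might decline instead)

if __name__ == "__main__":
    transactor = Applicant(30_000, 500, 0.01)   # spends heavily, rarely revolves
    revolver = Applicant(8_000, 4_000, 0.06)    # carries a balance, riskier
    print(f"transactor offered APR: {offer_apr(transactor):.1%}")
    print(f"revolver offered APR:   {offer_apr(revolver):.1%}")
```

The mechanism, not the numbers, is the point: once revenue and risk can be estimated applicant by applicant, the uniform 14.9 or 19.8 percent rate gives way to a menu of prices matched to each customer's predicted profitability.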
Disintermediation For the past several years, major manufacturers and service providers have been worrying about the benefits and the risks associated with disintermediation of their agents, wholesalers, and distributors. Certainly, consumer packaged goods manufacturers have been considering direct distribution as a way to block the increasing power of megastores like Wal-Mart in the United States, Carrefour in France, and giant chains like Sainsbury and Tesco in the United Kingdom. Similarly, almost since the advent of corporate-focused agencies after U.S. deregulation, airlines have wanted to break the power of agencies such as Carlson/Wagon Lits, American Express, and Rosenbluth.
Benefits of Disintermediation
The reasons for this interest in disintermediation are clear:
—agencies and distributors have a higher degree of independence than manufacturers and airlines would prefer, leading them to serve the interests of customers or themselves rather than the interests of those whose products they are distributing;
—agencies and retailers have increasing power as the information control point in the distribution channel;
—airlines in particular have determined that agencies no longer represent low-cost distribution.
We explore each in turn. In brief, the store does not care if it sells Bounty or Viva, as long as it sells paper towels at a profit. The agency does not care if it sells American Airlines or United tickets to Hawaii, as long as the customer buys a ticket
and is satisfied. Thus the producer’s goal is to have the product or service sold; the retailer’s goal is to sell something in the category. This is not new, but it becomes newly important. In the absence of strong brands, many customers do not care which paper towel they buy or which disposable diaper or which fabric softener. This gives the retailer considerable power to promote whichever brand is most profitable or whichever brand offers the best promotional programs. While mass merchandisers and chain stores in the United States have already obtained sufficient power to impose “slotting allowances” and other fees on manufacturers simply for carrying their products, customer loyalty programs will allow them to track customers’ purchases and thus to create more effective individualized promotional programs. Consumers with a brand preference within a category will be offered what they prefer; consumers without a brand preference will now be offered whatever is most profitable for the retailer. Thus manufacturers will not only be charged for carrying merchandise, they will also be charged for promoting it. Before the deregulation of air travel in 1978, most corporate travel was booked directly through airlines, with tickets generally mailed or picked up at center city airline ticketing offices shared among the major airlines. Travel agents principally served leisure travelers, who required considerable coaching on their choice of location, their selection of air carriers and hotels, and even the entertainment or sightseeing details of their destinations. These activities were entirely too labor intensive to be performed by the airlines at either their airport or center city locations or, in general, by any other means. For this reason, airlines welcomed the agency system as the least expensive channel for distributing their product to the new leisure market. After deregulation, however, corporate travel management shifted to mega-agencies like Rosenbluth and American Express. Not only was this market considerably easier to serve than the leisure market, airlines found that the travel management services provided by the agencies frequently were operating to reduce corporate travel expenses (as they should) by negotiating firmly for special deals from airlines. The agency system began to look like an expensive distribution channel that, rather than increasing revenue, was taking revenue away from the carriers.
Risks of Threatening Disintermediation The principal risk associated with attempting a bypass or disintermediation strategy is, of course, the danger of punishment by an angry intermediary.
In many industries the intermediary has considerable influence over the purchasing decisions of consumers and commercial customers. Grocers and other retailers can promote a brand or they can punish it by promoting competitors’ brands. Promotion efforts include increasing display space, providing priority space such as convenient aisle cap displays, promotional pricing, and advertising of the product and its promotional price. Conversely, punishment can include decreasing shelf space, eliminating aisle caps, and charging punitively high prices. Likewise, agencies have considerable ability to move market share among different airlines serving the same routes. While they cannot actually change the price of tickets, they can develop priority displays and scripting languages for the agents that can move double digit percentages of bookings almost immediately.
Analysis
Our initial predictions several years ago were that airlines would safely attack: either terminate their relationships with travel agencies or, more likely, dramatically curtail their existing commission programs; the latter has indeed occurred. Conversely, our prediction was that consumer packaged goods manufacturers would need to move cautiously, and that none would launch a major attack on chain stores and mass merchandisers; this too has proved to be correct, at least in the United States and the United Kingdom. Exploring the basis for these predictions provides some insight into what can be expected for financial services. Our analysis of the relative vulnerability of different industries is based on our theory of "newly vulnerable markets." These markets share certain characteristics:
—easy to enter because regulatory change, alternative distribution, and changes in consumer preferences lead to new ease in entering a market;
—attractive to attack because simplistic pricing in the presence of extreme differences among customers leads to large differences in profitability among customers and opportunities for cream skimming by attackers;
—difficult to defend because unbreakable commitments, cultural barriers, or plausible punishment create an inability to replicate the attacker's strategy or an inability to counter it in some other way.
Each of these features plays a critical role. (1) If a market has newly become easy to enter, due to recent regulatory change, alternative forms of cost-effective electronic distribution, or other changes, there is no reason to
assume that it has already reached equilibrium. That is, a market that has recently opened up may indeed present strategic opportunities without violating economists' reasonable expectations about market efficiency. (2) If the newly opened market also exhibits simplistic pricing, then it is indeed vulnerable, at least initially. That is, a cream-skimming and opportunistic attacker, focusing on the most profitable segments of the market, can rapidly become profitable, even before capturing a large fraction of the market.5 (3) If there are barriers that prevent the defender from rapidly adopting the strategy of the attacker, then the attacker will indeed have time to harvest the rewards of his market entry. We generalize this to a theory of newly e-vulnerable markets. These markets are
—easy to enter because changes in consumer preferences and the complexity and subjectivity of interface design make it easy to launch an electronic channel;
—attractive to attack because simplistic pricing in the presence of extreme differences among customers leads to large differences in profitability among customers and opportunities for cream-skimming by attackers and because the visibility of these differences to attackers enables opportunistic pick-off of the most profitable customers' accounts;
—difficult to defend because rapid and plausible punishment from a retailer threatened with bypass reduces or even destroys profitability and effectively discourages electronic market entry.
Air travel. What does this indicate about the difficulties faced by an airline contemplating an attack on the agency system? Using the newly e-vulnerable markets framework, we conclude that it would, first, find the market easy to enter. The net makes it easy to describe the product, completely and unambiguously, simply by describing origin and destination cities, departure and arrival times, class of service, seat location, and price. It is easy to reach customers, as most corporate and full-fare leisure travelers have e-mail and are computer savvy. With the advent of e-ticketing (virtual tickets), it is even easier to distribute the product, since, in fact, no distribution is actually required. Second, the agencies would be attractive to attack. There are strong profitability differences among customers from the perspectives both of
5. A market that exhibits considerable differences in profitability among customers is said to possess a strong customer profitability gradient. Often, for convenience, we term the best customers simply love 'ems and the least attractive customers kill yous.
the airlines and of the agencies. A great airline customer buys a full fare or a business class ticket, while a less attractive customer buys a ticket that is deeply discounted. A great travel agent customer calls with a specific destination city and hotel in mind, and perhaps even a specific flight request; a less attractive customer requires considerable coaching on destination, and then buys an inexpensive ticket. Significantly, from the perspective of the airlines, they have accurate information on their customers. Customers are required to provide some form of identification before boarding the plane, and most of the best business customers have frequent flyer accounts; thus not only do airlines enjoy a situation wherein there is a strong customer profitability gradient, but they are able to determine where each customer fits into this overall pattern of profitability. Third, the agencies' position would be difficult to defend. The principal way in which an airline would attack would be to target enough of the highly profitable customers for direct distribution to demonstrate that it could bypass the agency system if necessary, while simultaneously slashing commissions for those customers that continued to book through travel agents; this is, indeed, what all of the major U.S. carriers have done. The principal means of defense available to agencies would have been to punish the first airlines that attempted to cut commissions by shifting volume to those carriers who continued to pay full commissions. Unfortunately for the agencies, they were unable to do this. Airlines were able to target their best customers for direct distribution; those customers' adoption was so rapid, and the airlines' retention of their most profitable customer base so complete, that the threat of punishment by the agency system was not credible. Therefore, it should be relatively safe for airlines to bypass the agency system. Of course, they need not actually do this; the fact that they readily could is sufficient to enable them to reduce commissions paid to travel agents; should agents balk, then the airlines could implement direct distribution.
Consumer packaged goods. How different is the situation faced by consumer packaged goods manufacturers like Procter and Gamble or Unilever? We once again use the framework offered by newly e-vulnerable markets. Is the market easy to enter? In order for manufacturers to enter the market for retailing consumer goods electronically, it is first necessary that they provide an interface so that consumers can determine what is on offer, make informed selections, and place their orders. It will then be necessary to provide logistical support, to enable distribution that
is cost effective and convenient for consumers. As we explore below, both are more complex than their equivalent for direct distribution of air travel. Is the retailing system attractive to attack? In order to ensure that direct distribution can rapidly become profitable for manufacturers, even before they have captured a significant share of the market, a customer profitability gradient is necessary. Unfortunately for consumer packaged goods manufacturers, unlike in air travel, where business class and first class travelers are more profitable than the average economy fare traveler, there are very limited differences in unit profitability for manufacturers among sales of paper towels, cookies, or breakfast cereal. As significantly, those differences that may exist can be available to retailers through customer loyalty tracking programs, but this information is seldom made available to manufacturers. Is the strategy difficult to defend? For direct distribution to succeed, manufacturers need to capture enough market share to enable cost-effective logistical systems for distribution and to ensure that punishment from traditional retailers will not be too effective. How do we assess each of these three factors, relative to direct distribution of air travel? To provide an effective alternative distribution system, online retailing systems must first be able to describe the product to the consumer in sufficient detail. For air travel this is easy, as noted above. For grocery retailing, this can be done at either of two levels. The first, which is relatively straightforward, is to describe the brand, the size, and the price. The second, which does not use brand as a complete description, requires far more information. For cookies, it might include a description of crunchiness or chewiness, chocolate chip density, size and weight of each cookie, and the presence or absence of nuts. For detergent, it might include a list of enzymes, their effectiveness on different stains, and the amount of environmental impact from phosphates. For paper towels, it might include absorbency, surface softness, and color. Each product will have a different set of attributes, and for produce, each individual item—each kind of vegetable or fruit—might have a different description. Equally important, online retailing must be able to deliver the product to the consumer with certainty and convenience and at an acceptable price. While e-tickets in air travel completely eliminate the problem of delivering tickets to passengers, grocery items must be delivered: when the consumer wants them and in good condition. This will be considerably more difficult to assure. Unlike air travel, where there is a strong and visible profitability difference among customers, in most grocery items there is little difference in
consumer profitability, and such differences as might exist are seldom visible to manufacturers. While one can conceive of situations in which a consumer would be willing to pay a significant premium at that instant for immediate delivery of paper towels, there is no mechanism for satisfying this need, nor for informing the manufacturer of it. Finally, without a strong customer profitability gradient, manufacturers would need to capture significant market share for their direct distribution systems—and do so quickly. That is, in an industry with a strong customer profitability gradient, it may be possible to capture a small number of extremely profitable customers rapidly by identifying them and their needs and by making them sufficiently attractive offers. Without a strong and visible customer profitability gradient, as is the case in consumer packaged goods retailing, manufacturers cannot defend themselves by capturing a small but profitable set of customers, and thus they remain vulnerable to retaliation by retailers. The first paper products company that attempts to bypass Wal-Mart, or the first home improvement company that attempts to bypass Home Depot, will find that it captures share to its direct distribution channel very slowly, while the bulk of its customers remains in a traditional channel now heavily biased in favor of its competitors' products. Therefore, it would be extremely risky for consumer packaged goods manufacturers to attempt to sell directly to their consumers, bypassing the major players in their current retail distribution system.
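The contrast between describing an air ticket and describing a grocery item can be made concrete with a minimal data-structure sketch. The field names and sample values below are hypothetical; the point is simply that a scheduled-flight ticket is fully specified by a handful of standard fields, while a grocery item needs an open-ended, category-specific set of attributes.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class AirTicket:
    # A scheduled-flight ticket is completely specified by a few fields.
    origin: str
    destination: str
    departure: str        # e.g., "2001-06-01 08:05"
    arrival: str
    service_class: str    # "economy", "business", "first"
    seat: str
    price: float

@dataclass
class GroceryItem:
    # A grocery product needs brand, size, and price, plus an open-ended set
    # of category-specific attributes (crunchiness, enzymes, absorbency, ...).
    brand: str
    size: str
    price: float
    attributes: Dict[str, str] = field(default_factory=dict)

ticket = AirTicket("PHL", "SFO", "2001-06-01 08:05", "2001-06-01 11:20",
                   "business", "4C", 1850.00)
cookies = GroceryItem("Acme", "500 g", 3.49,
                      {"texture": "chewy", "chip_density": "high", "nuts": "none"})
```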
Predictions for Financial Services
Given what we have observed in these two very different industries, what would we expect to occur in financial services? Clearly, in many financial products, barriers to entry have recently been reduced by net-based innovations; electronic securities trading and online shopping for term life insurance are but the most obvious examples. Many industries also exhibit strong customer profitability gradients; some customers require significant support and research activities, while others require much less. Again, electronic trading comes to mind as an obvious example of customers who require very little support. Given that we have the first two conditions for e-vulnerable markets satisfied—easy to enter and attractive to attack—we next need to determine if these markets will be easy or difficult to defend; that is, will the transformation be effected by existing industry participants, or will new entrants with new strategies be responsible for this transformation?
In particular, we note that some financial services industries are more vulnerable than others to disintermediation, to the extent that consumers' needs for these intermediaries differ significantly across customer segments. For example, some day traders use virtually no research, while other long-term investors with moderately sized portfolios require extensive advising from their account executives. The problem for investment firms arises when some customers value research and others do not, and the research is bundled in "free" with routine trading services; the result is trading services that are priced too high for those who need only vanilla execution and too low to cover the expense associated with the most demanding accounts. Does this mean that account executives and full-service brokers will disappear? Not any time soon. Does it mean that average cost-bundled pricing that includes both execution and research will need to be rethought? Absolutely. We have observed individuals who go to McDonald's only for the free catsup; without a doubt, Merrill Lynch is beginning to observe investors who use it only for the free research. The reason this is a threat is that those customers who are easiest for traditional full-service intermediaries to serve will begin to desert them for a new class of no-frills execution-only intermediaries, just as some consumers are going to discount bucket shops for cheap ticket-only travel services. Thus it is essential for industries that have customers who differ significantly in cost to serve or other aspects of profitability to develop effective pricing strategies to avoid the threats of opportunistic pick-off by new, stripped-down intermediaries, or of full disintermediation. It appears that one of the greatest obstacles facing existing financial services firms is their current dependence on their sales force, their insurance agents, stockbrokers, financial analysts, and account executives. When the first of the major individual lines insurance companies launched its online direct distribution subsidiary, the market response was initially limited; most individuals think about their insurance only when it is up for renewal, and most renewals occur without reconsidering the selection of agent and insurance company. However, the response from the firm's agents was immediate and savage. We believe that this will be a general result: when an established financial services firm first adopts online distribution, the response of its customers may be favorable but it will also be slow, while the response of its agency force will be immediate. As long as the prospect of significant, certain, and immediate punishment outweighs the possibility of slow and uncertain benefits, it will be difficult for established firms to transform themselves.
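The cross-subsidy created by average cost-bundled pricing can be illustrated with a small back-of-the-envelope sketch. The per-trade costs, segment shares, and markup below are invented solely for illustration; they are not drawn from any brokerage's figures.

```python
# Hypothetical cost figures (not from the chapter): per-trade cost of bare
# execution plus the cost of advice/research consumed by each segment.
execution_cost = 10.0                 # cost per trade, all customers
research_cost = {"self_directed": 0.0, "advised": 60.0}   # per trade
mix = {"self_directed": 0.7, "advised": 0.3}              # share of trades

# Average-cost bundled price: everyone pays the same, and research is "free."
avg_cost = sum(mix[s] * (execution_cost + research_cost[s]) for s in mix)
bundled_price = avg_cost * 1.10       # 10 percent markup

for seg in mix:
    true_cost = execution_cost + research_cost[seg]
    print(f"{seg}: cost {true_cost:.2f}, bundled price {bundled_price:.2f}, "
          f"margin {bundled_price - true_cost:+.2f}")

# The output shows the self-directed segment paying well above its own cost
# (and so ripe for pick-off by execution-only entrants) while advised accounts
# are subsidized; unbundled pricing would charge each segment near its cost.
```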
Pricing Strategies
Strategies for one-to-many B2C marketing will exploit the fact that customers are different and products are different. These are not exactly novel observations, but through the net these two differences can be extended and allowed to interact in complex and different ways. Customers are different. The costs to serve them differ, and the revenue streams they produce differ. Customers differ in their willingness to pay for services. And products are different. Different products have different combinations of attributes, which means they appeal to different market segments or different buyer populations even if they have the same quality and the same price. This is called horizontal competition. Different products offer different quality levels, appealing to different market segments at different price points. These differences among products can be augmented, through mass customization strategies, to appeal to smaller and smaller, more and more tightly focused groups of consumers. The net provides new information to sellers and allows them to bring ever-more complex bundles of goods and services to the market. Combining the two differences noted above when creating these bundles—customer differences and product differences—enables a high degree of complexity in marketing strategies.
Exploiting Cost and Revenue Differences
In earlier work we have documented the presence of love 'ems and kill yous in the credit card industry.6 Love 'em customers—the best two deciles—account for roughly 125 percent of all profits earned by credit card issuers, while kill you accounts—the worst two deciles—are loss making. The remaining population, the vast majority of accounts, are generally breakeven for issuers. With strong differences in profitability such as these, it is not surprising that the best banks are now able to determine which behaviors lead to profitability among cardholders: Love 'ems pay finance charges; kill yous do not. Of course, with enormous profitability differences among customers—termed the customer profitability gradient—it was inevitable that the best issuers would develop a strategy to exploit it. Capital One, in particular,
6. Clemons, Croson, and Weber (1996).
was the first major issuer to note the implications of the customer profitability gradient and the implicit wealth transfer it represents. That is, the presence of such strong differences in profitability among accounts indicates that the most profitable accounts are being significantly overcharged and the large profits that this generates are being used to subsidize the less profitable mid-range deciles and the loss-making kill yous. Overcharging the best accounts had not led to credit card issuing being extremely profitable but merely represented the presence of large transfer payments and cross-subsidies, moving amounts of money among different customer segments. Once the senior management team at Capital One had recognized the operation of a money pump in their industry, they also recognized the strategic opportunity created by the presence of enormous profitability differences. Their response was to attract only customers in the profitable deciles. Once that can be done—assuming that it can be done—it is no longer necessary to enjoy economies of scale. A bank with only accounts that produce significant revenues can afford to have slightly higher processing costs. It can even afford to offer rewards to attract profitable accounts. Indeed, that was Capital One's strategy: locate high revenue–producing accounts; offer them incentives to move their business to Capital One; endure slightly above average data processing costs. The latter two activities were affordable, and the bank was able to subsidize its data processing operations and its best accounts, largely because it was not also attempting to operate a money pump to subsidize unprofitable accounts. Although exploiting the customer profitability gradient to attract only profitable accounts sounds like it should be quite difficult, it was achieved through fairly simple and straightforward product design. Capital One was the first bank in the world to offer new accounts a great rate for transferring their existing balances in from another bank. The thought process can be replicated more or less as follows:
—What kind of customers are profitable? Customers who pay you back, slowly. That is, revolvers, customers who pay finance charges, are your best prospects.
—What kind of customer would be attracted by a low interest rate on financing outstanding balances? Customers who pay finance charges on existing balances.
—How can one locate customers that have already been screened as creditworthy and who already know that they are going to pay finance
charges? By offering the low rate not to all new applicants but only to those who are bringing in an existing balance from another bank. Similar differences in customer profitability have long been observed in other financial services, such as brokerage and market making; market makers, who assume risk when they take on principal positions, are especially vulnerable to customers with superior information endowment. Only recently have mechanisms evolved for dealing with these differences.
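A minimal sketch on synthetic account data shows how an issuer might measure the customer profitability gradient and why the balance-transfer product screens for revolvers. The behavioral mix, margins, and servicing costs assumed below are invented for illustration and are not Capital One's actual figures.

```python
import random
random.seed(1)

# Synthetic accounts: revolvers carry balances and pay finance charges;
# transactors pay in full; a few accounts default. All figures are invented.
def simulate_account():
    kind = random.choices(["revolver", "transactor", "defaulter"],
                          weights=[0.45, 0.45, 0.10])[0]
    balance = random.uniform(500, 8000)
    if kind == "revolver":
        profit = 0.16 * balance - 40          # finance charges minus servicing
    elif kind == "transactor":
        profit = 0.01 * balance - 40          # interchange only
    else:
        profit = -0.5 * balance               # charge-off
    return kind, profit

accounts = sorted((simulate_account() for _ in range(10000)),
                  key=lambda a: a[1], reverse=True)

total = sum(p for _, p in accounts)
decile = len(accounts) // 10
for d in range(10):
    chunk = accounts[d * decile:(d + 1) * decile]
    share = sum(p for _, p in chunk) / total
    print(f"decile {d + 1}: {share:6.1%} of total profit")

# On this synthetic data the top deciles account for well over 100 percent of
# total profit while the bottom deciles destroy value: the "love 'em / kill
# you" gradient. A balance-transfer teaser rate appeals mainly to accounts
# that already carry balances, so the product itself screens for the
# profitable deciles.
```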
The Terror of the Net and the Need for Pricing Strategies
As discussed earlier, transparency reduces easy profits. The most extreme version of this can be found surfing the net, looking at pricing for commodity products such as best-selling books. Not only do different online merchants offer them at essentially the same prices, but they do so at essentially their own cost. For example, Amazon.com and BarnesandNoble.com offer the third volume of the popular Harry Potter series, Harry Potter and the Prisoner of Azkaban, at prices that differ by only 1 cent. Just as interesting, the average of their prices is precisely the publisher's cost to them; if they are to have any profit at all on the sale of this book, it must come from their charge for shipping and handling! So the "Terror of the Net" suggests that the net is going to make pricing quite brutal. Capital One's experience indicated that if you can identify highly profitable customers, you can make money. But Amazon's experience with Barnes and Noble suggests instead that market transparency will create such price pressure that no one will be profitable. With both Amazon and Barnes and Noble selling the first Harry Potter books at prices that do not cover their fully loaded costs, one asks: how brutal can things get on the net? Fortunately, a more careful examination of Barnes and Noble's and Amazon's pricing strategies indicates that the world of perfect competition is not here yet. When we checked on prices for The Green Sea of Heaven, Elizabeth Gray's brilliant translation of fifty ghazals of Hafiz, we located the book at two very different prices. Amazon.com was offering the book at "our price" of $14.95, while Barnes and Noble was selling the book at the publisher's suggested price of $14.00 less a 20 percent discount, or $11.80. These significantly different prices indicate that neither zero search costs nor zero margins characterize the net today. Retailers, especially online retailers, and most especially online providers of financial services, will
need a pricing strategy, but this is not new; pricing strategies have always been required.
The Changing Role of Customer Loyalty and the Decline of Scale-Based Competition
During the early days of the adoption of information technology for transaction processing systems, the fixed costs of information systems implementation were high enough to create significant pressures to achieve economies of scale, leading to the first wave of scale-based competition in the service sector. This focus on scale led to analyses that discovered that acquisition costs, the costs of acquiring new accounts, were often much greater than retention costs, or the costs of keeping an existing account. This in turn led to the discovery of the "loyalty factor" and a focus on keeping existing customers happy, almost at any cost. This, of course, is a flawed measure, since a loyal repeat customer who costs you money frequently is worse than a casual customer who costs you money only occasionally.
Profitability and Skill-Based Competition
When Capital One shut off the money pump, they were able to earn extraordinary profits—and enjoy extraordinary stock performance—operating as the high-cost, low-price provider of credit card services. This really does sound quite absurd: if their operations lacked scale, and in consequence they had higher operating expenses than larger and more efficient competitors, and they simultaneously offered their customers APRs of 12.9 percent, 10.9 percent, or less, while competitors offered 19.8 percent, we should clearly predict disaster. How did they achieve remarkable success instead? As noted, the first decision was to turn off the money pump and stop cross-subsidizing accounts. Bad accounts would not be attracted, and those that were acquired would be charged annual fees or in other ways discouraged from remaining as loss producers. Ending the cross-subsidies enabled the bank to retain perhaps as much as 1,000 basis points of incremental profit on their best accounts. This is an accounting device, rather than actual profit, but it does mean that the profits earned from these accounts will not be given away to other, unprofitable accounts. With perhaps 200 basis points or more needed to feed the inefficient data processing (DP) engine,
due to its lack of scale, and another several hundred rebated to attract the best accounts through lower interest rates, the bank was both low priced to its customers and high cost because of its inefficiencies. It still had a considerable surplus from these accounts, since it was not transferring the wealth to others. Or, as the bankers might have asked themselves after accounting for both data processing costs and lower interest rates and viewing the finance charges that were still available from these best accounts, "Whatever shall we do with the rest?" Why do we care? As one of their competitors noted, it became essential for them to understand and replicate Capital One's strategy. If not, if they attempted to charge all customer segments equal prices, they soon found that their most profitable accounts had been induced to leave as a result of lower finance charges available to them elsewhere. Profits declined, they raised prices in order to restore profitability, and they promptly lost more customers as a result. Worse yet, while they were systematically losing their best accounts, Capital One was achieving scale and operating efficiency. The process has been named the "death spiral," and as their management noted: "When one of your competitors starts down this slippery slope, you have no real choice. You learn to replicate it or you go out of business." How widespread is this? A wide range of financial service companies, in different industries, in different countries, with different regulatory regimes, have all encountered similar problems:
—several credit card issuers, including AT&T Universal and Chase, were forced to change their pricing strategies as a result of Capital One;
—retail banking at the Hongkong and Shanghai Banking Corporation (HSBC) suffered serious profitability impacts as a result of Citibank's opportunistic pick-off of its best accounts in Hong Kong;
—retail brokerage at full-service firms like Merrill Lynch and Salomon Smith Barney needed new pricing structures and new segmentation strategies as a result of e-trading and its pick-off of the customer segments that were easiest to serve;
—individual health insurance may be the next industry to suffer opportunistic pick-off and cream-skimming, although this would first require a change in regulatory regimes.
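The basis-point arithmetic sketched a few paragraphs above can be made explicit. The figures below simply echo the rough magnitudes mentioned in the text (about 1,000 basis points retained by ending cross-subsidies, roughly 200 basis points of extra processing cost, and a rebate equal to the gap between a 19.8 percent and a 12.9 percent APR); the residual is illustrative only.

```python
# Rough basis-point arithmetic for a best-decile revolving account
# (illustrative figures echoing those in the text; 100 bps = 1 percent).
retained_from_ending_cross_subsidy = 1000        # bps no longer given away
extra_data_processing_cost = 200                 # bps penalty for lack of scale
rate_rebate_to_best_accounts = 19.8e2 - 12.9e2   # 19.8% APR vs 12.9% offer, in bps

surplus = (retained_from_ending_cross_subsidy
           - extra_data_processing_cost
           - rate_rebate_to_best_accounts)
print(f"residual surplus on best accounts: about {surplus:.0f} bps")
# Under these assumed magnitudes roughly 100 bps of incremental margin remain
# even after paying for the inefficient DP engine and funding the lower rate.
```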
The Role of Product Design in Attracting the Most Profitable Accounts
In many areas within financial services, product design can be carefully controlled to influence profitability, principally by designing products that
appeal to and attract primarily profitable customers. These screening mechanisms are not based principally on deceiving customers so that they pay too much and thus become profitable; rather, good customers are offered even better prices so that they will shift their activities from competitors or increase their volume of business and thus become even better customers. Again, this strategy is not based on withholding or distorting information—it is based on accurate communications. The more information the service provider is able to offer potential customers, and the more potential customers it is able to reach, the better its business will become. A few examples help underscore this point. In credit card issuance, love ’ems are revolvers and pay finance charges on their outstanding balances; kill yous are transactors and convenience users who pay off their balances in full and do not pay finance charges. The balance transfer product is differentially more attractive to love ’ems than to kill yous and will attract principally the customers that the bank wishes to serve. In automobile insurance, love ’ems drive safely; kill yous do not. The size of the deductible can be varied, shifting the risk of small accidents from the insurance company back to the driver and making the policies less attractive to careless drivers. In private health insurance, love ’ems maintain their health, exercising regularly and avoiding unhealthful habits like smoking; kill yous do not— in fact, their behavior is dramatically different.7 Some information can be obtained on applicants’ behavior and used to impose different prices on applicants with different risk pools. (We discuss such data mining activities in more detail below.) However, screening mechanisms that offer different sets of policies can be even more effective. A policy with high deductibles for lifestyle-related illnesses like emphysema, but at an extremely low price, will attract healthy applicants, while a full coverage policy without such deductibles will attract a different and more risky set of applicants. The private health insurance example would benefit from additional discussion. Note that the policy with high deductibles appeals mostly to nonsmokers, and thus can offer a very attractive premium not because of the deductibles but because the insurance company can calculate its 7. Indeed, customers are often able to announce their desirability by certain actions that signal what can be expected from them. This is especially true in health insurance, where smoking or the absence of smoking may signal additional risk factors like the lack of exercise and proper diet (see Landsburg, 1993). Signaling mechanisms employed by individual applicants or customers themselves, however, are more difficult to calibrate and are less widely used than screening mechanisms implemented by the companies that serve them.
expected cost when purchased by healthy applicants. Likewise, the full coverage policy is more expensive not only because it lacks these deductibles, but more significantly because it is attractive to a different and riskier population. Note, also, that this stable separation is not based on deceit; just as the nonsmokers appreciate the low rates that they are charged, the smoking population appreciates the full coverage that they are offered, and neither population has an incentive to masquerade or to misrepresent itself when applying for coverage.
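The stable separation described here can be checked with a minimal self-selection sketch. The wealth level, illness probabilities, costs, and premiums below are invented, and log utility stands in for risk aversion, which is what makes full coverage worth its higher premium to the riskier type; this is an illustration of the logic, not an actuarial model.

```python
import math

WEALTH = 60000
# Hypothetical one-period model: a "lifestyle-related" illness costs 25,000.
# Healthy applicants face it with probability 0.01, high-risk applicants 0.30.
# The cheap policy leaves that illness entirely uncovered (a deductible equal
# to its full cost); the full-coverage policy has no deductible.
policies = {
    "high_deductible": {"premium": 700, "covered_share": 0.0},
    "full_coverage":  {"premium": 9000, "covered_share": 1.0},
}
illness_cost = 25000
risk = {"healthy": 0.01, "high_risk": 0.30}

def expected_utility(policy, p_ill):
    # Log utility over final wealth: risk aversion drives the choice.
    w_no = WEALTH - policy["premium"]
    w_ill = w_no - (1 - policy["covered_share"]) * illness_cost
    return (1 - p_ill) * math.log(w_no) + p_ill * math.log(w_ill)

for rtype, p in risk.items():
    best = max(policies, key=lambda name: expected_utility(policies[name], p))
    print(rtype, "chooses", best)

# With these assumptions, healthy applicants choose the high-deductible policy
# and high-risk applicants choose full coverage: each type self-selects the
# policy designed for it, so the separation is stable.
```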
Implementing Skill-Based Competitive Strategies
Substituting strategies based on customer profitability for strategies based on scale has been termed skill-based competition. Successfully replacing scale-based strategies with skill-based strategies requires attention to the following:
—activity-based costing to assess profitability;
—product design strategies and product pricing strategies to attract different populations and exploit the customer profitability gradient;
—data mining, to refine the companies' information endowment when product design and product pricing strategies are not sufficient to distinguish fully between different population groups.
Activity-based costing can be used to determine which activities are profitable and should be encouraged and which activities are unprofitable and should not be encouraged. Revolvers, who pay large finance charges, are profitable in credit card issuance, as are extremely large users. Applicants with good health and healthful habits are attractive in insurance because they file few claims. Applicants who make large numbers of phone calls each month, offsetting the high fixed cost of billing, are attractive in 10-10-xxx dial-around long-distance service. Product design and pricing strategies can be used to attract the desired populations, and when they are done under ideal conditions, they can actually create a stable separation; that is, smokers will intentionally apply for the smokers' policy, despite the higher premium that is charged. Unfortunately, a stable separation cannot always be found. For example, the balance transfer product pioneered by Capital One not only attracts low-risk revolvers, the best of all possible accounts; it also attracts high-risk revolvers, with a significant chance of default. If late payment is the best possible outcome in credit card issuance, nonpayment is most definitely the worst. And late payers may be living close to the edge of their financial limits and may easily be transformed into defaulters.
Data mining is the most popular technique used in the absence of stable separation. It looks for patterns in the historical record, most specifically for patterns of transactions and behaviors that can be correlated with profitability or loss of profitability and thus can be used as predictors of future profitability. Not surprisingly, residents of relocation apartments make a large number of phone calls back home to friends and family, and thus are profitable targets for 10-10-xxx long-distance services. But not all patterns are readily explained. Readers of Hot Rod magazine tend to be higher risk than readers of Car and Driver, and thus their accounts may need to be monitored more closely against potential defaults, though it is not yet clear why this should be true. Indeed, one of the greatest limitations of data mining is that while stories can be made up to justify its observations, they are indeed just that—stories—and their explanations may have little predictive or explanatory power.
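A minimal sketch of this kind of pattern hunting, on synthetic data, is shown below. The default rates by magazine readership are invented so that a pattern exists to be found; the example only echoes the anecdote in the text and says nothing about real readers of either magazine.

```python
import random
random.seed(7)

# Synthetic accounts with one observable trait and an eventual default flag.
# Default rates by trait are invented so that there is a pattern to "mine."
def make_account():
    trait = random.choice(["hot_rod_reader", "car_and_driver_reader", "neither"])
    base_default = {"hot_rod_reader": 0.09,
                    "car_and_driver_reader": 0.04,
                    "neither": 0.05}[trait]
    return trait, random.random() < base_default

accounts = [make_account() for _ in range(50000)]

# Data mining at its simplest: tabulate default rates by observable trait.
for trait in ("hot_rod_reader", "car_and_driver_reader", "neither"):
    group = [defaulted for t, defaulted in accounts if t == trait]
    rate = sum(group) / len(group)
    print(f"{trait:24s} default rate {rate:.1%}  (n={len(group)})")

# The correlation can be used to flag accounts for closer monitoring, but the
# table says nothing about *why* the pattern holds, which is exactly the
# limitation noted in the text.
```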
Conclusions
Financial services industries are going to be transformed by three trends. One is transparency and superior information in the hands of customers, which will place much greater pressure on the profitability of service firms. Another is the transformation of channel power, as more and more services are performed by customers directly, without support from agents, brokers, or account executives, which allows customers to remove many of the easiest transactions from service providers. These two trends will result in a third: the need for more careful attention to finely tuned pricing strategies to deal with those customer requests that continue to need professional support, service, or intermediation.
References
Clemons, Eric K., David C. Croson, and Bruce W. Weber. 1996. "Market Dominance as a Precursor of a Firm's Failure: Emerging Technologies and the Competitive Advantage of New Entrants." Journal of Management Information Systems 13 (2): 59–75.
Landsburg, Steven E. 1993. The Armchair Economist. Free Press.
Tirole, J. 1988. The Theory of Industrial Organization. MIT Press.
5
Web Impact on the Air Travel Industry
Over the past decades, the travel and tourism sector has emerged as one of the most important for developing as well as developed countries. It is estimated that the relative importance of tourism will grow to approximately 11 percent of global GDP in 2007.1 Tourism incorporates many of the features of the information society such as globalization, mobility, and information richness. People from all nations, social ranks, professions, and ways of life are potential tourists. Tourism, as a global industry, links a worldwide supplier community with consumers equally distributed worldwide. Its physical and virtual networks enable worldwide traveling, bringing together very distant cultures and habits. The tourism industry is diverse and partly fragmented, and the size of tourism principals varies from micro-enterprises to global corporations. Only special segments like the airlines are concentrated into an oligopoly of global alliances. Figure 5-1 represents a stylized view of the travel and tourism market. It differentiates between the supply and demand side and the respective intermediaries. Links mark the relationships as well as the flow of information. The figure depicts only the most relevant links; the nodes indicate the relevant players in the field. On the supply side, we distinguish between primary suppliers (such as hotels, restaurants, and cultural or sport event organizers), airlines, and other transport providers (such as car rental companies, railroads, and ferry companies).
1. WTTC (1997).
Figure 5-1. The Travel and Tourism Market. The figure links the demand side (consumers and travelers) through intermediaries (travel agents, tour operators, incoming agents, CRS/GDS, and NTO, RTO, and LTO outlets) to the supply side (primary suppliers, hotel chains, and airlines), with government bodies and the DMOs, planners, and administration shown alongside. Source: Werthner and Klein (1999).
Tour operators can be seen as product aggregators; that is, they produce a new product by combining basic products or components. Travel agents, on the other hand, can be viewed as information brokers, providing the consumer with the relevant information and booking facilities. Computerized reservation systems (CRSs) and global distribution systems (GDSs) cover airline offerings as well as other tourism products such as packaged holidays and other means of transport. They provide the main links to tour operator systems and to travel agents.
Destination management organizations (DMOs) operate on a national, regional, or local level and focus on planning, marketing, and administrative tasks for destinations. In most cases, these entities have to act on behalf of all suppliers within a destination and are not involved in the booking process. The links to governmental bodies indicate that these destination marketing and management organizations are also often governmental organizations. LTO, RTO, and NTO refer to local, regional, and national tourist organizations such as tourist boards or visitor bureaus. The tourists and travelers represent the demand side.
Product Features of the Air Travel Industry
Travel and tourism products are services that require a high degree of customer participation during service fulfillment. Moreover, as consumption and decisionmaking are mostly decoupled, travel and tourism services can be seen as information products. Prospective customers need plenty of information to comprehend offerings, compare them, and make their choices. As a consequence, the travel industry—especially the air travel industry—is among the leading users of IT. The existing CRSs and GDSs give a good insight into the current market situation and are the basis for the airlines' yield management—that is, capacity planning and pricing. Tickets for scheduled flights have unique properties. They represent nontransferable contracts between the customer and the airline. Scheduled flights operate with a high level of fixed cost (for aircraft, crew, fuel, fees, and so on), while passenger-related costs account for roughly 13 percent of the overall cost.2 Scheduled flights are a prominent example of price differentiation: up to twenty different booking classes may be defined for a single flight on which only two (or three) service classes are offered. Table 5-1 gives a brief summary of the specific travel product on which we will focus during the remainder of this chapter: tickets for scheduled flights.
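Before turning to the table, the yield management mentioned above can be made concrete with a minimal two-fare sketch of the standard protection-level logic (Littlewood's rule). The fares, seat capacity, and demand forecast below are assumed purely for illustration and do not describe any airline's actual system.

```python
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def protection_level(full_fare, discount_fare, mu, sigma):
    # Littlewood's rule: protect y seats for the full fare up to the point
    # where the chance of selling the y-th protected seat at full fare equals
    # the ratio of the discount fare to the full fare:
    #   P(full-fare demand > y) = discount_fare / full_fare.
    target = discount_fare / full_fare
    lo, hi = 0.0, mu + 6 * sigma
    while hi - lo > 0.01:               # simple bisection on the survival function
        mid = (lo + hi) / 2
        if 1 - normal_cdf(mid, mu, sigma) > target:
            lo = mid
        else:
            hi = mid
    return lo

# Assumed figures: a 150-seat flight, full fare 600, discount fare 220, and a
# full-fare demand forecast of N(45, 15). All numbers are illustrative.
capacity = 150
protect = protection_level(600, 220, mu=45, sigma=15)
booking_limit_discount = capacity - round(protect)
print(f"protect ~{protect:.0f} seats for full fare; "
      f"sell at most {booking_limit_discount} discount seats")
```

Real systems nest many booking classes rather than two, but the logic of protecting seats for higher fares against forecast demand is the same.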
2. Pompl (1998).
Table 5-1. Product Characteristics of Airline Tickets
Initial production cost: high fixed costs (aircraft, crew, fuel, and so on).
Marginal cost for additional product: less than 15 percent of overall costs is related to the number of passengers (ground service, catering, and so on) within a given group of seats.
Individualization cost: fixed cost for setting up yield management and booking systems; low variable cost for price discrimination based on service level and contractual features (right to return or change the ticket, advance booking, restrictions on timing, and so on).
Shelf life: defined by the flight schedule; once the gate has closed, unsold seats are worthless.
Consumer Behavior
Consumer behavior is changing. As part of this general trend, tourists and air travelers:
—ask for better service, becoming more critical and less loyal;
—want more specific offers with regard to content as well as to the complete travel arrangements;
—are becoming more mobile and travel more frequently;
—decide their travel arrangements later, leading to a decreased time span between booking and consumption;
—are more sensitive to price, comparing more and more offers.
As a consequence, the consumer market becomes more segmented, and most potential customers fit into different market segments at the same time.
Salience of the Web
Air travel and tourism are among the most important application domains on the World Wide Web. Estimates state that approximately 33 percent of Internet transactions are tourism-based.3 A Delphi study with forty participants from German-speaking countries estimates that within the next ten years 30 percent of the tourism business will be Internet-based.4 According to Forrester Research, $64 billion in travel will be booked on the Internet by 2004.5 While forecasts vary depending on the sources, the overall trend
3. Strassel (1997).
4. Schuster (1998).
5. Keith Regan, "European Airlines Form Net Travel Agency," E-Commerce Times, May 11, 2000 (available online at www.ecomercetimes.com/perl/story/?id=3307).
is hardly disputed. Reasons for the prominent position of travel revenues on the Internet include
—the sheer volume of overall revenues;
—the salience of rich and topical information for customers;
—the fact that tourism suppliers address a global audience, and almost every Internet user is a potential customer;
—intense competition on the web among incumbents and new players, which has led to the emergence of numerous leading websites that offer a wealth of multimedia information and efficient transaction support.
For airlines, online sales enable an extension of their yield management activities as it becomes easier to sell remaining capacity on a last-minute basis and to differentiate prices even further. Customers can benefit from easier access to a wealth of current information, efficient transactions, and increased market transparency.
Industrial Trends Triggered by the Web
Figure 5-2 depicts the traditional scheduled flight ticket sales channels in tourism. Travel agents intermediate transactions between airlines and customers. CRSs and GDSs intermediate the relationship between tourism principals and customers with services based on the capabilities of data storage, information retrieval, and transaction processing (an early form of cybermediation between supplier and retailer developed in the 1970s). Some airlines, especially low-cost ones such as Southwest Airlines, focus on direct sales. We now reconstruct and illustrate the extension of this structure in four stages, using prominent industry examples.
Web-Based Disintermediation: Lufthansa
While airlines have a tradition of trying to exploit direct sales—for example, via call centers or control of travel agencies—the web has significantly extended their possibilities to do so. Almost all major airlines operate a website with direct sales offerings. As they are the owners of the products, airlines in some instances have changed the rules for travel agencies, either by reducing sales commissions or by exclusively offering products that are not available via travel agents, for example, ticket auctions. Since August 1997, Lufthansa has been regularly auctioning off selected flight tickets via its website InfoFlyway. Once a month, auctions run for a full day, from 10 a.m. till 10 p.m.
Figure 5-2. Traditional Industry Structure in the Scheduled Airline Market. Airlines sell through CRS/GDS and travel agents to the customer.
Fifty separate auctions take place during an auction day. During one auction, which lasts for approximately ten minutes, one set of tickets is auctioned off. On average, there are 120 participants in the virtual auction room, of which about twenty are active bidders. An auctioneer tries to induce participants to continue the competitive bidding process. The Lufthansa auction has ascending prices, but the ticket list price is taken as an upper limit. Successful bidders are called after the auction to confirm the price and verify the credit card information. Typical bidders are participants in Lufthansa's frequent flyer program, Miles & More, and customers who use Lufthansa's website regularly. The tickets offered are for carefully selected seats on less frequented flights to attractive destinations. Auction tickets, which often are sold at a significant discount, are frequently used for an additional weekend trip or as presents. Lufthansa has included offerings from its partners, such as holiday packages, in the auctions and is exporting the auction to countries outside Germany. The airlines have started to compete more openly with their traditional distribution partners, the travel agencies, and they have invested serious amounts of money in their web activities. Lufthansa, for example, has received awards for outstanding web quality. However, so far the overall success of these markets has been limited. Delta had to revoke its $2 fee for offline bookings after immediate massive protests from travel agents. In 1999 Lufthansa's combined direct sales activities—over all channels—accounted for 7–8 percent of their ticket sales; the target for 2003 is 14 percent.6 While this ratio is in line with the expectations of other major airlines, U.S.-based Southwest Airlines is moving far more aggressively and successfully, with 27 percent of all tickets purchased via the web in January 2000.7
6. Marcussen (1999a); Marcussen (1999b).
7. Paul A. Greenberg, "Southwest Projects $1B in Online Sales," E-Commerce Times, February 29, 2000 (available online at www.ecommercetimes.com/perl/story/?id=2610).
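The mechanism just described, an ascending auction with the published list price as a ceiling, can be sketched in a few lines. The bidder valuations, starting price, and increment below are invented; the sketch captures only the stopping rule, not Lufthansa's actual implementation.

```python
import random
random.seed(3)

def run_ascending_auction(list_price, n_active_bidders, start=99, increment=10):
    # English auction capped at the list price, as in the Lufthansa example:
    # bidding stops when no one tops the standing bid or the cap is reached.
    valuations = [random.uniform(0.3, 1.1) * list_price
                  for _ in range(n_active_bidders)]
    price, leader = start, None
    while True:
        challengers = [i for i, v in enumerate(valuations)
                       if i != leader and v >= price + increment]
        if not challengers or price + increment > list_price:
            return leader, price
        leader = random.choice(challengers)
        price += increment

winner, price = run_ascending_auction(list_price=450, n_active_bidders=20)
print(f"ticket set sold to bidder {winner} at {price} "
      f"(list price 450 is the ceiling)")
```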
Web-Based New Roles for Intermediaries
It is not only the airlines that are trying to take advantage of the opportunities of the web. Traditional travel intermediaries, namely travel agents and the CRSs and GDSs, have entered the online market as well.
Travel agents: Rosenbluth. While travel agents account for 80–95 percent of all ticket sales, their margins have been reduced by the airlines, and they are quickly losing market share to competitors in the online market. Travel agencies appear to be especially vulnerable because they have operated for years in a fairly protected market, and because the concentration level in the travel agency market is still quite low, despite a recent phase of consolidation. Only a few travel agencies are using differential pricing to distinguish attractive from less attractive customers. However, travel agents have started to use the web to develop differentiated value propositions and reinforce their market position. Rosenbluth Travel is a prominent example of a travel agency that is responding to the increasing (price) competition from airlines with innovative value propositions to customers. By being able to direct huge volumes of travel toward different airlines, it is trying to maintain its market position and influence over airlines. Rosenbluth is one of the largest U.S. travel agencies, with a focus on the business traveler segment. It has a long tradition of providing a best-fare analysis and rebooking customers in order to secure the best deals. Moreover, it pursues a strategy of combining customer relationship management with innovative web applications "to be people-focused and technologically savvy."8 Next to its corporate website, which can be seen as an online extension of its traditional services, Rosenbluth also owns a separate online brand, biztravel.com. The current price strategy reflects an overall strategic reorientation. In order to underscore the notion of comprehensive travel management and long-term customer benefits, Rosenbluth is charging its customers the net ticket price (without the travel agent's commission) and adding a service fee.9 By this means the customer gains better insights into the cost structure and how he or she can influence costs.10 While this strategy is not restricted to an online channel, service fees reflect the underlying cost structure and are lower for online transactions. Customers are offered several features
8. www.rosenbluth.com/content/tm/why.htm.
9. Rosenbluth and McFerrin Peters (1998).
10. For a comparison of German travel agents' strategies, see H.-J. Klesse, "Reine Sinnestäuschung," Wirtschaftswoche 26 (June 24, 1999): 96–100.
online, which enable them to monitor their accounts and have Rosenbluth apply the customers' internal business rules (travel budgets, reimbursements, and so on) when arranging travel. Biztravel.com drew headlines when it introduced compensation for delayed flights on selected airlines.11 The compensation clearly underscores Rosenbluth's innovative customer service and customer value strategy. However, Rosenbluth's strategy is not an option for the huge number of smaller travel agencies that are operating in the consumer segment.
CRSs and GDSs: Travelocity. Computerized reservation systems (CRSs) and later global distribution systems (GDSs) were developed in the 1960s by airlines to make internal booking information available to travel agents. Global distribution systems operate as industry platforms, are usually owned by several airlines, and handle the bulk of booking transactions for scheduled flights, car rentals, and international hotel chains. Sabre, American Airlines' CRS, launched EaasySabre in the late 1980s in an attempt to make its offerings available to consumers—with limited success. In the past, CRSs and GDSs have built so-called "online travel supermarkets" for consumers using web technology. Since the October 1999 merger with Preview Travel, Travelocity, 70 percent owned by Sabre, has attracted over 17 million registered users and has become one of the most successful online players, with over $1 billion in sales in 1999. Travelocity offers, on the one hand, a one-stop shopping site for travel and tourism products, with a wealth of travel-related online content, and, on the other hand, a highly efficient transaction mechanism based on advanced technology, customer profiles, and superior interaction design. As Travelocity has become the equivalent of the automatic teller machine (ATM) for the travel industry, it is striving to increase customer retention and margins. Travelocity's marketing alliance with Priceline.com and America Online (AOL) illustrates two trends, the first toward multiple alliances (even with companies that are perceived as competitors), and the second toward combining multiple trading and pricing mechanisms on one platform.12 Travelocity and the second major online supermarket, the Microsoft spinoff Expedia, have emerged as winners of a fierce shakeout
11. Paul A. Greenberg, "Online Travel Site Offers Guarantee," E-Commerce Times, May 24, 2000 (www.ecommercetimes.com/perl/story/?id+3399).
12. Kambil, Nunes, and Wilson (1999).
among the first generation of online travel sites, while ITN has joined American Express Travel online.
Web-Based New Intermediaries—Cybermediaries
While industry incumbents have developed different business models and (re)positioned themselves in the online market, new players, so-called cybermediaries, have entered the market and positioned themselves prominently as consumers' advocates with innovative pricing models (demand collection, demand aggregation, reverse auction).13 TravelBids and Priceline are two outstanding examples of these new entrants. (As of December 2000, TravelBids was not operational, and no date had been set for resumption of service.)
Reverse auctions: TravelBids. Van Heck and Vervest distinguish between sales and procurement auctions.14 While the Lufthansa auction is a typical case of a sales auction, calls for tenders are traditional examples of procurement auctions. Customers advertise specifications of their needs and ask potential suppliers to submit competing bids. So far, procurement auctions, also referred to as reverse auctions, have mainly been limited to business markets. The economic reasons are the high cost to the customer of advertising the call for tender and of selecting the best bid, and the cost to the (potential) supplier of submitting bids. Reverse auctions for consumer goods are a rather new pricing model that has become operational as a result of the Internet. TravelBids is an example of such a reverse auction. Customers' requests are posted on TravelBids, which is a specialized electronic market. While in the Lufthansa auction (potential) customers submit bids for flights, in reverse auctions travel agents submit bids for customer orders. In contrast to the Lufthansa auction, customers using TravelBids have a wide range of attributes that they can specify or intentionally leave open. They take an active role by specifying their preferences for tourist offerings. In this market, all bids are visible for everyone to see; hence prospective customers can view other listings and see the results. The bidding period can be set at up to seventy-two hours; unsuccessful bids can be repeated. TravelBids's fee of $10 for successful bids is split between the travel agent and the customer. On the supplier side, travel agents bid to fulfill the demand. They use their knowledge to identify flights that fit the customers' preferences and use part of their commission to gain additional orders.
13. See Klein and Loebbecke (2000).
14. Van Heck and Vervest (1998).
Demand collection: Priceline. In most markets, consumers have little opportunity to signal the amount of money they are willing to pay before they actually make a purchase. This leads either to consumers' surplus, when the actual price is below the customers' willingness to pay, or to deadweight loss, when it is above their willingness to pay.15 The web makes feasible pricing strategies that combine personalization and versioning. So-called demand collection systems provide a platform for consumers to signal their price preferences for a class of products using certain specification criteria. Those signals are forwarded to suppliers, which can decide individually whether they can and want to fulfill those limited purchase requests. Based on the assumption that supplier-side fixed prices do not always lead to an optimal allocation of products and services, Priceline set up a market platform, initially for airline tickets. The product range is continually being expanded and now includes, for example, hotel rooms, new cars, and mortgages. Customers can specify their preferences—including the price. Priceline then advertises these offers to airlines, car companies, or financial services companies, which can decide whether they want to fulfill this additional demand at the listed price. Airline customers, however, do not give a detailed specification; they specify only the day, the place of departure, and the place of arrival, and request a flight operated by a major airline. In this way, airlines have sufficient scope to fulfill the demand, if they wish, and the chances are increased that the offers will be met. Priceline typically earns a commission of $10 for every ticket sold. The specified offers are forwarded sequentially in a highly efficient and patented process to potential suppliers. Customers' offers are binding and are backed by a credit card authorization. Airlines then decide whether they want to take additional customers at the listed price, depending on their current load factor and price policy. Feedback is given to the customers within hours. In contrast to auctions, Priceline has set up a private market. The demand is actively advertised to airlines, but neither the offers nor the deals are made public. Suppliers can decide based on internal policies; they do not risk any kind of signaling effect, which a flexible price strategy otherwise might send to the market. Priceline is called a demand collection system because it functions as an intermediary that collects customers' requests for products and services at a price different from that advertised. This demand
15. Bakos and Brynjolfsson (1998).
typically is not articulated and thus could not be fulfilled. Priceline was granted a U.S. patent for its business model. At first glance, Priceline does not appear to follow the typical model of value-based pricing, because it does not reflect an active seller's pricing strategy. However, considering that individual customers specify their price preference for a basic service or product (without overall control of the specific product features), this explicit preference can be taken as an expression of the customer's individual valuation of the product or service. By differentiating prices based on the customers' explicit price preferences, Priceline achieves a high level of allocation efficiency. The customers, however, face the risk of receiving products or services whose features (except for the price) do not exactly meet their expectations. While Priceline has been admired for its innovative business model, its share price suffered severely during 2000. Airlines with very little spare capacity were hesitant to accept customers' prices, and in a tight market most customers showed stronger preferences for convenience and choice.
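The demand-collection flow described above can be sketched as a simple sequential matching process. The acceptance rule, load factors, and fares below are invented; in particular, the threshold tied to the load factor is only a stand-in for whatever internal policies an airline actually applies.

```python
from dataclasses import dataclass

@dataclass
class Offer:
    # A binding customer request: route, day, and the price the customer names.
    origin: str
    destination: str
    day: str
    named_price: float

@dataclass
class Airline:
    name: str
    load_factor: float   # share of seats already sold on the relevant flights
    full_fare: float

    def minimum_acceptable(self):
        # Invented policy: the emptier the plane, the deeper the acceptable
        # discount on the full fare.
        return self.full_fare * (0.2 + 0.5 * self.load_factor)

def forward_sequentially(offer, airlines):
    # The request is shown to one airline at a time; the first to accept wins.
    for airline in airlines:
        if offer.named_price >= airline.minimum_acceptable():
            return airline.name
    return None

offer = Offer("FRA", "JFK", "2001-07-14", named_price=340.0)
carriers = [Airline("Carrier A", 0.92, 800), Airline("Carrier B", 0.50, 750)]
match = forward_sequentially(offer, carriers)
print("accepted by", match or "no airline; offer expires")
```

Because neither the offers nor the deals are published, the seller can accept a deep discount from one customer without sending any price signal to the rest of the market, which is the point the text makes about Priceline's private market.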
Web-Based Airline Strategies: Orbitz, Otopenia, and Hotwire
The success of Travelocity and other online travel supermarkets is attributed to the ease of booking combined with comparison shopping features. Faced with this success, major airlines have formed alliances that mimic the supermarket business model and attempt to compete aggressively with them. In January 2000 a major travel alliance was formed among twenty-seven U.S. airlines, including the five largest carriers—American Airlines, United Airlines, Northwest Airlines, Continental Airlines, and Delta—in order to build a travel website or portal, code-named Orbitz, to offer discounted fares and ticketing directly to travelers. Joint investments are expected to exceed $100 million.16 Mirroring the efforts of their U.S. competitors, eleven European airlines, among them Air France, British Airways, and Lufthansa, announced plans to launch an online travel agency, code-named Otopenia, to attract a significant proportion of total online travel sales in Europe. Backed by six of the largest U.S. airlines, Hotwire.com was launched in late October 2000 as an online discount travel site, focusing on the sale of empty airline seats. With rebates of up to 40 percent but at fixed prices, 16. Business Week Online, November 15, 1999.
the airlines involved attempt to sell at the last minute those seats that otherwise would have become unused inventory, without affecting their established price schema.17 The business model mimics Priceline’s; it is, however, based on a fixed-price strategy. Although neither Orbitz nor Otopenia was operational at the end of 2000, both have already been the subject of legal scrutiny; the American Society of Travel Agents (ASTA) and the German travel agents’ association Deutscher Reisebüro- und Reiseveranstalter-Verband have asked the respective antitrust agencies to review the announced alliances.
Analysis
The examples illustrate the degree to which the opportunities created by the web have contributed to the transformation and development of distribution systems in the air travel industry. Figure 5-3 outlines the current structure as leveraged by the potential of the Internet and the web. Shaded shapes indicate online players; ovals indicate new players. The air travel industry today, like almost any other industry, is characterized by an increasing number of distribution channels. Triggered by increased information and communication possibilities combined with lower costs, a mixed-mode structure has emerged that represents a continuum of combinations of traditional channels with dis-, re-, and cybermediation, as illustrated by the sequence of developments described above. On balance, two trends have to be distinguished: (1) the rise of new intermediaries, mainly successful on a large scale in the role of online travel supermarkets, parallel to the push of airlines into various forms of direct selling; and (2) decreasing market transparency and increasing concentration, contrary to the prevailing electronic market rhetoric.
The Rise of New Intermediaries
The new information and communication means have allowed niche players such as the Lufthansa auction system, Priceline, or TravelBids to become established on the market. However, it has become obvious that these business models are limited to comparatively low sales volume. 17. Clare Saliba, “Hotwire.com Leaps into Net Travel Fray,” E-Commerce Times, October 25, 2000 (www.ecommercetimes.com/perl/story/?id=4637).
Figure 5-3. Cybermediated Industry Structure in Air Travel
[Figure: a distribution-channel diagram linking airlines, CRS/GDS, travel agents, and customers; online and new players include airline online alliances, travel supermarkets, online travel agents, TravelBids, airlines’ own online sites, and Priceline.]
Further, they are rather dependent on the airlines’ sales difficulties—for instance, only when airlines have a significant number of unsold seats do they offer them to companies such as Priceline. On the intermediary side, the players currently showing more promising, proven market success are online travel supermarkets such as Travelocity. Importing a very well-established concept from the retailing of physical goods, Travelocity not only sells products from all major airlines (“brands”) but also employs a number of distribution systems, from fixed-price tickets to Priceline’s name-your-price model. Travelocity’s current market success, coupled with the consistency of its strategy with proven retail concepts, leads us to see a flourishing future for companies of this sort. The main competition to the online supermarkets will come not from niche players such as Priceline that have gained short-term attention in the press and partially on the stock market but from airlines that are forming alliances in their efforts to increase direct online sales. Because of their priv-
ileged access to their own product, their behavior has been criticized as predatory disintermediation.18 The pros and cons of intermediary versus direct selling seem rather obvious: intermediaries are dependent on having tickets to sell, but once procurement is secured, clear decision structures are coupled with a proven, very high degree of domain-specific technical experience and expertise. Airlines have guaranteed access to the products to be sold—the flight seats—but whether efficient and effective coordination among competing airlines can be successfully managed remains to be seen.
Decreasing Price Transparency and Increasing Concentration: Market Failure?
Economics provides numerous reasons for a move toward electronic markets characterized by low transaction costs, high transparency, and low concentration.19 However, players like Priceline, TravelBids, and the Lufthansa auction system obviously exploit the limited transparency available to consumers, either directly (the Lufthansa auction) or by letting the customer believe that the service provider benefits from high transparency and thus has access to the best offer. However, as stated above, the sales figures for all those models are low. Currently, a few players have a high degree of market share: the online supermarkets Travelocity and Expedia. Looking at sales figures, not at the number of players, concentration has clearly increased. Further, for the vast majority of bookings, customers still get a number of different price quotes for the same flight depending on the sales channel they choose. Market transparency is clearly limited. It has been low for a long time, whether because the back-end matching operations are so complex or because the companies in power have good business reasons not to offer too much actual transparency. Maintaining the belief in transparency has been demonstrated to be a good strategy!
Conclusions and Policy Implications
Our analysis of the recent transformation in the travel industry can be summarized in three major effects of the web. First, the web facilitates a 18. Berghel (2000). 19. Malone, Yates, and Benjamin (1987).
convergence of globally dominant strategies. More than before, successful or plausible business models are subject to almost immediate imitation. Second, the web has emerged as a realm for numerous alliances that are crisscrossing the boundaries of existing alliances and industries. Even in an industry characterized by a tradition of alliances, such as the United Airlines–led STAR alliance, the extent of new alliances is surprisingly high.20 The web can be used and is used to reduce price transparency as the degree of price differentiation and the number of potential outlets rise. And third, the web drives concentration despite the emergence, sketched above, of a multitude of online travel sites, estimated at around 1,000 worldwide. In most segments of the travel industry, the web effectively has become a “winner takes all” game: razor-thin margins on one side (the typical commission for online agents is $10 per booked flight) and high set-up costs for advanced websites as well as significant marketing expenses require a high transaction volume. Furthermore, since the first quarter of 2000 the capital market has favored companies that can show solid earnings, and only the big players will be able to negotiate favorable deals with the airlines. These conclusions send a clear message to policymakers: strong signals have to be sent to the market regarding which level of concentration and, even more important, which types of monopolistic behavior will be accepted and which will not. Given the dynamics of business models, types of alliances, and creativity of managers, the market has to be monitored carefully. Complaints by industry players and customers have to be taken seriously, and quick response mechanisms have to be designed. Only then will policymakers be able to allow the forces of the market to work while limiting unintended consequences of monopolistic or predatory behavior.21
References
Bakos, Yannis, and Eric Brynjolfsson. 1998. “Aggregation and Disaggregation of Information Goods: Implications for Bundling, Site Licensing, and Micropayment Systems.” In Internet Publishing and Beyond: The Economics of Digital Information and Intellectual Property, edited by Brian Kahin and Hal Varian. MIT Press.
20. Intelligent Enterprise, “News & Analysis: Sleeping with the Enemy,” December 5, 2000 (www.intelligententerprise.com/001205/news2/shtml). 21. Berghel (2000).
Berghel, Hal. 2000. “Predatory Disintermediation.” Communications of the ACM 43 (5): 23–29.
Kambil, Ajit, Paul F. Nunes, and Diane Wilson. 1999. “Transforming the Marketspace with All-in-One Markets.” International Journal of Electronic Commerce 3 (4): 11–28.
Klein, Stefan, and Claudia Loebbecke. 2000. “The Transformation of Pricing Models on the Web: Examples from the Airline Industry.” In Proceedings of the 13th International Bled Electronic Commerce Conference, edited by Stefan Klein, Bob O’Keefe, Joze Gricar, and Mateja Podlogar, 1: 331–49. Kranj, Slovenia: Moderna Organizacija.
Malone, Thomas W., JoAnne Yates, and Robert I. Benjamin. 1987. “Electronic Markets and Electronic Hierarchies.” Communications of the ACM 30 (6): 484–97.
Marcussen, Carl H. 1999a. “The Effects of Internet Distribution of Travel and Tourism Services on the Marketing Mix: No-Frills, Fair Fares, and Fare Wars in the Air.” Information Technology & Tourism 2 (3/4): 197–212.
———. 1999b. “Internet Distribution of European Travel and Tourism Services—The Market, Transportation, Accommodation, and Package Tours.” Research Report 18. Research Centre of Bornholm.
Pompl, Wilhelm. 1998. Luftverkehr—Eine ökonomische Einführung. 3d ed. Berlin: Springer.
Rosenbluth, Hal F., and Diane McFerrin Peters. 1998. Good Company—Caring as Fiercely as You Compete. Reading, England: Addison Wesley.
Schuster, Andreas G. 1998. “A Delphi Survey on Electronic Distribution Channels for Intermediaries in the Tourism Industry: The Situation in German Speaking Countries.” In Information and Communication Technology in Tourism, edited by Dimitrios Buhalis, A. Min Tjoa, and Jafar Jafari, 224–34. New York: Springer.
Strassel, Kimberley A. 1997. “E-Commerce Can Be E-Lusive.” Convergence 3 (3).
Van Heck, E., and P. Vervest. 1998. “Web-Based Auctions: How Should the Chief Information Officer Deal with Them?” Communications of the ACM 41 (7): 99–100.
Werthner, Hannes, and Stefan Klein. 1999. Information Technology and Tourism—A Challenging Relationship. Vienna: Springer.
———. 2000. “ICT and the Changing Landscape of Global Tourism Distribution.” EM—Electronic Markets 9 (4): 256–62.
WTTC. 1997. Travel & Tourism—Creating Jobs. Brochure for the Summit of the Eight. London.
6
Confronting the Digital Era: Thoughts on the Music Sector
Perhaps the recorded music industry has experienced the most prominent, if not the most tumultuous, changes as a result of the widespread adoption of digital technologies. For several years Internet retailers have permitted home shopping enabled by digitized listening station samples; Internet radio offers thousands of genre- and artist-specific channels where consumers hear music that is rarely available on broadcast radio and purchase the music they hear by clicking the ubiquitous “buy” button; and of course there is Napster, the mystically compelling peer-to-peer music distribution solution that forced major record companies to recognize that the Internet must be embraced because it will not be defeated. This chapter, a brief snapshot of today’s digital music industry, characterizes and categorizes these and other changes wrought by digital technologies by comparing them to existing industry practices and highlighting emerging opportunities. Of course, the reader must be aware that music-loving software developers are working feverishly on new technologies, while others are working to block or disable those technologies—the race to develop the “killer” business model that attracts both customers and revenue is heatedly under way. Though the precise outcomes are still unknown, it is likely that regardless of which business model wins, creators and consumers will gain economic
power among the participants in the music value chain. As for the middlemen—some will be replaced; others will evolve (or acquire the new ones); still others will remain the same. And if the market is truly growing as the digital cognoscenti believe, then many of the existing players will continue and perhaps thrive, while room will be created for profitable new entrants. Offering an initial sweep of the impact of digital technology on the music sector, the chapter addresses the historical structure of the music industry, then turns to the opportunities presented by digital technology, and finishes the analysis with a review of the reactionary strategies that prop up existing major industry players.
Historic Music Industry Functions
The sound recording–retail music industry includes a number of discrete functions: songwriters write; recording artists (assisted by engineers and producers) record “masters”; manufacturers produce records, tapes, and CDs from the masters; radio stations perform and promote; retailers advertise and sell; and recording artists tour. Historically, almost all the functions—when done well—have required record company investment: the record company (1) owned or leased the studio and paid the producer and engineer; (2) owned or paid a manufacturing facility; (3) promoted the album (lawfully or unlawfully) to radio stations; (4) paid a distributor to distribute and promote the album at retail (including by financing cooperative advertising); and (5) produced or otherwise had a strong hand in the tour. Traditionally, artists contracted with record labels to receive access to sophisticated sound studios, to take advantage of the marketing capacity and financial resources of the firm, to be relieved of production and logistics, and to receive needed business advice. Record companies, in turn, using a mixture of subcontracted and in-house manufacturing sites, made the artist’s music available to distributors. These distributors coordinate supplies between the record companies and the retail merchants who sell music to end users. Other critical music industry functions have for decades been performed by copyright royalty administration collectives, notably the American Society of Composers, Authors and Publishers (ASCAP) and
Broadcast Music, Inc. (BMI), which collect and distribute public performance rights for songwriters and music publishers, and the Harry Fox Agency, which collects and distributes reproduction or mechanical rights on behalf of songwriters and music publishers.
Digital Technologies Create Opportunities
Today, however, due largely to advances in digital technologies, the preeminent roles of record companies and collective licensing and royalty administration organizations appear to be under siege. Predigital, the recording artist needed a multimillion-dollar recording studio to produce sophisticated recordings. Now, through the benefits of a production revolution, a professional, marketable recording can be produced in a basement studio driven by PCs and only several thousand dollars of equipment. Similarly, recording artists needed cooperative recording companies to assist with marketing, distribution, production, and retail. Today, the Internet is capable of providing a recording artist with a decent living if he or she can creatively market to a waiting fan base. Recording artists have historically suffered—largely due to their own youthful enthusiasm to create and gain distribution—from one-sided contracts with record companies that are reminiscent of those signed by actors under the famed motion picture studio system. Typical contracts with major record labels (1) are long-term, lasting for six to seven albums over an indefinite time frame (often extending beyond California’s seven-year limit on personal service contracts); (2) transfer ownership of all “masters” to the record company; and (3) require repayment to the record company of all production as well as half (or sometimes all) promotional costs before payment of royalties to the artist.1 Today, however, powered by digital technologies, the education that comes with the public spotlight, and piracy, many recording artists are seeking and gaining new levels of control over their own art and commerce. For example:
—The ability to purchase studio-quality equipment for well under $10,000 enables recording artists to produce sophisticated masters without tying themselves to a well-heeled investor or record company and is an 1. Major record company labels are those owned or controlled by the world’s five major sound recording companies: BMG (a Bertelsmann company), EMI, Sony Music, Warner Music Group (an AOL–Time Warner company), and Universal Music Group (now owned by Vivendi Universal).
essential precondition for the transformation of a physical good into a digital product moving directly from artist to consumer.
—Free software and home computers that allow inexpensive self-production of a few hundred or even a few thousand CDs enable recording artists to satisfy a larger audience independently than was previously possible.
—Specialty websites such as MP3.com allow artists to market directly to their fans, or generally to music consumers who search the Internet to hear “unsigned” talent.
None of these opportunities matter to blockbuster million-album acts, but perhaps they will provide just enough opportunity to up-and-coming acts to enable resistance against the pull of long-term record company bondage, and ultimately a better contract when the timing and offer improve. For the blockbuster artists, one opportunity is to sell tickets for performances directly from one’s own website and thereby reduce reliance on (and presumably payment to) the promoter or venue that has historically been responsible for promoting and selling tickets. Additionally, the piracy threat of Napster and other peer-to-peer file-sharing technologies has energized recording artists toward collective public education efforts promoting the value of intellectual property. Several prominent recording artists, artist managers, and independent allies are trying to build on that common interest to create a recording industry guild or labor union similar in nature to the Directors Guild. If this effort is even moderately successful, the power transfer from recording companies to artists will quicken, and it is likely that “free agency” will be as prevalent in sound recording as in movies and professional sports. Consumers initially were empowered by Internet search engines that enabled them to “pull” music they wanted, either from Internet retail or Internet radio, rather than having music “pushed” on them by market-driven radio station program directors and record label promotions staff. Recently, consumers have been more attracted to Napster and other file-sharing programs that enable free downloads of consumer selections. And just catching on (and building strength as the Internet infrastructure develops) are consumer-influenced Internet radio websites (for example, Radio Sonicnet, Launchcast.com, MusicMatch Radio, and Echo Networks) that permit individuals and small groups to listen to music and artist types they specify, to hear new music that closely matches their registered preferences, and to avoid music they do not enjoy. Of course, consumers’ need for portability to enable listening in the car, at the office, and in various rooms in the house is supported by CD
recorders that are included with new PCs, dual-deck CD recorders sold by consumer electronics companies, MP3 recorders and players, and PCs enabled by RealNetworks and MusicMatch software jukebox products. In the future, however, when stable wireless broadband connectivity is ubiquitous, the killer application may be consumer music “lockers,” which enable consumers to hear their purchased and remotely stored music anywhere that an Internet connection exists. This application also supports online retail, as it enables the retailer to deliver a sound recording to the consumer’s “locker” so it can be enjoyed immediately upon purchase, as well as ship a CD to the consumer if both formats are desired. What does all this consumer opportunity mean to music industry economics? It may mean the ultimate demise of albums as consumers choose to download single songs instead (of course, singles might get very expensive if record companies and artists have to earn album-sized profits on blockbuster singles). It may mean that CD manufacturing is less important, again reducing artists’ reliance on deep-pocket industry giants. And certainly the consumers’ plethora of choices will drive sales of multifaceted consumer electronics or computing devices that enable consumers to choose codecs, formats, and storage media, so long as there is backward and forward compatibility. And, of course, all this digital transmission and storage married to persistent Internet connectivity enables remarkably accurate knowledge of consumer activity and the associated marketing and royalty collection opportunities—if they can be managed carefully and nonintrusively. Moreover, the marketing activity can be done by large companies or small through traditional or guerilla techniques: again, creating new opportunities for creative artists and nimble small labels. Additionally, individual songwriters and music publishers are empowered by digital technologies that promise more accurate tracking of public performances and more accurate collection and distribution of royalties. In today’s broadcast industry, copyright collection and administration organizations audit a radio station by listening to each station a few days a year and extrapolating to ascertain what compositions are performed through the entire year and how that station’s annual royalty payments should be divided among songwriters and music publishers. In the world of Internet radio, through the use of software applications that the most sophisticated webcasters will eventually use in order to sell advertising, performance data can be measured accurately and royalties distributed exactly. Moreover, the data can be shared directly with artists and publishers, rather than solely with the collection societies. The result will
be a loss of information advantage for collection societies (which are, today, not accountable to their members for delivery of data in support of royalty allocations) and an offsetting advantage to individual songwriters and music publishers, who will feel significantly less allegiance to the organization that now is a necessary partner for them to be paid. Additionally, when software tracking is the norm, collection organizations will find it harder to maintain current pricing, which is generally 15 percent of gross licensing revenue as payment for royalty collection and distribution. An additional opportunity, first for the largest music publishers and then for all that follow, is global direct licensing. Today, most music publishers are paid by territorial collective licensing organizations: ASCAP, BMI, and the Harry Fox Agency in the United States, SIAE in Italy, SACEM in France, and GEMA in Germany. These territorial monopolists reign supreme in their own country, but most are prohibited from licensing globally, and almost all are incapable of doing so. Additionally, the collection societies transmit funds to publishers slowly (perhaps six months after performances and collections occur), do not provide good data to publishers, and—like all monopolists—maintain high prices and benefit from low price elasticity. In contrast, Internet radio and retail present opportunities for worldwide source licensing that eliminates the collectives. At minimum, the savings to be divided by copyright owners and users are 10 percent; figuring in international rights management, where at least two societies get a bite from every royalty, the savings to be divided can easily approach 20 percent. If an Internet radio programmer or retailer offers a copyright owner 10 percent more than is currently paid by the collection organization, provides the money in ninety days rather than 180, and throws in a bit of valuable marketing data, what music publisher would decline a global license?
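As a rough illustration of the arithmetic behind that claim, the sketch below compares a publisher’s net royalty when one or two societies each deduct the roughly 15 percent administration fee cited above. The revenue figure and the assumption that each society in an international chain charges a similar fee are illustrative, not drawn from the chapter.

```python
# A minimal sketch, assuming a hypothetical gross royalty and equal fees per
# society, of the savings from bypassing collective administration. The 15
# percent fee comes from the text; everything else is illustrative.

GROSS = 100_000.0   # hypothetical gross licensing revenue, in dollars
FEE = 0.15          # typical collection and distribution fee cited in the text

def payout_after_societies(gross: float, societies: int) -> float:
    """Net royalty after each society in the chain deducts its administration fee."""
    net = gross
    for _ in range(societies):
        net -= net * FEE
    return net

domestic = payout_after_societies(GROSS, societies=1)        # 85,000: one society
international = payout_after_societies(GROSS, societies=2)   # 72,250: two societies take a bite

# The fees absorbed (15,000 domestically, 27,750 under the equal-fee assumption)
# are the margin that a direct, global web license could split between copyright
# owner and licensee; the chapter puts the realizable savings at roughly 10 to
# 20 percent of gross royalties.
print(f"net via one society:   {domestic:,.0f}")
print(f"net via two societies: {international:,.0f}")
```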
Industry Leaders Fight Back
Basic economics dictates that advances in the relative strengths of recording artists, consumers, songwriters, and music publishers will reduce the relative strength of the members of today’s industry oligarchies: in America, the five major recording companies, ASCAP and BMI, and the Harry Fox Agency. Each, however, is responding aggressively in the face of these threats, with the final outcomes far from clear. Changes in four areas of the sector—recording companies, manufacturing, distribution, and royalty
administration—demonstrate the capacity of traditional industry leaders to harness new technology to maintain their economic positions.
Recording Companies
After several years of fighting the Internet and the MP3 recording format with litigation, and then trying unsuccessfully (so far) to develop industrywide worldwide transmission and recording security standards, the five major recording labels are going their separate ways and experimenting with various download formats and security standards. Although consumers will reject proprietary standards that are not seamlessly interoperable, there are many opportunities for recording companies to reduce costs, improve customer service, and maintain primacy in the popular music value chain. It is not simply a battle of newcomers versus incumbents; the major recording companies are vying against the newcomers and each other for technology advantage—perhaps a mistake as the record companies’ expertise has traditionally been artist development and promotion, not technology.
Decentralized Manufacturing
There are already a number of experiments under way to enable in-store compact disc production—for example, Red Dot Network—or controlled retail downloading (as distinguished from home downloading)—for example, Music Etc. If successful, these and similarly managed home downloading opportunities will permit recording companies to adopt just-in-time manufacturing practices, market directly to consumers, and significantly reduce their reliance on the distributors of physical product to retail that have also served as expensive contract promotional agencies.
Leveraging Copyrights to Control Distribution and Resale Pricing
For several years the sound recording industry has suffered antitrust scrutiny, most recently illustrated by the industrywide agreement with the Federal Trade Commission to cease Minimum Advertised Pricing requirements that were associated with cooperative advertising support. Now, however, the advent of digital downloads appears to offer the recording industry another bite at the resale price maintenance apple.
Specifically, at least two of the five major recording companies are changing traditional business practices with regard to the distribution of digital downloads. Rather than selling product to retailers for resale to consumers (or selling to distributors, who sold to retailers, who sold to consumers), these record labels are licensing online retailers and commissioning the retailers to sublicense digital files to consumers. This licensing model accomplishes several recording industry goals, permitting
—effective and perhaps lawful enforcement of minimum resale pricing, as antitrust laws historically provide more latitude in this regard to intellectual property licensors than to wholesalers;
—effective imposition of technological requirements, including the adoption of specific download distribution technologies, including proprietary technologies;
—effective imposition of requirements to share customer data, which may enable the record label to build a direct relationship with the consumer and ultimately undermine the retailer’s economic position;
—the record company to (1) limit consumers’ ability to copy the recording, even for noncommercial personal uses (such as making a copy to listen to in the car) that have historically been viewed as “fair use”; and (2) perhaps eliminate the used music market, which is an increasing threat since the introduction of digital durable goods that can be resold globally through Amazon or eBay.
It remains uncertain whether government authorities will question the practice of leveraging digital downloads to undermine historic antitrust limitations, but until they do this may be the record labels’ most effective strategy to maintain their profitable economic position. As for limiting consumer enjoyment, at least one congressional committee has already indicated an intention to hold hearings in 2001 about consumer fair use and the implications for consumers of music industry practices and technologies.
Copyright Royalty Administrators
ASCAP, BMI, Harry Fox, and their global brethren are also fighting back, largely by developing cross-border cooperative agreements that set rules for international licensing and royalty administration and ensure that each society will retain its opportunities to maximize administration fees. The success of this strategy is by no means certain, as it will not eliminate competition between societies to administer international licensing for entities that have
the flexibility to choose their home country (for example, if U.S. companies need to select a European country in which to locate the local licensing entity). Additionally, cooperative agreements among national monopolists are likely to attract significant antitrust scrutiny.
Conclusion
Digital technologies have the potential to redefine the character of major segments of the music industry. The task of getting the artist’s product to the consumer is changing as webcasting, digital music transfers, and remote storage increasingly pervade the sector. Artists may attempt to short-circuit the traditional production system through direct music provision to consumers, recording companies may reorient themselves to the new environment through business models that rely on their distribution and promotion advantages, and retailers may salivate over the possible savings in decentralized manufacturing. The realization of virtual lockers based on ubiquitous wireless connectivity might vitiate many of these plans and once again reshuffle the players and their strategies. In short, the music industry is entering a technologically induced protean phase, the results of which are still up for grabs.
Standard Modules and Market Flexibility
Assertions that “the Internet will change everything” are simply too broad and simplistic to allow for a real understanding of the changes being wrought by information technology. What such blanket statements conceal is the fact that the scope and impact of the Internet on various industries cannot be determined in isolation from the nature of the product and the relationship among the customer, product, and vendor. The personal computer (PC), semiconductor, automobile, and hearing instrument industries were chosen to illuminate differing adoption rates and impacts of the Internet. In the hearing instrument industry, the impact might be minimal, whereas in the PC industry, the Internet has influenced the success of direct marketing and build to order (BTO) business models such as Dell and Gateway. For the auto industry, the Internet may prove to be the most disruptive force in many decades: it could fundamentally change both the business-to-business (B2B) and business-to-consumer (B2C) areas. Finally, Internet-enabled communication is reorganizing the semiconductor industry’s design and manufacturing model, allowing separate firms to develop and produce integrated circuits. In the aggregate, the studies of these sectors show that the Internet will have a variety of impacts, but at present they do not appear to be revolutionary—successful incumbents of the industries examined have not been replaced by newly formed firms.
Modularity, Interchangeability, and Build to Order
Dell Computer was one of the most successful companies of the 1990s because of its ability to leverage the modularity of the personal computer to implement a BTO system centered around its direct marketing operation.1 Modularity simplifies a complicated assembly process by dividing a product into components that can be manufactured and sold separately and assembled later. The personal computer, for example, consists of a floppy disk drive, hard disk drive, motherboard, CD-ROM drive, mouse, keyboard, case, and various smaller printed circuit boards. Despite its numerous components, the coherence of the PC is maintained by rigidly specified interfaces that are interconnected by various standardized cables and sockets. The subdivision of this fairly complicated product into a series of discrete modules makes final assembly a simple process. When defined in this way, however, most assembled products could be considered modular in nature; many consumer goods consist of separately manufactured and previously assembled components. To give meaning to the definition, modularity should be viewed not as binary (a matter of is or is not) but as a matter of degree. In other words, modularity is best measured by ascertaining for each component how much substitutability there is while still maintaining operability. Interchangeability, however, can be further examined along three dimensions: (1) the interchangeability of components produced by different manufacturers within a given product model; (2) the interchangeability of a given component across a given product line; and (3) the interchangeability of components across the product models of different manufacturers. The first dimension can be illustrated by an example from the automobile industry. Various sizes of tires can fit an automobile, but for optimal performance tires need to be chosen with the specifications of the vehicle in mind. So automobile tires, while modular, are not completely interchangeable when operability is taken into account, and thus they exhibit a lower than expected degree of modularity. The second way to measure a product’s modularity is to consider how interchangeable component parts are across product lines. To illustrate, the automobile is almost entirely modular in the sense that its components are fairly standardized—seats, engine, tires, steering wheel. However, automo- 1. On modularity, see Baldwin and Clark (2000).
bile parts are far from being entirely interchangeable across product models. A Toyota Corolla dashboard will not fit in a Toyota Camry auto body. A step further is the situation in which nearly all components are interchangeable across not only one firm’s product models, but also across manufacturers—that is, between the models of other firms. Using the illustration above, if the Toyota Corolla dashboard does not fit into the Toyota Camry auto body, it certainly will not fit into a Nissan Sentra. Conversely, for the PC, nearly all components are interchangeable across manufacturers. An Intel processor, for example, can easily fit into a Dell, Gateway, or IBM computer. The exceptional modularity and interchangeability of PC parts enabled Dell to implement a BTO system with direct marketing to the customer. The BTO system allowed Dell, even before the introduction of the commercial Internet, to disintermediate all of the distributors, retailers, and other intermediaries that operate between the customer and the PC assembler. The enormous advantages this provides the BTO direct marketer are discussed both in Kenney and Curry (chapter 7) and Helper and MacDuffie (chapter 8). In industries lacking high degrees of modularity and interchangeability, direct marketers of all varieties have found it easy to switch to Internet marketing, as indicated in Hammond and Kohler (chapter 13). Many of these firms are now considered paragons of the Internet.
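The three dimensions of interchangeability discussed above can be summarized in a small sketch that treats interchangeability as a compatibility check between a component and a product model. The component names, model names, and fit lists are hypothetical illustrations of the distinctions drawn above, not data from any of the chapters.

```python
# An illustrative sketch, not from the chapters: interchangeability modeled as a
# compatibility check. All component names, model names, and fit lists are
# hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    kind: str                 # e.g., "processor", "dashboard"
    maker: str
    fits_models: frozenset    # product models whose interface the part satisfies

@dataclass(frozen=True)
class ProductModel:
    name: str
    maker: str

def interchangeable(component: Component, model: ProductModel) -> bool:
    """A component is usable in a model only if it satisfies that model's interface."""
    return model.name in component.fits_models

# Dimension 1: substitution within one model (different makers' parts in the same PC).
# Dimension 2: substitution across one firm's models (a Corolla dashboard in a Camry).
# Dimension 3: substitution across different firms' models (PC parts yes, auto parts rarely).
cpu = Component("processor", "Intel", frozenset({"Dell Dimension", "Gateway Essential"}))
dash = Component("dashboard", "Toyota", frozenset({"Toyota Corolla"}))

print(interchangeable(cpu, ProductModel("Gateway Essential", "Gateway")))  # True
print(interchangeable(dash, ProductModel("Toyota Camry", "Toyota")))       # False
print(interchangeable(dash, ProductModel("Nissan Sentra", "Nissan")))      # False
```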
The Customer-Product-Retailer Relationship
The ease of moving a business process online is related to the ability to integrate the online relationship into the hierarchical branching structures characteristic of software programs. Of course, even if the purchasing decision cannot be moved online, the Internet can simplify the communication of schematics, information, and routine business forms such as requests for proposals, purchase orders, and receipts. Often in such cases, Internet-based platforms are replacing the existing electronic data interchange (EDI) systems, meaning that it is hard to separate the changes wrought by the Internet from those already under way. The hearing instrument industry, though very small, provides an insight into the limits of Internet-based e-commerce when a product is extremely tactile. As Lotz demonstrates in chapter 10, the difficulty with moving this industry online manifests itself in both the B2B and B2C areas. In B2C,
the problem is that each person’s auditory canal is slightly different, and to operate well the hearing aid must be customized to the user’s ear and tuned to his or her comfort: each hearing aid must be customized, and this customization requires direct human interaction. The B2B area is similarly affected by the need for each hearing aid to be assembled from customized parts, making it difficult for assemblers to order these parts entirely over the Internet. Intensive interaction with critical component makers is necessary to ensure that the product works as a tightly integrated whole. As a result of the tactile nature of both the product and the customer-product-retailer interaction, it is unlikely the Internet will have a dramatic impact on the hearing aid industry. The PC industry has almost entirely opposite characteristics, with customers almost totally focused on product functionality and reliability. Design appeal in terms of physical appearance is less important; modules are designed for interchangeability, and customization, for the most part, simply refers to modules with different performance characteristics, such as a customer’s desire for a 1-gigabyte, 10-gigabyte, or 100-gigabyte hard disk drive. With few exceptions, user-chosen design variation is limited; everything comes in a beige box. The physical interfaces of the PC—mice, keyboards, and monitors—are standard products. The result is that the entire sales process could very easily be modeled in software and moved online. Of course, not all sales can be moved online, as there are still many customers who want to try out a computer before purchase.2 Still, for the most part, people are purchasing a white box, and branding is more a sign of reliability than cutting-edge design or “hipness.” Because of the standardized, modular character of the components, the supply chain can also be moved online and parts can be sourced from the lowest bidder. The extreme standardization of the PC, combined with a near total lack of interest in the look and feel of the machine, also makes it a commodity that can be easily sold online or through catalogs. The automobile industry is a mixed case. As Helper and MacDuffie indicate, moving the supply chain online could save up to $477 an automobile, so there is ample reason to move purchasing online. However, as 2. Notebook computers are different because the necessity of space and weight savings means that altering any one component might affect other components in terms of physical dimensions, portability, heat dissipation, and user friendliness vis-à-vis smaller or larger keyboards, key placement, and tracking device, to name a few. Thus the intended user often wishes to experience the product before purchase.
the authors point out, not all components are standardized and interchangeable modules. Many auto parts are designed for a specific year and model, and for some, deep and rich interaction is required between engineers at both the design and manufacturing firms. Here the Internet may assist in the automation of the accounting and tracking aspects of the interaction, but it will likely not be able to entirely replace the direct human interaction necessary to design such parts. For consumers, automobile design variation is more extensive than with a personal computer. Customers often want to ascertain an automobile’s performance through a test drive. Moreover, customers have preferences regarding color, interior, and other features. These can be depicted online, but often consumers wish to see and “feel” these in person. Despite these limitations, an increasing number of auto purchases are Internet-enabled. More specifically, consumers are using the Internet to evaluate models, prices, and dealerships. Thus even though the Internet may not disintermediate the auto dealer, there is little doubt the Internet will have an important impact as a central communication medium for all nodes of the industry value chain. In the semiconductor industry, Leachman and Leachman show how electronic data communication and, specifically, the Internet, facilitate a division of labor between semiconductor design houses in the United States and semiconductor manufacturing facilities in Taiwan. This permits a global division of labor that would have been far more difficult without electronic data communications systems. The Internet allows design firms to monitor in real time the status of orders at their manufacturing partners. In this way, the Internet enables an even closer integration of the business processes of both firms.
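The kind of real-time order visibility described in the last paragraph can be pictured with a small sketch: a design firm polls its foundry partner for work-in-progress status over the web and flags slipping lots. The URL, response fields, and escalation rule are hypothetical; no actual foundry interface is implied.

```python
# A hypothetical sketch of a fabless design firm checking order status at its
# foundry partner over the web. The endpoint, field names, and escalation rule
# are invented for illustration; they describe no real foundry's API.

import json
from urllib.request import urlopen

FOUNDRY_STATUS_URL = "https://foundry.example.com/api/orders/{order_id}/status"

def fetch_order_status(order_id: str) -> dict:
    """Return the foundry's reported status for one wafer lot, e.g.
    {"lot": "LOT-4711", "stage": "metallization", "projected_week": 24}."""
    with urlopen(FOUNDRY_STATUS_URL.format(order_id=order_id)) as response:
        return json.load(response)

def needs_expedite(status: dict, promised_week: int) -> bool:
    """Flag lots whose projected completion slips past the committed ship week."""
    return status["projected_week"] > promised_week

# Usage (requires the hypothetical endpoint to exist):
# status = fetch_order_status("LOT-4711")
# if needs_expedite(status, promised_week=22):
#     print("escalate with the foundry account manager")
```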
Summation
The PC industry provides a powerful example of the relationship between modularity, interchangeability, and standardization and the potential for vertical competition.3 The combination of these product attributes creates an industry ripe to move its value chain completely online. For these firms, the Internet will not merely be a different channel for marketing and communications but a fundamentally new way of doing business, increasing 3. Borrus and Zysman (1997); Bresnahan and Richards (1999).
coordination, and reducing costs. For industries lacking this combination of qualities, the Internet is still poised to enable change—though perhaps, from the perspective of the consumer, less visibly and less dramatically. Of course there is a danger in making blanket statements about the effects of the Internet on industry organization. But equally significant, the chapters in this section indicate how the Internet has begun to be integrated into existing industry structures. Let us now briefly turn to each of the chapters in this section.
The Hearing Aid Industry
Peter Lotz, in “The Old Economy Listening to the New: E-Commerce in Hearing Instruments,” argues that the hearing aid industry offers interesting insights into more general transformations occurring in the digital era. Demonstrating the characteristics of a small market with small players, the hearing aid industry does not show a readily apparent affinity with the Internet. The physical and extremely customized nature of the product prevents firms from creating a direct e-commerce good, but that does not mean that innovations in communications technology cannot alter the current market structure. Three nodes exist in the global hearing aid industry value chain. A small number of monopolies or duopolies manufacture the specialized components for the devices. These companies sell their wares to roughly five major instrument manufacturers that act mainly as assemblers for the various parts. Since 1995, instrument manufacturers have experienced significant consolidation that seems in part due to the high costs of developing digital technology. The instrument manufacturers then sell devices to distributors who dispense hearing aids to end users. Several distribution patterns exist, shadowing national health care systems. Nations such as Sweden and the United Kingdom, which offer devices free of charge, often incorporate audiological clinics in hospitals, while in the United States, where individuals carry most of the cost, “dispensers” constitute a rather fragmented market with some recent attempts to create franchises. The hearing aid industry generates global revenues of $1.5 billion to $2 billion. Of the three supply chain interfaces, Lotz argues that the last—between distributors and end users—may experience the greatest degree of transformation. Traditionally, the distributor has dominated the interface. Customers have little access to information about different manufacturers outside of the distributor’s advice. Web technology will allow manufactur-
ers to market directly to potential customers, providing customers with access to a bevy of information sources, increasing transparency, and creating an environment with tougher price competition. This rise in demand for higher-quality service has the potential to push retailers to consolidate in order to provide quality websites and regain leverage over manufacturers. Recent moves by distributors to form franchise-based systems demonstrate the potential for standardization and economies of scale within the last stage of the value chain. Rising consumer information regarding technical products places pressure on distributors and squeezes efficiency gains out of the system. More radical business models have attempted to redefine the nature of the industry by shifting the focus of the customer interface. For example, several companies have attempted to develop one-size-fits-all instruments that customers purchase over the Internet using web-enabled tests and then tune the devices with the help of web-based adjustment programs. Since such hearing aids do not require on-site fittings or adjustments, manufacturers have direct access to consumers. These companies hope to disintermediate distributor networks and enhance service markets based on adjustment. The outcome of these efforts remains in doubt. This small industry provides powerful evidence of how government policy directly influences the transformative effect of information technology. Government regulation regarding audiological referrals hinders radical restructuring of a distribution market organized in large part by the structure of national health care systems. For example, current U.S. law (and similar regulation in other countries) prohibits the sale of hearing aids without an audiological referral, undercutting the potential for indirect e-commerce strategies and business models that shift consumers to increased product-related services delivered over the Internet. Lotz’s study of the hearing aid industry demonstrates that even highly customized sectors may benefit from the Internet. However, the way in which these benefits are realized will be mediated through the action of government and corporate actors and consumers. At this point, there is no doubt that there will be effects, but who will be the beneficiary and how they will affect the industrial structure is not yet obvious.
The PC Industry
Martin Kenney and James Curry, in “The Internet and the Personal Computer Value Chain,” discuss the Internet-enabled BTO business
model pioneered by Dell Computer, which has had perhaps the most profound impact in redefining standards of competition within the PC industry. Dell was the first PC firm to use the Internet as a core element in its business strategy. When it first launched its Internet strategy in 1996, Dell was the sixth largest PC firm, with roughly a 4 percent market share. In 2001 Dell, with 10 percent global market share, became the largest PC vendor in the world—surpassing Compaq. Dell’s use of the Internet to organize its entire value chain pushes the logic of the firm’s earlier direct marketing system to an entirely new level; it collapses cycle time from customer order to final delivery, further reduces inventory costs, and enables Dell to undersell competitors while maintaining identical quality. Kenney and Curry describe the origins and evolution of the Dell model and explain why the economics of Dell’s Internet direct system is changing the industry by forcing virtually all of Dell’s primary competitors to imitate aspects of this model. Although the PC industry can be characterized as a type of oligopoly dominated by large firms such as Compaq, IBM, HP, Apple, and even Dell itself, there is nonetheless fierce competition among these global companies. The personal computer itself has evolved into a modularized and standardized box of components and subassemblies made by suppliers primarily in Taiwan and East Asia. In addition, from a very early date in the evolution of the industry, PC builders did not control the two most important internal components: the operating system, which is controlled by Microsoft, and the microprocessor, which is controlled, to a somewhat lesser extent, by Intel. PC firms essentially assemble these components. The availability of these supplies in the market, combined with the modularity of the components in the final product, has meant that intense competition and price pressure mark every stage of the value chain. For Kenney and Curry, competitive dominance in the industry is contingent on the capacity of PC firms to organize this value chain in the most cost-efficient manner. Before the Dell direct model, PC makers assembled machines and sold them through an intricate distribution channel consisting of value added resellers (VARs), system integrators, retailers, and superstores. Based on demand forecasts, manufacturers “pushed” inventory into the channel and suffered the consequences when forecasts did not match demand. In the mid-1980s, Dell devised a BTO business model in which customers telephoned orders to Dell and the firm built units as they were ordered, without incurring the carrying costs of inventory and nearly eliminating risk. Dell
delivered final products directly to end users, enabling it to bypass traditional distributors in the channel. What the Internet is enabling Dell to do differently from its earlier direct marketing system is to integrate customer order, procurement, assembly, and fulfillment operations more systematically. Outside suppliers are brought directly into contact with Dell’s customer orders to balance real-time order and supplier inventory. The firm relies on roughly twenty-five suppliers for 90 percent of the PC and uses the Internet to bring existing suppliers into its operations in a way it describes as “virtual integration.” Thus for Kenney and Curry, Dell is actually more of a logistics firm that uses the Internet to streamline the PC value chain and distribute products for Microsoft and Intel. The impact of the Dell model on the competitive dynamics of the PC industry is observable in the extent to which firms such as Compaq, IBM, HP, Gateway, and others are imitating aspects of Dell’s direct selling system. Yet, while most of the major firms are now selling computers directly to end users through the Internet, these other companies were slower to integrate their supply chains through the Internet to the same extent as Dell. Kenney and Curry conclude by discussing the competitive pressure personal computers are under from alternative devices that can deliver access to the Internet. These include the newly emerging handheld devices, wireless phones, and other next generation Internet “appliances.” Although PCs will continue to dominate, Kenney and Curry argue that PCs will also likely diminish in importance as the Internet continues to grow.
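A stylized sketch of the build-to-order logic summarized above appears below: an order is accepted only when every configured module can be drawn from supplier-managed inventory, so the assembler itself holds almost no stock. The part names, quantities, and acceptance rule are hypothetical and are not taken from Kenney and Curry’s account of Dell’s actual systems.

```python
# A stylized, hypothetical sketch of build-to-order assembly with supplier-managed
# inventory. Part names, quantities, and the acceptance rule are illustrative only.

supplier_inventory = {        # units suppliers have staged near the assembly line
    "motherboard": 120,
    "hdd_10gb": 80,
    "hdd_100gb": 15,
    "cdrom_drive": 200,
    "beige_case": 300,
}

def accept_order(configuration: list) -> bool:
    """Reserve one unit of each configured module; reject the order if anything is short."""
    needed = {part: configuration.count(part) for part in set(configuration)}
    if any(supplier_inventory.get(part, 0) < qty for part, qty in needed.items()):
        return False                       # the shortfall is visible to the supplier for replenishment
    for part, qty in needed.items():
        supplier_inventory[part] -= qty    # the draw-down doubles as a real-time demand signal
    return True

print(accept_order(["motherboard", "hdd_100gb", "cdrom_drive", "beige_case"]))  # True
print(accept_order(["motherboard", "hdd_100gb"] * 20))                          # False: hdd_100gb is short
```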
The Automobile Industry
The maximalist vision of the not so distant future sees e-commerce applications enabling mass-customization in the automobile industry—this “industry of industries.” Internet applications make it possible to aggregate and transmit individual choices and thereby “pull” consumer product preferences through complex supply chains. Helper and MacDuffie, however, in “E-volving the Auto Industry: E-Business Effects on Consumer and Supplier Relationships,” temper this vision as they investigate integration of consumer choice into an Internet-enabled, build to order auto industry. They demonstrate how two parameters will shape the “e-volution” of the auto industry. First, specialization and economies of scale limit the number of consumer choices that product designs can accommodate. Second, limiting product complexity, by creating menus for consumer choice, will
impact existing relationships between OEMs, their suppliers, and retailers. Integrating consumers into production will reorganize some or all of these relationships, displacing interests that have become embedded around them. “E-volution” in the auto industry will not be exclusively, or even primarily, driven by technology. Rather, Helper and MacDuffie demonstrate how multiple actors will shape the incorporation of Internet applications and consumer choices. Their argument forces us to focus on how existing industry actors shape the construction of menus for consumer choice. Auto industry executives have been considering lessons from desktop computers, particularly Dell, as a model for the evolution of the auto industry. Automobiles are, however, more complex products than the desktop computers. Early strategic decisions allowed the desktop to evolve as a “white box.” Industry leaders standardized both the functional interfaces and physical dimensions of component parts, allowing desktops to evolve as fully “modular” products. Consumers could choose, and suppliers could produce, a great deal of functional variety—as long as it fit into the white box. This functional and dimensional standardization simplified the task for Dell of linking consumers and suppliers “end-to-end.” But automobiles rely on their physical attributes as products. Fashion and design are critical in marketing. Consumer tastes, therefore, influence the level of modularity possible. Integrating differential physical and functional attributes complicates automobiles as a product, the meaning of “modularity,” and the construction of menus for consumers. OEMs offer customers both packages of functional “trim” within models and a variety of models differing in physical attributes. The distinction between “model” and “trim” is only one degree by which “modularity” can vary. Components vary not only within and between the models of one OEM, but also across the models of different OEMs. So how should “modularity” be defined to manage complexity: by “model,” by OEM, by component, or by system? This affects both technical relations between parts and relations among industry actors. Defining a menu to integrate consumer choice into production has impacts on the interests of OEMs, suppliers, and retailers as well as the tasks of product design, component sourcing, and retail fulfillment. Consumer choice touches most directly on decisions about product design. Appearance and performance are the consumer’s greatest concerns after price. Currently, product design evolves within two types of OEM-supplier relationships. An “exit” relationship, in which OEMs design systems and
critical components but suppliers bid to produce commodity components, is characteristic of mass producers, particularly GM. Alternatively, OEMs share responsibility for critical system design with specific suppliers in a “voice” relationship, common among “lean” producers like Toyota. Internet applications present opportunities and challenges for both types of relationship. Because these technologies favor neither form of production, producers’ preferences will shape technological choices and consumer integration into design. Changes in the process of product design affect sourcing decisions. Where OEMs see control over design as essential to protection of a brand, consumer input will be guided toward the relationship with the OEM. The latter will outsource only commodity components. If OEMs are comfortable that supplier brands do not threaten their own, they may outsource critical systems and permit system “modularity” across OEMs. The process of product design will shape whether Internet applications are used to facilitate communication between OEM and supplier engineers or to conduct auctions between OEM purchasing and supplier sales departments. Integrating consumer choice into production also shapes retail transactions and relations between OEMs and retailers. The legal monopoly over retail transactions that franchise laws grant dealers adds another constraint to the menu of consumer choices. Dealers want customers to choose from options available on their lots, not from all potential options. Many observers regard dealers as ripe for “disintermediation” by OEMs (“factory direct”) or new online entrants. Fifty state franchise laws, however, may make the political price of dislodging dealers prohibitive. Indeed, because dealers organized around OEM brands, they may find a new partnership with them in an e-volved auto industry. Dealers provide opportunities for a test drive. They also offer a geographically organized conduit through which OEMs can continue to provide products and services to the consumer after the initial retail transaction. “E-volution” may require dealers to “repurpose” themselves, but it does not necessarily herald their elimination. The revolutionary challenge of e-commerce in the auto industry is to integrate consumers into the production process. While the Internet may “change everything” in this “industry of industries,” it is unlikely to do so in completely unexpected ways. E-commerce does not mean building the industry anew, but rather integrating consumers into existing relationships among OEMs, suppliers, and retailers. Moreover, it is possible that different models for this may emerge side by side.
The Semiconductor Industry Electronic data interchange (EDI) and e-commerce applications have facilitated, not driven, segmentation of semiconductor production, as Robert C. Leachman and Chien H. Leachman explore in their chapter, “ECommerce and the Changing Terms of Competition in the Semiconductor Industry.” Firms integrating product design, marketing, and fabrication dominate production of high-end (leading-edge logic), low-end (chips for analog and mixed-signal), and memory devices. But a new division of labor has emerged in fabrication of less sophisticated logic devices. This segment consists of firms with no design or marketing operations that offer pure foundry capacity. The complements of these foundries are smaller firms, specialized in product design and marketing, that maintain no fabrication capacity of their own. High barriers to entry in semiconductor fabrication and the development of sophisticated electronic design software led to the “fabless-foundry” model, although EDI and e-commerce applications facilitate this division of labor. Two circumstances create high barriers to entry in semiconductor fabrication. First, wafer fabrication generates large economies of scale. The investment required for a competitive fab exceeds $2 billion. Second, rapid product cycles in markets for microelectronics reward early entrants with high margins and penalize latecomers with losses. Even firms able to fund a fab might find the market has moved on before they can deliver their product. Firms that integrate design and manufacture, therefore, cannot enter or exit easily. Vertical integration precludes rapid innovation cycles associated with start-ups. These barriers, however, also generated incentives for the “foundryfabless” division of labor. Leachman and Leachman observe that the Taiwan Semiconductor Manufacturing Company entered the foundry business because it was easier to compete on manufacturing skill rather than in product design and marketing. Product designers, on the other hand, were reluctant to reveal their best technologies to potential competitors when they leased foundry capacity. Indeed, the threat that integrated firms might hoard foundry capacity in peak demand periods represented a prohibitive risk to “fabless” designers. Separating design from fabrication, however, made it possible both to pool investment resources and to disperse risk. Two types of software application enable the “foundry-fabless” division. Both are enhanced by the Internet but do not depend on its existence. First, electronic design automation (EDA) software embeds the capacities
and limitations of a particular foundry in code. This code sets parameters to which products designed for that foundry must conform. Process technologies, through EDA, limit product designs rather than products driving construction of fabrication facilities. Demand for EDA software also creates opportunities for new and existing software firms. Supply chain management software is the second application enabling the “fabless-foundry” division. Centralization of information about and control over the production process are the classical reasons for hierarchical organization in firms. Only the availability of reliable, timely information about production status—and a foundry’s good reputation—permits decoupling of design, marketing, and fabrication. The Internet facilitates disintegration by reducing data transmission costs and promoting flexible allocation of foundry capacity. Security concerns and the need for compatibility between foundry and product designs, however, put limits on the efficiency that can be gained through supply chain management applications. At present, process technologies drive the organization of semiconductor production. Integrated firms dominate production segments that require leading-edge process technologies and those segments where such technologies are a lower barrier to entry. The “fabless-foundry” organization thrives in the segment for less sophisticated digital logic devices where products experience rapid obsolescence. E-commerce applications support the “fablessfoundry” model but did not launch the evolution that began in the 1980s, before commercialization of the Internet. As the cost of digital logic capacity declines and it is integrated into more products, semiconductor production could experience another transformation in response to greater demand for customized products. If such a development occurs, the “fabless-foundry” organization might fit well with complicated supply chains that exploit the network externalities created by Internet e-commerce. From these four case studies, it is possible to conclude that the impact of the Internet will not be uniform across industries, but rather will be refracted through the political, economic, and social institutions and arrangements within which the industries exist. For example, in the case of auto dealers, state and local laws will constrain and channel the Internet’s impact. In contrast, the PC industry is largely free of such constraints, so Dell is unfettered in the way it uses the Internet. What these chapters indicate is that existing industries and firms will not easily be dislodged, as they have many assets and competencies that new entrants—even with a lever as powerful as the Internet—will find difficult to overcome. Our case
studies show that in each of these industries, existing firms and new entrants are experimenting with ways to use this exciting new tool.
References

Baldwin, Carliss, and Kim Clark. 2000. Design Rules. Vol. 1, The Power of Modularity. MIT Press.
Borrus, Michael, and John Zysman. 1997. “Wintelism and the Changing Terms of Global Competition: Prototype of the Future?” Working Paper 96B. Berkeley Roundtable on the International Economy (February).
Bresnahan, Timothy, and John Richards. 1999. “Local and Global Competition in Information Technology.” Policy Paper 99-7. Stanford Institute for Economic Policy Research.
7
The Internet and the Personal Computer Value Chain
We’re in a fashion industry where there are several product turns a year.
E-commerce is changing the terms of competition in many industries because it makes it possible to rearrange and restructure segments of the value chain. This chapter explores the impact of e-commerce on the personal computer (PC) industry. The PC is particularly appropriate for study for a number of reasons—most important because it is the device linking most persons to the Internet and because the PC industry played a significant role in exploring new business models that were later adopted by other industries. The most prominent experimenter with the new business model was Dell Computer. Not only PC firms but nearly every other firm involved in producing and selling a product has evinced interest in the Dell model.1 Dell Computer was successful in an industry characterized by
1. The essays in this volume by Helper and MacDuffie (chapter 8) and by Hammond and Kohler (chapter 13) indicate the interest in the Dell model.
cutthroat pricing, rapid technological change, foreign competition, global value chains, and changing consumer tastes. The PC industry, as one of the first to adopt the Internet as a business tool, can provide insights into what might prevail in other industries.2
The PC Industry In contrast to many industries where a dominant design emerges and then a period of consolidation occurs, in the PC industry fierce competition and price wars continue to be the norm. Since the immediately successful introduction of the IBM PC in 1981, IBM and other large global players such as Compaq, Dell, and Hewlett Packard have dominated the dramatically growing PC market. In 2000, it was estimated that PC sales in the United States alone would be over $85 billion.3 Despite the emergence of major brands, at least 30 percent of the market remains controlled by noname brands (in industry parlance, “white boxes”) produced by firms ranging from very small local shops to the large distributors such as Ingram Micro. In the retail segment, cost continues to be a major differentiating factor, but even in the institutional market price is significant. In 2000, nearly twenty years after the introduction of the PC, no single business and distribution model was entirely dominant. Moreover, due to the low barriers to market entry, there has been a constant stream of new entrants, some of which have sufficient capital and the highly compelling new business model they need to become significant players. The roots of this competitive dynamic can be traced to IBM’s decision to purchase the microprocessor and the operating system software from outside vendors.4 The unexpected result for IBM was a loss of control of the PC standards. The providers of the microprocessor and operating system, Intel and Microsoft, were free to sell their products to other vendors, thus unleashing a slew of “clones.” The result was that no single company was able to integrate the entire value chain, and with the exception of operating system software (Microsoft) and, to a slightly lesser degree, micro2. This chapter considers the situation only for PCs, by which we mean desktop computers that use the Windows operating system and a compatible microprocessor. Niche products such as the Apple Mac, Playstation, Nintendo, Atari, and Amiga exhibit different dynamics. Also, the notebook and handheld computer sectors have a different structure. 3. Petska-Juliussen and Juliussen (1996). 4. Langlois and Robertson (1992).
processors (Intel and AMD), there is competition at every link of the chain. The market availability of all components on the open market combined with the extreme ease of assembly make the PC a quintessentially modular product. This means that in nearly every stage of the value chain there is intense competition. Bresnahan and Richards described these dynamics as “vertical competition,” an environment in which firms at each stage of the value chain encourage competition at the other stages.5 So, for example, Microsoft certifies microprocessors made by firms other than Intel as Microsoft-compatible; Intel develops microprocessors to work with the Linux operating system. Price competition is continuous and fierce: even acquiring a dominant position cannot entirely protect a firm (with the possible exception of Microsoft). The pace of change, both technically and economically, is driven by innovation in components and software. Constant dramatic improvements in performance for roughly the same price are explained by the fact that two of the most costly and important components in a PC, semiconductors and hard disk drives (HDDs), are subject to rapid technological improvement. The first and most famous improvement dynamic is described by Moore’s Law, which states that the performance of semiconductors will double approximately every eighteen months.6 Moreover, the new chip can be sold at roughly the same price as a chip with one-half the capability sold for eighteen months earlier. Intel, the leading microprocessor producer, has made the rapid development of new product generations and subgenerations a cornerstone of its business model.7 Similarly, in the 1990s the per-megabyte cost of HDD magnetic storage experienced a rapid decline as areal density of data storage doubled every seventeen months.8 The persistent tendency for the price of the most technology-intensive components to drop for any specified performance level is difficult enough to manage. There are also periods of extreme price instability due to factors such as overcapacity in certain components or increased competition in a 5. Bresnahan and Richards (1998). 6. Gordon Moore is one of the founders of Intel, the world’s most prominent semiconductor company and most important producer of microprocessors for the PC. 7. Don Clark, “A Big Bet Made Intel What It Is Today: Now It Wagers Again,” Wall Street Journal, June 6, 1995, pp. A1, A5. Intel’s strategy was to sell its newest and fastest microprocessor at a high price. As faster models are introduced, the prices of earlier models are significantly reduced. However, in 1999 this strategy came under significant pressure due to the introduction by AMD of an entirely compatible family of microprocessors of comparable speed at lower prices. 8. McKendrick (1997).
particular component segment. For the PC value chains, this means that inventory problems extend far beyond simply having capital in process and storage costs. They expose the inventory’s owner not only to a persistent depreciation but also to the risks associated with more unpredictable price declines.9 The PC value chain is conditioned by the loss-of-value dynamics, which means that making the supply chain more efficient—from component producer through to the consumer—is an overriding concern. Any strategy decreasing the holding period for inventory makes an immediate and significant contribution to profitability.10
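The arithmetic behind this concern is easy to sketch. The short Python example below is a minimal illustration, not drawn from the chapter’s data: it assumes that the market price of a part of fixed capability halves every eighteen months (the Moore’s Law dynamic described above) and computes how much of a component’s value evaporates for each additional week it sits in inventory. The initial price and holding periods are arbitrary assumptions.

```python
# Illustrative sketch: how fast inventory value erodes when a component's
# price for a fixed performance level halves every `doubling_months` months.
# All numbers are assumptions for illustration, not data from the chapter.

def remaining_value(initial_price, weeks_held, doubling_months=18):
    """Value of a fixed-capability part after `weeks_held` weeks in inventory,
    assuming its market price halves every `doubling_months` months."""
    months_held = weeks_held / 4.33          # average weeks per month
    return initial_price * 0.5 ** (months_held / doubling_months)

initial_price = 300.0                         # hypothetical component price ($)
for weeks in (1, 4, 8, 12, 26):
    value = remaining_value(initial_price, weeks)
    loss_pct = 100 * (1 - value / initial_price)
    print(f"{weeks:2d} weeks in inventory: worth ${value:6.2f} ({loss_pct:4.1f}% lost)")
```

Even under these mild assumptions, a quarter spent in the channel erases a noticeable share of a component’s value, which is why shortening the holding period feeds so directly into profitability.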
The Value Chain before the Internet The complicated network that is the PC value chain is depicted in a highly simplified form in figure 7-1. The value chain was never fully integrated. Even with the first PCs, SCI and Avex, former NASA contractors from Huntsville, Alabama, won contracts to assemble motherboards and add-on cards (respectively) for the original IBM PC in 1981.11 The IBM sales channel consisted of IBM salespersons and computer stores it qualified, such as Businessland. Almost from its introduction, demand for the IBM PC outstripped supply, and nearly immediately there was a flood of fully compatible or almost compatible clones, legal and illegal. The cloners could purchase the operating system from Microsoft and the microprocessor unit (MPU) from Intel; all they had to copy was the BIOS. IBM’s head start, brand name, and control of the ROM-BIOS was sufficient until 1984–85 to control the industry and restrain new entrants. In 1984 Compaq emerged as the first creditable competitor of IBM. With the cloning of the ROM-BIOS chip, any firm anywhere could enter the marketplace. Very quickly, a number of firms, particularly in Taiwan, began subcontracting for the large U.S. firms and various retailers.12 As the premium brand, IBM was able to extract a rent from customers in the 9. Examples of crisis vary. One example is the 1997 collapse of the Korean currency and economy that prompted Korean firms to flood the world economy with DRAM (dynamic random access memories) chips at devastatingly low prices. Also, any event that slows consumer purchasing affects assemblers with PCs in the pipeline because turnover slows, but the PC’s value inexorably declines. 10. Curry and Kenney (1999). 11. Sturgeon (1999). 12. Dedrick and Kraemer (1998).
form of 18 percent net operating margins.13 Compaq established itself as a competitor with comparable quality but slightly lower prices.14 However, a market for components was maturing under the IBM/Compaq price umbrella. The improving component quality and the assurance of compatibility simplified market entry for second-tier producers, especially in the low-end market. These clones were offered at significantly lower prices and still were profitable because Compaq had a 67 percent price premium over a comparable Gateway 2000 computer.15 The strength of the IBM and Compaq brands offered them much pricing protection, and thus there was little stress on optimizing the value chain. This set the stage for the entry of still more low-cost vendors. At that time, parts and completed machines could remain in inventory or in the channel for relatively long periods of time because there was little significant time-based competition. Components and even finished PCs could be sourced from abroad with little profit penalty. This provided Taiwanese OEMs with the headroom for their market entry. As table 7-1 indicates, in 1990 the PC market was in transition; five of the top ten firms in unit sales were Japanese or European and, if IBM is included, seven of the top ten positions were occupied by existing firms. In 1990 it appeared that the established computer firms were poised to control the industry. However, the industry was actually at an inflection point. In 1990 there were three important sales channels: computer company salespersons, computer superstores, and local computer stores or vendors (white box vendors and value added resellers). However, the dominant firms, IBM and Compaq, were experiencing market share loss due to direct sellers such as Dell and Gateway 2000 (now renamed Gateway), Taiwanese firms, and no-name clones, all of which undercut the market leaders on price.16 In 1992 Compaq responded to its low-cost competitors by dramatically lowering its margins and engineering costs out of its value chain. As a relic of the earlier period when Compaq integrated most production to protect quality, as late as 1992 Compaq was still building its own power 13. “Compaq: How It Made Its Impressive Move out of the Doldrums,” Business Week, November 2, 1992, pp. 146–51. 14. Rick Whiting, “Personal Computer Have-Nots Fight for Bigger Slice of Market,” Electronic Business, October 30, 1989, pp. 34–35. 15. “Compaq: How It Made Its Impressive Move.” 16. Cortino (1992).
Figure 7-1. PC Value Chain before the Internet, circa 1995
[Figure: the value chain runs from component suppliers (Intel, Seagate, Microsoft, and others) and makers of subassemblies and stuffed motherboards (Mitac, Acer, FIC, Intel motherboards; the Taiwanese vendors also ship semiassembled units) to assemblers (Compaq, IBM, and HP, plus the direct sellers Dell, Gateway, and Micron), and then through the channel: distributors (Ingram Micro, Tech Data, PC Wholesale, CHS); corporate resellers, retailers, VARs, and integrators (CompUSA, Compucom, MicroAge, GE Capital IT); internal service organizations delivering total solutions, especially at IBM; and local computer shops, which may also assemble white boxes. The chain ends with end users.]
Source: Martin Kenney. Corporate resellers and retailers tend to be no larger than VARs, integrators, and the like. Acer, Mitac, and FIC are Taiwanese vendors.
Table 7-1. Global Ranking for PC Sales, 1990, 1997, 1999

Rank  1990              1997              1999
1     IBM               Compaq            Compaq
2     Apple             IBM               Dell
3     NEC               Packard Bell      IBM
4     Compaq            NEC               Packard Bell
5     Toshiba           Dell              NEC
6     Olivetti          Hewlett Packard   Hewlett Packard
7     Groupe Bull       Gateway           Gateway
8     Fujitsu           Apple             Apple
9     Unisys            Acer
10    Commodore         Fujitsu
11    Hewlett Packard
12    Dell
13    Packard Bell
14    Gateway 2000
supplies, even though high-quality power supplies made in Taiwan were available on the market for a fraction of Compaq’s cost.17 The industry growth combined with the downward pressure on prices to convince PC assemblers to purchase even more Taiwanese parts and even finished computers. U.S. contract manufacturers continued to manufacture PCs and related products but moved to diversify their customer base, retreating from the lower-margin PC business. According to Sturgeon, the Taiwanese quickly became more adept than U.S. producers at building motherboards, peripheral devices, and later finished computers.18 Initially, these parts were for the generic “clone” market and later for branded companies such as Dell and Packard Bell. IBM and Compaq were forced to follow suit. One Taiwanese assembler, Acer, went further and designed and sold PCs under its own name. Even while Compaq was cutting margins in an effort to recover sales, the small but rapidly growing direct sales firm Dell abandoned its efforts to enter the retail chain. The unsuccessful experience of selling into the retail channels taught Dell the advantages of the order-taking model. Because 17. “Compaq: How It Made Its Impressive Move.” 18. Sturgeon (1999).
Dell operated on a true supermarket system, in which the customer “pulled” the merchandise through the system, it had far less inventory in process and reduced risk because it built only computers that already had been sold.19 This permitted Dell to sell computers at a lower price and have higher margins. The result was that Dell grew significantly faster than its competitors, thus increasing its market share.20 Build to order direct marketers had two significant advantages over their competitors. First, because they built to order, their inventories reflected only immediate expressed demand, and they experienced far less value erosion. Even minute changes in demand were registered immediately, and losses attributable to faulty demand forecasts were virtually nonexistent. Even better, because Dell’s suppliers essentially managed inventory, Dell was nearly free of exposure to declining prices. Second, machines were built upon receipt of payment so there were no losses from product that could not be sold. In other words, the direct marketing model permitted Dell to immediately know customer demand, allowing the company to manage and automate its entire value chain. The traditional PC firm had two basic responses to the Dell challenge. The first was to develop ancillary services: system integration services for businesses or a bundle of software and services for the home consumers. In the business area, this approach was probably best exemplified by IBM, which provided a wide range of services including preconfigured Internet and e-commerce server systems, business service software (including electronic data interchange-type services such as Lotus Notes), systems installation, and information systems consulting. In 1997, to expand its servicerelated offerings and diversify its product offerings in the higher value server market, Compaq acquired Digital Equipment Corporation.21 In the consumer and small business market, PCs were offered bundled with additional services—most important, Internet access. To maintain or expand market share, particularly among first-time computer buyers, most PC assemblers offered Internet service as part of the purchase of a PC—usually in the form of rebate. For the least expensive PCs, the strategy was to charge full price of Internet service and essentially give away the PC. The recognition here was that the “killer application” was the ability to surf the Internet, not the other PC applications. This created opportunities for low19. Dell (1999c). 20. Curry and Kenney (1999); Dedrick, Kraemer, and Yamashiro (1999). 21. Evan Ramstad and Jon Auerbach, “Tech Takeover: Compaq Buys Digital, an Unthinkable Event Just a Few Years Ago,” Wall Street Journal, January 27, 1998, pp. A1, A8.
cost PC marketers such as E-machines to create alliances with Internet service providers such as America Online’s (AOL’s) CompuServe. The Internet service providers (ISPs) would rebate approximately half of the cost of an E-machines PC ($400) in exchange for a long-term service contract with the customer. In 1999 this became less popular, as various Internet firms, particularly the portals, began giving away Internet access. The second major approach has been to offer extremely inexpensive PCs through the retail channel. These machines experienced less value erosion than did more expensive ones. The direct marketer’s overhead militates against high profit margins in these extremely inexpensive machines. In 1999 E-machines, a start-up, had become the number three retail brand in the United States because it was able to import completed PCs from Korea.22 In effect, E-machines created a space at the low end of the market that was not sufficiently profitable for the build to order (BTO) direct marketers to attack. Ultimately, the difficulty for the nondirect marketers was an inability to abandon their existing channels. Quite naturally, the channel resisted efforts on the part of manufacturers to develop direct sales, particularly for corporate accounts. Consider the situation for the traditional firms and their market channels as represented in figure 7-1. The PC value chain is quite complicated and contains three different demand chain elements: assemblers, distributors, and a polyglot group of resellers, value added retailers, integrators, and retailers. For the manufacturers the status quo is dangerous, given the easy availability of parts. Any constituent in the value chain could change brand-name manufacturers or begin assembling its own white boxes. The highly disaggregated sales system was vulnerable to disruptions. Consider: the assemblers’ decisions on which computers to produce were made by forecasting demand six months in advance on the basis of demand information that came upstream from the channel. The assemblers’ factories and their suppliers were building for supposed future demand. This was fine so long as demand was constant and predictable, but of course, demand was subject to the vagaries of a market characterized by rapid change. When a firm overbuilt, since the value of a PC was a rapidly wasting asset, it would 22. PC Data, “Retail Desktop PC Sales End 1999 on a Sour Note as Unit Sales Growth in December is Slowest of the Year,” January 24, 2000 (www.pcdata.com [March 16, 2001]). E-machines purchases its machines from two Korean firms, TriGem Computer and Korea Data Systems Co., and the Taiwanese firm Jean Company. TriGem Computer also outsources some manufacturing to a facility in Xiamen, China, and another facility in Shenyang, China.
use measures such as rebates and price protection to push the product into the channel. This is known as “channel stuffing.” This led to periodic bouts of gross excess capacity that continued until the manufacturers and their suppliers ramped down production. This would appear to be advantageous for the channel because prices would fall and they could collect their rebates, but in fact the inefficiency, excess inventories, and extra effort associated with returning product disrupted the channel’s profitability as well.

The traditional system had still other vulnerabilities, centering on its ability to interchange and process information. The actual information interchanges were idiosyncratic, and the descriptors of products varied among firms. This was curious, because the PC is highly standardized. However, there was no one set of agreed-upon criteria for comparison. As important as the information and its format were the communication media, which varied among firms but for the most part were based on phone and fax. Often large paper catalogs were used, and most transactions were paper-based. Only the larger vendors had expensive, hard-to-use proprietary electronic data interchange (EDI) systems. Information flowed haltingly through convoluted, error-prone channels, which injected much noise into the system.

In summation, by 1996–1997, the traditional assembly-to-channel marketing system was at a competitive disadvantage. Inventory problems, slow responses, and faulty forecasts led to massive financial losses and eroding market share as the direct marketers, particularly Dell, grew far more quickly than the rest of the industry. Compaq, IBM, and others still sold PCs through the traditional channels, either to the computer superstores and value added resellers or through direct sales to large corporate customers. The white box remained the largest single “brand,” because it cost less than the machines of the majors did. However, both the white box makers and the traditional assemblers were losing market share to the direct marketers.
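The difference between this forecast-driven push system and the build-to-order pull system described earlier can be made concrete with a toy simulation. The sketch below is illustrative only: the weekly forecast, demand range, unit cost, and erosion rate are invented parameters, and the pull side is idealized as carrying no finished-goods inventory at all.

```python
import random

# Toy comparison of a forecast-driven "push" assembler and a build-to-order
# "pull" assembler facing the same noisy demand. Parameters are illustrative
# assumptions, not industry data.

random.seed(1)
weeks = 26
forecast = 1000                     # units the push assembler builds every week
weekly_value_loss = 0.01            # assumed 1% value erosion per week held
unit_cost = 800.0                   # hypothetical cost of a finished PC ($)

push_inventory = 0
push_erosion_cost = 0.0
pull_erosion_cost = 0.0

for _ in range(weeks):
    demand = random.randint(800, 1200)        # actual orders this week
    # Push: build to forecast, sell what you can, carry the rest into next week.
    push_inventory += forecast
    sold = min(push_inventory, demand)
    push_inventory -= sold
    push_erosion_cost += push_inventory * unit_cost * weekly_value_loss
    # Pull (idealized): build only against firm orders, so nothing is carried.
    pull_erosion_cost += 0.0

print(f"Push model: {push_inventory} units still in the channel, "
      f"${push_erosion_cost:,.0f} lost to value erosion")
print(f"Pull model: 0 units in the channel, ${pull_erosion_cost:,.0f} lost")
```

Because the push assembler must absorb every week in which demand falls short of the forecast, finished-goods inventory and the value erosion on it accumulate; the build-to-order assembler never owns a machine that has not already been sold.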
Welcome to the Internet The widespread diffusion of the Internet created opportunities in nearly every segment of the PC value chain. Already in the early 1990s, Gopher was available for PCs. However, it was not until the Mosaic browser for the PC was released in spring 1993 that the World Wide Web began its dramatic increase in use. The enormous PC-installed base was what made the
WWW such a fast-growing phenomenon and powerful new tool.23 Conversely, the WWW became the new “killer application” that drove the PC industry. It was not surprising that PC firms recognized the significance of the Internet earlier than most firms and moved to adapt it to their business plans. The commercialization of the Internet created space for new entrants even while it provided opportunities for existing firms to create new connections to their customers. It also created opportunities to reorganize the existing value chain to allow disintermediation of various intermediaries. With all the disruption and confusion among the various constituents, it is clear there is neither a final resolution nor certainty about the ultimate impacts of the Internet on the value chain.
Direct Marketing Dell, almost immediately, understood that the Internet might be significant for its business. This prescience is not entirely surprising because Dell’s business was predicated upon the use of communications technologies, both telephony and mail-order catalogs. In a sense, the direct marketers were ecommerce firms before the emergence of the commercial Internet. Interacting with customers through a telephone made the step to the Internet very short—it was a natural progression. As with some other early adopters such as Federal Express, once a firm established an online presence, customer demand and suggestions led to next steps. In the late 1980s Dell established a file transfer protocol (FTP) site so its customers could download technical bulletins and other information. In 1994 Dell was the first important personal computer firm to launch a commercial website (www.dell.com). Initially, the site provided only technical support information and an e-mail link for support. Then in 1995 online configuration and pricing options were introduced, though the actual sale was still consummated on the telephone.24 With the introduction of the Secure Sockets Layer in the browser and increased confidence in online credit card purchasing, Dell transferred the entire transaction online. Dell confronted a unique opportunity; since it had already given up on selling PCs through the channel, it had no legacy distribution channel to consider. For Dell, replacing telephone operators (who were simply conduits 23. Jimeniz and Greenstein (1998). 24. Dell (1999c).
for entering orders into a computer) with an Internet-based interface was not a great technical and business strategy leap. Internet-based sales grew dramatically. In December 1996 Internet-enabled sales were $1 million per day.25 This had grown by February 2000 to web-related sales of $40 million per day or 50 percent of total sales. Dell’s savings from moving transactions to the Internet were substantial. For example, Dell estimated that orderstatus calls, which can cost up to $13 each, can be handled over the Internet for essentially no cost. Dell estimated its savings through avoided orderstatus calls were more than $21 million in 1999. In addition, each online purchase transaction produced an average of 40 percent fewer order-status calls for Dell and 15 percent fewer technical support calls, at a savings of $3 to $8 per call.26 In 1996, with the introduction of the Premier Pages program offering a password-protected, Dell-developed web page, its largest customers, such as Ford Motor and Shell Oil, could order directly from Dell. Each page is uniquely designed for each customer and contains account team information and procurement and purchase-order processes unique to the customer.27 The efficiency of this web-based ordering system allowed one global customer, Shell Oil, to save 15 percent of its annual PC purchasing costs. Another firm was able to reduce its procurement staff from fifteen to four.28 These web pages created a link with these customers and provided Dell with a pipeline for the introduction of new IT products. There were also benefits for the customer. Control and tracking was simplified because all PC purchases and billing were centralized. Dell could even put the corporate property numbers on the computer in the factory, eliminating the necessity of having someone find and place the property tags on the machine after it was put into service. The Internet permitted Dell to increase the service it provided its corporate customers.29 Dell also inaugurated “valuechain.dell.com,” which connected the company with its largest suppliers. Through this site, the suppliers could find out Dell’s requirements for their incoming materials, receive statistics from Dell’s manufacturing lines, and obtain data on the reliability of their components. This permitted Dell and its suppliers to monitor each other in
25. Dell (1999c, p. 93). 26. Dell (1999a). 27. Dell (2000a). 28. Dell (1999b). 29. Dedrick, Kraemer, and Yamashiro (1999).
real time. The transparency of the system allowed Dell’s managers to observe inventories passing through their supplier’s operations.30 The efficiencies of the direct sales model were accentuated by the diffusion of the Internet. The edge the direct marketers experienced before the Internet translated nicely into still further advantages. In contrast, for those using the channel and those in the channel, the situation would only become more dire, even though the Internet also provided them with opportunities to become more efficient.
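Dell’s reported call-avoidance savings can be checked with simple arithmetic. The sketch below uses the per-call costs cited above (up to $13 for an order-status call, $3 to $8 for a technical support call) together with a purely hypothetical annual volume of online orders and assumed baseline call rates per order; only the per-call figures and the 40 percent and 15 percent reductions come from the text.

```python
# Back-of-the-envelope sketch of call-avoidance savings from online ordering.
# Per-call costs and the percentage reductions come from the figures cited in
# the text; the order volume and baseline calls-per-order are assumptions.

online_orders = 2_000_000          # assumed annual online transactions
status_calls_per_order = 0.5       # assumed baseline order-status calls per order
support_calls_per_order = 0.4      # assumed baseline support calls per order

status_call_cost = 13.0            # cited upper-bound cost of an order-status call ($)
support_call_cost = 5.5            # midpoint of the cited $3-$8 range ($)

# The text reports roughly 40% fewer status calls and 15% fewer support calls
# for online purchases.
avoided_status = online_orders * status_calls_per_order * 0.40
avoided_support = online_orders * support_calls_per_order * 0.15

savings = avoided_status * status_call_cost + avoided_support * support_call_cost
print(f"Avoided order-status calls: {avoided_status:,.0f}")
print(f"Avoided support calls:      {avoided_support:,.0f}")
print(f"Estimated annual savings:   ${savings:,.0f}")
```

Even at this modest assumed volume the avoided calls are worth several million dollars a year; scaled to Dell’s actual transaction volume, savings in the range Dell reported are plausible.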
Trying to Score on Mike The commercialization of the Internet created challenges for all PC firms and allowed the entrance of some new players whose business models were predicated on using the Internet. Competing with the direct marketers was difficult enough, even when the direct marketers were limited by their dependence on catalogs and labor-intensive telephone ordering. With the introduction of Internet-based ordering, the cost advantages (combined with the other advantages) became overwhelming. Recognition of the problem was simpler than fashioning a credible response. On the one hand, a dramatic move to direct sales methods meant alienating the existing sales channels. On the other hand, remaining with the push system, no matter how sophisticated, meant that the direct sellers would retain their advantage. This problem faced not only manufacturers such as Compaq and IBM but also distributors such as Ingram Micro and Tech Data, value added resellers (VARs) such as Compucom and General Electric IT Services, and retailers such as CompUSA and Fry’s Electronics (see figure 7-2, where value added resellers and retailers are combined).
The New Entrants The possibilities for marketing PCs created by the Internet were not lost among entrepreneurs. The Internet quickly attracted a number of start-ups that intended to sell PCs from their websites; the dotted boxes in figure 7-2 represent these. Moreover, one failing bricks and mortar retailer, Egghead Software, closed its stores and transferred its operations entirely to the web.
30. Dell (2000b).
Figure 7-2. PC Value Chain, circa 2000
[Figure: the chain again runs from component suppliers (Intel, Seagate, Microsoft, and others) and makers of subassemblies and stuffed motherboards (Mitac, Acer, FIC, Intel motherboards) to assemblers (Compaq, IBM, and HP, plus the direct sellers Dell, Gateway, and Micron), and through the channel of distributors (Ingram Micro, Tech Data, PC Wholesale, CHS), corporate resellers, retailers, VARs, and integrators (CompUSA, Compucom, MicroAge, GE Capital IT), internal service organizations, and local computer shops to end users. New Internet-only firms now sit alongside the channel: Internet retailers (Buy.com, Priceline.com) and Internet referrers (CNET.com, Yahoo.com).]
Source: Martin Kenney. Corporate resellers and retailers tend to be no larger than VARs, integrators, and the like. Acer, Mitac, and FIC are Taiwanese vendors.
Creating an electronic storefront was quite simple from two dimensions: the first was the ease with which a retail engine can be implemented on the web; the second pertains to the ease of organizing fulfillment. In the PC sector the existence of distributors such as Ingram Micro simplified entry in much the same way as Ingram Books facilitated the establishment of Amazon.com. The Internet storefronts had significant advantages—they carried no inventory, they required no sales staff, most of their orders were handled electronically, and they operated twenty-four hours a day, seven days a week. The Internet retailers also had weaknesses. The first of these was their dependence on distributors for fulfillment. So, for example, in fiscal year 1998 Cyberian Outpost purchased 38 percent and 10 percent, respectively, of its products through two major distributors, Ingram Micro and MicroAge. Buy.com had an even closer relationship with Ingram Micro, which was contracted to provide all of its computer hardware and software products.31 Buy.com was completely dependent on Ingram to provide timely and accurate order fulfillment. The core competency of the online retailers was the attraction of customers and the development of their brand name. Their long-term viability was uncertain, because these computer products were commodities and profitability could be difficult to attain. Another methodology was a referral system, whereby an Internet firm such as a portal referred customers to an assembler or distributors. Leaders at this were Yahoo! and CNET. For example, CNET claims that in fourth quarter 1999 it was the top referrer of traffic to the online transaction areas of Dell, Gateway, IBM, Acer, and Apple Computer.32 The significance of these referral programs for the PC industry is difficult to gauge. However, they offered yet another channel from the manufacturer to the end user and could outflank the bricks and mortar channels. CNET has made a major advertising commitment in an effort to raise the visibility of its site and make it the premier technology-related reference site. The PC industry is nothing if not innovative. Another strategy for selling more PCs is to launch “affiliate” sales programs. A San Francisco startup, PeoplePC, pioneered affiliate buying in 1999, when it announced deals to provide PCs to employees of Ford and American Airlines.33 PeoplePC 31. Buy.com, Inc., “SEC S-1 Filing,” October 27, 1999 (www.buy.com [March 10, 2001]). 32. CNET, Inc., “CNET, Inc. Announces Fourth Quarter Financial Results,” February 3, 2000 (www.cnet.com [March 10, 2001]). 33. Rachel Konrad and Michael Kanellos, “Dell to Supply PCs for American Airlines,” CNET News.com, April 6, 2000.
teamed with HP and Uunet to offer a PC and Internet access at $5 per month for three years. Ford sees this as a way it can communicate more regularly with its employees, and the UAW supported the program as a way to communicate more effectively with its members.34 These affinity programs could expand; Intel and American Airlines have announced similar programs. In contrast to the experiences in some other retail sectors where ecommerce start-ups captured significant market share, in computers the start-ups have had difficulty capturing a profitable market space. This is, in part, due to the difficulty nearly all participants have in achieving sustained profitability. Moreover, in contrast to other sectors such as books and CDs, in PCs the established leaders such as Dell quickly implemented WWWbased sales and other activities. The new entrants did not unleash a wave of creative destruction; rather, they formed a new pipeline to the customer.
Traditional Assemblers: Compaq, IBM, and Hewlett Packard The Internet actually reinforced the competitiveness of the direct marketers and increased the difficulties for the traditional assemblers. The late 1990s were difficult for the traditional assemblers as they continued to lose market share to the direct marketers and were slammed by component price decreases, which devalued their inventory. The Internet posed a powerful dilemma. Since a significant share of their sales is through VARs and other system integrators, shifting away from the channel would create significant costs related to augmenting their own customer service divisions. If they did not begin direct sales, then they would likely continue to lose market share. This was not an idle threat; major firms such as Packard Bell/NEC and AST Research/Samsung had already been driven out of the market. Moreover, the channel could always switch their efforts to selling white boxes, essentially augmenting their full-service product lines with their own PCs. To top it off, it was estimated that Compaq’s profit on each consumer PC sold was as little as 4 percent.35 The chaos among the traditional assemblers was profound. For example, after online retailers began selling Compaq PCs, in February 1999 Compaq responded by forbidding such sales because they undermined its 34. Joe Wilcox, “Ford Wires Employees with PCs, Net Access,” CNET News.com, February 3, 2000. 35. Joe Wilcox, “HP Surges Ahead of Compaq in Retail,” CNET News.com, March 30, 2000.
Table 7-2. Compaq’s Distribution Moves, 1996–2000

October 1996    Implements web-based intranet
July 1997       Implements build to order program and channel configuration program—distributors/resellers complete final PC assembly
November 1998   Unveils Prosignia line of PCs, marketed and sold by direct order only
January 1999    Forms Compaq.com business division to oversee Internet and direct sales
May 1999        Launches distribution alliance program—contracts resellers to produce direct order computers
January 2000    Purchases Inacom, a distribution partner with 4 U.S. assembly and distribution facilities

Source: Ken Popovich, “Compaq’s Latest Direct Bid: Purchasing Inacom,” PC Week, January 10, 2000, p. 10.
offline retailers. Consider the difficulty of the situation: the direct marketers were constantly increasing their market share, and the assemblers were wrestling with the impacts of the Internet on their business models. Abandoning the channel may save costs, but it also meant abandoning intermediaries who still play a very important customer service role. Having said that, reaction by the assemblers was slow. From an analysis of Compaq’s press releases, it appears that only in October 1996 was there any announced reaction to the potential of the web, and then it was only the creation of a WWW-based intranet. This is approximately two years later than Dell. It was not until July 1997 that Compaq truly responded to the threat. The most important measures in this response can be seen in table 7-2. Compaq’s first significant measure, to develop a channel assembly program, can be seen as a response to the threat of the mid-1990s, and not to the looming new competitive disadvantages posed by e-commerce. Only in November 1998 did Compaq unveil a line of computers meant to be sold on the web. But perhaps most telling was the striking admission in 2000 by Compaq’s president that the company did not have “the ability to take an order, do the configuration online, and be able to track the order and fulfill it.”36 In January 2000 Compaq bought the fulfillment operations of one of its distribution partners, Inacom, so that it could integrate the value chain. 36. Ken Popovich, “Compaq’s Latest Direct Bid: Purchasing Inacom,” PC Week, January 10, 2000, p. 10.
Compaq was not alone. IBM’s experiences in the retail channels were, if anything, worse. The difficulties were so overwhelming that as of January 2000 IBM withdrew its consumer market brand, Aptiva, from the retail market.37 This decision freed IBM to enter the BTO direct marketing arena. By 2001 it appeared as though IBM’s direct marketing initiative was successful. In the retail channel, the largest beneficiary of IBM’s withdrawal was Hewlett Packard, which managed to capture most of the market share IBM abandoned. In the business sector, IBM’s competitiveness was based on its ability to deliver total systems’ solutions. In these cases, IBM could build to order, and the cost of the PC was hidden in the cost of the entire solution.
The Distributors The early literature on e-commerce posited that through using the Internet it should be possible to make direct links between manufacturers and end users, thus disintermediating distributors.38 Of course, this is what the direct model already did. In the PC value chains, distributors such as Ingram Micro, Tech Data, Avnet, and CHS have been critical players.39 The breadth of their offerings is staggering. For example, Ingram has more than 280,000 different stock keeping units (SKUs), which range from the smallest passive component to finished system and software. These distributors are fully capable of assembling and delivering a finished PC directly to the customer. To facilitate their operations and expedite delivery, these firms have built a network of warehouse/logistics facilities, enabling them to deliver orders the next day in most of the contiguous United States.40 37. Joe Wilcox, “IBM to Sell Aptiva Direct,” CNET News.com, October 18, 1999. 38. Profits are elusive in PC distribution. Most of these firms distribute a wide variety of IT products, and PCs, though large volume, are quite low in profitability. Distribution has great difficulty extracting profits from its sales. For example, in full fiscal year 1999, net sales at Ingram Micro topped $28.1 billion and grew 27 percent over 1998. However, net income was only $183 million versus $245 million for 1998. Tech Data, though smaller ($11.5 billion in sales), was able to earn nearly $192 million in fiscal year 1999 (Tech Data [1999]). For the distributors the problem is that their suppliers can open a website and sell directly to consumers, capturing the distributors’ profits and removing time from the value chain. 39. Ingram Micro is interesting because its sister companies include Ingram Books, which is the world’s largest book distributor, and Ingram Entertainment, which is a major distributor of entertainment products. Amazon.com was able to quickly enter the book market because it could outsource fulfillment and logistics to Ingram Books. 40. Distributors such as Ingram and Tech Data have sophisticated delivery operations in Europe and are building networks in Asia.
For the assemblers, such as IBM and Compaq, the distributors were critical players in the channel. Though in some cases the distributors and assemblers were linked by EDI systems, for the most part the relationships were relatively distant. The traditional method was to simply ship as many computers as possible to the distributors, give them price protection, and then hope the channel could sell the machines. Of course, this distribution methodology led to bouts of excess inventory and too much handling when compared with the direct marketers. The disadvantages of the traditional value chain were obvious. In response to a severe inventory crisis in 1997–98, the large assemblers and the distributors introduced a new business model called “channel assembly.” This was meant to divide the assembly of a PC into two segments. In the first, the box, motherboard, floppy disk drive, and other components whose value was decreasing only slowly would be undertaken; in the second and final segment, the addition of the parts most susceptible to value erosion—the DRAMs, the microprocessor, and the hard disk drive—was completed in the channel immediately before the PC was delivered. The distributors and VARs have been under enormous pressure and the smaller distributors have been either acquired or left the business. The pain has not been confined to the smaller firms; two leaders, CHS and MicroAge, filed for bankruptcy in April 2000. Channel assembly was an aspect of the effort to adopt a “pull” model. Ingram Micro renamed its business model “demand-chain management,” referring to the idea that demand should “pull” the computers and their components through the system.41 The need to reduce inventory and other costs also has contributed to a consolidation of the value chain. In May 1999 Compaq decreased the number of its U.S. distribution partners from thirty-nine to four.42 Another effort to streamline the demand chain was vendor co-location programs, in which the distributor established a configuration operation adjacent to or even in the assemblers’ factory. Systems reconfiguration and customer shipping occurred from the vendor’s site, resulting in cost and time-to-market efficiencies. Channel assembly and co-location strategies were not directly related to the Internet. Contemporaneously, in 1998 the major distributors began introduction of electronic commerce tools to encourage customers to 41. Ingram Micro, Inc., “SEC 10-K Filing,” March 31, 2000 (www.ingrammicro.com [March 10, 2001]). 42. Ingram Micro, “SEC 10-K Filing.”
migrate to web-based ordering. For example, Ingram’s web site, www. ingrammicro.com, is meant to be a business center for resellers, that is, the next tier downstream. The site features real-time pricing and availability, online ordering, order status, and an extensive product catalog. Ingram also provided resellers access to real-time ordering, product allocation, order status, product search, and pricing and availability status. This permitted downstream VARs and the like direct access to Ingram’s mainframe inventory systems. With these tools, retailers could act as the intermediaries between the customer and Ingram without the customer knowing. Depending on the product, Ingram could even drop ship the product directly under the VARs’ label. The economics of online sales were as compelling to the distributors as they were for Dell—by 1999 all of the major distributors had an e-commerce site for their customers. In conjunction with introduction of the site, many of their smaller customers were transferred to the e-commerce site in an effort to cut costs. So, in effect, the establishment of an e-commerce site was used by the distributors to rationalize their customer chain.
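The logic of channel assembly described above can also be put in numbers. The sketch below compares the value erosion on a machine fully assembled up front and held in the channel with one whose volatile parts (microprocessor, DRAM, hard drive) are added only just before delivery. The component values, erosion rates, and holding times are illustrative assumptions, not industry figures.

```python
# Illustrative comparison of traditional full assembly versus channel assembly.
# "Stable" parts (case, motherboard, power supply) lose value slowly; "volatile"
# parts (CPU, DRAM, hard drive) lose it quickly. All figures are assumptions.

stable_value = 300.0        # hypothetical value of slowly depreciating parts ($)
volatile_value = 500.0      # hypothetical value of rapidly depreciating parts ($)
stable_rate = 0.002         # assumed value loss per week for stable parts
volatile_rate = 0.015       # assumed value loss per week for volatile parts

weeks_in_channel = 8        # time a fully configured PC waits in the channel
weeks_before_delivery = 1   # holding time for parts added at final assembly

def erosion(value, rate, weeks):
    """Cumulative value lost on parts held for `weeks` at a weekly loss rate."""
    return value * (1 - (1 - rate) ** weeks)

# Traditional model: everything assembled up front and held in the channel.
traditional = (erosion(stable_value, stable_rate, weeks_in_channel)
               + erosion(volatile_value, volatile_rate, weeks_in_channel))

# Channel assembly: volatile parts added only just before delivery.
channel_assembly = (erosion(stable_value, stable_rate, weeks_in_channel)
                    + erosion(volatile_value, volatile_rate, weeks_before_delivery))

print(f"Value lost, traditional assembly: ${traditional:6.2f}")
print(f"Value lost, channel assembly:     ${channel_assembly:6.2f}")
```

Under these assumptions, deferring the volatile components until final configuration shields most of the depreciation-prone value from the channel’s holding time, which is the efficiency the channel assembly and vendor co-location programs were designed to capture.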
Value Chain Solutions The electronic components and particularly the PC supply chain were rife with incompatible formats for providing product information, and there was no taxonomy for that information. Parts numbers were not even defined in a standardized fashion. Figure 7-2 illustrates the difficulties when the players in such a complicated value chain all have different definitions and descriptive parameters for their products. This is particularly true when contrasted to the direct marketers, who did not have to depend on this complicated Tower of Babel. What this meant was that the channelbased value chain was plagued by informational inefficiencies, which in an offline world were surmounted by a thick web of personal connections and information sharing. The extremely complicated topography of the PC value chain, characterized by its diverse EDI systems, a reliance on phone, fax, and paper purchase orders, and varying manufacturers’ or distributors’ websites, rendered the value chain opaque and inefficient. The terrific growth of the direct marketers in 1997 and 1998 encouraged the firms in the channel and those dependent on the channel to search for strategies for decreasing costs, speeding information flow, and making the value chain more transparent. In 1998 a group of major PC and other IT firms formed
an independent nonprofit organization, RosettaNet, dedicated to promoting industry-wide initiatives to adopt common electronic business interfaces.43 Many of these firms also participated in the formation of Viacore, which is developing an e-commerce hub to translate RosettaNet information for member top-tier demand chain companies. Even with a nonprofit organization seeking to develop standards, the disaggregated and chaotic nature of the IT industry’s value chain created an opportunity for a business-to-business (B2B) entrant capable of knitting the value chain together and providing customizable solutions to various participants. The only important entrant specializing in electronics is pcOrder.com. The firm’s business proposition is that all of the information in the value chain should be moved online. Ideally, pcOrder.com would link all the firms in figure 7-2 into one compatible XML-based system. However, to date most of its efforts have focused on the parts of the value chain from the assemblers downstream. pcOrder.com has a multidimensional business model. The first dimension is a modular suite of customizable software applications for any constituent of the value chain. The second dimension is a standardized database consisting of over 600,000 SKUs from over 1,000 manufacturers, which for a fee VARs and resellers can use to compare, configure, and order products online.44 Full implementation of pcOrder’s solution would make all of the information flow in the value chain electronic, dramatically lower inventory, and render the entire system more transparent. The pcOrder business model does appear to have some contradictions. For example, though Ingram Micro and Tech Data offer products through the pcOrder database, they also have their own website for VARs. It is really not in the distributor’s interest to contribute its data to a pcOrder.com type of website; however, the fear of losing customers does force distributors to participate. The pcOrder database, Techbuyer.com, allows the VARs and resellers to compare configurations and prices online and then order online. This could be a valuable option, because reselling hardware has margins as small as 1–2 percent.45 For the VARs, the advantage is ease of use, thereby saving time that can be better used for value adding activities.
43. RosettaNet, “3Com and CompUSA Adopt Web-Based Supply Chain Process,” press release, February 2, 2000 (www.rosettanet.com [March 10, 2001]). 44. pcOrder.com (1999). 45. Tiffany O’Brien (investor relations, pcOrder.com), telephone interview with Martin Kenney, April 10, 2000.
The use of the system by the resellers and VARs means that the demand chain will become more customer-driven. This does not mean that the distributor’s inventory will be eliminated entirely; however, it should be able to dramatically decrease its inventory. In particular, if this system is combined with complete channel assembly, it might be possible to create a value chain that is nearly as efficient as that of direct assemblers while retaining the service and close interaction with customers that was the strong point of the nondirect system. The ultimate outcome is still, of course, indeterminable.
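The informational problem that RosettaNet, Viacore, and pcOrder.com set out to solve, the absence of a shared taxonomy and consistent part descriptions across the value chain, can be illustrated with a toy normalization step. The sketch below maps two invented vendor catalog records onto a single common schema so that equivalent parts can be compared and ordered electronically; the field names, records, and schema are hypothetical and are not RosettaNet’s actual interface definitions.

```python
# Toy illustration of the "common business interface" idea: map inconsistent
# vendor catalog records onto a single shared schema so that products can be
# compared and ordered electronically. Records and field names are invented.

# Two vendors describing the same hard disk drive in incompatible formats.
vendor_a_record = {"PartNo": "HDD-20GB-7200", "Desc": "20GB 7200RPM IDE HDD",
                   "Price": "129.00"}
vendor_b_record = {"sku": "ST320420A", "capacity_gb": 20, "rpm": 7200,
                   "interface": "IDE", "unit_price_usd": 125.5}

def normalize_vendor_a(rec):
    """Parse vendor A's free-text description into the shared schema."""
    capacity, rpm, interface, _ = rec["Desc"].split()
    return {"sku": rec["PartNo"],
            "category": "hard disk drive",
            "capacity_gb": int(capacity.rstrip("GB")),
            "rpm": int(rpm.rstrip("RPM")),
            "interface": interface,
            "price_usd": float(rec["Price"])}

def normalize_vendor_b(rec):
    """Vendor B already uses structured fields; rename them to the shared schema."""
    return {"sku": rec["sku"],
            "category": "hard disk drive",
            "capacity_gb": rec["capacity_gb"],
            "rpm": rec["rpm"],
            "interface": rec["interface"],
            "price_usd": rec["unit_price_usd"]}

catalog = [normalize_vendor_a(vendor_a_record), normalize_vendor_b(vendor_b_record)]
# With a shared schema, a reseller can compare equivalent parts across vendors.
cheapest = min(catalog, key=lambda item: item["price_usd"])
print(f"Cheapest 20GB drive: {cheapest['sku']} at ${cheapest['price_usd']:.2f}")
```

Multiplied across hundreds of thousands of SKUs and dozens of trading partners, this kind of normalization is what turns a Tower of Babel of phone, fax, and incompatible EDI formats into a value chain that can be queried and configured online.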
The Post-PC Era There has been a veritable tidal wave of prose announcing the dawn of the post-PC era. The two technological developments hailed as harbingers of this new era are the Internet and wireless. The claims of the adherents to the “post-PC” position are often difficult to understand. The strong interpretation is that the PC will disappear, to be replaced by another device or set of devices. Considering that the current global installed base of PCs is over 200 million, this claim appears dubious. A weaker interpretation is that the PC will become one of many devices connected to the Internet. The crux of this argument is that the PC will gradually lose its status as the only end-user device attached to the Internet—a more credible argument. Any claim that handheld devices will replace the PC as the Internet access device of choice is dubious for both convenience and technical reasons. For small bits of information such as stock quotes, time, weather, or even traffic reports, mobile devices such as telephones are a viable option. But handheld computing devices such as the Palm Pilot provide an only barely adequate viewing experience. For a richer experience, full-size monitors (either flat panel or CRT) are far superior—witness the continued increase in the screen size of notebook computers. From the technical perspective, there are significant issues about how to shrink web pages from those developed for a computer monitor to ones that are readable on a mobile telephone or even a Palm Pilot (most of which are gray-scale). There seems little likelihood that non-PC mobile devices will usher in the post-PC world; however, they will end the hegemony of the PC as the only Internet on-ramp device. There are more formidable competitors of the PC, which deliberately use attributes of the Internet in an attempt to dethrone the PC. These have
combined with macroenvironmental tendencies that will have a significant influence on the viability of the PC. The first tendency is the massive increase in bandwidth and concomitant decrease in cost throughout the telecommunications infrastructure. There can be little doubt that homes and small businesses will soon have high-bandwidth service onto the premises, be it through DSL (a digital subscriber line), cable, or some other media. The second tendency is for computing power on the desktop to no longer be a limiting factor for the vast majority of applications. The third tendency is that there will be a web-centric solution for nearly every desktop PC application. The harbingers of this are free web-based e-mail, calendars, and online photo sharing. Many office productivity applications may also be used online if latency and bandwidth problems are resolved. The Internet thus may have the paradoxical result of cannibalizing some of the functions of the device most important for its diffusion. The Network Computer (NC) is the device that has attracted the most attention as a possible substitute for the PC. The NC was first touted publicly in 1995 by Larry Ellison, the CEO of Oracle.46 The argument for the NC is that corporate management information system (MIS) managers would have a much easier time managing the computers on their networks if the PCs were converted into “dumb” terminals. Initially, one of the arguments in favor of the NC was that it would be less expensive than a fully configured PC, which at the time cost an average of $2,500. In the interim, the cost of a PC dropped below $1,000, removing that advantage. The more significant advantage was the lower total cost of ownership of a PC, which is much greater than the initial cost of the PC. Despite the promise of the NC, it never sold well. For example, in 1999 an estimated 700,000 units were sold in the corporate sector, but projections estimated that sales would increase to 6 million units in 2003.47 In 2001 the NC, though gaining market share, has not yet mounted a serious challenge to the PC. There will be a continuing effort to create an NC or, at least, move the applications software to the web. The most interesting effort in this area was the 1999 release by Sun Microsystems of the office productivity suite, Star Office, which is predicated on networked computing. Given these efforts, it would be a mistake to completely dismiss the NC’s potential. 46. Gary Rivlin, “The Network Computer Strikes Again,” The Standard, March 20, 2000 (www.thestandard.com [March 16, 2001]). 47. Rivlin, “Network Computer Strikes Again.”
Another potential threat to the suzerainty of the PC in the home is the television set-top box. The set-top box is meant to provide the computing power, connectivity, and functionality to allow the TV to take the place of the computer. As the cable modem becomes more prevalent, the television, which is really a monitor (though of very low quality) and an electromagnetic wave reception device, could be converted into a networked entertainment and shopping device. When high-definition television (HDTV) is available, the "television" networked to the Internet could become a significant competitor of the PC, particularly in the family room. The set-top box also has the potential to connect other home devices to the Internet; if the home has a high-bandwidth connection to the telecom network, a varied menu of functions, including those of a PC, could be transferred to the Internet. As an example, one new General Instrument set-top box has an IDE port to which a hard drive can be connected. In the home computing environment, the set-top box and the HDTV could be an alternative to the still-difficult-to-use PC. Another alternative is the "Internet appliance," which consists of a visual output device and an input device such as a keyboard. Normally, these do not have a Microsoft operating system and often have no permanent storage device. Internet appliance offerings have come from Dell, Compaq, and even Microsoft. In April 2000 AOL and Gateway announced an alliance to offer their own Internet appliance.48 Of course, for all these players, the success of the appliance would cannibalize their PC sales. The final threat to the PC is from game consoles. For example, the Sony PlayStation 2 has as much graphics power as any PC and will be equipped with a DVD drive, an IEEE 1394 FireWire connector, and at least one PC card slot. The PC card slot could hold a network connection in the form of an Ethernet or cable television connection. All of this would be sold for game machine prices. The machine does not have any Microsoft software or Intel-compatible chips. These game machines could banish the PC to the home office. And if web-based productivity applications were offered, the PlayStation could absorb the PC's office functions also. There can be no doubt that both Microsoft and Sony recognize this potential. For example, in late 1999 Sony announced plans to work with Cablevision Systems Corporation to develop and deploy a new-generation digital entertainment and broadband communications platform throughout
48. Stephanie Miles and Jim Davis, "AOL, Gateway Join Internet Appliance Fray," CNET News.com, April 5, 2000.
the New York metropolitan area. For now this system is being designed to connect with Sony set-top boxes. One potential drawback is that users may want their files stored locally. If the wide variety of PC functions is unavailable, are the cost and simplicity of the game machine sufficient to displace the home PC? It is impossible to predict the outcome of the competition among the various platforms. The handheld devices will gain market share, but they are not a direct threat to the PC. More uncertain is the outcome of competition with devices intended to move much of the computing from the desktop to the network. If these solutions are adopted, then the future of the PC industry could be dramatically altered. These NCs, set-top boxes, and game machines are not PCs. Should demand shift, a response by the PC industry would be difficult, because, quite simply, the reasons for adoption would not be cost, but rather ease of use and lower total cost of ownership. This is especially true because the traditional profit drivers in the PC industry, increases in speed and storage, are becoming less important. A PC running a microprocessor a generation or two old is still adequate for most of the market; consumer focus seems to be moving from having the fastest PC to having more Internet bandwidth through a DSL line or cable modem. While technologies such as voice recognition are touted as stimulating future demand for more powerful PCs, so far widespread adoption still seems distant. The next decade will almost surely be one in which multiple Internet access devices will compete.
Discussion

The impact of the Internet on the PC industry has been intertwined and contemporaneous with the competitive threat from the direct marketers. This makes it difficult to attribute the attempts to streamline the channel to one threat or the other. There is no question that the direct marketers were able to leverage the Internet to make their operations even more efficient than they already were. The willingness of Dell customers to use the Internet permitted the company to achieve significant savings throughout its entire operation and heightened its competitiveness. The dawning of e-commerce did attract some new online retail entrants, but in comparison to a number of other industries (such as books and CDs, autos, and services), the new entrants were unable to disintermediate existing players, so they became yet another segment in the value chain. In
this sense, we can say the Internet has had little transformative impact on the PC industry. From another perspective, the Internet will have a dramatic influence on the value chain. The PC industry is a chaotic shambles of incompatible information systems, inadequate and incomparable product descriptions, and non-value-adding human involvement in the communications stream. The competitive threat of build-to-order (BTO) and the open, nondiscriminatory nature of Internet standards create an environment in which market competitors can agree on common standards without providing any single firm an advantage, thereby avoiding one of the difficulties that often emerges when competitors discuss the adoption of an EDI system. The adoption of these standards will have a profound impact on the efficiency of the PC value chain. The final impact of the Internet on the PC industry is the dethroning of the PC as the exclusive device for Internet access. Depending on the speed with which greater bandwidth becomes available, it is possible that a network computing device—an NC, a game machine, or even an amalgam of one of these and a PC, perhaps without a Microsoft operating system or an x86-compatible microprocessor—could challenge the PC for primacy as an Internet access device. During the next decade, the computing industry will shift from the PC-centric world to an Internet-centric world. This should allow a "thousand flowers to bloom," in the sense of a multitude of devices connected to the Internet. Such an evolution does imply a major reorientation of where the locus of technological innovation will reside.
8
E-volving the Auto Industry: E-Business Effects on Consumer and Supplier Relationships
Susan Helper and John Paul MacDuffie

Kate has decided she wants to buy a new car. Her web pad brings up many sites, and she narrows her choice to two: Ford.com and Buildyourowncar.com. On Ford.com, she settles back with a cup of coffee as a list of options is presented to her. (She has given the Ford site permission to look at data generated from her previous web surfing and purchases, so it knows that she is a twenty-eight-year-old woman who likes windsurfing and who often takes her four-year-old nephew on outings. The information includes full body measurements for both of them.) The site starts with available models, listed in order of potential appeal. Her list starts with the Ford Focus, an inexpensive compact, and ends with the Ford Behemoth, a twenty-two-foot vehicle that can transport an entire soccer team. She clicks on the Focus and looks at a range of options. Prominently displayed is a roof rack specially configured for windsurfing boards; with a smile, she selects it without hesitation. A little further down is the "Lego car seat," which includes a desktop and several receptacles filled with Lego blocks. "Wow!
Thanks to Morris Cohen, David Ellison, Charlie Fine, David Levine, Jim Rebitzer, and Anita McGahan for helpful discussions. We are also grateful to the many people in the industry whom we have interviewed, most of whom have asked that their firms not be identified.
Perfect for my nephew!" she says as she clicks on the item. Next, based on Kate's height (5'3"), the Ford choice board suggests extra-high seats; Kate agrees. Based on the usage records from her past vehicle (most of her driving is local), the Ford site recommends the basic navigational package, which costs only $400; Kate accepts this, too. The next recommendation is for a service provided by a Ford alliance partner that allows two-way interaction with local businesses in the areas where she travels most (restaurants and movie theaters, gas stations and dry cleaners) through a voice-activated interface embedded in the instrument panel. For a monthly fee of only $7.95, Kate decides this is worth a try. She also notes that the Focus is equipped with the new Win-Shield interface, which will let her plug in various IT devices (her e-book thrillers or her Palm LXVI). She's offered a chance to visit the Win-Shield store of accessories, but passes—she's already got plenty of gadgets, but it's good to know she can now use them all during her commute, and that she can upgrade the Focus's memory and disk storage capacity at will. She makes a few more choices and clicks to get the final price on her custom-configured car, which she could have delivered to her driveway in three days. Up comes a list of dealers in her area who could offer a test drive (of a similar but not identical vehicle to the one she ordered); she schedules an appointment for the next day. The test drive will cost her $35, but she decides it is worth it. Her dad has told her that dealers now make their money selling assorted transportation services. She decides to ask the Ford salesperson about their CustomLease program; she wants to reserve a minivan for the two weeks her parents will be visiting in June and an off-road SUV for her August vacation with three college roommates. Before clicking the "Finance Your Purchase" button, Kate saves her Ford Focus configuration and looks at a brand-new site, Buildyourowncar.com. This site lets her pick components from any manufacturer; her custom vehicle is then built at a contract assembly plant used by Buildyourowncar and other similar firms. Kate is very intrigued by this, because while she likes the styling of the Ford Focus, she has always admired the reliability of Honda engines and also fancies a Bose sound system. Buildyourowncar.com assures her that all of these parts will fit together just fine, due to the standard interfaces agreed on in 2008 by all automakers and suppliers—but Kate isn't sure. Consumer Reports, among others, reports quality problems with these "mix and match" vehicles, including ambiguity about who covers warranty costs. Kate is tired from having spent a couple of hours looking for cars, so she takes a break to think about her options. She does not really believe her parents when
they say that they used to spend days or weeks looking for new cars and still not end up with one they really wanted. How likely is the above scenario? What would need to be done? When? By whom? Who would gain and who would lose? In this chapter we provide a preliminary exploration of these issues. We argue that the Internet-mediated scenario described above will come about only if several other major changes in the auto industry occur as well. That is, in order to be able to buy a car the way that we buy a computer today (online, with the consumer specifying components, software, and services provided by different firms), cars will have to be "built to order" as personal computers are today. To make this possible would require large changes in product development (a more modular product architecture, with more standardized or common parts across models); in the supply chain (a larger role for suppliers—whether financially independent or vertically integrated—in designing, building, delivering, and possibly even installing modular parts); and at dealers (who would serve as conduits of information between consumers, designers, and assembly plants and would derive revenues primarily from the provision of services rather than from vehicle sales). This is a daunting prospect, given the vastly greater complexity of automotive product designs, production processes, and supply chains; the lack of evidence that consumers are willing to pay a premium for customization; and the much slower rate of technical obsolescence in autos (rapid obsolescence is the reason it is so costly to hold inventory in computers). Nevertheless, "build to order" is the energizing vision the Internet gives the auto industry. "Build to order is the key," according to J. T. Battenberg, CEO of Delphi Automotive, the world's biggest supplier. "That's the game-changer in the industry."1 "Build to order" is where the incumbent automakers potentially gain a competitive edge over a variety of dot-com challengers by tying their Internet-facilitated relationships with consumers together with their Internet-facilitated relationships with suppliers into one integrated "end-to-end" package. A fully realized "build to order" system would transform industry structure dramatically from the status quo, raising the prospect of automakers who focus only on design and marketing, suppliers who control key elements in the dominant design, contract assemblers who build vehicles for multiple automakers, and new kinds of intermediaries for retailing and distribution.
1. Alex Taylor III, "Detroit Goes Digital," Fortune, April 17, 2000, p. 174.
In the first section of this chapter, therefore, we sketch the build to order scenario, not because we believe it is imminent (the most optimistic observers place the implementation of a full end-to-end build to order system ten to fifteen years away), but because it provides a useful framework for evaluating a host of interrelated trends. The Internet will still have a very large impact on the auto industry even if the build to order vision is not realized. We next lay out the “not build to order” scenario to evaluate the consequences for the industry of this less fundamental set of changes. At a conceptual level, the Internet is a powerful tool for promoting fast, asynchronous communication among large groups of people without requiring them to invest in a specific asset (such as specialized software). Just how large the impact of these developments will be depends on how Internet-fueled reduction in information costs interacts with current business processes. We anticipate potential inventory-related savings of about $500 per car, most of which would accrue to automakers and consumers. We also believe that supplier relationships will be strongly affected by the newly created industrywide e-procurement consortium known as Covisint. Given that the Internet can promote both marketlike dealings with suppliers (through its auction capabilities) and collaborative relationships (by facilitating the transfer of information), Covisint may reinforce the dominant approach taken by various automakers in the past. Finally, we anticipate that a variety of models are also likely to coexist on the dealer side. While Internet-informed consumers and dot-com intermediaries are strongly challenging the traditional retailing model, many (but far from all) auto dealers are adapting with surprising speed to the new opportunities created by e-business. Whether one takes the long-term or short-term view, many questions remain about the impact of e-business on industry structure and competitive dynamics. Will the Internet offer a step function improvement in efficiency and effectiveness of core processes for all players (or at least those that stay in the game)? Or will it provide differential advantage to particular firms (and particular nations) based on how it is combined with existing and emergent capabilities? In particular, will the firms with the greatest mastery of lean production systems be affected by or take advantage of e-business developments in different ways than firms still heavily influenced by mass production? We focus on these questions in a concluding discussion, summarizing our views on what scenario, lying between the two extremes presented here, is most likely for the auto industry over the next ten years.
Scenario 1: Automotive Build to Order

We start with a brief description of Dell's highly successful business model, which has been the object of much scrutiny by the auto industry. We then explore the changes necessary to adapt this model to the auto industry by working through the value chain from the final customer back through retail and distribution, product design, manufacturing, and procurement.
The Dell Direct Model

Dell has demonstrated the power of a build to order system enabled by the Internet in the personal computer industry. The Dell Direct model is based on a reconfiguration of the supply chain, a tight integration of business-to-business (B2B) and business-to-consumer (B2C) capabilities, and new approaches to dealing with customers. Consumers choose a custom configuration at Dell's website, arrange purchase and payment details online (often with phone support from a real person who tries to sell related products and services), and then can track the progress of their order through every phase of production, right up until delivery. Orders go directly from the website into Dell's production schedule, parts are ordered from suppliers only after the order (and payment) is received, parts are kitted immediately before production and built up in cells, and the final product is tested and loaded with software before shipment. Accessories such as a printer or scanner are warehoused by their manufacturer, and the logistics provider ensures their arrival at the customer's site on the same day as the main product.
Consumers as Designers: What Do They Want?

It is easy to see the appeal of this model to automotive consumers, who would be able to order the precise vehicle they wanted, produced on demand. This would stand in stark contrast with the current system, where hypothetically consumers can order a custom vehicle from a long list of options (albeit with a hefty delay), but in reality all the incentives for manufacturer and dealer alike are to persuade consumers to buy a vehicle that is already built, even if it does not match their preferences exactly. Thus a true build to order system would deliver not only speed but real fulfillment of consumer demand, as illustrated in our opening vignette. Online configuration could also speed the incorporation of innovative features and advanced technologies into new products, with automakers
able to capture vast amounts of data on consumer preferences that could be fed directly into product development. These data could also trigger offerings of services customized to the needs of each individual. In effect, the customer becomes a codesigner not only of his or her current vehicle but of bundled services and future product offerings as well. The volatility associated with a system that "pulls" production based on custom orders from consumers, if left unfiltered, would present overwhelming complexity to upstream operations. But in fact, a successful build to order system will be designed to shape consumer demand by controlling what choices are offered. Rather than large numbers of individual features, consumers are likely to be presented with configurations of features to choose from. Configuration choices may be tied to time and price considerations as well. More frequently ordered configurations might be available more cheaply and quickly than more fully customized products; similarly, a high degree of customization might only be available for high-end vehicles. The intriguing dilemma that may face automakers here is that the demand for personalization of vehicles over the next decade is expected to be highest among young, Generation Y consumers, whose initial purchases will be entry-level vehicles. Thus the logic of reserving customization for high-margin vehicles may be challenged under build to order by the direct exposure to consumer demand. Ultimately, the challenge for automakers will be figuring out how to limit configuration choices—necessary to reduce complexity—while giving consumers the ability to choose what is really important to them (or at least the feeling of choosing among features that makes build to order so appealing). At the same time, there is a distinct danger of overwhelming consumers with too much choice or failing to differentiate those customers who want lots of choices from those who do not.
Automotive Retailing: Replaced or Repurposed?

With an early proliferation of B2C automotive buying services on the web, many observers argued that disintermediation would be the most likely fate of automotive retailing. Yet it appears that physical dealerships linked to automakers will continue to exist, especially if build to order comes to pass. Seeing, touching, and driving the product are still crucial to the purchase decision for most consumers. Moreover, as margins on vehicle purchase are driven down, the automakers have powerful incentives to form
successful service relationships with consumers in order to capture a larger percentage of lifetime ownership expenditures. Dealers may still prove to be the best partners for these relationships. Nevertheless, build to order would clearly enable a “repurposing” of the OEM (original equipment manufacturer) retail channel. Imagine a “Gateway Country” equivalent in which consumers can examine samples from the full product range, take a test drive, and talk with a product expert for help in determining their preferred set of features. The actual order occurs at a computer terminal, either at the dealership or at home, where all configuration choices are offered, services are bundled, and prices are determined. The dealership becomes the place to initiate or reinforce the customer relationship, not the focal point of the purchase transaction.2 We expect that dealers of this kind will still seek a large role in vehicle repair and maintenance, but they will face competition from specialized firms with no attachment to particular OEMs or brands. With such a major shift in the sources of dealer revenue, OEMs would need to find innovative ways of compensating these new-era retailers. Could new intermediaries take the place of OEM-linked dealerships? The logic of build to order makes it unlikely. The competitive advantage of this business model results from linking a customer’s order directly with a production process that yields the customized product. Thus we believe that build to order strengthens the survival prospects of OEM-linked dealerships, albeit while requiring dramatic changes in their role and relationship with customers. If incumbent dealers cannot make this transition quickly enough, OEMs may encourage new entrant dealers to assume this critical position in the build to order system.3
Modular Product Design as Enabler of Build to Order

Dell Direct depends heavily on the modular product architecture of a personal computer, which is made up of a small number of separately produced, physically independent "modules" joined along a common interface. Customized products can be easily built by mixing and matching modules. Modules are increasingly standardized across the industry, creating
2. From this perspective, Gateway rather than Dell has been the pioneer in attaching services to the purchase of hardware by individual consumers; Dell's service focus has instead emphasized corporate customers. See chapter 7 by Martin Kenney and James Curry for more discussion.
3. As we discuss in our second scenario, most of these changes would require reform of automotive franchising laws.
the opportunity for huge cost savings through volume production and supplier competition. Best-in-class module suppliers can innovate without high coordination costs through independent upgrades of functionality. There is tremendous OEM interest in modular design and production as a way to cut costs and manage complexity.4 But there is ample ambivalence as well. To explain why requires a brief description of the dominant design of an automobile and how that affects the applicability of modular design rules. We use these definitions, following Ulrich:5
—component: basic building block of systems or modules;
—system: totality of components, interfaces, and software providing one of the key vehicle functions. The elements in a system are typically distributed physically across the vehicle;
—module: a physically proximate "chunk" of components, typically from multiple systems, which can be assembled into the vehicle as one unit;
—product architecture: the scheme by which functional elements are arranged into physical chunks and by which the chunks interact. This can range from modular to integral, and from open to closed.
The current dominant product architecture for automobiles is still substantially integral rather than modular, and closed rather than open. That is, most components are not standardized across products or companies and have no common interface; hence they are highly interdependent with other components and idiosyncratic to a particular model. The specifications for components are typically treated as proprietary and model-specific, shared only between an OEM and a supplier, rather than being widely known and accessible to a wide range of suppliers. Components from different companies or even different models within the same company cannot be easily combined, so customization requires idiosyncratic modifications. The design integrity of a vehicle and certain systemic problems such as noise, vibration, and harshness (NVH) are felt by many designers to require such an integral product architecture. In this view, having standard modules designed to be usable in a wide array of products would compromise design quality or result in overspecification, with modules
4. Fiona Murray and Mari Sako, "Modular Strategies in Cars and Computers," Financial Times, December 6, 1999; Sako and Warburton (1999).
5. Ulrich (1995).
having to be designed to meet the highest requirements of the product range. This product architecture is partly the result of the history of the industry, in which automakers chose centralized control over product design to maintain bargaining power over suppliers. But some integrality seems inherent to the functioning of a car. For example, when a safety system is designed, seat belts and airbags need to be in the interior, where the passengers are. But sensors need to be near the outside of the car, where the obstacles are. In contrast, it is relatively easy for computer makers to unite system functions and geography—that is, to put all functions related to typing in the single physical unit of the keyboard. Thus, using the safety example above, it is possible to imagine different approaches to the division of labor in relation to systems versus modules. There could be one "safety system" supplier that could optimize the design of all the far-flung components, with the production and installation of those components distributed according to their physical placement in the vehicle. Conversely, there could be a "front end" module supplier in charge of designing and building all the parts in that area of the car, including the bumper, headlights, and various sensors related to the safety system. The "system" strategy has the advantage of making the safety system work smoothly but requires the careful integration of idiosyncratic parts by the automaker during final assembly. The "modular" strategy could potentially compromise functionality but allows the car to be divided into large "chunks" that can be designed and assembled independently and then easily combined in response to customized consumer demand. Either strategy could erode the distinctive "look and feel" of individual brands. Besides concerns about possible design compromises, there are questions about the cost impact of modularity. In favor of modules is the prospect of design efficiencies from the combination of parts and functions that might lead to cost savings. For example, Visteon has an instrument panel design in which the cross-car beam (the principal structural support) provides heat sinks and brackets for electronic parts.6 Other cost savings could come with the high volumes associated with modules that were sufficiently standardized to be usable across a wide array of vehicles. Automakers have also anticipated cost savings through the outsourcing of modules to suppliers. If suppliers took over module design, automakers could reduce their own engineering staffs. Furthermore, suppliers, often
6. Georgievksi (1999).
nonunion, have labor rates that are half those at vertically integrated parts plants, all of which are unionized. However, there is reason to question whether these cost savings are real or simply represent a shift of costs from OEMs to suppliers or workers. Savings from the redesign of components into integrated modules have been slow to materialize, partly because many suppliers are just developing their design capabilities, and partly because OEMs, reluctant to allow suppliers much independence in modular supply, continue extensive "shadow engineering." This in turn prevents the realization of expected savings in engineering staff. Similarly, expected savings in direct labor costs can be minimal since labor costs are usually less than 15 percent of a supplier's total costs, and suppliers relying on cheap labor are not always able to meet the productivity, quality, and delivery demands of OEMs. As suppliers take on investment responsibilities, their cost of capital is typically higher than that for OEMs, and they simply pass these costs back in their per-unit price. Finally, logistics costs associated with transporting fragile and oddly shaped modules can be high, thus offsetting savings from the outsourcing of production. Thus doubts about the benefits of modularization are still sufficiently large to prevent any automaker from making a major commitment toward a substantially more modular design based on cost considerations alone. Therefore, the fates of modularity and build to order appear to be powerfully intertwined. Modularity would both permit a mix-and-match approach to customization and help manage the resulting complexity. If cost and design constraints impede the progress of modularity, build to order may be similarly constrained. Conversely, a strong consumer demand for build to order could spur greater investment in modularity.
Manufacturing under Build to Order

On the production side, OEMs face both costs and benefits from build to order. Given the current characteristics of the typical car's design (4,000–5,000 parts; 300–500 suppliers; a proliferation of options), the complexity resulting from customized orders could quickly become unmanageable. Most automakers have been seeking to reduce their build configurations, and at first impression, build to order would seem completely at odds with this goal. However, there are potential production advantages from the combination of modular design and build to order. First, if features are bundled
into carefully chosen configurations and the choices offered to consumers are limited, the total number of build combinations could be fewer than under the status quo, which resulted from the ad hoc addition of more and more options over time. Second, modular production, in which suppliers build up modules and deliver them in sequence to the OEM, can make the final assembly process much shorter and simpler. Third, building customized products will reduce finished goods inventories dramatically. The reason is that online ordering will give automakers earlier and more accurate information about consumer preferences than in the current system, where ordering is done by dealers, who may not transparently reflect consumer desires, due to strategic concerns. Running a true build to order production system also differs greatly from either mass production or lean production. Lean production operates with very low levels of inventory and with quick setups, so it can handle rapid product changeover as long as these changeovers are predictable. Predictability is necessary because very low inventory requires extreme leveling of production, or heijunka, to avoid the waste of idle capacity or of overproduction. A production system based on 100 percent build to order might have too much volatility to allow for this degree of production leveling. Thus lean production could accommodate build to order only with production scheduling that combines "pull"- and "push"-derived demand. The customer front-end would need to support such a scheduling system by helping to steer customers toward those combinations that can be most readily built, given production and parts supply constraints at any point in time.
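A minimal sketch of what such demand steering might look like at the customer front-end follows. Everything in it is hypothetical (the configuration names, the parts, and the lead-time rule are ours, not drawn from any automaker's system); the point is simply to show how a configurator could rank the choices a buyer sees by how readily each can be built from the parts currently on hand.

```python
# Hypothetical sketch of demand steering at a build to order front-end:
# rank the configurations shown to a buyer by how readily each can be
# built from the modules currently available. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class Configuration:
    name: str
    parts: tuple  # module-level parts the configuration requires

def quoted_lead_time(config, parts_on_hand):
    """Rough lead time in days: short if every module is in stock,
    longer for each module that must first be pulled from a supplier."""
    missing = [p for p in config.parts if parts_on_hand.get(p, 0) == 0]
    return 3 + 7 * len(missing)

def rank_offerings(configs, parts_on_hand):
    """Order the choices so the most readily buildable configurations appear first."""
    return sorted(configs, key=lambda c: quoted_lead_time(c, parts_on_hand))

stock = {"base_interior": 40, "windsurf_roof_rack": 0, "nav_basic": 12}
offers = [
    Configuration("base", ("base_interior", "nav_basic")),
    Configuration("windsurf package", ("base_interior", "windsurf_roof_rack")),
]
for c in rank_offerings(offers, stock):
    print(c.name, "-", quoted_lead_time(c, stock), "days")
```

In practice the same logic would also have to respect plant capacity and heijunka leveling, but even this toy version suggests how price or delivery-time signals could quietly push buyers toward the combinations that keep upstream volatility manageable.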
E-Procurement under Build to Order

Electronic, Internet-mediated procurement will provide the underpinning for build to order by facilitating rapid, low-cost dissemination of order information, production scheduling, engineering changes, and other crucial information. While a variety of alternative methods of communication between OEMs and suppliers, such as proprietary electronic data interchange (EDI), now exists to accomplish this task, the Internet offers an infrastructure that can distribute large amounts of information simultaneously and at low cost to all upstream links in the value chain. Beyond this, however, the path taken by e-procurement under build to order will be a function of what mode of supplier relations is dominant as well as the move toward a modular product architecture. To explain these
contingencies, we need to summarize, briefly, two different modes of supplier relations: “exit” and “voice.”7 In the exit model, automakers solve problems with a supplier (regarding price, quality, and so on) by replacing it with another supplier. In the voice model, an automaker works with the original supplier to resolve problems. Since the 1930s, the U.S. industry has generally been characterized by exit relationships, while the Japanese industry has been characterized more by voice. The advantage of the voice model has been a rich flow of information that can eliminate unnecessary or expensive process steps; the disadvantage has been that the trust required for such information exchange makes it difficult to switch suppliers. Conversely, the advantage of exit for the automaker is that it is not locked in to any supplier. Maintaining a credible threat of exit has led U.S. automakers to vertically integrate complicated parts of the value chain to minimize barriers to entry into supplier industries. Thus between the 1930s and the 1980s, most suppliers tended to make relatively simple parts that were designed by automakers; these parts were built into subassemblies in the assembly plant. This was consistent with the highly integral (nonmodular) dominant design and also helped maintain OEM power over easily replaceable suppliers. Under pressure from Japanese competition, the U.S. industry has moved toward voice in the last fifteen years. A key source of superior Japanese quality was held to be the proximity of design to production, so that defects could be ironed out quickly. Accordingly, many suppliers have invested heavily in design capabilities in order to take over design tasks from OEMs. In this sense, the move toward voice in the United States helped pave the way for the current interest in modularization. Suppliers have hoped to persuade automakers of the benefits of sourcing full modules (such as complete interiors) from one firm. The central argument has been one of “core competence”—design and production would be integrated on a large scale by firms that specialized in all the relevant technologies. Wall Street looked favorably on this strategy, and a great wave of consolidation occurred among auto suppliers in the 1990s. We believe that electronic procurement could reinforce either the exit or the voice model depending on the nature of the product architecture’s moves toward modularity. Accept for the moment our premise that build to order requires modularization of the car into a few easy-to-assemble chunks. Modules can vary on two dimensions: (1) they can be produced 7. Helper and Sako (1995).
either by vertically integrated or by independent suppliers; and (2) they can be designed to fit only one OEM (or only one car model) or to be standard across models and OEMs. If modular designs are outsourced to suppliers but remain nonstandard and OEM-specific, extensive interaction during design between OEMs and suppliers will be required. Similarly, if vehicle design remains integral yet the production of OEM-designed components is outsourced extensively to suppliers, extensive interaction will be needed. In both cases, voice could be enhanced by Internet capabilities that support collaborative product design (as elaborated in the next section). In contrast, an exit strategy works well if: (1) there are many suppliers who can make a particular part (so the threat to leave is credible); and (2) there is little payoff to interaction between automaker and supplier, so frequent switching does not harm quality. If modular designs are kept vertically integrated (because control of modules is seen by OEMs as a "core competence"), the outsourced components would be relatively simple parts procured more readily via the exit model. An intermediate case might occur if module designs are standardized across the products of different automakers. On the one hand, this would reduce the need for communication between OEM and supplier, as in the exit mode; on the other hand, these parts would be more complicated to make than individual components, with fewer suppliers capable of making them, reducing the automakers' ability to switch, as in voice mode. Automakers may well vary in their choices about outsourcing and parts standardization.8 If a given module design is standardized—that is, interchangeable across products of two customers—then the supplier of that module to both customers would be hedged against sales declines at either one.9 However, agreeing on a module standard across the industry limits any single automaker's freedom to design new capabilities into its vehicles. For example, a firm such as Toyota, which favors more integral designs and a voice approach to suppliers, would most likely move toward modules by doing the design work internally, maintaining model-specific idiosyncrasies to ensure the integrity of the overall design and working closely with its long-term suppliers to accomplish production performance targets. On the other hand, GM—which favors a more modular product architecture, the
8. For insights on the pros and cons of standardization, see Farrell and Saloner (1992); Shapiro and Varian (1998).
9. If a module supplier was owned by an automaker, other automakers might be reluctant to buy large amounts of parts from it, for fear of dependence and revealing proprietary information, so common interfaces would have less impact in the modules-with-vertical-integration scenario.
outsourcing of modules, and an exit approach to suppliers—would prefer standardized modules available from multiple suppliers so it could use its volume purchasing power and threat of exit to drive down module prices.
Summary

This overview of build to order is necessarily speculative, but it reveals the vast number of interrelated changes necessary to make this production model a reality for the auto industry, including modular design and modular production; supplier relations that support the provision of either nonstandardized modules (voice mode to ensure collaborative product development) or standardized modules (exit mode in support of commoditization and cost reduction); dealers incentivized to pass information on consumer preferences directly to automakers; and consumers willing to pay a premium (in money or time) for more choice and speedier fulfillment.
Scenario 2: E-Effects without Build to Order and Modular Design

If barriers to modular design and other factors outlined above (for example, lack of consumer willingness to pay a premium in the short run; concern about "look-alike" vehicles) prevent a shift toward a build to order system, the Internet will still offer new economies, new capabilities, and opportunities for new business development. As noted above, the key feature of e-business is the de-specification of information technology assets: the ability of firms to achieve fast, cheap, asynchronous communication without investing in proprietary electronic data interchange software or training. We focus here on three areas to develop this "not build to order" scenario: business-to-consumer automotive e-retailing; business-to-business e-procurement; and business-to-vehicle (B2V) products and services.
B2C: Automotive E-Retailing

The earliest business-to-consumer (B2C) automotive application of the Internet was to arm potential car buyers with massive amounts of information about products, dealer prices, factory incentives, and dealer sales tactics in order to even the scales in a transaction where consumers typically felt pressured, misled, and taken advantage of. In one sense, this simply
automated and increased the visibility of the growing number of automotive information services available to consumers since the early 1980s. Yet the Internet makes it possible to integrate these different kinds of information (product specifications, new and used car prices, safety test results, consumer-based quality rankings) much more quickly and cheaply and can better support a personalized search. Such “buying services” have proliferated; some offer primarily information (Edmunds.com), while others provide a referral to a convenient dealer (AutoByTel.com; AutoWeb.com; Carpoint.com). OEM sites, initially no more than online brochures, now offer similar services. Increasingly, a wide array of sites provides information about the in-stock availability of products that most closely match the consumer’s search specifications, complete with a price from various dealers. The transaction is then completed through direct communication (e-mail is most common) between the dealer and the end consumer. Virtual communities of potential car buyers have also emerged at these sites. On Edmunds.com, you not only get price information but can also join “chat room” discussions about the pros and cons of different models among satisfied and dissatisfied owners as well as shoppers. The dot-com buying services see this as the next stage in their business plan. In the words of the AutoWeb CEO, “we are moving from lead generators to providing community and content” with maintenance reminders, online scheduling of service appointments, and links to related services such as financing, insurance, and cell phones.10 For all the increased use of the Internet, there is still a dealer at the end of each purchase transaction. That is because it is essentially illegal in the United States for any end customer to purchase a vehicle directly from the vehicle manufacturer, due to powerful franchise laws at the state level. Auto dealerships are independently owned franchises that decide what vehicles to purchase from the manufacturer, in what quantities and at what time, and how to price and sell them. Dealers place orders based on their feeling for the local market, their past experience with the ups and downs of the business cycle and its effect on sales, their desire to have sufficient numbers of popular models in stock, and their strategy of using models “loaded” with options as a way of price discriminating; this often bears only a glanc-
10. Automotive News, January 2000.
ing resemblance to actual patterns of customer demand. The owners of dealerships are typically entrepreneurs with strong local networks who give generously to political candidates and hence have considerable political power. While the state franchise laws will be challenged aggressively as consumer demand for direct purchasing increases, they are unlikely to be eliminated quickly, or completely. Furthermore, for reasons outlined above, automotive retailers may survive because they offer the customer a chance to see and touch the vehicle, to take a test drive. On occasion the customer may be able to exert some bargaining leverage (for example, when a dealer is clearly eager to move inventory off his lot). For the OEM, dealers still offer the best opportunity to wrap additional services around the purchase transaction and to establish a more personal link to the OEM brand. So far, the OEMs appear to be holding their own against the dot-coms in this area, as their own websites (and the support they provide to dealers to manage Internet sales leads) become increasingly sophisticated.11 While retailers may survive, new entrants may still challenge dealers tied to OEMs. In the “dealer direct” model, a new player (for example, Cars Direct, partially owned by Michael Dell) buys at least one dealership in key states, which allows it to sell vehicles to anyone in those states. These new dealers would then aggregate demand of custom-configured vehicles and fulfill those orders through batched purchases from OEMs. Financially, this appears to be a viable short-term strategy. According to one estimate, it would cost about $75 million to buy dealerships for the top forty product brands in states containing 70 percent of the U.S. market—quite a feasible sum of money to raise.12 Another new entrant, GreenLight.com, has a similar strategy but aims to enlist a select group of dealers as part of its order-taking network. While these new players are likely to be more nimble than the OEMs in providing an appealing B2C interface for consumers, they will ultimately be constrained by current manufacturing inflexibilities and factory-to-dealer fulfillment inefficiencies. OEMs are in a better position to apply the speed and information cost efficiencies of the Internet to eliminating the worst delays of the current distribution system. OEMs can also potentially offer their own “factory direct” alternative, with more 11. Techweb.com, April 24, 2000. 12. Lapidus (2000, p. 43).
direct control over the inventory needed to fulfill the aggregated orders. There are substantial savings to be achieved here. Currently, dealers hold an average of seventy-five days of inventory for each car, at a carrying cost of $431 per vehicle.13 If use of e-business could cut this in half, roughly $200 per vehicle could be saved. The biggest disadvantage for OEMs in this scenario is the burden of getting their existing retailers to make this transition. The new retail competitors will help by making traditional dealers desperate enough to accept a different contract with automakers.14 In short, the biggest B2C battles ahead in a non–build to order world may not be over vehicle purchase, but rather over winning ongoing access to consumers to encourage repeat transactions, brand loyalty, and opportunities for cross-selling of services.
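The inventory figures cited above can be roughly reproduced with a back-of-envelope calculation. The wholesale vehicle value and annual carrying rate below are illustrative assumptions of ours, not figures from the chapter or from Lapidus; they are chosen only to show how seventy-five days on the lot translates into a per-vehicle cost on the order of $431, and how halving the holding period yields savings of roughly $200.

```python
# Back-of-envelope check of the dealer inventory carrying cost (assumed inputs).
wholesale_value = 20_000    # assumed average wholesale value of a vehicle, dollars
annual_carry_rate = 0.105   # assumed financing plus holding cost, per year
days_held = 75              # average days of dealer inventory cited in the text

carrying_cost = wholesale_value * annual_carry_rate * days_held / 365
halved_cost = wholesale_value * annual_carry_rate * (days_held / 2) / 365

print(f"carrying cost per vehicle: ${carrying_cost:,.0f}")                        # about $431
print(f"savings if holding time is halved: ${carrying_cost - halved_cost:,.0f}")  # about $215
```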
B2B: Transforming or Reinforcing Supplier Relationships?

On November 4, 1999, in an action heralded by the Economist as "the moment e-commerce grew up,"15 both Ford and General Motors announced plans to put virtually all of their global purchasing activity into huge, separate, web-mediated exchanges. Less than four months later, these e-archrivals announced that they would merge their separate exchanges into one—now labeled Covisint—and would invite DaimlerChrysler and potentially most of the rest of the world's automakers to join as well. The scale of the economic transactions involved is huge—GM spends about $87 billion a year, working with 30,000 supplier delivery points, with Ford's purchases nearly as large, and DaimlerChrysler's about half of GM's. Furthermore, if all the suppliers of these OEMs use these exchanges for their own purchasing, the impact will be magnified significantly. All told, e-business activity through this mega-exchange could reach $500–$800 billion within a few years. Along the way, tremendous efficiencies are envisioned—from inventory reduction, reduced administrative time, shortened lead times, faster product development cycles, and facilitated communication and collaboration between automakers and suppliers, as well as across suppliers.
13. Lapidus (2000, p. 13).
14. Lapidus (2000).
15. "Riding the Storm," Economist, November 4, 1999 (www.economist.com/displayStory.cfm?Story_ID=256474).
Other e-markets can potentially compete for a share of automotive e-procurement, from specialized materials sites like e-Steel and Plastics.com to B2B pioneers like Ariba and ProcureNet. Furthermore, some automakers, most notably Volkswagen, are intent on developing a proprietary site with their primary suppliers. However, with IT companies like Oracle, Commerce One, and Cisco already tied to Covisint as alliance partners, and with all the smaller auto companies affiliated with GM, Ford, and DaimlerChrysler, as well as Renault-Nissan, Toyota, and mega-suppliers Delphi, Visteon, Lear, and JCI already signed up, this industry consortium approach to e-procurement has considerable momentum.16 Four developments associated with B2B appear to have the most potential for affecting (either changing or reinforcing) past norms of automotive procurement: (1) open architecture and information transparency; (2) automation of steps in the purchasing process; (3) new pricing models that commoditize purchases, such as auctions; and (4) new tools to facilitate collaborative product design of complex components or modules.
Open architecture and information transparency. The planned system architecture for Covisint, and for e-procurement more generally, relies on Extensible Markup Language (XML), which provides data tags and data field labels that can be read by any operating system or application with minimal translation effort. As XML encoding becomes widespread, it will be possible to put all participants in a supply chain—large or small and located anywhere in the world—on the same information system with access to real-time data. This will reduce the barriers to smaller suppliers, who have been disadvantaged by the high costs of proprietary IT systems in the recent past. However, a key determinant of supplier access to new customers is whether the XML tags will be standardized or specialized to one exchange. If, for example, Covisint and e-Steel have different ways of
16. While Covisint has passed initial scrutiny by the U.S. Federal Trade Commission, the agency will continue to keep an eye on the potentially giant site. According to Susan S. DeSanti, the FTC’s director of policy planning, the following e-marketplace activities will be examined for possible anticompetitive implications: joint purchasing or marketing that involves agreements on prices or quantities, detailed information exchange among competitors (such as airline ticket price information that can be used to enforce collusion), and exclusion of firms from membership (DeSanti, 2000). Covisint would appear to do well on most of these grounds, since the exchange seems to be set up to facilitate interactions between individual automakers and their suppliers. It appears that access to the exchange is open to anyone in the industry (though opportunity to purchase equity is not and purchasers reserve the right to qualify bidders).
describing a purchase order, this will make it very difficult for a firm to link its production system to orders coming from both (a point illustrated in the sketch later in this section).17 This new capability could lead to savings from reduced scrap and increased productivity. Open architecture IT should also allow a reduction in inventory held as a buffer against uncertainties created by inaccurate or out-of-date information.
Automation of the purchasing process. An even larger effect of the web's open architecture comes from the ability to automate much of the purchasing process. Expert systems can be created, even for smaller suppliers, that can greatly simplify processes such as need identification, vendor selection, receiving, and accounts payable, even for types of purchasing that have been resistant to automation in the past. For example, procurement for maintenance, repair, and general plant operations (MRO) is plagued by ad hoc fixes and by suppliers that are infrequently used. So an expert system that would explain how to do a repair job for a broken piece of equipment in the plant and an online auction for suppliers to do the work might well result in substantial savings (although MRO is only 7 percent of total purchases).18
Auctions and new pricing models. Auctions present huge opportunities for reducing prices on commodity parts and will create huge advantages for best-in-class suppliers to capture high market share where there are scale economies. In one dramatic example,19 an automaker bought plastic parts through FreeMarkets.com (GM's B2B partner before the announcement of Covisint). It had paid $745,000 for the last pre-auction batch of parts. This time, after thirty-three minutes of bidding by twenty-five suppliers, the price came down to $518,000. That auction was one of five run on that day for that automaker. Parts that would have cost $6.8 million under the old procurement system sold for $4.6 million after the auctions. Small wonder that many interpret the establishment of Covisint as evidence that supplier margins will be more effectively squeezed than in the past.20 OEMs will run the exchange, levy fees on participating suppliers, and benefit during price negotiations from information transparency that could
17. Glushko (1999).
18. Lapidus (2000, p. 18).
19. Geoffrey Colvin, "Seller, Beware!" Fortune, May 1, 2000 (www.fortune.com/indexw.jhtml?channel=artcol.jhtml&doc_id=00001321).
20. Taylor, "Detroit Goes Digital."
reveal supplier cost structures. But first-tier suppliers may also reap benefits from easier auctions—for example, in gaining price breaks from their own suppliers or in selling excess production capacity.
Tools for collaborative design. Auctions clearly will not be used for all components in a vehicle. Indeed, the information-intensity of interactions between suppliers and their OEM customers has increased tremendously in recent years, as design responsibilities are outsourced to suppliers and as the product architecture becomes more modular. E-procurement of complex modules will not proceed by auction; these parts are rarely sourced entirely on the basis of price. Furthermore, bids for such parts are unlikely to be sought very often, since relationship-specific knowledge must be extensive for suppliers to fulfill customer requirements. For these parts, the value of Covisint will be as a source of timely and accurate information that aids coordination and collaboration. Covisint could facilitate collaboration in a variety of ways. Automakers can post production schedules on the web. This step increases productivity since no one has to call or fax each supplier affected by a change in the schedule. The asynchronous nature of web communication could facilitate communication with a global supply base. Imagine an automaker sending a video of a quality problem whose cause was unknown to suppliers of adjoining parts. While this is not as good as having all parties come to the actual site of the problem, it is better than trying to describe it over the telephone or via fax.21 Designs could also be posted on the web. This step has a number of benefits. It would eliminate the expense of proprietary design software. Firms that supply more than one automaker have had to operate multiple CAD systems; this software is quite expensive ($100,000) and requires at least one engineer dedicated to staying current on each package. It would also facilitate discussions of quality or design problems that involve several suppliers. However, technical barriers are not the largest obstacles to posting design data. Suppliers do not want their competitors to see their designs without some assurance that they would not lose business to a firm that could cheaply imitate them. Protection of proprietary information with firewalls and secure customer-specific sections of the site will be required.
21. The problem-solving methods favored by Honda and Toyota place great emphasis on going to the actual site of the problem because it facilitates intuition about systemic causes that might not at first glance seem to be related to the problem (MacDuffie and Helper, 1997; MacDuffie, 1997).
But no technological security mechanism will fully substitute for the presence of trust between supplier and customer, already crucial for the voice mode of supplier relations to function effectively. Collaborative mechanisms will need reinforcement from other aspects of the customer-supplier relationship.
Summary

Even in the absence of build to order, electronic procurement will result in significant savings over current procurement systems. Some of these are one-time conversion savings, while others will affect every transaction. Ultimately, e-procurement could end up reinforcing either the exit or the voice model, because it facilitates both auctions and collaboration.22 We explore the implications of this for industry structure and the balance of power between suppliers and OEMs below. What do these savings add up to? Goldman Sachs estimates total savings across the supply chain (procurement + OEM work-in-progress + supplier inventory + productivity and quality gains) as $807 per vehicle, or more than 7 percent of purchased parts and 4.4 percent of the total cost for a $20,000 car. Almost one-quarter of these projected savings come from reducing inventory held in the supply chain.23 Based on more conservative estimates of how many components will benefit from auction-based pricing models, we believe a more accurate figure might be $477 per car—still a large number.24
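A rough check of these figures can be sketched in a few lines of Python. This is a back-of-the-envelope illustration only: the dollar amounts and percentages above come from the text, while the "implied" cost bases are our own inference from those percentages.

```python
# Back-of-the-envelope check of the per-vehicle savings estimates quoted above.
# Only the $807, $477, $20,000, 7 percent, and 4.4 percent figures come from the
# text; the implied bases are inferred from them for illustration.
goldman_savings = 807          # dollars per vehicle (Goldman Sachs estimate)
conservative_savings = 477     # dollars per vehicle (authors' adjusted figure)
sticker_price = 20_000         # dollars, the example car in the text

implied_purchased_parts = goldman_savings / 0.07    # base implied by "7 percent of purchased parts"
implied_total_cost = goldman_savings / 0.044        # base implied by "4.4 percent of the total cost"

print(f"Savings as share of sticker price:  {goldman_savings / sticker_price:.1%}")
print(f"Implied purchased-parts bill:       ${implied_purchased_parts:,.0f} per vehicle")
print(f"Implied total-cost base:            ${implied_total_cost:,.0f} per vehicle")
print(f"Conservative estimate, same basis:  {conservative_savings / sticker_price:.1%}")
```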
B2V: The Information-Intensive Vehicle One clear impact of the Internet on new business opportunities in the auto industry is the acceleration of efforts to bring information technologies into the vehicle. The potential market for in-car services is vast. In the United States, commuters spend an estimated 500 million passenger hours a week in their cars—or 25 billion hours a year. That is equivalent to 22. In the short run, the auction-related capabilities of e-exchanges are receiving the most attention from the press and from software developers. Neither firm involved in developing the Covisint website (Oracle and Commerce One) has much supply chain experience, so the site’s initial capabilities are heavy on the auction side (Garretson, 2000). 23. Lapidus (2000). 24. For more detail, see the version of this essay posted at weatherhead.cwru.edu/helper.
roughly 10 percent of the work hours put in by the U.S. population, or about 2 percent of all waking hours for the average person (see the rough tally below). Vying for all those hours of captive eyes and ears is a wide array of potential services, from personal productivity (phone-fax, voice mail–e-mail, custom news and weather) and convenience (travel and restaurant reservations, concierge services, interactive shopping) to entertainment (Internet radio, video on demand). Already (or soon-to-be) available are services related to safety (emergency connect, sensors to ensure safe distances between vehicles), security (remote door unlock, stolen vehicle tracking, roadside assistance), and navigation (GPS locators with directions to destination). When the information-intensive vehicle is linked to a "smart highway" or Intelligent Transportation System (ITS), even more services become possible, from toll collection to congestion avoidance to (more fancifully) hands-off driving systems. Several challenges loom ahead before this futuristic vision of the "online car" can become a reality. The upgrading of the electrical infrastructure in the vehicle is under way, with an industry consortium having agreed to standards for a forty-two-volt system. Experiments with flat wiring, which is molded into plastic interior parts and eliminates all the loose wire and connectors of wire harnesses, are also in process. Prototypes exist of instrument panels whose top half can be opened to allow easy insertion of "smart cards" containing new processors, memory, or software upgrades.25 Yet key aspects of the vehicle information architecture remain unresolved. Will the "online car" be based on an operating system from the PC world, like the modified version of Windows now being prepared by Microsoft? Or will a different, more specialized operating system be developed? Will there be one dominant operating system that becomes an industry standard, or competition among suppliers of "wired" interiors? Will the information architecture be open or closed? Right now, automakers are pursuing these opportunities by forming alliances with a wide array of hardware and software specialists from the IT domain. Ford and GM have been the most aggressive, often announcing deals with separate alliance partners on the same day. Will company-specific initiatives dominate movement toward the "online car," or will we see a repeat of the pattern with Covisint, with major automakers agreeing to back a single set of standards for information infrastructure?
25. Georgievksi (1999).
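The commuter-hour figures cited at the start of this section can be checked with a rough calculation. The sketch below, in Python, uses the 500 million weekly hours from the text; the population, workforce, and annual-hours figures are round assumptions of ours for about the year 2000, not numbers from the chapter.

```python
# Rough tally of the in-car hours cited above. Only the 500 million weekly
# (about 25 billion annual) commuter hours come from the text; the rest are
# assumed round figures for roughly the year 2000.
commuter_hours_per_week = 500e6
commuter_hours_per_year = 25e9              # text's rounding of 500e6 * 52 (~26e9)

us_population = 280e6                       # assumed
waking_hours_per_person = 16 * 365          # assumed 16 waking hours a day
us_workers = 135e6                          # assumed employed persons
annual_hours_per_worker = 1_800             # assumed hours worked per year

share_of_work_hours = commuter_hours_per_year / (us_workers * annual_hours_per_worker)
share_of_waking_hours = commuter_hours_per_year / (us_population * waking_hours_per_person)

print(f"Share of work hours:   {share_of_work_hours:.0%}")    # roughly 10 percent
print(f"Share of waking hours: {share_of_waking_hours:.1%}")  # ~1.5 percent; the text rounds to "about 2 percent"
```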
What revenue models will succeed is also open to question. Most predictions anticipate monthly fees as the primary source, with network scale effects driving the potential for revenues. Many alliance possibilities present themselves, with Internet portals, mobile telecom firms, customer-service-through-call-centers specialists, and entertainment content providers all potential partners with each other and with automakers and IT suppliers of the operating system. Finally, there is the annoying detail that drivers need to devote a certain amount of their attention to driving. Absent some technological breakthrough allowing unattended driving, this will limit the ability of even the most skilled multitasker to consume all these additional services. Safety concerns may lead to regulation of what drivers will be allowed to do in their vehicles. And even if these barriers can be overcome, consumers may resist allowing every minute of their attention while commuting to be monetized. The increased availability of these information services will proceed whether or not there is a substantial move toward build to order. Services like GM's OnStar that allow immediate satellite connections to call center representatives who can help with emergency aid or concierge services are already being offered as "standard equipment" with high-end Cadillac models. Navigation systems may soon be standard equipment on many models; already in Japan, they are installed in 40 percent of new vehicles. Indeed, if the diffusion of the information-intensive vehicle continues, we will think of cars as mobile platforms for computers. "Until now, Toyota has sold 1.8 million vehicles per year. But from now on, it will sell 1.8 million computers [in its cars]," said Shigeki Tomoyama, the head of Toyota's B2C subsidiary Gazoo.com.26 This will provide an instant "installed base" of massive proportions, without great differentiation by product segment or customer. But there may well be few "first-mover" advantages to be gained because the computer industry has already agreed on standards and common interfaces for modular production. Thus these technological innovations could become an important part of a "build to order" strategy through the customization of the information services available in a vehicle. Indeed, one appeal of the information-intensive vehicle is that much of the customizing could be based on software and on the after-purchase addition of "plug-in" information devices. 26. Quoted in Emily Thornton, "Toyota Unbound: Can the Carmaker Become a New Economy Star?" Business Week, May 1, 2000, pp. 142–46.
This customizing would not require the full-scale modular design and “pull” manufacturing approach of a more ambitious build to order strategy—merely industry agreement on a standard interface for plug-in devices and possibly a standardized operating system.
E-Effects on Industry Structure and Stakeholders

We have now presented two scenarios for how the Internet and e-business will affect the future of the automobile industry—one based on a long-term view of the full-scale adoption of a build to order model and a modular product architecture, and the other based on a short-term view in which build to order and modular design do not move forward. In either scenario, the Internet is likely to have a major impact. We assess that impact in this section for automakers, suppliers, dealers, employees and unions, customers—and for competitive dynamics in the industry.
Automakers

Ultimately, build to order may be a new source of competitive differentiation for automakers. To respond effectively to individualized demand from the final customer, firms will need to develop numerous complementary capabilities throughout the entire value chain. Consider what this strategy requires: designing a modular product that can facilitate "mix-and-match" customization without losing the distinctive characteristics of a brand; maximizing manufacturing flexibility and mastering the tight timeframes of a true "pull" system; managing supplier relationships effectively from design involvement through price negotiations, logistics synchronization, and meeting cost, quality, and delivery goals; making the right choices about how much customization to offer customers; and providing the right "front-end" for configuration and customer support. Even if components are modularized, build to order will require extensive system integration capabilities that would be difficult to outsource. Modularity carries its own set of implications for industry structure. If dominant suppliers for key modules emerge, OEMs could end up in a position of extreme dependence that could shift the balance of power, much like the "Intel Inside" phenomenon in the personal computer industry.27 This 27. Fine (1998).
could pull OEMs to keep the design and production of such modules inside the firm. On the other hand, only the very largest automakers might be able to buck an industry trend toward modularization once it got started. Imagine suppliers offering the following proposition to an automaker determined to continue with an idiosyncratic module design: "We'll be happy to do that for you, but if you purchase our industry standard design, which reflects our latest technology and most current design thinking, it will cost you 20 percent less because we can produce it in such volume." Over time, transactions of this kind would speed the diffusion of standardized modules—although this still represents a much more haphazard approach than standard-setting by an industry-commissioned organization. A different scenario would see increased outsourcing and the involvement of new players. In the personal computer industry, contract assemblers have taken over manufacturing and suppliers dominate the value chain, leaving OEMs primarily responsible for design and postsales services.28 Recent rhetoric from automakers (notably Ford) about the appeal of moving from being "heavy manufacturers" to being "consumer services" companies suggests an embrace of this scenario. In this view, a more modular product architecture could facilitate the transfer of large amounts of responsibility for design and manufacturing, for capital investment, and for risk absorption from the automakers to their suppliers. It is important to note that automakers bring different sets of capabilities to these problems, based on their history and past strategies.29 For example, lean producers like Toyota can build vehicles with different platforms on the same assembly line without a cost or quality penalty, in contrast with more traditional mass producers.30 In addition, Toyota's suppliers (even second- and third-tier) have had long experience with the complex scheduling that just-in-time inventory requires—and that cheap and fast Internet information transfer now facilitates. On the other hand, as noted above, Toyota's production system depends heavily on heijunka, or production leveling, and facing the volatility of a true build to order system could be destabilizing. The recent wave of industry consolidations will influence trajectories as well. For example, Chrysler's growing experience with outsourcing major responsibilities for product design to suppliers may make it more ready to embrace a modular product architecture than Ford or GM, but the influence of Daimler-Benz in the newly consolidated auto giant, which has always favored highly integral designs, will pull in the opposite direction. 28. Sturgeon (1999). 29. Freyssenet and others (2000). 30. MacDuffie, Sethuraman, and Fisher (1996).
Suppliers As noted above, the Internet’s facilitation of auction pricing models could have the paradoxical effect of reinforcing the traditional U.S. exit model of supplier relations. It is possible that Internet auctions could lead to a reversal of recent (since the mid-1980s) trends toward sourcing from fewer firms and from “full-service” firms that can do design and subassembly as well as build to the automakers’ print. While complex modules and safety- or image-critical parts will never be bought on price alone, the reduced transaction costs of the auction model might move some parts back from voice to exit, reducing supplier bargaining power. Yet the Internet can also increase supplier bargaining power through its facilitation of collaboration. While the posting of real-time data on an open architecture system slightly lowers barriers to entry by new suppliers (because no proprietary electronic data interchange software is required), experienced suppliers are more able to take advantage of this improved information, since they are better at scheduling and logistics and at doing quick changeovers from one product to another. Suppliers’ bargaining power would also increase if a prerequisite for sharing design data electronically were a commitment by automakers not to undercut existing suppliers. The outsourcing of modules would, under most conditions, also increase supplier bargaining power. First, modules are likely to be large and complex, meaning that only a few suppliers will be able to make them. An instructive example is the case of seats. A seat set for a midsize car costs about $800, the most expensive part after the engine. Until the 1980s, seats were designed by automakers, with individual components (seat fabric, rails, foam) sent by small suppliers to auto assembly plants, where they were built up into a finished product. Gradually, however, suppliers have taken over the design and engineering of the entire seat. The result is that two firms (Johnson Controls and Lear) make 70 percent of the world’s automotive seats.31 This trend would be reinforced if suppliers could design 31. Why have automakers agreed to this? The arrangement has had many benefits, at least in the short term. Lear and Johnson Controls specialize in this product, which does not contain core technology for the automakers. Seats are made of textiles and have fashion elements reminiscent of consumer products, in contrast to the steel components with relatively long product cycles that the automakers are familiar with. As a result of their specialization, the suppliers have made seats far more
products that consumers would ask for by name. Johnson Controls is attempting to do just that in its partnership with Lego to produce the seat described in our opening scenario. Almost all parts of the car today are specially designed for a particular model (even seats). However, this is beginning to change as suppliers get more powerful and as open-architecture computers become more prevalent in vehicles. For example, TRW recently introduced a rain sensor that automatically sets windshield wipers to the correct speed. The biggest challenge was designing a sensor that would work with any type of glass used in windshields; connecting the sensor to all available wiper motors (a product TRW does not make) was reportedly easy.32 Standardization of modules could have a variety of impacts on bargaining power. Studying the early history of the U.S. auto industry yields some insights.33 In its early days, the industry was quite modular, as auto assemblers attempted to use parts from established carriage and bicycle suppliers as much as possible. (Henry Ford's factory's entire contribution to the assembly of his first car in 1903 was to place tires on wheels, wheels on chassis, and body on chassis.) Smaller assemblers wanted to keep parts standard because it meant they had to design and produce fewer parts themselves. In the 1910s, these firms established bodies such as the Society of Automotive Engineers (SAE) to promote standardization. Early projects focused on standardizing large parts such as carburetors. But as Ford and GM grew, their engineers increasingly staffed the SAE's committees and pursued a different agenda. Ford and GM had the scale to make carburetors in-house and wanted to be able to compete on the basis of a superior design of these parts. So they narrowed the SAE's focus on standardization to parts like nuts and bolts and grades of steel. As independent suppliers of carburetors, bodies, and engines were bought up by the firms that became the Big Three, barriers to entry in the auto assembly business went up dramatically
comfortable and stylish than they used to be, at an attractive price for automakers. Because the seat connects to the rest of the car in only one place (the seat rail), it has been a relatively easy part to modularize (seats are so far the only part that is widely obtained by automakers in modular form). Seat manufacturers are now working to make their dominant design more modular internally—that is, more decomposable into smaller modules—to aid customization. Competition between these two firms and newcomer Magna keeps prices and designs competitive; by multiple sourcing of seats, automakers avoid undue dependence on one supplier. 32. TRW, Rain sensor press release (www.trw.com [2000]). 33. Thompson (1954); Hochfelder and Helper (1996).
and designs became more integral. Only suppliers that diversified into other industries (such as Timken and TRW) remained independent and profitable. The implication is that the variance of profits in the supplier industry is likely to increase dramatically. Suppliers of commodity parts will see their margins shrink—although if these products are useful in other industries (as are switches and small motors), they may benefit from Internet-facilitated access to new customers. Suppliers that have tried to escape from producing commodities by doing specialized engineering will face great challenges in getting paid for it. Yet firms that can make popular modules may find themselves in the driver's seat, particularly if they make something that consumers will ask for by brand name. Thus it is possible that the future industry will consist of a handful of multibillion-dollar global mega-suppliers making returns similar to the automakers; a second, much larger tier of medium-size firms earning normal returns; and even larger third and fourth tiers of very small, low-overhead, technically backward firms that win auctions for commodity parts, but which make life difficult for other tiers because their low margins do not permit much investment in quality or responsiveness.
Retailers Recent years have seen repeated predictions of a “retail revolution” that would overthrow the long-established, often-detested traditional dealership. Before the Internet, the focus was on new competitors such as AutoNation and CarMax—large public companies buying up dealerships and opening new mega-stores at a rapid clip, aiming to apply professionalized management and centralized systems to a determinedly local business. More recently, OEMs have attempted to control distribution more directly through direct ownership and consolidation of retail outlets in a given geographic area (for example, the Ford Retail Network in Tulsa, Oklahoma). Add the widespread prediction of disintermediation that accompanied the early B2C dot-coms and you might expect that auto dealers would be in a very precarious position indeed. Instead, there is surprising life in this dominant retail channel. The big public companies have failed to generate efficiencies or enduring selling innovations, and as their stock has gone out of favor, their pace of acquisition has slowed dramatically. The Ford Retail Network has been blocked by local dealer opposition, as has a similar effort at GM. While there has
been consolidation among dealers, with the top 100 dealer groups selling over 2 million vehicles in 1999, this is less than 15 percent of total sales; smaller, community-focused dealers still dominate. Furthermore, the most innovative of dealerships have also moved quickly to embrace the Internet, and the OEMs are getting smarter about finding ways to support dealer efforts without getting into zero-sum struggles over territorial control. Dealers are themselves pursuing the dot-com vision of expanding the provision of services, and here the existence of a physical infrastructure as a point of contact has largely been an advantage. The OEM desire to grab a higher percentage of “vehicle lifetime” expenditures will reinforce dealer primacy as the “mortar” portal in a “clicks and mortar” strategy. Still, traditional dealerships that resist changes in their retail practices and that spurn Internet sales are not likely to last for long. They will face pressure from OEMs and customers alike. Local owners will find eager buyers, not least from investors associated with dot-coms offering a “dealer direct” model. Two competing business models are now on the scene (CarsDirect.com aims to own its own dealerships in all key markets, while Greenlight.com plans to work through a select network of dealers), with more undoubtedly on the way. If one of these models of aggregating demand for custom-ordered vehicles is successful, it will put continued pressure on OEMs to move toward a full-fledged build to order system— not least because it will be the only way to capitalize fully on their control of upstream production scheduling and inventory management.
Employees and Unions If build to order production becomes the next step in the evolution of industrial models for this industry, what are the implications for the nature of work and the employment relationship, both of which changed substantially in the transition from mass to lean production?34 Any jobs affected by the customization process may well be changed—for example, the engineer who works on module rather than component design; the auto salesperson who works with a customer who has already configured a new vehicle online; the production scheduler who tracks the “pull” from customer orders; and the worker who installs a module on the much shortened final assembly line. Many transaction-related jobs may be eliminated 34. MacDuffie (1995); Pil and MacDuffie (1996).
as coordination efficiencies are achieved through use of the Internet. Production jobs might become more routinized as module standardization and the use of common interfaces simplify installation; the latter has been the trend for consumer electronics. Automotive repair work may also be deskilled, as module replacement rather than component repair becomes more common. On the other hand, diagnosing problems internal to a complex module may take a higher level of analytic skills and electronics training. Furthermore, if horizontal collaborations between suppliers become more common in a build to order world, managers and engineers will need to learn new ways of communication and coordination to replace the direction provided by the OEM's "shadow engineering." Both upskilling and downskilling outcomes seem likely. In terms of managerial and engineering jobs, as the demand for IT-related skills rises in the auto industry, it will bring the tight labor markets and intense competition for talent found elsewhere. Careers will increasingly depend on finding innovative ways to leverage e-business initiatives for competitive advantages and on the ability to speed the implementation of new business processes. More fluid movement of managers and engineers in and out of the auto industry is also likely as IT-based innovations become more central to vehicle design and as B2B and B2C initiatives bring alliances with firms outside the industry. The role of unions and the structure of industrial relations are also likely to change if there is a linked move to modular design and build to order. If power shifts from OEMs to suppliers, this may further weaken union coverage, particularly in locations like the United States, where the OEMs are unionized and the majority of suppliers are not. Module suppliers are increasingly less likely to fall into traditional industrial segments, like metalworking, given that module design often requires mastery of several production processes and a wide variety of materials; this too could affect union strength. The wage premium associated with semiskilled labor in heavily unionized sectors of the industry is likely to shrink, while wage differentials between information technology–mediated work and manual work are likely to grow, due to increased relative demand for the former. Improved global communications will intensify competition among both OEMs and suppliers, reducing the power of nationally based unions. Yet the auto industry remains one of the most heavily unionized in the world, and adaptation by unions and management to the changes brought about by e-business is perhaps more likely than a simple decline in unionization rates. While restructuring in the United States has often had a
zero-sum quality in terms of labor relations, European and Japanese companies and unions have more often negotiated creative arrangements to deal with difficult transition periods. Furthermore, there may be new models for unions to pursue in the e-business era. Consortia of automakers and suppliers that co-locate in support of modular production represent an opportunity for unions to regularize wages and benefits throughout the value chain, albeit on a local rather than national or industry-wide basis. As new models of compensation roll through all levels of the organization, unions may want to bargain for equity stakes in lieu of wage increases. The communications potential of the Internet offers new possibilities for coordination of union activities across industrial and national boundaries. Recent initiatives like Ford’s to provide every employee with a PC could change the nature of both company and union appeals to worker interests.
Consumers Most of the changes described here benefit consumers. Having prices displayed openly on the Internet gives even the worst bargainer an idea of what to shoot for as well as offering more “no haggle” options. Competition among the smaller but more global set of dominant automakers and a continuation of overcapacity in production will create pressure to pass cost savings on to consumers in some form. With speed increasingly important as a competitive factor (particularly if build to order moves forward strongly), consumers may be able to choose more precisely their desired trade-off between price, customization of features, and delivery date. However, depending on the outcome of battles over Internet privacy, retailers may retain some ability to price discriminate. One could imagine retailers capturing data about consumers based on their previous Internet purchases and viewing habits and using this to develop predictions about their willingness to pay. Based on this information, consumers could be sent e-mails offering them individualized special prices. Thus the Internet does not always increase transparency.35 A final influence of consumers might be as voters—through legislation on pollution and traffic congestion that would favor new drive train technologies. If fuel cells became the power source of choice, for example, this
35. See Beloaba (2000) for an example of how this might work in the airline industry.
would disrupt the dominant design in ways that might greatly speed the diffusion of a modular product architecture and hence facilitate build to order. Similarly, a move to allow only low-emissions “minicars” into city centers would create opportunities for new entrants—particularly those making the golf cart–like low-speed vehicles that are appearing with greater frequency in gated retiree communities around the United States. Rhetoric about the power of consumers is ubiquitous, but at a time of great turbulence for the auto industry, there is ample reason to believe that consumer signals about their preferences for vehicle design, level of vehicle customization, bundling of products with services, and retail experiences will have a strong influence on the direction of industry trends.
Conclusion

How likely is any of this to happen? It seems clear that consumers want to use the Internet to buy cars. Already, 25 percent of car buyers use the Internet to do research on their purchase, with up to 40 percent saying they plan to do so when buying their next vehicle.36 If security and legal issues can be resolved, it would seem likely that in the near future many consumers will purchase cars online. Our long-term scenario, build to order, is more problematic. We do not know how many consumers would pay a price premium for more customization. And as discussed above, the logistical issues are far more complicated than they are for computers. A key component of making build to order feasible is modularization. Yet modularization with outsourcing would require the automakers to give up control of key design aspects of the car (or retain duplicate engineering staffs, which would be expensive). Modularization without outsourcing would require the automakers to increase their asset bases, reversing the trend of the last two decades, which is certainly not a move likely to please Wall Street. Without build to order, finished-goods inventories will decrease due to faster transmission of orders from consumers to manufacturers and better information about consumer preferences captured from the clickstream. However, it will still be necessary for dealers to maintain a selection of cars with different option packages for consumers to choose from, so much of the
36. Lapidus (2000, p. 13).
current 30- to 60-day "normal" inventory of each car model will probably continue to exist even if the worst inefficiencies of the distribution pipeline are eliminated. If modularization does not occur, then automakers are unlikely to radically change their procurement strategies. The Internet is thus likely to reinforce, rather than alter, existing supplier-customer relations. GM, with its exit-based purchasing strategy, will probably move aggressively to adopt auctions for many (though not all) of the components it buys. To facilitate price comparisons, the firm may well do more of its own engineering and design work. In contrast, Toyota may try to figure out how to use the web for collaboration, perhaps having suppliers jointly design adjoining parts. The company also may be able to achieve significant inventory reductions. Toyota and its suppliers have been working to minimize inventory for decades, developing innovative strategies for quick changeovers and accurate logistics.37 Therefore, they will be in a good position to take advantage of improved Internet-based forecasting. In contrast, suppliers with long changeover times may continue to produce in large batches even if they receive accurate daily forecasts. To state these points more broadly, it is clear that the impact of the Internet is not by any means technically determined. For maximum effect, the nature of a firm's e-business investment should be complementary to its other investments, the investments of its competitors, and the nature of its competitive environment. In this sense, "e-effects" on the auto industry will depend on the extent to which complementary changes occur in retail strategy (build to order, factory direct, dealer direct), design strategy (modular versus not, standardization versus not), procurement strategy (voice versus exit), automaker technology strategy (continuous versus radical technical change), and government antipollution strategy (regulatory regime). These e-effects could be path-dependent: the order in which these changes occur could affect the ultimate outcome. For example, if the short-term effects of the Internet are to promote exit relations with suppliers and to facilitate the rise of third-party retailers, the build to order scenario would become less likely than if information exchange with suppliers and OEM-linked dealers were to dominate. The reason is that the less tightly linked suppliers would probably not develop the skills to produce complex modules. Similarly, third-party retailers would have less incentive to share 37. Liker (1999); Nishiguchi (1994).
detailed consumer information with automakers than would traditional dealers. In most cases, however, we will not see dramatically different players in the industry as a result of the rise of e-business. There remain enough of the traditional automotive-specific competencies that new suppliers or automakers are unlikely to enter the fray. Even with the increased importance of electronics in the vehicle, most of the development of the electronic infrastructure will occur through existing automotive suppliers. (This is due in part to lack of attractiveness of the auto industry to electronics suppliers—the margins are much lower.) However, alliances outside the industry and the provision of add-on products and services from new suppliers (particularly if a standardized “plug-and-play” interface is established for the information-intensive vehicle) are both highly likely. The consortium model established to launch Covisint, including most of the world’s automakers and some powerhouse IT firms, may prove influential as the industry wrestles with issues of modularization and standardization. In addition, most of the players in the industry have rushed to learn Internet-based skills. In retail, traditional bricks and mortar may yet prove to be an encumbrance on incumbents, but in the short term, barriers to disintermediation are providing opportunities for dealers to make the transition to an Internet era. Even with a full build to order scenario in place, we see an important role for a physical retail presence, particularly as a point of contact for the provision of services. However, much of the real estate around dealerships currently occupied by new and used car lots may become available as the retail function is “re-purposed”; indeed, some investors are already anticipating this, taking over the management of dealership real estate through Real Estate Investment Trusts (REITs) in order to gain access to development rights. The most radical outcome for the auto industry would certainly be affected by the Internet but would hinge much more on the complementary changes mentioned above. With a new dominant design built around fuel cells creating an opportunity for a full modular design, OEMs eager to shrink their asset base and diversify their risk could outsource much design to suppliers and virtually all production to contract assemblers. Automakers could then focus on determining the metadesign rules that would guide a modular product architecture, on developing and extending their brands, on differentiating their product line with respect to customization, and on developing and personalizing a full array of “mobility services.”
Whether this vision comes to pass will depend as much on what consumers want and how clearly their preferences are felt as it will on the current industry structure and the capabilities of key players. The Internet’s impact may be greatest, therefore, in the extent to which it amplifies and accelerates the delivery of the voice of the consumer to the ears of industry leaders—and to the many stakeholders in this “industry of industries.”
References Beloaba, P. 2000. “B2C E-Commerce and the Airline Industry.” Paper prepared for Sloan Industry Center Conference. Ann Arbor, Mich., April. DeSanti, Susan S. 2000. “The Evolution of Electronic B2B Marketplaces.” Paper prepared for the Federal Trade Commission Public Workshop “Competition Policy in the World of B2B Electronic Marketplaces.” June 29. Farrell, Joseph, and Garth Saloner. 1992. “Converters, Compatibility, and the Control of Interfaces.” Journal of Industrial Economics 40 (1): 9–35. Fine, Charles. 1998. Clockspeed: Winning Industry Control in the Age of Temporary Advantage. New York: Perseus Books. Freyssenet, M., and others. 2000. One Best Way? Trajectories and Industrial Models of the World’s Automotive Producers. Oxford University Press. Garretson, Dan. 2000. “Net Revs Up Auto Making.” Cambridge, Mass., Forrester Research. Georgievksi, Biba. 1999. “The Case for Higher Levels of Integration.” Working Paper. Dearborn, Mich.: Visteon Automotive Systems. Glushko, Robert J. 1999. “How XML Enables Internet Trading Communities and Marketplaces.” Working Paper. Cupertino, Calif.: CommerceOne. Helper, Susan, and Mari Sako. 1995. “Supplier Relations in Japan and the United States: Are They Converging?” Sloan Management Review 36 (3): 77–84. Hochfelder, David, and Susan Helper. 1996. “Joint Product Development in the Early American Auto Industry.” Business and Economic History (Winter). Lapidus, Gary. 2000. “Gentlemen, Start Your Search Engines.” Goldman Sachs Investment Research (January). Liker, Jeffrey. 1999. “Logistics at Toyota.” Working Paper. University of Michigan. MacDuffie, John Paul. 1995. “Human Resource Bundles and Manufacturing Performance: Organizational Logic and Flexible Production Systems in the World Auto Industry.” Industrial and Labor Relations Review 48 (2): 197–221. ———. 1997. “The Road to Root Cause: Shop-Floor Problem-Solving at Three Auto Assembly Plants.” Management Science 43 (4): 479–502. MacDuffie, John Paul, and Susan Helper. 1997. “Creating Lean Suppliers: Diffusing Lean Production through the Supply Chain.” California Management Review 39 (4): 118–51. MacDuffie, John Paul, Kannan Sethuraman, and Marshall L. Fisher. 1996. “Product Variety and Manufacturing Performance: Evidence from the International Automotive Assembly Plant Study.” Management Science 42 (3): 350–69.
Nishiguchi, Toshihiro. 1994. Strategic Industrial Sourcing: The Japanese Advantage. Oxford University Press. Pil, Frits K., and John Paul MacDuffie. 1996. “The Adoption of High Involvement Work Practices.” Industrial Relations 35 (3): 423–55. Sako, Mari, and Max Warburton. 1999. “MIT International Motor Vehicle Programme Modularization Project: Preliminary Report of European Research Team.” Paper prepared for IMVP Annual Forum. Boston, October. Shapiro, Carl, and Hal Varian. 1998. Information Rules: A Strategic Guide to the Network Economy. Harvard Business School Press. Sturgeon, Tim. 1999. “Turn-Key Production Networks: Industry Organization, Economic Development, and the Globalization of Electronics Contract Manufacturing.” Ph.D. dissertation, University of California, Berkeley. Thompson, G. 1954. “Technical Standards in the Early US Auto Industry.” Journal of Economic History 50 (July): 1–22. Ulrich, Karl. 1995. “The Role of Product Architecture in the Manufacturing Firm.” Research Policy 24.
9
Robert C. Leachman and Chien H. Leachman
E-Commerce and the Changing Terms of Competition in the Semiconductor Industry

The semiconductor industry is experiencing a remarkable transformation. Until the early 1990s, the industry consisted almost entirely of integrated firms—that is, firms that design integrated circuit products, develop the manufacturing process technology, and then manufacture and market the integrated circuits. Increasingly, the industry now consists of "fabless" firms, carrying out solely the product definition, design, and marketing functions, partnered with "foundry" firms that develop process technology and provide contract-manufacturing services. A combination of factors fuels this transformation. Large manufacturing facilities offer substantial economies of scale, yet many new, innovative products originate in small start-up firms. A fabless-foundry organization of the industry promotes both. Software systems for product design and for supply chain management provide effective e-commerce links between fabless firms and foundries, enabling them to compete successfully with integrated firms. Moreover, business strategies adopted by key firms in the industry favor the fabless-foundry separation.
Quantifying the Transformation

The rapid transformation of the semiconductor industry and the current extent of the fabless-foundry reorganization are best quantified by analyzing trends in the growth of foundry capacity versus capacity owned by integrated firms. Unfortunately, capacity in the semiconductor industry is not simple to define. A particular methodology for capacity measurement was adopted for the purposes of this study, justified as follows. Semiconductor manufacturing consists of two basic stages: wafer fabrication, followed by device packaging and testing. The capital requirements for wafer fabrication are roughly ten times those of device packaging and test. Thus capacity in the industry is primarily a function of the capacity of fabrication plants. (Hereafter, a fabrication plant is referred to using the industry vernacular "fab.") Most semiconductor devices can be shrunk. Smaller devices can be printed on the wafers in greater numbers, provided a process technology capable of yielding devices with the smaller feature sizes is developed and qualified. Thus introduction of a new process technology that reduces the minimum feature size entails a substantial expansion of fab capacity, even when wafer output capacity is not increased. For this reason, true capacity is not proportional to wafer output capacity, work force levels, or numbers of installed equipment. One must take into account the minimum size of the circuit elements that may be produced. Integrated circuitry may be thought of as a dense collection of electrical functions, where a function could be a memory bit or a logic gate. The density of these functions depends on the minimum feature size of the process technology. If a function is square in shape, the minimum wafer surface area it occupies is the square of the minimum feature size of the process technology. Thus the potential number of functions on a wafer is roughly equal to the wafer area divided by the square of the minimum feature size. Fab capacity is expressed herein in terms of the number of electrical functions that can be produced a month. In calculating this capacity, it is assumed that all wafers output from a fab contain devices with the minimum feature size. In reality, typically only a portion of the total wafer output capacity of a fab can be allocated to production of minimum feature–size devices, because of limited numbers of process tools capable of performing critical operations. Thus estimates of capacity using this metric have a positive bias, but trends in capacity can be studied regardless, since this bias is consistent from year to year and across fabs. Capacity measured this way is very large. When summed across fabs in the North America, Japan, Europe, and Asia Pacific regions, the resulting regional capacities are expressed in hundreds of quadrillions (10¹⁵) of functions per month.
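To make the metric concrete, the sketch below (Python) computes functions per month for a single hypothetical fab. The formula is the one described above: wafer starts per month times wafer area divided by the square of the minimum feature size. The wafer size, feature size, and monthly output are illustrative assumptions of ours, not data from the study.

```python
import math

# Illustrative application of the capacity metric described above:
# functions per month = wafers per month x (wafer area / minimum feature size squared).
# The three inputs below are assumptions chosen for illustration only.
wafer_diameter_mm = 200        # assumed 200 mm (8-inch) wafers
min_feature_um = 0.25          # assumed 250-nanometer process technology
wafers_per_month = 25_000      # assumed fab wafer output

radius_um = wafer_diameter_mm * 1_000 / 2            # convert mm to micrometers
wafer_area_um2 = math.pi * radius_um ** 2
functions_per_wafer = wafer_area_um2 / min_feature_um ** 2
capacity_per_month = functions_per_wafer * wafers_per_month

print(f"Functions per wafer:    {functions_per_wafer:.2e}")
print(f"Fab capacity per month: {capacity_per_month:.2e}")   # ~1.3e16, about 13 quadrillion
```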
Data collected in 1998 for 1,175 fabs worldwide was obtained for this analysis from Semiconductor Equipment and Materials International (SEMI).1 This database indicates, for each existing or announced fab, the wafer capacity, minimum feature size, type of products, location, and location of ownership. This database was updated with data collected by the Competitive Semiconductor Manufacturing (CSM) Program at the University of California at Berkeley. Data was supplied for missing fabs, inaccuracies were corrected, and historical records for the evolution of feature sizes at each fab were appended.2 The CSM Program also classified each fab as a producer of logic products operated by an integrated firm, a producer of memory products operated by an integrated firm, a producer of both memory and logic products operated by an integrated firm, or a pure foundry. (A portion of the capacity at some fabs operated by integrated companies may be devoted to foundry services, but this capacity is not included in the foundry capacity tabulated herein.) As illustrated in figure 9-1, worldwide fabrication capacity accounted for by foundries rose from a negligible amount in 1991 to about 24 percent in 1999. Figure 9-2 displays the regional distribution of foundry capacity. As can be seen, about 87 percent of worldwide foundry capacity is located in the Asia Pacific region, mostly in Taiwan.
Evolution of Industry Business Strategies

Classification of the industry into integrated, fabless, and foundry companies is not a clear-cut task. Integrated firms may offer idle capacity for the fabrication of products marketed by others, thereby becoming a foundry as well as an integrated company. On the other hand, by curtailing investments in fabrication capacity and outsourcing manufacturing to others, an integrated firm can become increasingly fabless. Many integrated firms have offered foundry services in an attempt to secure a return on otherwise idle manufacturing capacity, and there has been a substantial amount of this sort of foundry production over the years, mostly of older technology devices. However, there are limitations on the extent of this kind of industry organization. 1. Semiconductor Equipment and Materials International (1998). 2. Leachman and Leachman (1999, p. 9).
[Figure 9-1. Trends in Worldwide Fabrication Capacity by Factory Type, 1980–99. Quadrillions of functions a month, by factory type: memory-logic, memory, logic, foundry. Source: Authors' calculations.]
Fabless firms are naturally reluctant to share their designs with competitors or potential competitors, especially designs for their most advanced products. Moreover, the fabless firms fear that the integrated firms may withdraw their foundry services during cyclical upturns in the industry, anticipating that the integrated firms will elect to use the capacity for production of their own products, especially in advanced process technologies. A preferred business partner is a "pure-play" foundry—that is, a company that has no intention of carrying out design and marketing of integrated circuit devices, and one that is much more oriented to providing excellent customer service to the fabless firm. The rise in the 1990s of pure-play foundry companies is a key factor that enabled the rapid acceleration of the fabless-foundry reorganization. Firms pursuing this business strategy are found almost exclusively in the Asia Pacific region. As the demand for semiconductor foundry services has increased, the Asia Pacific pure-play foundries have grown much faster than the rest of the semiconductor manufacturing industry.
[Figure 9-2. Regional Distribution of Foundry Capacity, 1980–99. Quadrillions of functions a month, by region: North America, Asia Pacific, Japan, Europe, rest of world. Source: Authors' calculations.]
The Rise of Pure-Play Foundries In the 1970s and 1980s, the government of Taiwan sponsored a research facility to develop complementary metal-oxide semiconductor (CMOS) process technology licensed from RCA. This process technology was subsequently offered to start-up companies. Taiwan Semiconductor Manufacturing Company (TSMC) and the United Microelectronics Company (UMC, later the UMC Group) were the first two major foundry companies, and these are the two leading foundry companies at present. These companies got into the foundry business not because they recognized it as a superior business strategy, but rather because it was the most feasible avenue for the development of a business. (As the TSMC chairman, Morris Chang, admits: “We were lucky.”)3 Success in product design and marketing was perceived as much more difficult to come by than skill 3. “Midyear Investment Guide,” Business Week, 26 June, 2000.
at manufacturing operations. TSMC has always been a pure-play foundry, but UMC tried for a number of years to become an integrated company, with little success. These companies and other subsequent foundry start-ups were not started by large, existing industrial concerns but by collections of relatively small investors. (The largest initial investor in TSMC, at 25 percent, was a customer, Philips.) Their bylaws call for generous distribution of profits among all investors as well as the employees. As a result, management, engineers, and production workers in some of the Taiwan foundries are better compensated than in almost any other semiconductor manufacturing company in the world. This has enabled them to attract top-flight talent. A significant percentage of the senior management consists of Asian-born personnel with U.S. Ph.D.s and substantial U.S. industry experience. The Taiwan foundry companies grew very rapidly during the 1990s. They were able to secure a significant portion of their capital needs for new fabrication plants by pooling investment funds from their Japanese, American, and European customers, although more recent additions to capacity have been largely self-financed. The Taiwan government also encouraged the start-up of dynamic random-access memory (DRAM) companies in Taiwan, evidently as an effort to protect Taiwanese electronics manufacturers from potential periodic shortages of DRAMs. Several DRAM operations have been started up in Taiwan, some with the (reluctant) backing of the foundries. These companies have not been very successful. Manufacturing performance lags that of the DRAM manufacturers in Korea, Japan, and the United States. Some of these facilities have now been converted to foundry production or sold to the foundry companies. For example, Acer sold its fabrication plants to TSMC in 1999. WSMC, a joint venture of Winbond, Toshiba, and various Taiwan investors, was sold to TSMC in the same year. Power Chip, a joint venture of Mitsubishi and Taiwan investors that initially produced DRAMs marketed by Mitsubishi, converted to foundry production. The financial success of TSMC and UMC prompted other foundry start-ups in Taiwan as well as in Singapore and Malaysia. There are now several significant foundry fab operations in Singapore, involving investment funds from U.S. and Japanese customers or joint venture partners. There are also two major foundry start-ups in Malaysia. More recently, there have been two foundry start-ups in South Korea. In a very bold shift of business strategy, Anam, once the world's largest foundry company for
semiconductor packaging and testing, sold all its packaging and testing facilities in order to finance the construction of an advanced wafer fabrication plant. Texas Instruments, which guaranteed to use a certain portion of the capacity, supplied the process technology. Anam’s 250-nanometer logic foundry fab has been operational now since 1998. Dong-bu, another industrial group, has announced plans to build and operate a foundry fab. Toshiba is supplying process technology. Japanese steel companies founded several semiconductor foundry companies in Japan in the 1980s. Unlike foundries in the Asia Pacific region, the Japanese foundries have not been able to secure a large customer base outside Japan. After the economic downturn in Japan, these companies suffered heavy losses. Significantly, Nippon Steel sold all of its fabrication facilities in Japan to the UMC Group of Taiwan. In the United States, the fabless semiconductor industry has grown rapidly. Many start-ups have been able to quickly move innovative new digital logic and mixed signal products into the market, utilizing the Taiwan foundries to ramp up volumes in concert with their sales successes. The availability of the foundries exerts a destabilizing force on the integrated companies. A small group of brilliant product design and marketing engineers in the employ of an integrated company may choose to stay employed there, where they may enjoy good salaries and fulfilling careers. On the other hand, they can resign, raise a quite modest amount of capital, and go into business for themselves. They can rent some office space, secure appropriate computing equipment and design software, and contract with the foundries for their production. If successful in the marketplace, they can become quite wealthy in a relatively short amount of time. Many integrated U.S. semiconductor companies have increased their use of foundries over the last several years; for example, Motorola announced in 1998 that it intended to outsource 50 percent of its semiconductor fabrication needs within five years. Hewlett Packard abandoned plans to build a new fabrication plant in Colorado, and many of its advanced technology semiconductors are now fabricated in foundries or joint venture firms in the Asia Pacific region. Former integrated stalwarts such as National Semiconductor, Texas Instruments, and Cypress Semiconductor all have made increasing use of Asia Pacific foundries over the last several years. Contract manufacturing enables them to tap markets without waiting for the completion of new fabrication capacity, or to shift older products out of in-house fabrication facilities in order to make room for new products requiring more advanced process technology.
In contrast to most other U.S. digital logic companies stand Intel, AMD, and IBM Microelectronics. All use a business strategy featuring products that must be manufactured using their leadership process technologies. Process technology offered by the foundries lags behind that operated by these companies. Also resisting the use of foundries are the major ASIC (application-specific integrated circuit, that is, custom product) vendors, including LSI Logic and Agere Systems.
Economic Forces Two very profound economic forces in the semiconductor industry are (1) the high capital costs of fabrication facilities, and (2) the large economic rewards for early market entry. Both of these forces have accelerated the fabless-foundry reorganization. Over the last twenty years, the capital cost of a fabrication facility producing 25,000 wafers a month and able to accommodate leading-edge digital process technology has been doubling every four years and now stands at about U.S. $2 billion. This cost already exceeds the financial capabilities of many firms in the industry. If these firms were to build new manufacturing facilities, the facilities would have to be sized for much smaller wafer output. However, wafer fabrication is characterized by substantial economies of scale arising from the indivisibility of process machines and engineers. A study of fab economics at the University of California, Berkeley, demonstrates that a fab making 10,000 wafers a month experiences a 24 percent cost penalty compared to one producing 50,000 a month, even when the two fabs have identical yields, equipment efficiencies, and process technologies.4 Thus there is considerable economic incentive to build and operate large fabs, yet financing such facilities is out of reach for many (perhaps most) companies. Some sort of cost sharing or a reduction of the number of firms in the industry seems inevitable. The foundry-fabless partnership offers an efficient, market-based solution to the need to share the large capital expenditures for fabrication. In effect, the risk of capacity investments by a large foundry company (perhaps pooled with investments by foundry customers) can be diversified across the product portfolios of all of its potential customers. Regardless of 4. Leachman, Plummer, and Sato-Misawa (1999, p. 43).
which fabless firms secure market success with their new designs, the fab's capacity will be filled, and investment will be allocated where it obtains the greatest return. This pooling of investment risk suggests a greater average return for the fabless-foundry partnerships on investments in manufacturing capacity and development of process technology than for integrated firms independently investing in the face of substantial market risk. The other important economic force to recognize is the extraordinary value of time to market. Prices for integrated circuit devices generally decline steeply with time as the devices are steadily driven obsolete by the introduction and refinement of superior devices. Typically, prices for semiconductor devices decline 25 to 35 percent a year. The longer the elapsed times for development and qualification of process technology, fabrication plant construction, ramp-up of yield and wafer volume, and manufacturing cycle, the less the revenue that is available from the products made in that process technology. A benchmark comparison has been made by the CSM Program of manufacturing costs and "delay costs," the latter factor equivalent to the revenue missed by the company because of the elapsed times mentioned above.5 The CSM Program surveyed these elapsed times as well as the manufacturing costs of seven leading fabrication plants in Asia and the United States. This group of seven outstanding factories includes fabs operated by the two leading foundry firms in Taiwan, leading DRAM companies in Korea and Japan, and advanced logic firms in the United States. The results are striking. The difference between lowest manufacturing cost and average manufacturing cost (for a hypothetical, standardized 250-nanometer process technology operated in all the plants) is only $80 a wafer, or about 5 percent of wafer cost. However, the difference between benchmark delay cost and average delay cost is $700 a wafer.6 Fifteen years ago, there was serious concern about the competitiveness of U.S. semiconductor companies. Manufacturing yields at U.S. firms trailed behind those achieved in Japan, and there was significant worry about the future of the U.S. industry. Today, the gap in manufacturing costs between fabs in different regions of the world has been mostly closed. The chief discriminator of semiconductor firm performance is now speed. Those companies able to reach earlier volume sales of advanced products enjoy much higher sales prices. 5. Leachman, Plummer, and Sato-Misawa (1999, p. 31). 6. Leachman, Plummer, and Sato-Misawa (1999, p. 34).
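The weight of delay relative to manufacturing cost can be illustrated with a small calculation. The sketch below (Python) assumes a steady exponential price decline at the midpoint of the 25 to 35 percent range cited above; the launch price, monthly volume, and length of the delay are illustrative assumptions of ours, not CSM Program data.

```python
# Illustration of the value of time to market, assuming steady exponential
# price decline. All inputs are assumptions for the sake of the example.
annual_price_decline = 0.30      # assumed midpoint of the 25-35 percent range
launch_price = 100.0             # assumed selling price per device at introduction
monthly_volume = 1_000_000       # assumed unit sales per month once in volume
delay_months = 6                 # assumed delay in reaching volume production

def price_after(months: float) -> float:
    """Device price after a given number of months of steady decline."""
    return launch_price * (1 - annual_price_decline) ** (months / 12)

# Revenue over the first twelve months of volume sales, on time versus delayed.
on_time = sum(price_after(m) * monthly_volume for m in range(12))
delayed = sum(price_after(m) * monthly_volume for m in range(delay_months, delay_months + 12))

loss = (on_time - delayed) / on_time
print(f"Revenue given up by a {delay_months}-month delay: {loss:.0%} of first-year revenue")
```

Even under these mild assumptions, a six-month slip costs on the order of a sixth of first-year revenue, which is consistent with the point that delay costs dwarf regional differences in manufacturing cost.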
To interpret this economic force in the context of this study, consider a semiconductor merchant with an attractive new product that requires an advanced process technology not currently operated by the company. The merchant could invest the time and money to develop and qualify the process technology needed to manufacture the product; purchase and install the new process equipment needed to operate the technology; then debug the equipment and process to ramp the yield and volume of the process; and finally learn how to reduce the manufacturing cycle time from a no doubt very high starting point. By the time all this is completed, prices will have dropped considerably. Moreover, the capital investment involved is formidable, and there is considerable uncertainty and risk about how long it will actually take for the company to master the new technology. Now suppose a contract manufacturer (that is, a foundry) already operates a process technology that is almost suitable for the new product. The foundry is achieving good yields and cycle times for the customers of this technology. Suppose the new product could be slightly redesigned to be compatible with the foundry’s process technology, albeit with a slight loss of performance. The cost charged by the contract manufacturer might be significantly higher, say 30 percent higher, than the expected in-house manufacturing cost over the life of the process technology. But the time to volume sales would be dramatically shorter, and the company anticipates twice the average selling price over the life of the product for this alternative, even factoring in a reduced market value owing to the slight loss of performance. Moreover, the risk is dramatically lower. For many companies, especially start-ups, the second alternative is the obvious choice. This alternative can be very successful, provided three conditions are met:
—for products that are compatible with their process technologies, the foundries provide competitive yields, cycle times, and on-time delivery performance;
—there is little or no risk that the new product will not be compatible with the foundry’s process technology;
—the merchant company has the information links to manage its supply chain well, even though its fabrication is subcontracted.
The first condition is widely recognized in the industry as true for the leading Taiwanese foundries. TSMC and the UMC Group enjoy excellent reputations for manufacturing service among their North American and European customers and investors. The CSM Program’s performance data
for these companies confirms this: in terms of manufacturing cycle time, on-time delivery, wafer throughput, and yields for logic devices, these companies are among the industry leaders. (The performance of foundries in general for memory devices has been much less stellar, as will be discussed later.) The second and third conditions are met because of recent e-commerce innovations.
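A rough calculation illustrates why the merchant's choice described above so often favors the foundry route. The sketch below plugs in the chapter's illustrative ratios, namely a 30 percent wafer-cost premium and roughly double the average selling price from earlier market entry. The absolute cost, price, and volume figures are invented, and the comparison ignores the capital outlay and schedule risk of the in-house route, both of which further favor the foundry.

```python
# Minimal sketch of the merchant's two alternatives described above, using the
# chapter's illustrative ratios (a 30 percent wafer-cost premium at the foundry,
# roughly twice the average selling price thanks to earlier market entry).
# All absolute figures below are invented for illustration.
WAFERS = 100_000                  # lifetime wafer volume (assumed)
DEVICES_PER_WAFER = 400           # good devices per wafer (assumed)

alternatives = {
    "in-house fab": {"wafer_cost": 1_500.0,        # assumed in-house cost per wafer
                     "avg_selling_price": 5.0},    # ASP eroded by late market entry
    "foundry":      {"wafer_cost": 1_500.0 * 1.3,  # 30 percent foundry premium
                     "avg_selling_price": 10.0},   # roughly double ASP from early entry
}

for name, case in alternatives.items():
    revenue = WAFERS * DEVICES_PER_WAFER * case["avg_selling_price"]
    wafer_cost = WAFERS * case["wafer_cost"]
    margin = revenue - wafer_cost
    print(f"{name:12s} revenue ${revenue / 1e6:6.0f}M   "
          f"wafer cost ${wafer_cost / 1e6:6.1f}M   margin ${margin / 1e6:6.1f}M")
```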
Key E-Commerce Tools
E-commerce tools promoting the fabless-foundry reorganization of the industry include commercial design software and web-based supply chain management systems. These innovations enable partnerships of fabless and foundry firms to function as efficiently as integrated firms. Successful research efforts at the University of California, Berkeley, and Stanford University during the early 1980s in the area of computer-aided design of integrated circuits led to commercial software now known as electronic design automation (EDA) software. (Leading vendors of such software include Cadence and Avanti, among others.) EDA software is actually a suite of design tools, including software tools for logic synthesis, logic design, circuit simulation, placement and routing, detailed layout, extraction of parameters from layout, design verification, test program development, and pattern generator tape generation. These products enable circuit designers who are not experts in a factory’s process technology to design products that are compatible with it. The parameters of the manufacturing process are supplied to the software, which in turn expresses design rules to the user in terms understandable to a circuit designer. The software analyzes proposed designs supplied by the user to verify that the designs satisfy the design rules. In practice, a foundry informs its prospective customers of the particular commercial design software for which it will supply process data. The foundry electronically supplies the customer with a data file of its process technology parameters. The user purchases a copy of the design software, also delivered electronically, and proceeds to carry out design activities. Typically, each user will maintain an electronic library of designs and partial circuit designs, editing, combining, and adding to them as necessary to complete each new design. Once the user has verified a design, it contracts with the foundry for its production. The design software outputs specifications for the photomasks to be used. These instructions are sent electronically by the designer to a third-party mask manufacturer. The completed masks are shipped to the foundry, whereupon production of the designed product may commence. EDA software has proved to be highly effective for digital logic products. Fabless firms generate successful new product designs in rapid succession. Yields in the foundries of logic products designed by their customers have been quite competitive. However, for reasons that will be discussed in the next section, EDA has been somewhat less successful for memory devices and for analog products. The second important type of e-commerce tool is software for supply chain management. Semiconductor fabrication is characterized by variability in manufacturing yields and cycle times; it is important to track work-in-progress closely in order to respond to variations as quickly as possible. The leading foundries offer their customers web-based access to their manufacturing tracking systems. In practice, a foundry customer can check, in real time, the status and progress of each of its manufacturing lots. Process inspection and yield data are also made available electronically, so that the customer may investigate design-process incompatibilities. One leading foundry uses the advertising slogan: “We are your virtual fab. It’s just like having your own fab, only we treat you better.” Given timely information on the status of work-in-progress and reliable delivery performance from its foundry, the fabless semiconductor merchant can manage its supply chain as well as an integrated company. In fact, given that the availability of manufacturing information is comparable, fabless companies are successfully adopting the very same supply chain management systems used by certain large integrated companies. E-commerce also helps the foundry operators to be more efficient. Their customers’ capacity needs may fluctuate according to market and design success. Timely communication of customers’ production plans is critical to the foundry operator. Rapid feedback is needed when capacity reserved for one customer becomes available for allocation to others. Internet links and supply chain management software also are used for the rapid redeployment of foundry capacity to ensure its full utilization. It is interesting to note that the locations of the three largest concentrations of fabless companies are Silicon Valley, California; Vancouver, British Columbia; and Shanghai, China. None of these areas possesses advanced foundry fabs, a testimony to the effectiveness of the e-commerce tools.
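As a conceptual illustration of the web-based work-in-progress tracking described above, the sketch below flags lots whose projected completion slips past their due date. The JSON layout, field names, lot numbers, and dates are all invented for this example; actual foundry customer portals expose richer, proprietary interfaces.

```python
# Conceptual sketch only: consuming the kind of lot-status data a foundry's
# web-based work-in-progress tracking system might expose. The JSON layout,
# field names, lot numbers, and dates are invented. In practice the payload
# would come from an authenticated request to the foundry's portal; here it
# is inlined so the sketch runs on its own.
import json
from datetime import date

payload = json.loads("""
[
  {"lot_id": "A1234", "step": "metal-1 etch",  "wafers": 25,
   "due": "2001-06-15", "projected_out": "2001-06-13", "yield_pct": 91.2},
  {"lot_id": "A1235", "step": "contact litho", "wafers": 25,
   "due": "2001-06-15", "projected_out": "2001-06-20", "yield_pct": 88.7}
]
""")

def lots_at_risk(lots):
    """Return lots whose projected completion slips past their due date."""
    return [lot for lot in lots
            if date.fromisoformat(lot["projected_out"]) > date.fromisoformat(lot["due"])]

for lot in lots_at_risk(payload):
    print(f"lot {lot['lot_id']} at {lot['step']}: "
          f"projected {lot['projected_out']} vs. due {lot['due']} -- alert planning")
```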
Limitations to Further Transformation of the Industry
There are limits to the penetration of the foundry-fabless business model. While the leading foundries offer advanced process technology, they still lag the industry leaders. Thus for firms whose business strategy is based on a leadership position in process technology, such as Intel, AMD, or IBM, the foundry-fabless model is not attractive—or at least not attractive for their leading-edge products. Advanced memory devices, especially DRAMs, have been difficult for the foundries to manufacture competitively. According to CSM Program data, DRAM yields and manufacturing cycle times achieved by the foundries sometimes have been inferior to those achieved by the integrated DRAM companies. Perhaps the design margins for such products are so tight as to require considerable process tuning and refinement of the product design based on manufacturing feedback. It is also difficult to achieve design verification for certain analog and mixed signal products; while there are several quite successful fabless analog and signal processing companies, there are many more integrated companies in the analog, linear, and mixed signal markets. These products may require considerable tuning of manufacturing processes on a product-by-product basis and are thus more awkward for foundries to handle. It also must be recognized that the economic forces are much weaker in the case of analog and discrete products that do not require advanced process technology. Many products in these areas require process technologies with feature sizes of one micron or larger; the capital expense for such fabs is less than for fabrication lines recently built for leading-edge digital products. Typically, secondhand process equipment is used. Given the lower unit costs of process equipment, economies of scale also are less pronounced. Thus there exist in the industry a number of thriving, relatively small integrated producers of analog, linear, and discrete products.
Summary
The semiconductor industry is rapidly reorganizing from integrated firms to partnerships of fabless and foundry firms. From a negligible share of capacity at the beginning of the 1990s, pure-play foundry companies now command more than 25 percent of worldwide capacity. Almost all of this capacity is located in the Asia Pacific region.
The key factors fueling this transformation are as follows. First, the capital cost of advanced fabrication facilities is beyond the financial reach of most firms, yet small start-up firms account for many new, innovative products. The fabless-foundry organization diversifies the risk of large foundry fabs across the product portfolios of all firms that are potential customers. It reduces barriers to entry and the time-to-market for innovative new products devised by small- and medium-size firms, thereby enabling these firms to secure considerably more revenue than if they had to undertake process development and manufacturing on their own. Second, effective e-commerce tools have been developed that enable fabless-foundry partnerships to successfully compete with integrated firms. These tools include design automation software and supply chain management systems. Application of design automation software, involving considerable exchange of technical data between fabless and foundry partners, enables product designers unfamiliar with the manufacturing process technology to design devices that achieve competitive yields. Application of supply chain management software, also involving considerable exchange of technical data, enables fabless companies to efficiently manage their work-in-process despite subcontracting its manufacture, and it enables foundry operators to sustain full utilization of their manufacturing facilities. Third, a number of pure-play foundry companies have been successfully launched in the Asia Pacific region, led by TSMC and the UMC Group in Taiwan. Availability of competitive foundry services from these companies has enabled rapid growth of fabless start-ups in the United States as well as an increasing trend among established integrated firms to outsource their fabrication needs to the foundries. Few integrated firms excel at both design/marketing and manufacturing. Vastly different management skills are needed, and each area thrives in a different kind of business culture. The new industry structure facilitates the success of firms strong in one area but not the other. The resulting industry dynamics may be summarized as follows. Where the foundry-fabless business model is successful, the industry is reorganizing into many small fabless firms supported by a number of foundry companies in the Asia Pacific region. This model has been very successful for digital logic products and (to a lesser extent) for mixed signal and analog products. The two principal digital markets where this business model is not yet applied or is not working well are, first, microprocessors and advanced logic devices built with the absolute leading edge of process technology and, second, advanced memory devices, especially DRAMs. These portions of the industry are collapsing
into a few very large integrated firms. Only the discrete and analog/linear/mixed signal portions of the industry continue to feature successful integrated firms with a wide range of sizes, from small to large. The foundry-fabless transformation of the industry has enabled the United States to increase its dominance of the design and marketing of integrated circuits. However, the share of world fabrication activity located in the United States has declined. It is arresting how rapidly world fabrication capacity is becoming concentrated in a few countries, and a testimony to how fast e-commerce can change industrial organization when favorable economic forces and business strategies also are in place.
References
Leachman, Robert C., and Chien H. Leachman. 1999. “Trends in Worldwide Semiconductor Fabrication Capacity.” Report CSM-48. Competitive Semiconductor Manufacturing Program, Engineering Systems Research Center, University of California, Berkeley.
Leachman, Robert C., John Plummer, and Nancy Sato-Misawa. 1999. “Understanding Fab Economics.” Report CSM-47. Competitive Semiconductor Manufacturing Program, Engineering Systems Research Center, University of California, Berkeley.
Semiconductor Equipment and Materials International. 1998. International Fabs on Disk. Mountain View, Calif.
10
The Old Economy Listening to the New: E-Commerce in Hearing Instruments
The hearing instrument industry is a miniature industry: small products, small players, and small total turnover. It is also an industry where the impact of Internet technologies is small—indeed, miniature. There are, perhaps, quite a few industries comparable to hearing aids, where key characteristics (some intrinsic and fundamental, others regulatory, corporatist, and organizational) inhibit broad use of Internet technologies to lower costs and improve products and services. The hearing instrument industry thus presents, in this collection of industry studies, something of a null case. The main purpose of the case is therefore not only to help identify important obstacles to the diffusion of e-commerce but also to point to more indirect impacts of web-based communication. The chapter starts with an introduction to the product and to the industry.1 The entire value chain is presented from the manufacturers’ perspective to highlight the various market interfaces. The remainder of the chapter discusses the possibilities of e-commerce at each of these interfaces.
1. This chapter builds on earlier studies of the industry by the present author. See “Industry Structure Dynamics and the Nature of Technology in the Hearing Instrument Industry” (brie.berkeley.edu/~briewww/pubs/wp/wp114.html) or “Noise and Silence in the Hearing Instrument Industry” (www.cbs.dk/departments/ivs/wp/cis-nois.pdf).
The Product Hearing instruments (hearing aids) are electronic sound amplifiers. No more, no less. They pick up sound signals from the environment, process these signals in their amplifiers, and “inject” the amplified signal into the ear. A hearing instrument therefore builds on what remains of hearing ability. This ability may be substantially reduced, but the user cannot be totally deaf. In that case other therapies must be tried, such as surgery and cochlear implants. Hearing instruments come in two different shapes: behind-the-ear (BTE) devices and in-the-ear (ITE) (or custom) devices. When a BTE instrument is used, the case with the electronics is placed behind the ear. A small tube leads the amplified sound signal from behind the ear into the ear, where the tube is secured by a custom-designed plug. The ITE instruments carry everything in a case that fits totally into the ear. This case is custom designed and the layout of the components is arranged according to the individual shape of the case. Over the past twenty-five years, ITE instruments have become so small that some of them now are virtually invisible. Fundamentally, the two types work the same way and are constructed of the same basic components: a microphone picks up the sound, an amplifier processes the signals, and a speaker transmits the amplified sound into the ear. A battery powers the devices. Switches and trimmers are needed to control the device, although remote control and programming can substitute for many mechanical parts. Until recently, amplification was analog for all products. In 1995 two Danish companies (Widex and Oticon) introduced digital amplification. Now all major companies offer digital as well as analog instruments.
Two Decisive Features of Hearing Instruments
That a product is small does not necessarily make it simple. On the contrary, it is often difficult to produce very small versions of standard products. This certainly goes for hearing instrument components. To produce microphones and speakers 1–2 millimeters in size is no easy task. And producing even seemingly simple components, such as mechanical trimmers and switches of the same size, to very narrow tolerances turns out to be a job for specialists. As most parts for hearing instruments are produced by specialists, they are available on the market.
To become a manufacturer of standard analog hearing instruments is therefore straightforward: you may buy all components on the market, so all you have to do is take an impression of the customer’s ear canal, cast a shell on this basis, fit in the parts, and fine-tune the device for the customer. However, to advance the state of the art itself takes serious research and development. Any good hearing instrument builds on a delicate balance of technical constraints and auditory performance. The design is optimized over a range of technical possibilities. Interdependency between parts is high, so a change in one part will immediately affect the functioning of the entire system. Further complicating matters is that it is not at all clear what constitutes “optimal auditory performance.” It has long been a major challenge to choose the right way to amplify the sound. Consider just one important challenge: how to pick the right signals to amplify when the user is in a situation of multiple sound signals—for example, the chaotic mix of sounds at a cocktail party. Ideally, you want amplification of the immediate conversation partner. This is not easy in itself, but may be partially achieved with advanced amplification schemes (algorithms) or hardware (dual and/or directional microphones). Nonetheless, you do not want to cut off all environmental noise, or the customer will not be aware of new conversation partners. How to balance these concerns is a long-term challenge for the industry. To produce a state-of-the-art hearing instrument is therefore simple, but to develop a better instrument is very difficult. This has been clearly demonstrated during the past several years with the development of digital amplifiers. (I will return to the competitive situation below.) A second fundamental characteristic of the industry is custom design. It is currently indispensable for a customer to have a hearing instrument fitted on an individual basis. Two features of hearing loss lead to this. First, each individual’s inner ear is anatomically unique, so to fit precisely, a hearing instrument must be custom-made. (Although this conventional wisdom is now being challenged, as described below.) Second, hearing losses come in many different forms. One may, for example, lose the ability to hear only certain frequencies. To compensate for a hearing loss is therefore much more difficult than to correct vision problems. The hearing instrument business therefore requires intimate contact with customers. Selling hearing aids without meeting the customer in real space has hitherto been unthinkable.
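To make the point about frequency-specific losses and amplification schemes more tangible, here is a deliberately simplified sketch of frequency-band-dependent gain. It is not how commercial instruments work; real products rely on proprietary compression, directional, and feedback-cancellation algorithms, and the band edges and gain values below are invented. The point is only that the gain table differs for every user, which is why fitting is individual.

```python
# Deliberately simplified illustration, not a commercial algorithm: a toy
# frequency-band amplifier showing why fitting is individual. The band edges
# and gain values are invented.
import numpy as np

def multiband_amplify(signal, sample_rate, band_gains_db):
    """Apply a different gain to each frequency band of `signal`.

    band_gains_db: list of (low_hz, high_hz, gain_db) tuples derived from an
    individual audiogram -- a different table for every user.
    """
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    for low_hz, high_hz, gain_db in band_gains_db:
        band = (freqs >= low_hz) & (freqs < high_hz)
        spectrum[band] *= 10 ** (gain_db / 20.0)   # convert dB to linear amplitude
    return np.fft.irfft(spectrum, n=len(signal))

if __name__ == "__main__":
    fs = 16_000
    t = np.arange(fs) / fs
    # Toy input: a 500 Hz tone plus a 3 kHz tone at equal level.
    x = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 3000 * t)
    # Hypothetical fitting for a high-frequency loss: boost only above 2 kHz.
    fitting = [(0, 2000, 0.0), (2000, 8000, 20.0)]
    y = multiband_amplify(x, fs, fitting)
    print("peak amplitude before:", round(float(np.max(np.abs(x))), 2),
          "after:", round(float(np.max(np.abs(y))), 2))
```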
The Value Chain
Certain basic features of the industry frame the value chain. Instrument manufacturers have traditionally been mainly assemblers or integrators. Essential components, both standard parts such as batteries and specialized components used only in hearing instruments, have been produced by independent suppliers. The production of microphones and speakers, for example, has been limited almost exclusively to one dominant supplier: the Chicago-based Knowles. Instrument manufacturers have tried to balance that near-monopoly either by internal production or by supporting the only other producer, the Danish company Microtronic. So far these attempts have not eroded Knowles’s dominant position. Analog amplifiers have likewise been available on the market, but the new digital amplifiers have so far only been produced by the instrument manufacturers. Globally, the hearing instrument manufacturing industry generates revenues of $1½ billion to $2 billion. The industry has traditionally been relatively fragmented; until 1996 two companies were dominant in terms of volume: Siemens in Germany and Starkey in the United States. Each held global market shares of approximately 20 percent. A large group of second-tier companies with 5 to 10 percent market share included Oticon, Danavox, and Widex (all from Denmark), Philips (the Netherlands), and Phonak (Switzerland). Added to these were a number of companies with more localized interests, such as Beltone and ReSound in the United States and Rion of Japan. The high development costs of digital instruments are changing this picture dramatically. Already before 1996 Starkey had quietly bought minor American companies, but it was Oticon’s acquisition of the Swiss company Bernafon that initiated the current merger process. Since this merger Starkey has intensified its acquisitions of American companies; Siemens has also bought some. ReSound (founded only in 1984 but bolstered by venture capital from Silicon Valley) bought an American high-tech start-up plus the Austrian Viennatone, only to be bought itself in 1999 by the parent company of Danavox, with which it was merged under the name GN ReSound. Also in 1999 the hearing instrument section of Philips merged with Beltone, and in 2000 three minor American companies merged. The most recent and most spectacular event occurred in April 2000, when GN ReSound, in a $400 million deal, bought Beltone (including the former Philips activities) to form the second largest global player.
All these changes in a short space of time leave the industry far more concentrated, with many small (especially American) companies wiped out. Four players now dominate the industry: Siemens, GN ReSound, Starkey, and Oticon, each of them holding market shares of between 15 and 20 percent. Only Siemens and GN ReSound belong to diversified conglomerates (both operating fairly independently, though); all other hearing instrument manufacturers are independent firms. The larger ones employ 2,500 to 3,000 persons worldwide and have revenues in the range of $400 million. In the next class of companies, with 5 to 10 percent of the market, there are only two. The Danish company Widex has not been involved in mergers, but this company is probably the single most successful hearing instrument manufacturer in the new digital period and has achieved its remarkable growth organically. The Swiss company Phonak has maintained its position as a strong second-tier company. What remains are small companies with typically only a local presence. Distribution of hearing instruments follows several rather different models. For convenience, let us mention only three: the north European, the central European, and the American. In Scandinavia and the United Kingdom, hearing instruments are provided free of charge as a part of the public health program. Distribution is handled mainly by audiological clinics at hospitals. In central Europe (Germany, France, Benelux), a variety of health insurance schemes reimburse a large part of the cost of a hearing instrument. Distribution is managed by private retailing audiologists, often organized in chains. In the United States, most buyers carry the full costs of a hearing instrument. Only about 15 percent are reimbursed by Medicare/Medicaid. Health insurers and HMOs generally do not cover hearing instruments. Instruments are sold by so-called dispensers, staffed either by university-trained audiologists or by hearing instrument specialists (with approximately six months’ training in fitting hearing instruments). Attempts at building chains of dispensers have so far been unsuccessful in the United States (but they do exist and I shall return to them below). Generally, no matter the model, the distributors source the hearing instruments from the manufacturers or their local affiliates, which do the custom part of the production on order. Despite different payment schemes, the ultimate fitting of the instrument is handled more or less the same way all over the world. The characteristics of end-users are also relevant to the industry. Loss of hearing is strongly related to age, so most customers are elderly people.
Many are so old when they buy their first hearing instrument that they die before they have a chance to buy a second one. Furthermore, choosing the right instrument is very difficult, in part because of the above-mentioned variation in types of hearing losses and in part because of the different lifestyles of the users and the ensuing differences in use.2 So potential buyers usually do not comparison shop. Once they have entered the shop of a dispenser, they tend to rely on his or her advice. The industry has a fairly simple value chain. There are only three marketplaces or “open market interfaces” in the system: two business-to-business markets and one business-to-consumer market. Starting upstream, manufacturers source parts and components from suppliers. Second, manufacturers sell instruments to dispensers (wholesaling). And third, dispensers sell to end-users (retailing). One may argue that there is one more interface—namely, the relation between the manufacturer and its local subsidiaries. Doubtless this relation will benefit from net-based communication, but since a large part of this communication will take place inside a company, it is not dealt with here.
E-Commerce
All three of the markets mentioned above are open to e-commerce, but so far little change has taken place. The reasons for this differ considerably across the three markets, as this section will show.
Sourcing of Parts and Components
The least activity is taking place in supplier relations. While some kind of electronic ordering could be useful, it seems unlikely that the Internet would allow for radical changes in market processes. The kind of bidding that is expected in other B2B net-based markets (auto parts, pulp and paper) is hard to imagine when the number of component sources is rather small and most of the component-producing industries are concentrated into duopolies or even monopolies. In such cases, the costs of inviting offers are so small that buyers probably will not save by using a standard bidding forum.3
2. Hearing instruments thus are an extreme variant of “experience goods,” which are expected to move to e-commerce more slowly than “search goods.”
3. In short, there are no “aggregation” benefits.
For one component, the essential chip that runs the signal processing, the story is different, but the conclusion is the same. Hearing instrument manufacturers design their own digital chips. The ability to design chips is a key competence in this industry. However, the production of digital chips is outsourced to foundries. Internet technologies do not allow for radical changes in the business of contract manufacturing of such chips. Even though the number of potential suppliers might permit “aggregation benefits,” the choice of supplier is based on much more than simple price comparisons (chips are “experience goods”). We therefore observe long-term relationships between hearing instrument manufacturers and their chip producers. These characteristics inhibit the exploitation of Internet-based trading. Also, the emergence of “pure-play” foundries,4 offering production of chips, does not change the game by, for instance, allowing entrants to piggyback on the incumbents, since manufacturers have already outsourced this activity.
4. See chapter 9 in this volume on semiconductors.
5. U.S. retailing is a $2 billion to $2½ billion business.
Retailing Skipping wholesale for a moment and jumping to retailing,5 the picture is somewhat different. The product is still very material and seems far from becoming a digitized product. And not only is it material, it currently is custom-made in both physical shape and amplification adjustment. Therefore, the Internet apparently cannot be of much use and has not been so. It has, however, the potential of changing the game substantially in the near future. There are four important aspects. First, information about the products now flows freely on the Internet. Not surprisingly, the manufacturers try to reach the end-users directly and create a market “pull” effect. Reaching the end-users has been attempted before, but costs have been prohibitive. Therefore, brand awareness has been minimal. Probably no person without some relation to the industry would be able to mention the name of a brand. Now potential buyers may search the Internet for product information. In itself, that probably is of little use. The specifications do not come in any standardized form and are generally impossible to understand. However, as more elderly people become acquainted with the Internet, they will begin bringing prints of manufacturer information to their local dispensers, challenging their 4. See chapter 9 in this volume on semiconductors. 5. U.S. retailing is a $2 billion to $2½ billion business.
authority. An analogous situation occurred in Scandinavia, where the news of the introduction of digital instruments caused many people with hearing losses to call public health clinics, asking to be fitted with one of these new, promising instruments. Public procurement systems, however, cannot respond so quickly to such demands; they want results of objective trials before a new product is brought into the restricted portfolio of dispensed products. Small American dispensers will face similar restrictions; they have limited capacity to overview the entire range of available products and simply no experience with more than a handful of models. The authority of dispensers will certainly be challenged, with the possible consequence that customers will become more critical, perhaps turning their backs on uninformed retailers and demanding better service, including consumer testing for comparability. This again may cause changes in retailing, favoring dispensers with access to neutral assessments of brands and backed by specialist knowledge. One answer to such demands could be chains with centralized training and testing. If this demand leads to increased transparency, manufacturers may lose opportunities for differentiation, leading to more head-on competition. Second, chains may be favored by sheer scale economies from running websites. If seeking information on the Internet becomes a crucial element, economic muscle may be important in the maintenance and development of websites.6 Web features that may work in this direction are web-based hearing tests, which are already appearing in more or less sophisticated versions.7 Such features take resources to develop and maintain. Another feature, often based on simple tests that relatives may complete (just checking a number of hearing-related questions), may be the issuance of dedicated gift certificates. A relative may buy on the web a certificate for a specific instrument and ship it as a gift for the hearing-impaired person, who redeems it with a local dispenser. This feature favors large chains since it requires a local chain store. Third, even though the product itself cannot be digitized, an important part of the package that includes the instrument is digitized for many products. The adjustments of a hearing instrument are—as mentioned above— 6. Manufacturers may counter this by offering to independent stores more or less predesigned homepages on manufacturer-operated websites. As an example, Oticon runs the site digilife.com for this purpose. Even more efficient could be a web service (web hotel) run by the manufacturers jointly so that they do not offer competing web services. 7. See, for example, the German dispenser chain Kind (www.kind.de) or Australian Hearing (www.hearing.com.au/hearing_health_check.asp).
very individual and precise. To the degree that fitting is not mechanical but programmed, there may be options for web-enabled adjusting (and testing) of hearing instruments. The technology and business models for this do not, however, seem to be available currently. And as long as customers need to visit the dispenser anyway for the physical fitting, it may not be all that relevant. This takes us to the last point. Fourth, the possibility of trading electronically may steer technological development in directions that facilitate such trading. Currently, two product innovations may be taken as examples of such scenarios. One is a disposable hearing instrument, developed by Songbird Medical with technology from the Sarnoff Corporation and financially supported by Johnson & Johnson. This instrument comes in a one-size-fits-all physical shape with nine different amplifications. A relatively simple hearing test may determine which amplification to choose, so in this case a personal visit to a dispenser may not be needed. Somewhat similarly, Sonic Innovation of Salt Lake City, Utah, has launched an instrument that has a soft, rubberlike shell that fits most ear canals and may be changed every three weeks. In terms of programming, it comes with a PalmPilot that can be wired to the hearing instrument for adjustments in amplification. Since both of these products can do without a physical fitting, they may be acquired via the web. Still, the electronic fitting may be too difficult to do without professional help. Furthermore, current U.S. laws forbid the sale of hearing instruments without audiological referral (similar regulation exists in most countries). Neither of these contenders therefore dares, at this point, to challenge the current distribution channel, and neither seems so far to be successful. The likely result is that while Internet-based communication may cause changes in the retail industry structure and increase transparency, it also has the potential to increase the size of the total market, simply by offering more and better information on the products. In the long run, the opportunity of interactive communication may induce innovation in directions that facilitate e-commerce. But it may take many tries to develop a hearing instrument that allows for that.
Wholesale
The interface between manufacturers and retailers is predominantly market based. While manufacturer-owned retailers do exist, most retailers carry multiple brands. Accordingly, the individual dispenser does
business with a number of manufacturers, usually in the range of three to five. The interaction is rather complicated, involving much more than simple ordering and delivery of predefined products. The dispenser performs a series of examinations to offer the optimal model and features for the customer. Based on these examinations, the dispenser orders the custom-designed instrument from the manufacturer, receives the instrument by mail, and fits it to the customer. These procedures are cumbersome in many respects. The manufacturers frequently receive flawed order forms, not least because all manufacturers use different forms. The dispensers likewise have problems with these procedures—they cannot track their orders, have no overview of their order backlog, and so on. Obviously, there is room for some kind of electronic data interchange. But who is to introduce it? The manufacturer may well offer dispensers access to a distinct electronic ordering system. But as the dispenser deals with several manufacturers, the prospect of running three to five different Internet-based ordering systems is not attractive—even less so if these systems cannot feed information into a store-based information system including customer files, billing capacity, and the like. The individual dispenser, on the other hand, has no incentive to develop a system and probably has no means of getting access to manufacturer information. There are at least three possible solutions to this deadlock: an independent supplier, a dispenser-based solution, and a manufacturer-based solution. Although there are examples of close manufacturer cooperation (for example, on fitting software), it does not seem likely that manufacturers will get together in this area. An independent supplier may have a chance but lacks the bargaining power to induce the manufacturers to agree on standardized interfaces.8 The dispenser-based solution seems more promising. One emergent possibility is a chain-based solution. Based in Portland, Oregon, the chain Sonus has developed a generic order form to be used by the individual stores and processed by the headquarters of the chain before it reaches the manufacturer. This system improves logistics, creates centralized customer files, enables tracking of orders, and reduces flawed orders. The system has had an unintended consequence. Since autonomous dispensers clearly see the benefit of it, Sonus has experienced a certain interest from such dispensers in joining the system—without joining the chain. So far Sonus has accepted this, expecting an important, positive side effect: as
8. There is at least one Danish software company producing a dispenser-internal information system. It still, however, lacks the communication module.
independent dispensers order via Sonus, the volume that Sonus commands increases, which naturally increases fitting experience—and bargaining power vis-à-vis manufacturers. The prospect of this development therefore may well be a consolidation of the dispensing business.
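The generic order form described above can be thought of as a normalization layer between stores and manufacturers. The sketch below is a purely illustrative rendering of that idea; the data fields, manufacturer names, and format mappings are all invented and bear no relation to Sonus's actual system or to any real manufacturer's ordering interface.

```python
# Purely illustrative sketch of a "generic order form" acting as a translation
# layer between stores and manufacturers. Field names, manufacturer names, and
# format mappings are invented.
from dataclasses import dataclass

@dataclass
class GenericOrder:
    dispenser_id: str
    customer_ref: str        # store-internal reference; no personal data passed on
    brand: str
    model: str
    ear: str                 # "left" or "right"
    audiogram: dict          # frequency in Hz -> hearing loss in dB

def to_manufacturer_format(order: GenericOrder) -> dict:
    """Translate the generic record into the (hypothetical) field names each
    manufacturer's ordering system expects."""
    if order.brand == "AcmeHearing":      # invented manufacturer
        return {"dealer": order.dispenser_id, "type": order.model,
                "side": order.ear.upper(), "loss_profile": order.audiogram}
    if order.brand == "ExampleAudio":     # invented manufacturer
        return {"customer_no": order.dispenser_id, "product": order.model,
                "ear_code": {"left": "L", "right": "R"}[order.ear],
                "audiogram_db": order.audiogram}
    raise ValueError(f"no mapping defined for brand {order.brand!r}")

order = GenericOrder("store-042", "cust-17", "AcmeHearing", "ITE-Mini",
                     "left", {500: 30, 1000: 40, 2000: 55, 4000: 70})
print(to_manufacturer_format(order))
```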
Conclusion
This assessment of actual and potential Internet-related changes in the hearing instrument industry shows very little impact of Internet technologies so far, and certainly no transformative thrust. Speculating about future evolution, the analysis pointed to a number of possible changes:
—web-based marketing may enlarge the market;
—transparency may increase, leading to tougher price competition;
—independent retailers may suffer from demands for higher quality of service, leading to consolidation in retailing or less binding cooperation between retailers;
—scale economies in EDI and information management systems may lead to the same outcome;
—technological change may be directed toward product concepts that facilitate web-based trade, which may disintermediate retailers.
It should not be forgotten, however, that the very customized nature of the product probably will not allow any radical changes to take place in the near future.
Making and Moving Stuff
The chapters in this section cover two of the oldest goods in human history—food and clothing—and trucking, which moves them. The contrasts between these goods and the complex assembled products examined in the previous section are remarkable. The sectors considered here produce goods that are not modular in character; that is, there are no separable subsystems that when varied permit the customization of the product. The variety in each category is significant; to illustrate, a turnip is not a substitute for ice cream, and a size 6 woman’s skirt is not a substitute for a size 46 man’s jacket. In each of the sectors, the organization of the value chain upstream from the customer interface is different and complex. So how does the Internet affect industries such as these, and what does it mean for the trucking industry, which is now reconceptualizing itself as the logistics industry?
The Limiting Effect of Product Characteristics
Food and clothing share an important characteristic: many of these goods attract customers by their look and feel. This statement is somewhat misleading, however, as many products in these two categories—such as
canned food and underwear—are highly standardized. Thus the consumer knows the look and feel of these and other items beforehand. As a result, these items should be most amenable to business-to-consumer (B2C) e-commerce, though these products are often relatively low cost and low profit. For products lacking standardization—such as fashion items and vegetables and meats—the average customer wants to see or test the product before purchase.1 Moreover, high-profit impulse items might not be as attractive on the Internet as they are to the browsing shopper. Fortunately or unfortunately, in industries not known for high-profit margins, it is exactly these impulse items that are most profitable. If the Internet offers only limited opportunities for transforming B2C interactions in these industries, the upstream business-to-business (B2B) relationships appear to provide far greater opportunity for the successful application of Internet-based supply chain management tools. Consider the average grocery store, with a myriad of suppliers for an eclectic set of products, many of which are ordered by phone and fax. In such an environment, the ability to move information and record keeping entirely online offers enormous potential savings and a reduction in the elapsed time between recognition of a shortage and its fulfillment. The following chapters indicate that in the food and garment industries, a collective search process for successful business models and market platforms to capture these cost savings is under way.
Supply Chain Organization and Interfirm Relations The use of information technology (IT) to manage vertical business relations certainly predates the Internet. However, the advent of Internet-based e-commerce tools has the potential to weave together various computerized islands of “infomediated production” into a seamless, interoperative data stream up and down the value chain. Furthermore, the Internet enables a common, nonproprietary infrastructure with interfaces and connections that can be reconfigured with comparatively low fixed cost and marginal cost approaching zero. Many benefits of business process automation are
1. However, emerging technologies such as true-color display systems may alleviate some of the obstacles to online apparel retail.
therefore now accessible to “Fortune 5 million” companies, rather than being the exclusive domain of the Fortune 500.2 The reorganization of interfirm relations, however, is more than a story of price compression; it should be viewed instead as a story of strategic sourcing. As with so many other industries, in the three sectors examined in this section, Internet-based software is being implemented not only to lower costs, decrease inventories, and shorten elapsed times, but also to render the intricacies of the value chain and its documentation more visible. In other words, the Internet has become a tool for thinking about, and reconfiguring, the organization and operation of the value chain. In this way, the use of the Internet in the B2B area is another chapter in the evolution of modern industrial management. From Taylor’s time-and-motion studies to Toyota’s kanban and just-in-time system, these transformations have been predicated on making industrial processes more visible, thereby enabling the processes to be manipulated and subsequently made more efficient. A second trend is the emergence of online B2B exchanges, or “e-markets,” at various points in the value chain. These exchanges serve as aggregators for suppliers or buyers and dramatically increase the number of market participants any one player can reach. The transformative effects of these exchanges are most extensive in industries traditionally characterized by a large number of small- and medium-size suppliers that are often geographically dispersed. While some exchanges are owned by, and said to strengthen, lead firms (as is the case with Covisint, discussed in the previous section), others are managed by third-party infomediaries that may well play an important role in the recalibration of market power. In the trucking industry, for example, online load matching sites have helped small firms resist the pressure of industry consolidation by increasing cost efficiency through a reduction in empty back-hauls.
The Centrality of Logistics Management When AT&T commissioned a study several years ago about who would benefit from the arrival of what was then called the “multimedia society,” the Boston Consulting Group suggested that AT&T acquire a logistics 2. Remarks of Douglas Alexander of the Internet Capital Group (U.K.) at the annual meeting of the World Economic Forum, Davos, Switzerland, 2001.
company such as United Parcel Service (UPS). By neglecting to heed what in hindsight was excellent advice, AT&T failed to capitalize on the increasing importance of logistics management, a critical requirement when aiming for a seamless supply chain. If goods cannot be moved quickly and reliably, real-time updates on inventory and demand are of little use. Efficient and effective supply chains dictate knowing where products are and when they will arrive at their destination. Thus, when physical products need to be moved, mastery of logistics becomes crucial for business success. For the fulfillment of the promise of e-commerce, innovation in the area of logistics will be imperative. Trucking and transportation companies have taken advantage of the rise in online sales and the subsequent need for physical delivery of goods and components by refashioning themselves as one-stop logistics providers. For example, UPS has begun to provide not only transportation and delivery but also goods management and storage. Furthermore, it has both established call centers to receive orders directly from its customers’ customers and begun to advertise its ability to handle returns, a crucial logistics requirement for online sales of products.
Organizational Prerequisites for Infomediated Production
Putting information to its optimal use requires suitable organizational structures. Dell and Cisco, two undisputed early champions of online sales, could exploit the Internet’s opportunities only because of substantial organizational innovation in their respective supply chains (see chapter 7 by Kenney and Curry). Conversely, a “culture of distrust” among suppliers and retailers could inhibit the seamless exchange of information that would otherwise be technologically possible (see chapter 11 by Kinsey). Thus, technology alone may not improve efficiency if it encounters previously existing and deeply held organizational resistance. Now that the dot-com revolution has come and gone, it is both inter- and intrafirm changes that will prove to be lasting. Thus, if optimizing IT entails sweeping organizational changes both within firms and within industries, the question of who can implement these changes becomes crucial. Wal-Mart, Dell, and Cisco are examples of market players powerful enough to “incentivize” their suppliers to participate in such a reorganization. In industries lacking a dominant player with enough market power to
incentivize other actors, some speculate that nonhierarchical peer-to-peer network architectures à la Napster could be a solution.
Summation The history of the diffusion and adoption of emerging technologies indicates that new technologies often overturn existing social and business patterns. The chapters in this section demonstrate that when actors are able to reorganize the value chain or control the industry platform, they may subsequently achieve significant market power over the other actors in the chain. The ways in which Internet technology ultimately affects industries will likely vary by country. But even across countries, the winners are likely to be existing firms that already have complementary assets such as strong linkages to customers, dominant industry knowledge, and a critical node in the value chain. It is not yet clear whether the outcome in these sectors will be winner-take-all or whether some system will evolve for collectively sharing the benefits of new industry configurations. An example of the former outcome is Wal-Mart’s ability to use IT to grab an increasingly large share of the retail consumer’s dollar. In contrast, in the trucking industry, online markets have benefited smaller trucking firms by allowing them access to technological tools similar to those available to larger operations, and thereby allowing them to operate more efficiently. The transformation of interfirm relations is poised to have greater economic and political impact than online sales in industries with more visible B2C interactions. Let us now briefly turn to each of the chapters in this section.
The Textile and Apparel Industries
Faced with complex issues surrounding touch and feel, Hammond and Kohler argue in “E-Commerce in the Textile and Apparel Industries” that the apparel industry faces unique e-commerce transformation challenges. Unlike books or compact disks, highly customized products manufactured and distributed through diverse channels constrain and enable the possibilities of web-based applications for the sector. The authors see the largest potential benefits in B2B advances that extend previous rounds of EDI efficiency gains.
The apparel industry comprises three major segments: fashion products, fashion basics, and basic goods. Fashion products have relatively high production costs and short product life cycles, while basic apparel competes on cost, allowing long lead times in order to reduce transportation costs. Technological advances, including new EDI systems, bar codes, and automated distribution systems, have produced lean retailing systems that allow the industry to meet rising demand for relatively low-cost fashion garments. Better information flows through the channels enhance forecasting and production planning by firms, allowing for more frequent replenishment orders and reducing inventory costs. These lean retailing advances help manufacturers maintain direct contact with customers through catalog systems and reduce fixed costs for retailers who are no longer forced to hold large inventories. The physical and customized nature of the products limits the capacity of firms to transform the front end of the industry. Touch and feel issues along with color concerns will prevent radical expansion of B2C. Return rates of between 12 and 35 percent demonstrate the barriers to impersonal apparel purchases. Traditional catalog companies have found adaptation to the new medium easiest, but few new entrants or traditional players have maximized the capabilities of the Internet with regard to the consumer interface. Some firms are experimenting with real-time fashion advice and online models. Attempting to push the industry toward indirect e-commerce, firms are exploring mass customization, which would allow customers to send in their measurements to company plants for made-to-order products. The success of these more radical innovations has yet to be tested, but they could transform the sector if effective. Perhaps an unexpected opportunity for the front end lies in software innovations that help overcome touch and feel issues. E-Color’s “colorific” software claims to enhance color accuracy and consistency. Similarly, Hewlett-Packard has developed zoom technology, adopted by many online retailers. These developments demonstrate that innovative business opportunities lie within e-commerce challenges. Hammond and Kohler, though, see the greatest chance for innovation in back-end reorganization. Web-based supply chain tools that build on earlier EDI links between manufacturers and retailers will produce large efficiency gains and may affect market structures. The world’s most powerful retailers have developed several exchange platforms, including WorldWide Retail Exchange and Global NetXchange, which are expected to radically improve retailer performance. Sears predicts that ordering costs
will fall from $100 an order with current EDI systems to $10 with web-based systems. These savings result from decreased communications and tracking costs, enhancing forecasting and information exchange within the supply chain. The unfolding nature of the story inhibits the authors’ ability to draw conclusions regarding the significance of these changes for the power dynamics across the sector. Hammond and Kohler conclude that the industry structure and nature of the product constrain the effects of e-commerce innovations. Although some firms like E-Color have opportunities, many retailers will confront difficulties in B2C markets because of the nature of the product. The authors see clear benefits to firms in the efficiency gains possible through B2B advances, extending earlier lean retailing successes.
The Food Industry
Stressing that technology has no simple linear impact on business but that business organization and technology interact, Frances and Garnsey adopt a systemic perspective on the food retail industry to track the impact of ubiquitous digital networks and tools on the sector in “Lean Information and the Role of the Internet in Food Retailing in the United Kingdom.” The authors analyze the impact of pre-Internet information and communications technologies on business organization and market structure and consider how Internet-based tools could further transform the sector. Their overall conclusion is that the Internet has so far not strengthened suppliers or consumers vis-à-vis the big retail corporations and that the latter continue to benefit from favorable information asymmetries resulting from their role as industry coordinators. Following substantial consolidation in the 1980s, the United Kingdom’s food retail industry has been oligopolistic. A handful of dominant retail corporations invested heavily in EDI systems to improve supply chain efficiency and in electronic point of sale (EPOS) systems to improve customer management. By the early to mid-1990s, U.K. food retailers had thus built electronic links to both ends of the value chain, and they used the power stemming from their role as information managers to coordinate the industry in a manner consistent with Alfred Chandler’s thesis of the “visible hand.”3
3. Alfred D. Chandler, The Visible Hand: The Managerial Revolution in American Business (Cambridge, Mass.: Belknap, 1977).
Exploiting their control over information and logistics, U.K. food retailers achieved many of the advantages of vertical integration without incurring any accompanying costs. At the same time, the market structure has remained stable, as big retailers competed largely on product differentiation rather than cost. Given that U.K. food retailers had accomplished much of what e-commerce promises—efficient supply chain management and the real-time integration of sales data into the procurement process—what is the likely impact of Internet-based tools on the sector? As in many other sectors, the Internet enables low-cost, nonproprietary EDI systems as well as the ability to broadcast data into a network rather than having to rely on point-to-point communication. While many observers have suggested that Internet EDI will lower barriers to entry, improve the ability of suppliers to self-organize, and thus diminish the market power of retailers, Frances and Garnsey submit that the Internet has so far not fundamentally changed market structure in the U.K. food industry. The increased availability of information, and in particular its availability in real time, has forced retailers to address the problem of information overload. To maximize the benefits of being the hub for information flows through the value chain while limiting the threat of information overload, U.K. retailers have opted for rationalization among suppliers and have increasingly attempted to standardize information. This strategy fits nicely with category management, a business innovation consistent with the emphasis on quality competition. Retailers pick brand name suppliers in particular to manage entire product categories, thus reducing confusing product complexity and increasing retailer-supplier coordination. On the distribution side, the U.K. food retail sector has not seen the emergence of new players entering the business of grocery delivery as exemplified by Webvan in the United States. Instead, the big retail corporations have capitalized on their superior information and logistics management to offer home delivery to customers ordering online from a reduced list of products. While incumbent retailers thus appear to have fit the Internet somewhat successfully into existing business models, and while some selected brand name suppliers seem to have benefited from the possibility of even closer cooperation with retailers, neither suppliers as a whole nor consumers have been able to change the marketplace substantially through the employment of Internet-based tools. Frances and Garnsey do, however, provide an
example of an exception: supplier self-organization among Irish mushroom growers. In sum, an originally oligopolistic market remains firmly in the hands of a few incumbents. Supplier self-organization is slow, and increases in information complexity may in fact strengthen the role of the retail corporation as market coordinator within the value chain. Retailers have capitalized on early successes with pre-Internet technologies to further streamline their supply chains and improve customer management. As barriers to entry remain high, significant change may have to originate from within the existing structures. In this light, Wal-Mart’s recent arrival through its purchase of the ASDA supermarket group—the United Kingdom’s third largest retailer—could be significant. By unleashing a round of hitherto unseen price competition, the acquisition could force incumbents to explore new technological possibilities more aggressively and lead to more substantial change in the sector. Kinsey examines the changes wrought by the Internet and information technology on the retail food industry in the United States in “Electronic Systems in the Food Industry: Entropy, Speed, and Sales.” She focuses on the retail supply and demand chain for food products, addressing the impact of both business-to-business and business-to-consumer e-commerce on the industry. While IT has thus far had the greatest impact on the organization and transparency of the value chain, Kinsey asserts that the recent acquisition of pure-play Internet food retailers by established brick and mortar companies bodes well for the future success of B2C grocery delivery business models. The retail food industry traditionally has been highly fragmented, with segmentation occurring both vertically and horizontally. Manufacturers and retailers have often been adversaries, with each attempting to extract the maximum profit margin from the final selling price of a product. Furthermore, because food manufacturers’ raw material is seasonal and/or perishable, holding inventory is risky. As a result, each member of the value chain can be expected to push the inventory burden to other nodes on the chain. With low inflation and the rising cost of storage space, such a supply push system has become a financial burden on the industry in general and retailers in particular. Among retailers, fierce price competition has also ensued as the share of consumers’ stomachs held by retail food stores has declined by an average of 0.1 percent a year in the twelve-year period between 1987 and 1999.
Into this environment came Wal-Mart, which attempted to subvert the high costs of holding inventory by establishing a more efficient product replenishment system. By the early 1990s, Wal-Mart had achieved exactly that; it built a tightly integrated EDI-based network that established a continuous communications loop between its stores and manufacturers. By allowing manufacturers access to real-time sales data, Wal-Mart was able to have products manufactured and delivered to stores according to consumer demand rather than manufacturers’ output capabilities, thereby allowing it to hold less inventory and increase profit margins. Wal-Mart’s competitors have turned to the Internet and information technology in an attempt to catch up. Established retailers responded to Wal-Mart’s challenge by institutionalizing efficient consumer response (ECR) initiatives and category management, whereby principal suppliers assume responsibility for maximizing profit across an entire product category. Often, however, this occurs at the expense of variety and consumer choice. The movement toward a common industry platform for tracking and disseminating POS data predictably has been met by resistance from manufacturers and retailers. Without a dominant supply chain member—such as Wal-Mart—capable of incentivizing other members, investment in costly EDI systems has been slow, as many firms are hesitant to adopt a system whose benefits may not be realized until network effects take hold. Perhaps more important, the pervasive culture of distrust among retailers and manufacturers has led to a resistance to data sharing, indicating that technology alone cannot improve efficiency if impeded by organizational obstacles. If back-end innovations were motivated by a concern for efficiency, then technological advances have driven the business-to-consumer side. Software developments enabling workable and unique electronic catalogs allowed entrants such as Peapod and Webvan to initially define the market for home-delivered groceries. Because these firms used the inventory and distribution systems of brick and mortar retailers, groceries could be sold only at a substantial markup, rendering this business model unprofitable and ultimately untenable. A second wave of B2C firms thus attempted to establish proprietary warehouse and logistics operations. However, the inability to recoup delivery costs has thus far precluded these businesses from capturing market share beyond a small group of consumers who value convenience significantly more than low price and variety. Motivated by the prospect of capturing substantial market share in this indirect e-commerce market, brick and mortar companies have begun to
acquire pure-play food retailers, resulting in a “bricks and clicks” model Kinsey considers viable given the established brand awareness and coordinated logistics operations traditional food retailers can bring to the market.
The Trucking Industry

The chapter by Nagarajan, Canessa, Mitchell, and White, “E-Commerce and Competitive Change in the Trucking Industry,” focuses on how the Internet is affecting the trucking industry both directly (through changes in information brokerage) and indirectly (through increased demand by customers and shippers for lower prices and increased service). The trucking industry traditionally has been both highly segmented and extremely fragmented, with carriers specializing according to the distance, weight, type, and geographic location of shipments. The industry has also been highly competitive, with operating ratios indicating that, on average, trucking firms make an operating profit of just over 5 cents on every operating revenue dollar. In such an environment, load-matching services provide valuable information that pairs available shipments with trucks that have available cargo space. Load matching thus increases trailer use, reduces empty back-hauls, and decreases downtime at warehouses. Far from simply allowing incumbent firms to offer more efficient service, the Internet has given rise to new web-based intermediaries and new and evolving roles for existing firms. For example, the authors argue that freight forwarders are threatened by new information aggregators such as Transplace.com and Freightquote.com. These firms have begun to use the Internet to manage the coordination of information and freight in order to increase load matching and obtain volume discounts. Information aggregators exemplify a new type of business model in the trucking industry in which the infomediary owns no assets and provides its service entirely over the web. The authors find that incumbent trucking firms are using the Internet either to expand their existing services incrementally or to redefine the boundaries of the services they offer. Because of the demand for integrated services, many trucking firms are attempting to become more efficient “vehicles of e-commerce” by increasing variety and customization and recasting themselves as one-stop shopping solutions for transportation and logistics. UPS, for example, has partnered with Nike to expedite Nike’s order-to-delivery process by stocking selected Nike products and fulfilling
customer orders hourly. Furthermore, UPS plays a direct role in the order process since a UPS call center handles Nike.com customer orders. Offering such integrated service has allowed UPS to become the dominant shipper of Internet-ordered goods while at the same time enabling its customers to achieve quick sales turnaround. Because many incumbent trucking firms do not possess the competency to offer integrated services, the authors identify an increasing number of acquisitions in the industry involving firms that provide complementary services. The rationale for complementary vertical combinations is twofold. First, acquisitions and mergers allow firms to quickly refine existing services and offer new services. Second, such combinations provide for greater transparency and coordination of activities than could be achieved through alliances. The authors believe that the Internet will have a profound impact on the trucking sector provided that U.S. antitrust policy does not prevent greater vertical integration in the industry. Mergers are necessary for firms to offer the business innovations and integrated logistics services needed in the broader economy. The authors argue that antitrust merger policy must develop the sophistication to recognize most trucking industry mergers as opportunities to improve and innovate, rather than to inhibit competition. Because of the increased costs of offering integrated services, the Internet has not reduced operating costs for firms in the trucking sector. Due to greater efficiency and coordination, however, productivity in the sector has increased. As a result of the multiplier effect, it is this increased productivity that will show up in the general economy: more efficient logistics services will make other sectors of the economy more efficient as well.
11
Electronic Systems in the Food Industry: Entropy, Speed, and Sales
A new technology of this magnitude and this pervasive nature is likely to touch on every institution of daily life.
Food industries make up 9 percent of the gross domestic product; 60 percent of that comes from wholesale and retail activity. The industry employs more than 14 percent of all workers; 71 percent of them are in wholesale and retail establishments. Retail food stores plus restaurants and bars sell more than $890 billion of food and drink each year. About half of these sales are in grocery stores; between 1 and 2 percent of grocery sales are over the Internet. This chapter concentrates on the supply and demand chain for the grocery half of the retail food industry, which engages in three major types of e-commerce: (1) Internet shopping; (2) business-to-business e-commerce for market discovery, price comparisons, and expedited orders and deliveries; and (3) business-to-business relationships that share information, reduce costs, and increase efficiencies in the vertical supply chain.
Food Supply Chain before E-Commerce

Before electronic information technology was available, the food supply chain was segmented into several separate business sectors, or silos. The agricultural production system was made up of farm input supply businesses, farm producers, and first-line handlers and shippers. The next sector in the supply chain was composed of first-line processors, often referred to as the agribusiness sector. Their role was, and is, to process raw commodities into ingredients that generally need further processing or cooking to make edible food for human beings, or to package fresh produce that is sold directly to stores. Next in line, food manufacturers turn ingredients into finished food products that typically are sold to wholesale warehouses or retailer-owned distribution centers. Finally, right before consumers, is the retail sector, made up of retail food (grocery) stores and food service establishments (restaurants and quick service places). Since food manufacturers’ raw material is seasonal and perishable, inventories tended to build up in the manufacturers’ warehouses. They offered wholesalers discounts to “forward buy.” If the wholesalers’ inventory built up, they, in turn, offered retailers a discount and other services to also “forward buy,” pushing the costs of holding inventory farther down the food chain. This system, known as a supply push system, relies heavily on well-known manufacturers’ brands and national advertising to move products. It worked well for wholesalers to hold excess inventory in an era of high inflation because they could buy at a discount and sell at inflated prices before the shelf life of the product expired. With low inflation and rising costs of space, labor, and management, bulky inventories, like the elephant in the python, became a financial burden. Traditionally, manufacturers and retailers have been adversaries, each trying to extract the maximum profit margin from the final sale price of the product. Wholesalers and retailers often charged, and manufacturers offered, slotting fees to introduce and handle new products. Retailers relied on manufacturers’ advertising and promotion campaigns to help sell the products in their stores. Retailers, whose business territory is considered to be three to five miles around their store, competed with each other on price and consumer services. They were secretive about their merchandising strategies and operations. Competition within the industry for a bigger share of consumers’ food dollars is fierce. In the twelve-year period between 1987 and 1999, inflation-adjusted sales in eating and drinking places grew an average of
2.2 percent a year, while similar sales in retail food stores decreased an average of 0.1 percent.1 Consumers can obtain food everywhere these days, including vending machines, drive-through places, and the Internet. Retailers soon realized that their biggest competitors were not other grocery stores but quick service restaurants, convenience stores, and other food take-out places. This has been an industry dominated by many small independent businesses at both ends. There were many farmers, many retailers, and relatively few manufacturers and distributors. Farmers and retailers each felt that the larger companies in the middle of the supply chain were profiting at their expense. The farmers’ defense was to organize into buying or selling cooperatives and lobby for government price supports and access to foreign markets. The retailers’ strategy was to buy low and sell as high as possible, consistent with growing customer sales in a highly competitive sector. There was little vertical integration or organization. Perhaps this is because the businesses at each end of the food chain need to be widely dispersed across the landscape. Agricultural production is tied to the appropriate climates and land bases, and grocery stores have to be located in each community where people live. Whatever the reason, these many independent business operators valued their independence and believed in their value to society. Into this fragmented and fiercely independent system came a single formidable competitor called Wal-Mart. It was able to lower retail prices by developing an integrated supply chain driven by the sharing of information about retail sales with suppliers in real time. Electronic technology made the collection, analysis, and transmission of data possible, but Wal-Mart developed and perfected the information system. It is fair to say that Wal-Mart is forcing the rest of the industry to adopt e-commerce for business practices and to build new relationships with suppliers. Ironically, food retailers have owned, and largely ignored, a key resource for improved efficiency in this supply chain since the mid-1970s. The data scanned every day at their checkout counters is the beginning of the information chain for business-to-business e-commerce relationships. Now they are learning to use that data. How it is changing the old supply push food system into a demand pull system is explored in this chapter after a brief discussion of the use of e-commerce to match buyers and sellers through the Internet.
1. The Food Institute (1999, pp. 71–72).
Consumers, the Internet, and Food

Internet ordering of food is a modern version of an old practice in retail food, a practice that was abandoned because it was too expensive. In the first half of the twentieth century, small-town or neighborhood grocery stores carried customers’ credit accounts, took phone orders, and delivered food to their homes. But with product proliferation, consumers needed to see new products in order to make choices. With automobiles, consumers became mobile, and it was less expensive for them to stop and shop as they traveled around the suburbs for other purposes. Consumers used their own time to provide “free labor” for shopping and delivering groceries to their own households. Grocery stores became cash-and-carry stores, then suburban supermarkets, and now supermarket chains and supercenters where customers often bag and haul their own groceries. With the Internet, customers can once again purchase on credit and have food delivered to their homes. Alternative models of shopping include phone and fax orders that might be delivered or simply bagged and held for the consumer to pick up. What has changed to make home-delivered groceries attractive once again? The advent of time-starved consumers and their access to the Internet make home delivery look like a solution to a modern consumer problem. Most surveys show that consumers do not like grocery shopping, considering it drudgery. This type of shopping is ripe for Internet competition. In contrast, weekend or occasional shopping is leisure.2 It is entertainment, fun, an adventure, and a social event. The Internet cannot compete with this activity by selling and delivering products to the household’s doorstep. Internet shopping for food represents enormous opportunities to take products and services to consumers in a most convenient way. Morgan Stanley Dean Witter of New York estimates that online grocery sales will double in 2001 to $2.5 billion and hit $17 billion by 2004. This is still only about 3 percent of national grocery sales.3 At least one-third of Americans were online in 2000. Of those, 77 percent believed that going online made their lives better. The average time an America Online user spends online each day went from 14 minutes to 55 minutes in a few years.4 Consumers are supposedly finding online services convenient. Ordering and shopping
2. Hughs and Ray (2000, p. 2). 3. Chris Knight, “When Food Goes Postal,” Wall Street Journal, November 3, 2000, p. W1. 4. Pittman (1999, p. 6).
(browsing) any time of the day or night, saving on transportation to a mall, and receiving products in one’s home or office is convenient, and convenience is a primary quest for time-pressed American consumers. The advent of e-commerce for home shopping increased competition for some traditional retail stores and offered a new form of business to others. Initially, Internet sellers partnered with bricks and mortar retailers, using their stores or distribution centers as a source of the food they picked and delivered to households. The partner stores may even have gained some business in these cases. But picking groceries from a retail store, with its own markup already on the product, only raised the cost of Internet selling—a cost that consumers were not willing to pay. Finding a way to lower the costs of commercial shopping (picking) and home delivery enough to compete with the free labor provided by consumers is a challenge to the profitability of Internet stores. Some Internet sellers established their own distribution centers where cost of goods sold is lower and groceries could be picked faster. But the capital investments in real estate, inventory, and equipment also added to their costs. So Internet food companies kept losing money even as they increased sales and revenue. The fixed and variable costs of procuring and servicing every new customer were far greater than the revenue generated. One reason it is so hard to make a profit in this business is that Internet food companies lack the power of volume buying that is enjoyed by large food retailers. Another is that consumers become dissatisfied with a merchandise mix that does not have enough variety, is not delivered, or is not delivered on time. Finally, a delivery charge of $10 an order covers only about 60 percent of the delivery costs. Financial analysis in April 2000 by the Boston Consulting Group estimated the costs of obtaining each new customer to be $82 for an Internet-only retailer, $31 for a store-based Internet retailer, and $12 for a traditional retailer during 1999.5 By the second quarter of 2000, the same consulting firm estimated that e-tailers spent only $40 to obtain each new customer.6 Table 11-1 shows an estimate of the percent of sales revenue devoted to covering costs and the gross and net profits of supermarkets, retail Internet companies that have their own distribution centers, and category killer Internet companies that buy directly from manufacturers.
5. Rebecca Quick, “New Study Finds Hope for Internet Retailers,” Wall Street Journal, April 18, 2000, p. A2. 6. Martha H. Hamilton, “Survey: E-tailers Struggle for Profit,” Minneapolis Star and Tribune, September 4, 2000, p. D1.
Table 11-1. Possible Operating Statements for Retailers
Percent

Cost or performance category        Supermarkets                  Internet sellers      Category killer
                                                                  with own depot        Internet sellers
Sales                               100                           100                   100
Cost of goods sold                  70.6                          76.0                  64.0
Gross margin                        29.4a                         24.9                  36.0
Store                               21.4 (includes 7–11
                                      for labor)                  ...                   ...
Order entry and customer service    ...                           ...                   1.5
Warehouse/depot                     2.5                           3.1                   2.7
Delivery                            1.0                           10.9                  14.8
Marketing and advertising           1.5                           3.1                   12.5
Operating profit                    2.1                           2.6                   3.0
Depreciation and taxes              1.1                           1.0                   0.6
Net profit                          1.0b                          1.6                   1.6

Source: Frank Dell II, “E-Commerce Economics” (www.ideabeat.com/exchange/cuttingedge/ce_article.html [April 2000]).
a. For Wal-Mart, gross margin is 35 percent.
b. For Wal-Mart, net profit is 3.0 percent.
This picture indicates that positive net profits can occur in Internet companies, assuming they have fewer assets and lower real estate and capital costs. The category killer can buy merchandise directly from manufacturers but spends considerably more on marketing and advertising. The delivery costs for Internet companies approximately offset the labor costs for stores.7 This has led to the traditional bricks and mortar companies buying up Internet food retailers. David Ignatius, a reporter for the Washington Post, calls it the “revenge of the dinosaurs.”8 On April 14, 2000, Ahold USA, part of Royal Ahold, the Dutch parent company of Giant Food, bought 51 percent of Peapod with the right to purchase up to 75 percent. On April 17, Safeway announced it would buy 50 percent of Groceryworks.com. In a model similar to large pharmaceutical companies purchasing patents for new drugs
7. Frank Dell II, “E-Commerce Economics,” CMC 2000, Ideabeat.com (ideabeat.com/Exchange/Cutting Edge/CE_article2.html). 8. David Ignatius, “Revenge of the Dinosaurs,” Washington Post, April 19, 2000, p. A27.
that come out of basic research laboratories in universities, large traditional food retailers are buying up Internet companies with proven software and logistics systems that were financed by venture capitalists and developed by technology wizards. The large bricks and mortar retailers can spread the costs over far higher volumes, and they have the brand recognition and consumers’ trust that Internet retailers do not enjoy. It is widely believed that business-to-consumer Internet sales will be dominated by “bricks and clicks” companies in the future, but finding the right mix of integration and separation will be a challenge to individual companies.9 One study found that half of the most-visited sites are online adjuncts to older established companies like J.C. Penney Company or Sears Roebuck and Company. Also, for every dollar spent online, more dollars were spent offline at the same company. Though none of the products listed was food, the ratio of offline dollars spent to online dollars ranged from 2.92 for clothing to 0.68 for books.10 This implies that much of online shopping is just that: shopping (searching) for items, descriptions, and prices that enable a consumer to go to an offline source and buy. If this is the best use of the Internet, then a bricks and clicks combination looks like a viable format. In a demand-driven system of food sales, long-term success will depend on consumers’ adoption of Internet shopping. In economic parlance, if it reduces their search costs and increases their utility (delivers superior quality products at lower time and money costs), it will be used. If it costs them time, hassle, choice, or variety, they will return to the bricks and mortar store or they will obtain food from a food service place. Economic theory of consumer behavior predicts that as household incomes rise and the value of time increases, consumers’ willingness to pay for the costs of food delivery will increase. But the value of the service and the quality of the food delivered via an Internet seller must exceed that which can be had by shopping for oneself if it is to be a sustainable business. In a study of why Internet shoppers come back, the number one reason was level and quality of customer service, followed by on-time delivery. Price was the last of eight other reasons.11 Another study found that consumers’ top reasons for shopping online were: (1) they can shop anytime; (2) it takes less time than going to stores; and (3) they dislike holiday crowds. Reasons they were disappointed with the online service were (1) holiday merchandise arrived late; (2) they had to pay extra to ensure
9. Gulati and Garino (2000). 10. Susan L. Hwang, “Clicks and Bricks,” Wall Street Journal, April 17, 2000, p. R8. 11. Timothy Hanrahan, “Price Isn’t Everything,” Wall Street Journal, July 12, 2000, p. R20.
on-time delivery; and (3) they received only part of an order.12 All of these reasons indicate the importance of saving consumers’ time and delivering quality products and services to retain Internet shoppers. The mix of products and services being offered by Internet retailers was examined in detail by Heim and by Heim and Sinha.13 The taxonomy developed for electronic retailing focuses on the digital content of the product-service offerings and the market segment it intends to serve. It sets a standard for how to examine this type of business, and it informs us about the state of the art in electronic food retailing. The studies find that even heavy users of dynamic digital content on their Internet sites do not deliver many value added services to their customers. Unlike some of the business-to-business Internet exchanges, these retailers do not aggregate diverse suppliers of food and make them available to consumers, at least not more so than a typical grocery store. If anything, the variety available is less than in a bricks and mortar store. In 2000 most of the business-to-consumer food companies went out of business or were taken over by bricks and mortar retail firms. Among them were the oldest and largest of the dot-com food sellers: Peapod, Streamline, Priceline, Groceryworks, and Shoplink. One could argue that their basic business plans were flawed, they did not understand the perils of delivering perishable products, or they did not understand consumers’ shopping behavior and motivation. Household economics and the value of time would constitute a useful framework to study consumers’ likelihood of adopting Internet shopping and the mode of delivery they choose. Outside of taste, the biggest driving force in consumers’ food acquisition is arguably convenience, a need that grocery delivery companies attempt to fill. Though it seemed like the right idea, pieces of the convenience factor often overlooked were the ease of using the shopping software and the ease of receiving delivery. Both require a change in consumers’ shopping behavior, and the change itself is slow to develop.
Market Discovery, Business-to-Business E-Commerce

The second growing use of e-commerce is for market aggregation and market discovery between buyers and sellers in the supply chain. These “online
12. Rebecca Quick, “The Lessons Learned,” Wall Street Journal, April 17, 2000, p. R6. 13. Heim (1999); Heim and Sinha (2001).
market makers” are fundamentally different from the retailer and supplier relationships that are discussed in the next section. There, an intimate business relationship develops using information technology to cooperatively manage the inventory flow through the store. The relationship is contractual, renewable, and expected to last through the next selling period. Online marketplaces facilitate shopping by buyers at all stages of the supply chain. Kaplan and Sawhney call them e-hubs.14 They aggregate a large number of small suppliers (forward aggregators) or a large number of diverse buyers (reverse aggregators) for the purpose of matching buyers with sellers and facilitating their trading goods and services for money. These market makers rarely own any of the merchandise that is traded through them. They simply help buyers find the best price or value available and help sellers identify buyers. Since there are a large number of diverse producers at one end of the food supply chain and a large number of diverse retailers at the other end, this model fits the food industry very well. Farmers may shop for seeds and fertilizer online. Farm input suppliers may have their own web page to sell online, or dot-com companies that specialize in selling farm inputs will arise and act like a supermarket for these types of products. In fact, many agricultural dot-com companies have sprung up. Recognizing them as competition, four large companies (Cenex Harvest States Cooperatives, Cargill, Archer Daniels Midland, and DuPont) announced an alliance to create Rooster.com through which to market their products to farmers and others. The vulnerable businesses from this type of Internet selling are the local farm supply store, seed and feed mill, and smaller start-up companies trying to sell supplies online. Cargill has been called the “most diverse and comprehensive e-commerce player in corporate America.”15 It is involved in e-commerce businesses all along the food supply chain, including EFS Network for food service delivery. A company called FreeMarkets conducts auctions online for businesses to find and purchase parts, ingredients, and equipment. Numerous alliances of companies seeking to sell their products to other businesses or consumers are developing in order to take advantage of economies of size and skill in setting up such sites. Many have been announced; few were operational at the end of 2000. One example where such a market maker worked well for a consumer product was in the sale of Quisp, an almost-discontinued cereal from Quaker
14. Kaplan and Sawhney (2000). 15. Lee Egertrom, “Cargill Is Betting Big on E-Commerce,” St. Paul Pioneer Press, September 24, 2000, p. D1.
Oats Company. Baby boomers who could not find this old favorite cereal in their local stores started buying it on Quisp.com and selling it for extraordinary prices on Internet auction sites.16 Consumers discovered a product on the Internet, and its sale increased beyond all expectations of the manufacturer. Market discovery is one of the fondest dreams of Internet companies, as witnessed by their aggressive advertising budgets. An example of a reverse aggregator with a reverse auction market for consumers was Priceline.com, which brought together consumers and products in cooperating retail grocery stores at a lower than average price. Consumers bid online for the lowest price the exchange would accept for given grocery items. Once their bids were accepted, consumers paid Priceline with a credit card, received a printout of the purchases, took this to a participating local retail food store, and picked up their items. The retailer was paid by Priceline. This was clearly geared toward the low-price shopper who had a credit card and the time to shop for an item twice, once online and once in a store. Like many new e-hubs, it did not last long; apparently, it did not add value to the shopping experience of consumers or bring extra revenue to stores, and it stopped operation in November 2000.
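A minimal sketch of the acceptance rule behind a reverse auction of the kind just described is given below. It is an illustration only: the item names, bid amounts, and retailer thresholds are invented, and the actual service's matching rules were not public. The idea is simply that each participating retailer privately registers the lowest price it will accept for an item, and a consumer's bid is routed to a retailer whose threshold it meets.

```python
# Hypothetical reverse-auction matcher in the spirit of the grocery bidding
# service described above. All names and numbers are invented for illustration.
from typing import Optional

# Lowest price (dollars) each participating retailer will accept, per item.
ACCEPTANCE_THRESHOLDS = {
    "peanut butter, 18 oz": {"Retailer A": 1.89, "Retailer B": 1.75},
    "orange juice, 64 oz":  {"Retailer A": 2.10, "Retailer B": 2.25},
}

def match_bid(item: str, bid: float) -> Optional[str]:
    """Return the retailer willing to sell at the lowest threshold the
    consumer's bid meets, or None if the bid is rejected."""
    offers = ACCEPTANCE_THRESHOLDS.get(item, {})
    acceptable = {r: t for r, t in offers.items() if bid >= t}
    if not acceptable:
        return None
    return min(acceptable, key=acceptable.get)

if __name__ == "__main__":
    for item, bid in [("peanut butter, 18 oz", 1.80), ("orange juice, 64 oz", 2.00)]:
        winner = match_bid(item, bid)
        if winner:
            print(f"Bid ${bid:.2f} for {item}: accepted, pick up at {winner}")
        else:
            print(f"Bid ${bid:.2f} for {item}: rejected")
```

The accepted consumer then pays the exchange and collects the items in the store, as in the Priceline model described above; the sketch omits payment and settlement entirely.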
Market Coordination, Business-to-Business E-Commerce

Business-to-business e-commerce is nothing less than a new “way of doing business.” It tends to follow a reverse product cycle, where process efficiency gains come first, followed by quality improvements to existing products and services, and then the creation of new services and products.17 It is a platform from which new institutions and products emerge. The food industry has traditionally created new (branded) food products within the research and development divisions of food manufacturers, tested their sales in selected markets, advertised them heavily, and offered them to retailers at deep discounts or with slotting fees. It is a system with high transaction costs and risks of rejection by consumers. With e-commerce systems, consumers’ interaction with the sellers helps create the products.
16. Jonathan Eig, “How the Web Rescued Quisp from a Cereal Killing,” Wall Street Journal, April 24, 2000, p. B1. 17. OECD (1999).
Business-to-business e-commerce, as it is being adopted by retail food stores and their suppliers, has focused mostly on ways to save labor costs and speed up ordering, delivery, and invoicing. The goal is to move products through the system as fast as possible. The latest innovations have occurred because of new electronic technology, information management systems, and new competition. A hypothesis explored in this chapter is that the competition for a larger share of the consumer dollar has forced food stores and their suppliers (wholesalers and manufacturers) to learn how to exploit the power of information available from point-of-sale (POS) scanner data and reorganize the way they do business. They are behind other industries in adopting continuous replenishment of inventory. The automobile industry adopted just-in-time delivery channels two decades ago, and general merchandise and clothing retailers adopted “quick response” in the 1980s. Even though food retailers were early leaders in the development and design of universal product codes (bar codes), they are among the last to realize the payoff from their universal adoption and use.18 A major motivation for learning how to use the information and information technologies for business-to-business transactions is the example set by the first mover, the early adopter, Wal-Mart. By the early 1990s Wal-Mart and some of its suppliers had designed an information logistics system to harness the POS data. With compatible computer systems and the willingness to share data with suppliers, the information about what was moving over a scanner in a store could be transmitted directly to Wal-Mart’s own distribution centers, aggregated, and sent on to suppliers and manufacturers. Manufacturers could, in turn, adjust their supplies (or production lines) according to consumer demand aggregated from each store. Theoretically, by making information about sales at all retail stores available to both the retailer and its suppliers simultaneously, a continuous loop was created whereby information about sales flowed in one direction and products flowed back, just in time to match the retail demand. The concept of sharing information about sales with vendors and developing a continuous and coordinated flow of products was introduced to the rest of the retail food industry under the banner of Efficient Consumer Response (ECR) in 1992. It was institutionalized by a coalition of trade associations such as the Food Marketing Institute and the Grocery Manufacturers of America, food manufacturers and suppliers such as Procter and
18. Walsh (1993); Kinsey and Ashman (2001).
Gamble, and a few big retail grocery chains such as Kroger Co. It had little to do with the consumer, except that its goal was to track POS purchases and share that data with suppliers so they could tailor the delivery of goods to match the volume being sold. The stated goal of ECR was to have “a responsive, consumer driven system in which distributors and suppliers work together as business allies to maximize consumer satisfaction and minimize cost. Accurate information and high quality products flow through a paperless system between manufacturing line and check-out counter with minimum degradation or interruption both within and between trading partners.” Fully implemented, it was projected to take over $30 billion out of the distribution costs.19 The real goal of ECR was for each food store and food chain to behave like Wal-Mart: to implement electronic data interchange (EDI) to order goods and slim down the offerings in each category to streamline delivery and the costs associated therewith. This led to “category management,” which has had considerable success even though it may conflict with a goal of providing variety and service to consumers. In 1998, 24 percent of stores responding to a survey by the Food Marketing Institute reported using EDI with at least some suppliers. Of those who did, 53 percent used a third-party, value added network (VAN). This is a network that connects different members of a retailer’s supply chain using web-type technologies and interfaces. Only 17 percent were using the Internet, and the rest used both.20 A major stumbling block to adopting management practices advocated under the umbrella of ECR is that EDI requires compatible computer systems that are expensive to set up and operate. ECR suffered from a lack of what economists call “network effects.”21 As the number of users of a network grows, the benefits to each user grow above the price that user paid for joining the network. In economists’ terms, when the social benefits rise above the price paid by users, there exists a classic case of positive externalities, and the network begins to look like a public good. Like the old-fashioned phone lines, the network could provide the compatible, ever-ready, and seamless communications between retailers and manufacturers or other suppliers. In 1992 establishing a set of individual, workable communications networks with computers at all 130,000 retail food stores that could communicate
19. Kurt Salmon Associates (1993). 20. Food Marketing Institute (1999). 21. Belleflamme (1998); Katz and Shapiro (1994).
Table 11-2. Adoption of ECR Practices and Productivity in Pilot Sample

Level of      Weekly sales/square     Annual sales        Annual             Sales per labor
adoption      foot (dollars)          growth (percent)    inventory turns    hour (dollars)
High          6.88                    11.9                20.0               98
Middle        6.15                    2.6                 18.6               87
Low           5.27                    2.7                 14.4               89

Source: King (1999).
with the computers of over 9,000 suppliers was asking more than the industry could deliver. The technical problems of incompatibility and a cultural resistance to sharing store-level data with suppliers led to entropy. Adoption was slow. As expected, the largest chains adopted electronic relationships first. Large chains had their own distribution centers, so they did not have to communicate with a wholesaler standing between them and a manufacturer. Many of the large manufacturers were already part of an electronic network for ordering and replenishment with Wal-Mart, so they were ready to operate in this type of business environment.
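The scale problem just described can be made concrete with a back-of-the-envelope calculation. The sketch below uses the store and supplier counts cited above; the assumption that every store would maintain its own link to every supplier is a deliberate worst case chosen to illustrate why a shared platform changes the economics, which is the network effect at work.

```python
# Back-of-the-envelope comparison of private point-to-point EDI links versus
# a single shared network. Store and supplier counts come from the text; the
# "every store links to every supplier" case is a worst-case illustration.

STORES = 130_000
SUPPLIERS = 9_000

# Worst case: each retailer-supplier pair maintains its own EDI connection.
point_to_point_links = STORES * SUPPLIERS

# Shared hub: each party maintains one connection to the common platform.
shared_network_links = STORES + SUPPLIERS

print(f"Point-to-point links needed: {point_to_point_links:,}")   # 1,170,000,000
print(f"Connections to a shared hub: {shared_network_links:,}")   # 139,000
print(f"Reduction factor: {point_to_point_links / shared_network_links:,.0f}x")
```

Even if only a small fraction of those pairwise links were ever needed, the multiplicative growth of private connections against the additive growth of a shared hub illustrates why a common, open platform became the industry's eventual answer.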
Evidence of ECR Benefits

Data collected in 1998–99 from 100 stores in the Supermarket Panel at the Retail Food Industry Center at the University of Minnesota shows that those stores that had implemented more of the data management and coordination activities associated with ECR are larger, have greater productivity, and have more sales (see table 11-2). In the 2000 Supermarket Panel, with 344 representative stores from across the nation, the highest performers were stores that ranked highest on a supply chain index that measures the percentage of electronic technologies adopted along with the complementary new management practices (see table 11-3). Stores in the second quartile of the index had the highest annual growth rate in sales. With one year of data each presented in table 11-2 and table 11-3, one cannot say which came first, the adoption of information technology or a well-arranged and progressive organization, but they are highly correlated. There are two components to the supply chain index: technology and relationships. Results show that the single-store retailers had adopted the fewest technology practices, and the self-distributing retailers had adopted the most. Self-distributing chains were only slightly ahead of the multistore, non-self-distributing chains
Table 11-3. Adoption of Information Technology and Management Practices and Productivity Measures in National Sample

Level of       Weekly sales/square     Annual sales        Annual             Sales per labor
adoption       foot (dollars)          growth (percent)    inventory turns    hour (dollars)
Highest        7.80                    1.8                 20                 114
Middle high    8.03                    2.1                 17.5               104
Middle low     6.71                    2.6                 17                 96
Lowest         6.35                    1.3                 16                 96

Source: King (2000).
on the building of relationships with suppliers.22 These findings are consistent with positive “network effects”: using information technology both allows and demands larger organizations, and effective networks lower costs and increase logistics efficiencies.
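A minimal sketch of how a two-component adoption index of this kind might be scored is given below. The practice lists, the equal weighting of the technology and relationship components, and the example store are hypothetical illustrations; the panel's actual survey instrument and scoring are described in King (2000) and are not reproduced here.

```python
# Hypothetical scoring of a two-component supply chain index: the share of
# listed technologies adopted and the share of listed supplier-relationship
# practices adopted, averaged with equal weights. Lists and weights are
# illustrative assumptions, not the panel's actual method.

TECHNOLOGY_PRACTICES = [
    "EDI ordering", "scan-based POS data capture",
    "computer-assisted ordering", "internet link to suppliers",
]
RELATIONSHIP_PRACTICES = [
    "category management with lead supplier", "shared sales forecasts",
    "vendor-managed inventory", "continuous replenishment agreement",
]

def adoption_share(adopted, catalog):
    """Fraction of the cataloged practices this store reports using."""
    return sum(1 for p in catalog if p in adopted) / len(catalog)

def supply_chain_index(adopted_practices):
    tech = adoption_share(adopted_practices, TECHNOLOGY_PRACTICES)
    rel = adoption_share(adopted_practices, RELATIONSHIP_PRACTICES)
    return 0.5 * tech + 0.5 * rel  # equal weights, by assumption

if __name__ == "__main__":
    store = {"EDI ordering", "scan-based POS data capture",
             "category management with lead supplier"}
    print(f"Supply chain index: {supply_chain_index(store):.2f}")  # 0.38
```

Ranking stores by such an index and comparing quartiles on sales per square foot, inventory turns, and labor productivity is the kind of analysis summarized in tables 11-2 and 11-3.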
Large Chains Lead the Technology

Smaller retail stores were simply not willing or able to participate in electronic data interchange to the extent necessary to achieve an efficient response relationship with suppliers. But apparently the largest chains, already in supplier networks, believed that there were industrywide economies of scale to be gained if more retailers and suppliers could be convinced to join. In 1999 several large retailers, including H. E. Butt, Kroger Co., and Wal-Mart, went to the Uniform Code Council (UCC), which had originally negotiated the design of the bar code, and asked it to help design an Internet platform that would allow virtually any retail store to communicate directly with its suppliers without having to invest in special hardware and software. The UCC responded with UCCNet, a wholly owned subsidiary of the nonprofit UCC. It is designed as an open-format, electronic Internet platform for retailers to use to build a business-to-business relationship with their suppliers. It was launched in July 2000, with seventy-five companies using the industry-designed, standards-based foundation for secure electronic commerce.23 Although it is in its infancy,
22. King (1999); King, Wolfson, and Seltzer (2000). 23. David Ghitelman, “UCCNet: Quick Adoption for Standards for B2B Likely,” Supermarket News, July 31, 2000, pp. 1, 44.
UCCNet provides access to e-commerce to small and large companies alike through a single common data format, eXtensible Markup Language (XML). Belleflamme points out that in the initial stages of such network enterprises, large firms join in order to lower their costs of goods sold, ensure reliable and steady delivery, and manage inventory.24 As more firms join and the network becomes more efficient, everyone’s costs decline—and then firms begin to compete with each other again. At this point they begin to differentiate their products and search for more profitable market niches. UCCNet facilitates vertical business-to-business e-commerce, the type that builds an intimate relationship between retailers and manufacturers. The UCC does not see horizontal networks like GlobalNetXchange, a proprietary supply network announced by Sears and Carrefour, or the WorldWide Retail Exchange, a cooperative started by an alliance of Kmart, Target, Tesco, Marks and Spencer, Albertsons, Safeway, and others, as competition but as users of UCCNet.25 Wal-Mart originally declared that it would not join with any of these supply chain alliances since it has its own system that has been in place since 1991; in 2000 it already had 9,000 vendors participating with it in business-to-business e-commerce and supply chain management.26 Collaborative Planning, Forecasting, and Replenishment (CPFR), again pioneered by Wal-Mart, takes the 1992 ECR vision and implements it through the use of better information technology that allows a vertical exchange of information between retailers and manufacturers. Sharing POS information with a food manufacturer on a daily basis provides the basic data for this system. With the historical record of consumer sales, the manufacturer and the retailer each forecast sales over some future time period, share their forecasts, and negotiate anticipated future sales if necessary. Manufacturers agree to deliver merchandise on a prearranged schedule and manage the inventory of their products in each store. This system obviously demands accurate scanning information and some Internet interface over which data can travel securely. It also demands a willingness to share data and the responsibility for the products on the shelves. An Internet connection for ordering, invoicing, and communicating between retailers and suppliers does not necessarily imply a full-blown CPFR program. But using an
24. Belleflamme (1998). 25. Calmetta Y. Coleman, “Big Retailers in U.S., Europe Form Exchange,” Wall Street Journal, April 3, 2000, p. B19. 26. Clare Ansberry, “Let’s Build an Online Supply Network,” Wall Street Journal, April 17, 2000, p. B1.
electronic network is a necessary step to establishing a CPFR relationship with suppliers. “The whole intent of CPFR is to establish trust between retailers and manufacturers.”27 Wal-Mart is using CPFR with over 200 of its key suppliers.28 Shulman suggests that this system is a B2B2C system since the information truly starts with consumers’ purchases and responds to those purchases with matching replenishment.29 It is a system in which manufacturers produce to meet consumer demand, not to meet the capacity of their plants. It is truly a new way of thinking and doing business all up and down the supply chain. With CPFR each party faces less risk of excess inventory or stock-out, and sales tend to increase.30 In 1999, 26 percent of retailers and 43.5 percent of wholesalers were planning to try a CPFR system.31 Forrester Research projects that some form of business-to-business food commerce will grow from $22.5 billion to $211.1 billion by 2004 and comprise 12 percent of the value of transactions.32 Lest this seem easy, a caution from Andrew Grove (CEO of Intel) suggests that we should be careful what we ask for. Business-to-business e-commerce network systems involve nothing short of reengineering the business process, changing the culture, and integrating data from one place to another—from a retailer’s sales floor to a decision system that involves a manufacturer, somebody’s warehouse, and a transportation system—and being able to evaluate and change options on the fly.33 He further says that if the markets become as efficient as planned, it will be a very hard way of life. There will not be as many profits to go around, and managers will have to find new ways to make money in a super competitive world. This is consistent with the theory of network creation and network effects.34 As everyone’s costs decline in a large efficient network, competition will increase and new networks will arise to define unique niche markets.
27. Alan Robinson, “The Circle Is Broken,” Food Logistics, June 15, 1999, p. 43. 28. International Grocery Distribution (1999, p. 95). 29. Richard Shulman, “What B2B Can Learn from Amazon,” Supermarket Business, June 15, 2000, pp. 41–42. 30. Ronald Margulis, “One More Acronym: CPFR Takes a Quick Response to Next Level,” ID, August 1999, p. 33. 31. Adam Blair, “Supermarket Technology: The Big Picture,” Supermarket News, February 23, 1999, pp. 1A–19A. 32. Douglas A. Blackmon, “Where the Money Is,” Wall Street Journal, April 17, 2000, p. R30. 33. Andrew Grove, “Inflection Point,” interview by David Hamilton, Wall Street Journal, April 17, 2000, p. R48. 34. Belleflamme (1998).
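The collaboration step at the heart of CPFR, as just described, can be sketched in a few lines. The sales history, the simple forecasting rules, and the 15 percent exception tolerance below are illustrative assumptions, not any firm's actual implementation; real CPFR programs run on standardized message sets rather than ad hoc scripts. The point is only the shape of the exchange: each party forecasts from the shared POS history, the forecasts are compared, and an item is flagged for negotiation only when the two forecasts diverge beyond the agreed tolerance.

```python
# Sketch of the CPFR forecast-sharing and exception step. The sales history,
# the forecasting rules, and the tolerance are illustrative assumptions.

weekly_pos_units = [412, 398, 430, 445, 421, 460]  # shared POS history for one item

def retailer_forecast(history):
    # Retailer: average of the last four weeks of store-level sales.
    return sum(history[-4:]) / 4

def manufacturer_forecast(history):
    # Manufacturer: average of the full history plus an assumed 5% promotional lift.
    return (sum(history) / len(history)) * 1.05

EXCEPTION_TOLERANCE = 0.15  # forecasts differing by more than 15% trigger negotiation

r_fc = retailer_forecast(weekly_pos_units)
m_fc = manufacturer_forecast(weekly_pos_units)
gap = abs(r_fc - m_fc) / ((r_fc + m_fc) / 2)

if gap > EXCEPTION_TOLERANCE:
    print(f"Exception: forecasts differ by {gap:.0%}; negotiate before committing.")
else:
    # Agreement: commit the average as the next replenishment quantity.
    order_quantity = round((r_fc + m_fc) / 2)
    print(f"Forecasts within tolerance; replenish {order_quantity} units.")
```

The same shared history drives both forecasts, which is why the approach depends so heavily on the willingness to share POS data and on trust between the trading partners.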
Retailers’ Response to E-Commerce

As retailers using business-to-business e-commerce observed the economies of scale and scope that were realized by large companies, they began to merge in unprecedented numbers. In August 1998 Albertsons acquired American Stores, becoming the first nationwide grocery chain. The West Coast had already been consolidated under the company Fred Meyer, and in 1999 Kroger bought Fred Meyer to become the largest nationwide grocery chain. The rationale for these mergers was the increased buying power that larger chains would have with manufacturers. This could drive down the cost of goods sold, allowing competition with Wal-Mart. The national concentration ratio of the top four retail food chains had been remarkably stable for decades at about 16 percent of sales; in 1999 it rose to 34 percent. The top five made up over 40 percent of U.S. retail food sales in 2000. The concentration ratio at retail is rising, and in individual cities it is not unusual to have ratios of 70 to 85 percent. With bricks and mortar retailers buying up Internet companies, concentration can only increase. In the end, whether this will lead to more efficient supply chain management and lower cost of food for retailers and consumers remains to be seen. Many studies have shown that when concentration grows, the average retail price of food grows also.35 Translating efficient supply chains into price reductions for consumers requires sufficient competition. Factors that can inhibit competition in an e-commerce era are sector-specific transaction structures (CPFR-type Internet alliances or closed horizontal exchanges), the advantages gained by the first-movers, and differences in the regulatory environment between regions or cities.36 Widespread disintermediation is occurring as business-to-business e-commerce allows consumers and retailers to communicate directly with manufacturers. The most vulnerable types of firms are the wholesalers and brokers. They too have been consolidating, disappearing, or changing their business portfolios. Traditional third-party wholesalers’ share of the intermediary business has dropped from around 42 percent in 1990 to 38 percent in 2000. Wholesale firms such as Fleming, Nash Finch, and Supervalu provide wholesale services to thousands of smaller independent
35. Kinsey (1998). 36. OECD (1999).
stores scattered all over the country. In order to survive an increase in manufacturers’ direct store delivery (DSD) activity (28 percent of the intermediary sales in 2000) and the growth in the portion of the market taken by ever-growing self-distributing chains’ distribution centers (34 percent of intermediary sales in 2000), wholesalers have merged and begun to acquire more retail stores as part of their business.37 Some of the largest, such as Supervalu, consider themselves to be “virtual chains.” Some stores are company-owned, some are franchised, and some are simply supplied and serviced in a traditional fashion. These large wholesale suppliers have established their own e-commerce networks for ordering and category management—with limited success. They face the same constraints that prevented electronic data interchange between retailers and their suppliers in the earlier days of ECR. Small independent stores have neither the capital nor the incentive to invest in electronic data systems, and easy access networks are not yet available to them. The economics of self-distributing retail chains’ distribution centers is convincing. A study by Kochersperger shows that the labor expense at a distribution center is 1.73 percent of sales compared to 2.17 percent in a third-party warehouse.38 Total expenses at the third-party warehouse are 3.22 percent of sales compared to 2.5 percent in a distribution center of a self-distributing chain. As retailers continue to merge, more and more of the intermediaries will be distribution centers belonging to retail food chains. They can deal directly with manufacturers on behalf of all their stores and can manage inventory more efficiently. The manufacturers that engage in direct store delivery are some of the most active in developing CPFR relationships with retailers. When there is an agreement with a manufacturer to replenish shelves according to the sales demand, many manufacturers choose to deliver it themselves, stock the shelves, manage the inventory, and (in some cases) retain ownership of the product until it is sold, that is, scanned upon purchase. In this case, the retailer has no cash tied up in inventory, owns fewer assets, saves on labor, and has more cash to use between the time the item is sold and the manufacturers’ invoice must be paid. Salty snacks, beverages, and some bakery products are most likely to be delivered directly and handled on a “scan-based trading” basis. It could be argued that this is one way for manufacturers to regain control over their products and retrieve some of the bargaining
37. Kochersperger (1998, p. 24). 38. Kochersperger (1998, p. 24).
power that flowed to retailers as they held the key to systemwide inventory management—namely, the scanner data. Compiling and keeping POS data for each retail store allows customer loyalty (frequent shopper) programs to be operated; currently, about half of retail stores do so.39 Loyalty programs are based on analyzing scanner data for shopping and sales patterns and rewarding customers in some way for their loyalty to stores. Point of sale data is also used to implement category management whereby the slowest moving items can be eliminated or their facings reduced, especially if they are not in demand by loyal customers. In order to receive the benefits of a loyalty program, consumers must divulge some information about themselves and their household. Some do this gladly, others consider it a violation of privacy. It is but one example of how e-commerce demands both the sharing of information in order to be effective and a reevaluation of whom the information belongs to.
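A minimal sketch of the kind of scanner-data analysis that feeds such category management decisions is shown below. The transaction records, item names, and loyalty weighting are invented for illustration: items are ranked by movement, with extra weight given to purchases made on an identified loyalty card, and the slowest movers become candidates for reduced facings or delisting.

```python
# Illustrative analysis of POS scanner data for category management.
# Transactions, item names, and the loyalty weighting are hypothetical.
from collections import defaultdict

# (item, units, loyalty_card_id or None) -- one record per scanned line item
transactions = [
    ("cola 12-pack", 3, "L001"), ("cola 12-pack", 2, None),
    ("store-brand cola", 1, None), ("quinoa salad", 1, "L002"),
    ("cola 12-pack", 1, "L002"), ("quinoa salad", 2, "L002"),
    ("store-brand cola", 1, None),
]

LOYALTY_WEIGHT = 1.5  # assumed extra weight for purchases by loyal customers

scores = defaultdict(float)
for item, units, card in transactions:
    scores[item] += units * (LOYALTY_WEIGHT if card else 1.0)

# Rank items; those at the bottom are candidates for fewer facings.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for item, score in ranked:
    print(f"{item:18s} weighted movement = {score:.1f}")
```

In practice the same weighted-movement logic would run over millions of scanned records per week, which is why the value of the data depends on who controls it and with whom it is shared.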
E-Commerce: Research Challenges

E-commerce is a relatively new phenomenon. Use of the Internet penetrated over 25 percent of U.S. households in fewer than five years; the personal computer took fifteen years.40 Ways to study the rapidly evolving products and services that emanate from the use of the Internet are not immediately obvious because, as in the reverse product cycle where the process comes before the product, the business models, start-ups, failures, and successes have to occur before academics can know how to think about them. One economic model that fits the processes made possible by the Internet is the economics of public goods. Here the positive externalities of large numbers of sellers and buyers cooperating and interacting with a standard protocol produce large network effects. It is consistent with the motivations of business-to-business e-commerce, either to aggregate and market goods or to build vertical alliances. The formation of vertical alliances where capital investments are expensive (developing hardware and software to collect and use very large sets of data), highly specific to the industry (food sales in grocery stores where not all
39. Ashman (2001). 40. Michael Cox, data collected for the Federal Reserve Bank of Dallas (www.dallasfed.org/ [2000]), cited in “The Economy: A Higher Safe Speed Limit,” Business Week, April 10, 2000, p. 242.
goods have bar codes), and the level of uncertainty about the quality of a product is high (seasonality of fresh produce, inability to judge quality by inspection, and a huge number of small, diverse, and uncoordinated sellers) might well be explained by the economics of transaction costs.41 The development of CPFR, scan-based trading, and other processes that demand the sharing of data between levels of the supply chain can be examined with transaction cost theories. Sharing sales data reduces the asymmetry of information and moral hazard (incentive to cheat), building a higher level of trust and reducing transaction costs. Much has been written and spoken about a shifting of power within a food supply chain as electronic data interchange and its successor, e-commerce, develop.42 The general consensus is that with the advent of scanner data and customer loyalty programs, combined with retail mergers and consolidation, the bargaining power of retailers increased. This outcome is supported by studies using transaction cost theories; lower search costs for buyers shift the power in their direction.43 Many retailers fear that sharing POS data with suppliers (manufacturers or third-party wholesalers) will diminish their negotiating power in the supply chain and shift it toward manufacturers. Wholesalers have the same concerns, as they can be bypassed altogether by private EDI between retailers and manufacturers or the sharing of information through Internet exchanges. The fact that EDI is being used for ordering and invoicing, monitoring product movement, maintaining item price changes, and announcing promotions, even though full-blown sharing of scanner data is extremely limited to date, indicates that there must be some mutual benefits to this cooperative arrangement. Economic theories of principal-agent behavior might be used to explain the sharing of risk between retailers and manufacturers. Principal-agent models examine how the principal party (the one with the power—say, manufacturers) induces the agents they sell to (say, retailers or wholesalers) to behave in a way that maximizes the returns to the principal. Ownership patterns and who has the right to benefit from ownership is the purview of property rights theory. It might be used to assess whether integration of two or more segments under common ownership will improve systemwide performance.44 Owning the right to
information (or other assets) usually implies the right to benefit from its sale to another party. When information is shared and used by parties on both ends of the supply chain, which party has the right (or the capacity) to benefit, and how those benefits are shared or redistributed, could be critical to the cooperative sharing of information. Nakayama, using another approach called interorganizational relations (IOR) and rational exchange theory, found that the sharing of data by wholesalers with manufacturers (giving up some of their property rights to the information) led to a perception that manufacturers were gaining power.45 However, in order to retain whatever advantages the wholesalers’ data offered them, the manufacturers were found to provide incentives to wholesalers, specifically price breaks, sufficient to assuage their fears of losing power. In the midst of mixed evidence about whether the sharing of information shifts power in the food supply chain, this behavior implies that if sharing product movement information shifts property rights toward manufacturers, they respond with offsetting incentives to wholesalers and retailers in the form of price bargains and a variety of valuable services. This scenario is not inconsistent with the arguments that vertical consolidation and cooperation lower transaction costs and that consolidation at the retail end results in greater bargaining power and lower prices. Perhaps retailers and wholesalers who share information up the supply chain retain bargaining power, not so much because they have superior knowledge based on POS information, but because they can withhold this information from manufacturers if they do not receive favorable prices and services. In the end, the result is the same. With a more cooperative and efficient system, distribution costs and food prices decline. E-commerce networks are developing outside of the trading partners themselves, and they create value that can be spread across many other firms by way of standardized protocols for communication. The very purpose of UCCNet and other e-commerce facilitators in the food supply chain is to build capabilities that can foster cooperation and trust. Trust dampens moral hazard and opportunistic behavior that creates barriers to sharing of data and cooperative planning for inventory replenishment. The value created by firms with unique network resources, how this value is shared, and how it is passed on to consumers may be studied under resource-based and capability theory, another economics approach.46 45. Nakayama (2000). 46. Venturini and King (2000).
New Ways to Conduct Business; New Ways to Live
How has the development of e-commerce in the retail food industry influenced the location of control and competitive advantage along the food supply chain? Changes in the market structure imply that businesses will be larger and be involved in more strategic alliances to control and coordinate the supply chain for production inputs and final food products. Alliances of major suppliers of raw materials will sell to manufacturers over Internet platforms that allow information to be shared with their members—or, at least, between buyers and a group of sellers. These large alliances are being formed to short-circuit several small start-up Internet brokers, from farmers to manufacturers to the retail distribution centers (warehouses). The human and physical (electronic) capital necessary to participate in one of these large alliances encourages and demands that firms grow larger. These requirements favor the larger retailers and encourage further consolidation into self-distributing chains. Large self-distributing chains are in a better position to form partnerships with large manufacturers to share information about consumer sales, cutting transaction and distribution costs in the system. Exchange networks will probably create barriers to entry for nonmainline grocery businesses, but in this market there is plenty of room for niche, neighborhood, and regional players. Food is needed in every locale. People who select local, fresh, and natural foods are willing to pay more for them. Low-cost operators do not excel at providing entertainment or services along with food. The personal touch and the local flair will be provided by local food retailers and the food service sector. However, to the extent that there is profit to be made in selling local, fresh, and natural foods, large retailers have an incentive to form buying alliances or contract with producers who will guarantee these quality characteristics. Who has the competitive advantage in the e-commerce supply chain? The lure of business-to-business e-commerce is largely to increase efficiencies at the back end of firms’ operations. It is cost-driven and Wal-Mart–driven. It provides an ongoing drama in the struggle for control of the supply chain. It has been widely believed that, with the consolidation of retailers in the United States and the dominance of retailers in the United Kingdom, retailers have considerable bargaining power in the supply chain. The ECR model and its successor CPFR assume that the information that drives the supply chain resides in the POS scanner data at retail stores. In arranging to receive that data in real time from retailers,
manufacturers are in a position to reclaim leadership, to become the chain captains, as they were in an earlier era when national brands dominated food sales and retailers were diverse, heterogeneous, and largely unorganized. On the other hand, those retailers who develop the electronic capacity to transmit POS data to manufacturers can also develop their own data warehouses, and with analytic talent, they can control ordering, payments, and customer loyalty programs that give them extraordinary power in controlling their inventory, the quality of their products, and their customer service. How the balance of power settles, as reflected in the distribution of net profits among the parties in the supply chain, remains to be seen. On the business-to-consumer side, much of the activity was originally driven by technology, not a quest for efficiency. Computer programmers enamored with the challenges of building unique and workable electronic catalogues initiated companies like Peapod and WebVan. They became a bottomless pit for financial capital and the ultimate lure for optimistic entrepreneurs. The evolution of this part of e-commerce is slow but sure. Again, alliances with bricks and mortar retailers, new (lower-cost) models of delivery, more efficient distribution centers, better targeting of consumers, and pricing of services will likely make this a viable part of future food retailers’ portfolios. E-commerce alters the concept and use of time. There are faster production and delivery cycles, and consumers can shop at all hours of the day. Understanding the changing concept of time and how activities of work, household tasks, and leisure blend together will be another challenge for meeting consumer tastes and preferences in this new market. Being continuously connected to your work, your friends and family, and the marketplace provides unlimited opportunities for interaction but little time for reflection and synthesis. Internet communication is very demanding. One needs to make quick and precise decisions, often with minimal information and uncertain outcomes. With fast communications, one feels pressure to respond quickly. New status symbols involving freedom from the pressures of continuous connection will surely arise. Just-in-time delivery models, born of business-to-business e-commerce and information sharing, have their own time constraints. The physical environment—congested highways, unpredictable weather—makes lean, efficient delivery systems vulnerable. For example, Machalaba reported that the two-hour inventory of seats and other car parts preferred at Ford Motor Company plants has been increased to a four-hour inventory to ensure
that they have enough parts to run the plant continuously.47 Trucks running late because of a variety of infrastructure problems made this change necessary. Such congestion has also forced some manufacturers to resort to air freight and to keep loading docks open through the night so that trucks can arrive during low-traffic times. At the other end of the supply chain, the costs of running out of stock at a retail store are borne first by the consumer and then by the retailer, who loses a sale and maybe a customer. The documentation of these costs is just beginning. What is clear is that efficiency in the logistics of the supply chain and the speed of Internet communications cannot compensate for existing infrastructure constraints. Increases in productivity across the entire economy have been evident since the advent of Internet and computer technology. Going from a predictable and desirable 2 percent increase in productivity to a run of years with growth of 3 and even 4 percent speaks loudly for the new ways of doing business. Exactly where these increases are coming from, which firms or which sectors, is less clear. There is some evidence that costs are lower and sales higher in retail and wholesale food firms that have adopted advanced electronic and information technologies and that have built strategic alliances with their suppliers and customers. This is not to say they are necessarily more productive or that they bring greater satisfaction to their consumers. Measuring productivity as well as changes in general social welfare is a challenge with the advent of the Internet.
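The buffer arithmetic behind the Ford example can be made concrete. The sketch below is a generic safety-stock calculation under assumed numbers, not a description of Ford's or any supplier's actual planning system; the usage rate and delay statistics are hypothetical and serve only to show why greater variability in truck arrival times pushes a plant from roughly two hours of parts coverage toward four.

```python
import math

def buffer_hours(mean_delay_hr: float, sd_delay_hr: float, z: float = 2.0) -> float:
    """Hours of parts coverage needed so that a late truck rarely stops the line.

    mean_delay_hr: average transit delay beyond the scheduled arrival (assumed)
    sd_delay_hr:   standard deviation of that delay (assumed)
    z:             service-level multiplier (z = 2 covers roughly 97-98 percent
                   of delays if they are approximately normally distributed)
    """
    return mean_delay_hr + z * sd_delay_hr

def buffer_units(usage_per_hour: float, coverage_hr: float) -> int:
    """Convert hours of coverage into a physical parts buffer."""
    return math.ceil(usage_per_hour * coverage_hr)

# Hypothetical numbers: a line consuming 60 seat sets an hour.
# With reliable deliveries (mean delay 0.5 hr, sd 0.5 hr) about 1.5 hours of
# parts suffice; if congestion raises both the mean and the variability of
# delays, the required coverage roughly doubles, echoing the two-hour to
# four-hour shift described in the text.
reliable = buffer_hours(0.5, 0.5)    # 1.5 hours of coverage
congested = buffer_hours(1.0, 1.5)   # 4.0 hours of coverage
print(buffer_units(60, reliable), buffer_units(60, congested))  # 90 240
```

The same arithmetic, run in reverse, is one way a firm could weigh the cost of holding the extra buffer against the cost of a stopped line or an empty shelf.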
E-Commerce: Public Policy
The use of the Internet for e-commerce has heightened concerns about antitrust policies. Consolidation of control at all levels of the supply chain is raising questions of monopoly power and a potential increase in consumer prices. Exploitation of private information about consumers and the potential for electronic fraud open new public policy and regulatory possibilities. The general direction of public policy and regulatory authority is toward greater federal control as opposed to states’ rights. Just as the network effects of joining large, standardized national and international business networks push for more global standards, being able to provide national standards of regulation allows more efficient operations. One 47. Daniel Machalaba, “As Economy Hums, Congested Freeways Exact a Heavy Toll,” Wall Street Journal, August 30, 2000, p. A1.
example is the states’ current authority to set standards for how food stamp recipients can obtain benefits through electronic transfer systems. Without a national standard, recipients are often prohibited from purchasing food across their state line because the electronic benefits system in the neighboring state will not talk to the system in their home state. With a federal mandate to move all food stamp benefits to electronic transfers by 2002, this lack of interstate interoperability suggests a need for federal standards.48
Conclusion
The multiple impacts of the Internet and its use in e-commerce will evolve for decades. It is likely to lead to more consolidation, more homogeneous markets, and, simultaneously, more fragmentation among niche markets. We know that the Internet is changing our concepts of relationships, speed, and time. Knowledge-based firms will streamline their labor force. The retail food business will still demand service workers and an increasing number of high-technology experts. Major impacts are likely to be an increase in consolidation, vertical coordination, and cooperation. Network effects predict these changes, and the technology both facilitates and demands these types of changes in industrial structure. It will be a new and tough world. With consolidation and truly efficient operations, there will be fewer margin dollars to share in the system. Antitrust concerns will arise if there are barriers to entry or if collusive or monopoly pricing occurs. Many large marketing exchanges have announced their intent to operate a market discovery exchange. Most have not materialized, and only a few will survive. A common language will become necessary, and currently UCCNet seems to be in the best position to offer this platform. Others may emerge. Internet sellers of groceries will probably remain a small niche in the total retail food market. An enormous amount of energy is being used to reinvent, reengineer, and reorganize the way business is conducted. Supply chains are becoming demand chains, with retailers responding to consumers’ preferences. Fisher, Raman, and McClelland call it “rocket science retailing” and question how fast retail firms can adopt it.49 Two barriers to adoption appear to be a lack of accurate POS scanner data and retailers’ reluctance to share it with their 48. Quinones and Kinsey (2000). 49. Fisher, Raman, and McClelland (2000).
suppliers. E-commerce in the food industry is slowly moving from entropy to efficiency.
References
Ashman, Sara. 2001. “Consumer Choice Models with Customer Loyalty Programs in Retail Food Stores.” Ph.D. dissertation, University of Minnesota.
Belleflamme, Paul. 1998. “Adoption of Network Technologies in Oligopolies.” International Journal of Industrial Organization 16 (4): 415–44.
Fisher, Marshall L., Ananth Raman, and Anna Sheen McClelland. 2000. “Rocket Science Retailing Is Almost Here, Are You Ready?” Harvard Business Review 78 (4): 115–24.
The Food Institute. 1999. Food Industry Review 1999. Fair Lawn, N.J.
Food Marketing Institute (FMI). 1999. Electronic Marketing Survey of Food Retailers, 1998. Washington.
Gulati, Ranjay, and Jason Garino. 2000. “Get the Right Mix of Bricks and Clicks.” Harvard Business Review 78 (3): 107–14.
Heim, Gregory R. 1999. “Management of Technology and Quality in Electronic Consumer Service Operations: Applications to Electronic Food Retailing.” Ph.D. dissertation, University of Minnesota.
Heim, Gregory R., and Kingshuk K. Sinha. 2001. “Service-Product Configurations in Electronic Retailing: A Taxonomic Analysis of Electronic Food Retailers.” Manufacturing and Service Operations Management (forthcoming).
Hughes, David, and Derek Ray. 2000. “Developments in the Global Food Industry: A Twenty-First Century View.” Working Paper. University of London, Wye College Food Industry Management Group.
International Grocery Distribution. 1999. Wal-Mart in the U.K. Letchmore Heath, Watford, England.
Kaplan, Steven, and Mohanbir Sawhney. 2000. “E-Hubs: The New B2B Marketplaces.” Harvard Business Review 78 (3): 79–103.
Katz, Michael L., and Carl Shapiro. 1994. “Systems Competition and Network Effects.” Journal of Economic Perspectives 8 (2): 93–115.
King, Robert P., ed. 1999. “Supermarket Panel Report to the Board of Advisors.” University of Minnesota, Retail Food Industry Center (October).
———. 2000. “Supermarket Panel Preliminary Results.” University of Minnesota, Retail Food Industry Center.
King, Robert, Paul Wolfson, and Jon Seltzer. 2000. 2000 Supermarket Panel, First Annual Report of the Supermarket Panel. St. Paul: Retail Food Industry Center.
Kinsey, Jean. 1998. “Concentration of Ownership in Food Retailing: A Review of the Evidence about Consumer Impact.” Working Paper 98-04. University of Minnesota, Retail Food Industry Center (August).
Kinsey, Jean, and Sara Ashman. 2000. “Information Technology in Retail Food Industry.” Technology in Society 22 (1): 83–96.
Kochersperger, Richard H. 1998. 1998 Food Industry Distribution Center Benchmark Report. Washington: Food Marketing Institute and Food Distributors International.
Krishnan, Trichy V., and Harsh Soni. 1997. “Guaranteed Profit Margins: A Demonstration of Retailer Power.” International Journal of Research in Marketing 14 (1): 35–56.
Kurt Salmon Associates. 1993. Efficient Consumer Response, 1993: Enhancing Consumer Value in the Grocery Industry. Washington: Food Marketing Institute.
Lohtia, Ritu, Kyoichi Ikeo, and Ramesh Subramaniam. 1999. “Changing Patterns of Channel Governance: An Example from Japan.” Journal of Retailing 75 (2): 263–75.
Malone, T. W., J. Yates, and R. I. Benjamin. 1987. “Electronic Markets and Electronic Hierarchies.” Communications of the ACM 30 (6): 484–97.
Nakayama, Makoto. 2000. “E-Commerce and Firm Bargaining Power Shift in Grocery Marketing Channels: A Case of Wholesalers’ Structured Document Exchanges.” Journal of Information Technology 15 (September): 195–210.
OECD. 1999. The Economic and Social Impact of Electronic Commerce: Preliminary Findings and Research Agenda. Danvers, Mass.
Pittman, Robert W. 1999. “E-Commerce: Taking the World (Wide Web) by Storm.” Food People (August): 6.
Quinones, Ana R., and Jean Kinsey. 2000. “From Paper to Plastic by 2002: Retailers’ Perspective on Electronic Benefit Transfer Systems for Food Stamps.” Working Paper 00-06. University of Minnesota, Retail Food Industry Center (August).
Venturini, Luciano, and Robert P. King. 2000. “Vertical Coordination and the Design Process for Supply Chains to Ensure Food Quality.” Paper prepared for the Eighth Joint Conference on Food, Agriculture, and the Environment, cosponsored by the Center for International Food and Agricultural Policy, University of Minnesota, Padova University, and the University of Bologna, June 12–14.
Walsh, John P. 1993. Supermarkets Transformed: Understanding Organizational and Technological Innovations. New Brunswick, N.J.: Rutgers University Press.
12
Lean Information and the Role of the Internet in Food Retailing in the United Kingdom
The use of information and communication technologies (ICT) is known to have an impact on productivity, jobs, and economic growth. It is less clear precisely what mechanisms transmit this impact. In this chapter, we identify the dynamics of this transmission process in a key sector in an advanced economy: the food retailing sector in the United Kingdom. However, we found that rather than a linear transmission of impact from technology to business structure, there was an interaction between the two sets of factors, so that the system as a whole can be viewed as enacted by participating agents. Here changes in the interorganizational structure and dynamics of an industry can be related to such factors as information processing, costs of production and interaction with the supply chain, and returns to scale. These in turn influenced both the efficiency and productivity of the food retail sector and the autonomy of consumer relations. There was a rapid consolidation of food retailers and manufacturers in the United Kingdom with the advent of electronic point of sale data. This occurred a decade earlier in the United Kingdom than in the United States (as described by Kinsey in chapter 11 of this volume); consolidation is facilitated in the United Kingdom by centralization and less regional differentiation. We begin by examining the earlier impact of ICT on this sector. We go on to use this analysis as a basis for examining the specific impact that the Internet and
the advent of e-commerce have had in recent years on the food retailing system. In addressing the question of the effects of the availability of the Internet on food retailing in the United Kingdom, we examine the role of earlier proprietary IT systems and accounting information in replacing arm’s-length market relationships and the coordinating role taken by supermarkets in relation to their suppliers. The second part turns to the impact of the ubiquitous Internet. We argue that despite its benefits, the Internet could overwhelm the grocery sector with information. Information flows operate in certain respects analogously to just-in-time (JIT) production and distribution flows.1 What the supermarket requires is “lean information”: just the right amount of information, of the right quality, in the right place, and at the right time.
The Supermarket Story before the Internet
In the 1990s U.K. supermarkets emerged as the major success story of British business. They had captured 80 percent of the grocery market and achieved an average net margin of between 5 and 7 percent.2 This was up to treble the level achieved by their European and North American counterparts.3 One important reason for this is that in the early 1990s supermarkets radically reorganized business processes on the basis of quick response partnershipping (QRP).4 QRP made extensive use of new information systems, eliminated waste,5 and improved synchronization of activities throughout 1. For a discussion of the origins and implementation of JIT, see Womack and Jones (1996, p. 58). 2. Thompson (1992, p. 51). 3. Wrigley (1993). The U.K. grocery sector, which includes nonfood sales, is exceptionally concentrated. The top four supermarket chains—Tesco, Sainsbury’s, ASDA, and Safeway—capture around 65 percent of grocery sector sales. Tesco is the leader with 22 percent market share in the year 2000. The top twelve grocers in the United Kingdom account for 85 percent of sector sales. In 1996 the U.K. food market alone (which includes soft drinks and confectionery but not alcohol and tobacco) was worth £48.5 billion. Consumers purchase most of their food from grocery stores. In 1996 a total of £37.6 billion was spent on food in grocery stores; this represents 77.6 percent of the total spent on food. A further 18 percent was purchased through alternative multiple retail outlets such as co-ops, butchery chains, and bakers. It is estimated that small independent food retailers are left with a market share of 5 percent. Corporate Intelligence on Retailing (1998). 4. Whiteoak (1993, p. 3). 5. We use the term waste in the Japanese sense of muda to mean “specifically any human activity which absorbs resources but creates no value” (Womack and Jones [1996, p. 15]).
the supply chain. The concept of waste applies as much to information flows as it does to material flows with cost implications. When we examined the literature on the processes and factors that had led to the high concentration of U.K. supermarkets and their subsequent domination of the U.K. food retailing market, we found no coherent account of these developments. The literature was functionally based and fragmented in analysis. On the basis of evidence drawn from information systems, logistics distribution, retail, marketing, management, environmental issues, and planning, we argue that this success has above all been the result of the creation by the supermarkets of close interdependence with their suppliers. We suggest that to understand the new organizational configurations, it is necessary to examine relationships with suppliers from a systemic perspective.6 This reveals how the capacity to improve accountability in the network was achieved. The dominant buyer in a business network pressing in all directions for reduction in waste could harness the new information technologies and accounting techniques to tighten interlinkages, reduce costs throughout the system, and gain increased influence through positive feedback effects. Large retail food corporations used accounting techniques as control mechanisms beyond their own boundaries and ensured coordination in the organization of production and distribution; in doing so they have created barriers to entry into food retailing. One effect has been considerable influence by food retailers on consumption patterns. Consumer choice is not entirely autonomous but is influenced by interactive processes of this kind. This suggests the need for systems thinking in which relations between players in the system are seen to extend beyond a logistical perspective. The pre-Internet system of that time had been enacted not only by the supermarkets but also by suppliers and consumers motivated by a variety of incentives and constraints. We begin by showing how changes leading to quick response partnershipping were introduced into U.K. grocery retailing. We go on to look at the system of relations between supermarket suppliers (specifically growers and packers)7 and consumers from the stand-
6. Garnsey (1993, p. 229). 7. Growers provide supermarkets with fresh fruit and vegetables with value added in the field or packing house. In 1993 supermarkets retailed 48 percent of fresh produce sales. Their predicted share for 2000 is 70 percent. Fruit and vegetables are the supermarkets’ most profitable lines; accounts show a retail margin of between 35 and 45 percent (Grower, March 11, 1993).
point of the supermarkets and from the perspective of other actors in the quick response partnership. We review evidence that shows how U.K. supermarkets achieved advantages in the 1990s by using information and communication technologies as the critical enabler in processes of organizational change.
Food Retailing in the United Kingdom
The 1980s were an era of widespread restructuring for grocery retailers. Between 1977 and 1987 outlets fell by 37 percent from 75,000 to 47,000, and the grocery market became dominated by the supermarket chains. A decade later, in 1997, there were an estimated 32,000 grocery shops trading in the United Kingdom with the top twelve supermarkets operating from 4,900 outlets. By 1997, almost 70 percent of the U.K. grocery sector was concentrated in five companies: Tesco (22.7 percent), Sainsbury’s (18.7), ASDA (12.0), Safeway (11.1), and Somerfield (5.2).8 Competition between supermarkets became increasingly concerned with space; the struggle for superstore sites (“store wars”) intensified. In 1995 an average edge-of-town superstore covered 30,000 square feet and offered the consumer over 20,000 lines from which to choose. In the decade from the early 1980s to the early 1990s, U.K. food retailers found ways to turn to their advantage the direct contact with a mass of consumers provided by the new information generated at the checkout: electronic point of sale (EPOS) data. By the mid-1990s, U.K. supermarkets were recognized as the most aggressive grocery chains in the world in introducing EPOS computing systems in stores and electronic data interchange (EDI) ordering systems with suppliers. (And by this time U.K. supermarkets had achieved the highest rates of profit and levels of market share among food retailers in the world.9) This led to a total rethinking of their own warehouse and stock replenishment systems. The supermarkets, enabled by the new technologies, were able to extend their influence over other sectors in the supply chain and affect their performance by replacing open market relationships with a system of interfirm networks coordinated by IT—an efficient vehicle for managing large 8. Corporate Intelligence on Retailing (1998). ASDA was sold to Wal-Mart in 1999. 9. Corporate Intelligence on Retailing (1998).
numbers of discrete transactions. This created new interdependencies between participants in previously separate networks.10 Before the introduction of the new technologies, suppliers unintentionally accelerated their loss of power in two key ways. First, through their willingness to produce own-label goods for supermarkets, they marginalized their own identity.11 Second, in the era of rapid supermarket expansion, suppliers readily transferred accounting information on merchandising and stock control to supermarkets in expectation of increased sales. But “once the skills were transferred . . . suppliers had lost more degrees of freedom.”12 At the same time, suppliers continued to give supermarkets generous credit. As Loasby pointed out, supplier credit extended “often for a longer period than the goods remain unsold. They also sell for cash. Supermarket chains have negative current assets.” U.K. food retailers operate in oligopolistic markets, burdened by the continual need to differentiate what are effectively the same products. At the same time, they have to avoid destabilizing the market and recognize their interdependence.13 In the early 1990s, quick response partnershipping was adopted with the aim of cutting inventory levels. The origins of the concept can be traced back to work carried out by Kurt Salmon Associates in the United States, originally for the apparel sector and later the grocery sector.14 Quick response partnershipping includes the harmonization of order management, inventory replenishment, physical handling, transport, and the exchange of information with the customer through EPOS and with the supplier through EDI. In 1993 Terry Leahy, then marketing director of Tesco, gave a graphic description of the quick response partnershipping in practice:
10. Loasby (1991, p. 99). 11. Of the four top supermarkets, Sainsbury’s has 67 percent of sales through own-label products; Tesco, 56 percent; Safeway, 46 percent; and ASDA, 43 percent. Own-label products account for 15 percent of sales in U.S. supermarkets (Fiddis, 1997). 12. Loasby (1991, p. 99). 13. It is acknowledged in the trade that U.K. supermarkets use the retailer Marks and Spencer (M&S) to ascertain an upper price the market will bear. M&S holds 4.9 percent of the U.K. food market and is atypical of major food retailers in style and location. M&S trades mainly from the high street, does not provide parking, and stocks 100 percent own label, but has a reputation for quality—along with “quality profits” and high prices. Davies and Brooks (1989); Fernie (1990). Since 1999, the fashion division of M&S has had poor returns, but the food division continues to do well and hold market share. 14. Fernie and Sparks (1998).
We have linked our ordering to our electronic point of sale system. And we’ve linked our ordering system to our suppliers with electronic data interchange. Now when we sell a sandwich for example, the sale is registered by the scanner which automatically speaks to the ordering system, which orders a replacement. This is transmitted to the supplier straight into the supplier’s production planning system; automatically calculating the raw ingredients required, the amount to be produced on the next shift, the labor needed, the line capacities, the dispatch and distribution details and so on. Out go the lorries into the distribution center depots, deliver straight to stores, back on the shelf, back in the trolley and across the scanner within forty-eight hours.15 This is an example of the use of an Extranet, an EDI system that includes customers, suppliers, and other strategic partners and is identifiable as business-to-business e-commerce as early as 1993. By 1995 Tesco had sales of around £10 billion, making it one of the largest grocery chains in the world and number two in the rankings of the top four major grocery retailers in the United Kingdom. Quick response partnershipping demanded the harmonization of EDI software systems to supermarket specification at the suppliers’ expense. Moreover, suppliers usually supplied more than one supermarket and had to fund and run different software packages—with resultant strain on their own business process. Customized inventory management software can provide indirect control over the value chain. In the supermarket literature, the system was described as a customer-oriented, integrated process that harnessed information and communication technologies to intensify closeness to the customer. “A logical extension of this concept (quick response partnershipping) is that the whole activity becomes a single, common, shared process.”16 The dominant player controls the information flows in this process. As a special case study, we chose to explore in detail supermarket relations with suppliers of fresh produce, the growers. The short shelf life of fresh produce does not tolerate the delays incurred by trading on an open market. The lessons learned by the supermarkets in fresh product supply—that is, the need for customer-driven 15. Leahy (1993). 16. Whiteoak (1993).
systems in place of inventory-based systems in order to reduce waste— have permeated all parts of the supply chain. Relationships between suppliers of ambient goods and the supermarkets resembled those between growers and supermarkets in certain central respects. Quick response partnershipping, enabled by EPOS and EDI, allowed the supermarket to treat all suppliers as if their product had the fragile shelf life of the mushroom—that is, a matter of hours. By the late 1980s, the top four U.K. supermarkets had radically altered their replenishment processes by moving toward daily orders for all fresh products and for many items with a long shelf life. This revolution came about through the supermarkets sharing their professional advice about customer choice and real-time purchasing activities of customers, leading to more accurate stock forecasting. In return, suppliers were expected to develop shorter replenishment cycles and gain efficiency by eliminating forecast and delivery errors. Furthermore, supermarkets tracked the business outcomes of their suppliers—through EPOS data—and ranked their profitability by sector. Good suppliers, those who provided delivery and orders with zero defects (“a quality product in just the right quantity at just the right time”), were rewarded with custom and advice. Supermarket patronage for a preferred supplier resulted in an increase in volume supplied, compensating for a decrease in payment made per unit.17 The network using quick response partnershipping for sourcing fresh produce for supermarkets is usually represented as in figure 12-1. This shows the information flows between supermarket customers, the stores, head office, the growers, and warehouse and distribution. Figure 12-1 shows the way EPOS and EDI are used to help control the range of operations between partners within the retail food sector network. The new technology has led to increased efficiency and profitability for the partners but not to parity among them. The suppliers’ dependence on the custom of a major supermarket was much greater than the supermarket’s dependence on any one supplier. The supermarket controlled the crucial resource of access to large numbers of consumers, while the supplier provided products that were normally also available elsewhere. The potential substitutability of the supplier contrasted with the key access to mass custom controlled by the supermarkets, 17. Suppliers are prepared to incur costs of promotions and recover these through volume sales and steady demand throughout the year (confidential telephone interview with the marketing manager of a multinational food manufacturer, June 2000).
Figure 12-1. Quick Response Partnershipping in the U.K. Grocery Sector (information flows linking customers, stores, supermarket headquarters via EPOS, suppliers, and warehouse and distribution)
which made them indispensable outlets. In 1994 Sainsbury’s and Tesco had between them 18 million customers a week. In return for opening its market to a supplier, the supermarket demanded of them:
—customized product or priority access to available stock;
—supplier-funded EDI ordering systems;
—supplier accountability for value adding features: bar coding, supermarket packaging, and labeling;
—maintenance of prescribed “supermarket quality” as defined at any time by the supermarket.
This process bears all the hallmarks of a lean supply system that begins with “a ‘market price minus’ system rather than a ‘supplier cost plus’ system.”18 The outcome of implementing lean processes using harmonized EDI systems between suppliers and supermarkets was that reviewing efficiency and effectiveness became a continual process. This reduced the need for and size of safety buffers for all parties with a subsequent saving all around, but the control remained firmly with the supermarket. The higher the performance target reached by the supplier, the higher the target it had to
18. Womack, Jones, and Roos (1990, p. 148).
Figure 12-2. Interaction between Participants in the Supermarket Retailing System. The figure shows resource and information flows linking customers and stores, supermarket headquarters (forecasts based on EPOS and transferred by EDI), U.K. suppliers (growers), global suppliers, consultants (financial, marketing, technical), warehouse and distribution with JIT deliveries, the labor agency (labor scheduling based on growers’ data from HQ), casual agricultural workers, part-time and full-time labor, and the household sector.
reach. However, after a certain level, more intensive performance began to incur costs that suppliers and distributors sought to off-load. Figure 12-2 provides a fuller representation of the agents in the supplier-retailer system of food retailers. In addition to the actors shown in figure 12-1, there are global suppliers, labor agencies, casual workers (both part-time and full-time), and households. EPOS generates moment-by-moment information regarding inventory replenishment need and the buying patterns of consumers. Indeed, when the customer makes the choice between the purchase of one or two bunches of spring onions, so sensitive is the stock replenishment information system that a job may be at stake. Not only does EPOS carry data about what is needed to go back on the shelf, but over time the data on customer demand creates a long-term and highly differentiated data stream on supply requirements. U.K. supermarkets are now aware of different consumer demands by geographical region, individual stores, days of the week, and hours of the day. Add to this the introduction of the smart card, and consumer activities are identifiable by customer name, age, sex, other members of the household, and social class (based on zip codes). The retailers use this knowledge not only for the management of goods from source to checkout but for product innovation. They alone are in the position to gather this data, which they sell to data companies that in turn feed back information to their clients, the manufacturers. By 1996 supermarkets had created a virtually cost-free, information-rich product emanating from their checkout counters.19 The management of the business process is the domain of the supermarket headquarters, and its role is to keep the supply chain transparent and to prevent the formation of barriers at any of the organizational boundaries. In order to control quality to suit their known customer base, the supermarkets became as technically knowledgeable and competent as their suppliers. With trusted “preferred suppliers,” the supermarkets gave:
—technical advice, showing suppliers how to achieve quality products;
—financial advice on how to access financial backing;
—advice on how to update plant (particularly refrigerated storage and hygiene standards);
19. IRI Information Services claimed that U.K. fees for supermarket EPOS data peaked at six times those paid in France. The market research firm A.C. Nielsen claimed it was ten times greater. Both companies have been trying to persuade the retailers that the fees they are charging are unsustainable. When the contract with Safeway ended, Nielsen placed a “take it or leave it” offer on the table; Safeway walked away, leaving Nielsen to estimate data through consumer panels. Nielsen argued that “it should be a two-way street between retailers and manufacturers.”
—marketing advice to help stimulate product innovation and harmonize supplier and supermarket promotions.20
For example, during the 1990s Marks and Spencer maintained a team of seventy science and food technologists. Within this team were technical groups that worked with growers and suppliers to produce world-class products with procurement detail specified by M&S; supermarkets adopted this approach with their suppliers.21 The delisting of a supplier did not create a serious problem because supermarkets had the knowledge of products, finance, and marketing required to enlist new suppliers as needed. Embedded within quick response partnershipping is the idea of “commitment both ways” as a means of encouraging bilateral accountability.22 Supermarkets emphasized the mutual dependency between themselves and suppliers, playing down the hierarchical control of the network from head office. Each favored partner had a corporate strategy and in theory professional autonomy; however, supermarket programs prescribed their business practices. EDI also helped to integrate the consumer-contact “front end” process of retailing with the “back end” process of distribution and in so doing created a role for third-party–dedicated composite warehousing and distribution agencies that handled all categories of goods, including ambient, chill, and frozen. Contracting out did not mean that the supermarkets diminished their control over logistics; the reverse was the case. Third-party operators were assessed on their ability to fit the supermarket system.23 The global market is used in two ways: as a resource for both supermarkets and growers and to put pressure on U.K. supermarket suppliers. Initially, global producers were used to provide substitutes for local producers that were not meeting the supermarket requirements. Moreover, purchasing on the global market meant that supermarkets could represent themselves as price takers, divesting themselves of the 20. A supermarket two-week promotion can typically involve the supplier in producing a whole year’s supply in advance. It is in the field of supermarket promotions that the Internet is beginning to play a key role in real-time information sharing between supermarkets and suppliers, and we return to this point later. 21. McCracken (1995). 22. Helper and Sako (1994). 23. “Europe’s Largest Composite Store” (1991, pp. 15–18). For example, in 1989 Tesco had forty-two depots, of which twenty-six were temperature-controlled. By the late 1990s, Tesco had nine composite regional distribution centers, each serving about sixty stores. Of the nine centers, four are run by Tesco, two by Wincanton, two by Excel Logistics, and one by Hayes. This mix has enabled Tesco to compare centers and the subcontractors and draw up a league table of performance.
responsibility for price setting. However, dealing with unknown, unaccountable sources of supply required managerial effort and incurred transaction costs. Growers, too, made use of global markets. They developed special relationships with compatible growers around the world to sustain the integrated system. For example, if the grower could not match the supermarket demand for green salad, a communication to California supplemented supply. The U.K. grower took responsibility for the product achieving supermarket quality. Supplementing supply in this way carried greater transaction costs, and supermarkets encouraged suppliers to develop methods to provide continuity of supply locally. In 1995 Tesco planned not to import any carrots; indeed, it aimed for growers, in conjunction with U.K.-based supermarkets, to become food exporters to U.K. supermarket bases overseas. Sainsbury’s had a base in the United States (“Shaws”), and Tesco in France (“Catteau”). Through the internationalization of grocery retailing, U.K. supermarkets have created new opportunities for preferred suppliers in the system. Thus the lessons from U.K. methods with their specialist accounting and IT techniques are diffused beyond the U.K. market. The influence of the supermarkets on the business processes of their suppliers extended beyond the formal logistics of scheduling, delivery, and their attendant bargaining dynamic into the organization of the labor process of their suppliers. Figure 12-2 shows the impact of quick response supply chain partnershipping on labor and how labor was used as a buffer within the system. Labor is an area that has not received much attention within the retailer-grower relationship, yet the ways in which EPOS and EDI influenced labor scheduling in other areas of retailing are well known. Despite deficiencies in agricultural data, it was clear that growers were integrated and integral to quick response partnershipping and that labor was essential to turn the supermarket programs into realities. Increasingly, small growers, although defined by the yearly agricultural census as “family owned farms,” in practice often operate within networks of small growers. A major grower would organize the network (sometimes in the region of thirty small growers) to supply the volume and quality of product specified by the supermarket program. In order to match the flexibility in supply, growers had to use equally flexible labor. Being part of a bufferless supply network brings particular problems for employers. The expense of maintaining a permanent “just-in-case” work
force to meet EPOS-generated orders is untenable. Growers usually employed full-time supervisory staff to maintain quality of product and some permanent part-time workers, but frequently the majority of the employees were hired on a casual basis through the use of a labor agency. One of the key roles of the labor agencies was the provision of transport for workers, as fields and packing houses are usually sited in remote areas. Another key function of the agency was to organize the payment of labor. Casual labor was often hired out for only part of the day. Casuals were usually paid by the piece—in some cases the rate was determined individually, but frequently the rate was calculated for the “gang” as a whole.24 If the supermarkets, on the grounds of not achieving the designated quality, rejected produce, then in some cases the gang responsible took a cut in pay. When crops were difficult to handle and weather conditions unfavorable for field gangs, total earnings by piece rates became depressed. The agency then renegotiated with the grower to fix an hourly rate. Supermarket production programs for growers specified the volume of crop required by the supermarket. The prices to be paid for the produce were not included at the planning stage between supplier and purchaser. The final agreement to purchase, based on quantity and price, was confirmed a few days or hours in advance of delivery. Thus although the quantity and quality of goods produced by the growers was specified directly by the supermarket, the supermarket was not bound by any contractual arrangement to purchase the goods. If the supermarket rejected the goods that had been dedicated to it, the grower’s choice of alternative retail outlets was limited. Supermarkets controlled almost 70 percent of the fresh produce market. They were able to distance themselves from any direct involvement in the growers’ labor costs. At the same time, their formulas for managing their growers provided them with the overall control of vertical integration without the risks. Labor is drawn from households, and the same households are among the supermarkets’ customers. Field and packing-house workers meet arrangements justified in the interests of 24. At the time of the study, the Agricultural Wages Board Order of June 1994 fixed the hourly rate for regular part-time and full-time agricultural workers in the United Kingdom at £3.72 an hour, £2.76 an hour for casuals. A good piece rate worker earned £5.00 an hour, but many earned less, and not all packing-house workers were covered by the Agricultural Wages Board Order.
customers, who include themselves. Our research showed that casual workers identify strongly with the corporate image of the supermarket for which they are harvesting or packing produce, not the grower, whom they may never meet or know by name. Agricultural workers do not associate their low levels of pay with either the supermarket or the grower but with the labor agency. Households are the source of both consumers and labor; they are the connectors enabling the system to function as it does, no less important to integrating the circuitry, the flows of information and resources, than are supermarket headquarters. However, members of households are reactive rather than proactive in taking up consumption opportunities in a system in which change is largely initiated by the strategies of the supermarkets. Oligopolistic competition has centered on product differentiation aspects of retailing rather than pure price competition. In our case study, market control has been achieved by U.K. retailing organizations as a result of financial systems control over the supply chain, through the use of IT in stock control, the monitoring of suppliers’ performance coupled with knowledge of suppliers’ production processes, and massive buying power. Oligopoly has been essential to the current system, providing key players with the resources required for costly investment. Monopoly would reduce incentives to innovate, but the number of competitors with equivalent power must be small for the current system to operate, since a reduction in market share would reduce buying power, decrease control over the supply chain, and put in doubt the ensuing control over costs. In any one residential area in the United Kingdom, there are currently few—if any—competitors, because of planning restrictions (zoning) and because the superstores have been located to achieve access to maximum custom.25 The supermarkets’ use of EDI programs and EPOS information illustrated how technological integration has promoted organizational integration across boundaries. The continuous flow and analysis of data on customer behavior has resulted in smaller orders and less wastage for stores. The benefit in production and distribution is the reduction in need for and size of reserves for all the parties and a subsequent saving all around. Table 12-1 summarizes the major points relating to IT, accounting and performance techniques, and control made in this chapter. 25. Raven, Lang, and Dumonteil (1995).
Table 12-1. Supermarket Coordination of Supply Chain Performance
IT-generated data stream
—Continuous information flow via EPOS, smaller and more frequent orders, allow for product differentiation
—EDI programs focus on volume and quality
—EDI influences labor scheduling for suppliers
—Distributed information aids scheduling integration
—EPOS aids innovation
Accounting and performance
—Capacity to stipulate and assess performance of producers and distributors
—Alignment of order process
—Elimination of forecasting and delivery errors
—Forecasting, monitoring, and management of goods from source to checkout
—Achieve shorter cycles
—Specialist accounting techniques diffused beyond the U.K.
Control
—Consumption under the influence of the retailing system
—Synchronization of the business process throughout supply chain
—Control over reserves reduced or eliminated from system
—Asymmetrical access to information
—Monitoring and assessment of suppliers’ performance
—Price and market control
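The “IT-generated data stream” column of table 12-1 can be illustrated with a stylized sketch of how raw till records become the differentiated demand profile described earlier (by store, day of the week, and hour of the day) and then drive sales-based ordering. The record layout, store names, quantities, and reorder rule below are invented for illustration; they indicate the kind of aggregation EPOS makes possible rather than any retailer’s actual system.

```python
from collections import defaultdict
from datetime import datetime

# Each EPOS record: (store, product line, quantity sold, timestamp).
# The values here are invented for illustration.
sales = [
    ("Watford", "spring onions", 2, datetime(2000, 6, 3, 11, 15)),
    ("Watford", "spring onions", 1, datetime(2000, 6, 3, 17, 40)),
    ("York",    "spring onions", 3, datetime(2000, 6, 10, 11, 5)),
]

# Build the differentiated demand profile: units sold per
# (store, product, day of week, hour of day).
profile = defaultdict(int)
for store, product, qty, ts in sales:
    profile[(store, product, ts.strftime("%A"), ts.hour)] += qty

def replenishment_order(store, product, day, hour, on_shelf, cover=1.5):
    """Sales-based ordering: order enough to cover the demand seen in this
    slot plus a small margin, less what is already on the shelf."""
    expected = profile[(store, product, day, hour)]
    return max(0, round(expected * cover) - on_shelf)

# Order for the Watford store, Saturday 11:00 slot, one unit left on the shelf.
# Prints 2 with these invented numbers.
print(replenishment_order("Watford", "spring onions", "Saturday", 11, on_shelf=1))
```

In practice the forecast would be far richer, but the design point is the one made in the text: whoever holds this aggregated profile, rather than the raw goods, coordinates the chain.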
The supermarkets could not have achieved their performance improvements in the 1980s and 1990s had they not provided stores to which consumers actively responded. The U.K. shopping public helped to enact the system, enabling it to function as it did. Determined strategic priorities and centralized buying and distribution channeled the uncoordinated behavior of consumers. Alternative forms of food retailing have largely succumbed to competition from the new configurations. A major new development, however, has emerged in the form of the global Internet, to which we now turn.
U.K. Supermarket Information Systems and the Internet Age
It is widely predicted that the Internet will stimulate much higher EDI diffusion across all industry sectors. Internet EDI (I-EDI) provides an inexpensive infrastructure for data transmission compared with the original proprietary EDI software.26 There are diverse views on the likely impact of 26. Unitt and Jones (1999).
the Internet; one is that the Internet can limit the role of the large corporations in coordinating market relations: “The availability of the Internet is now taking the power away from arrogant hub companies who used to dictate the terms of an electronic relationship, usually skewing the power balance in their favor.”27 A key function of the Internet is its ability to make available accurate real-time data, the weakest link in supply networks. The notion that information sharing and timely communications across systems enabled by the Internet will open up a whole new democratization in supplier-retailer relationships is based on the following arguments.28 E-commerce via the Internet involves more symmetrical information flows that will bring the balance of power embedded within supplier-retailer relationships into equilibrium. EDI, the technology that kick-started the e-commerce revolution, facilitating computer-to-computer exchange of business documents in standard machine processible format (with “zero touch” between and among interorganizational trading partners), represents proprietary forms of information exchange. The high costs incurred in using EDI are sustained by suppliers and create barriers to entry to the grocery market. The Internet is seen to be inclusive because of the low costs of entry to Internet trading. Earlier we indicated that computer-to-computer technologies (including EDI) streamlined supplier-retailer business processes but that this was not a wholly technologically determined process. To establish trading partners, personal relationships also had to be formed; though technologically enabled, the system had to be enacted by active agents. We would suggest that this applies also to Internet trading systems.
The Internet and Information Flow
The Internet broadcasts simultaneously to all points in the supply chain, offering spontaneous coordination among trading partners. In examining these issues, it is critical to conceptualize the problems of coordinating information flows in an organization. From this analysis, some keys to supermarkets’ ability and strategies to control grocery retailing information in the Internet age may emerge. There are two solutions to information overload: either reduce the amount of information to be processed centrally or increase capacity to
process information.29 The first solution involves accepting a reduction in the interdependence between parts of the organization—reduce information processing by reducing synchronization. This can be done through the creation of resource buffers or slack in the system, which prevents difficulties in one part of the system from affecting the rest. Supermarkets reject the idea of living with waste for themselves but acknowledge that their business processes mean that many of their suppliers work with buffers provided by massive finished goods inventories.30 As an alternative to increasing capacity in order to manage information, management may choose instead to rely on more elaborate storage, retrieval, and compression of information from point of origin to decision point.31 Even within the efficient integrated supermarket network, information systems persist in being resource-hungry. The 1990s have witnessed a plethora of data: loyalty card schemes, scanning data, data warehousing, and data mining that have facilitated understanding of the customer and improved category management initiatives. EDI allows the transmission of data back up through the supply chain, especially forecasting information. Item coding and database management systems need to be standardized to ensure that the information sent is comprehensible to other partners in the supply chain. Peter Jordan of Kraft Jacobs Suchard claims that “a lot of companies are throwing electronic data at each other and are not fully understanding the meaning of the data.”32 Alongside the use of better-managed information, Galbraith called for the creation of lateral relations to keep decisionmaking close to the information source.33 Since the mid-1990s, supermarkets have embraced this strategy through efficient consumer response (ECR) and category management, discussed in detail later. 29. Galbraith (1974). 30. Frances and Garnsey (1996). Tesco’s daily replenishment system has reduced the average stocks on hand (in stores and regional distribution centers) from 21 to 12.8 days and for faster moving items to between 3 and 5 days. “However as Tesco did this they learned the limits of what can be accomplished in one firm alone. Specifically . . . suppliers . . . have been fulfilling Tesco orders nightly, just-in-time, but from massive finished goods inventories” (Womack and Jones [1996, pp. 46–47]). 31. For example, in one promotion Tesco used information gathered and analyzed from customer loyalty cards to communicate with their customer base of 9 million using 87,000 different offerings tailored to individual customer requirements. The benefits of managing information at this micro level remain unclear, given that Safeway has decided to abandon collecting customer information through the use of individual loyalty cards and will rely on amalgamated EPOS data. 32. Mitchell (1997, p. 34). 33. Galbraith (1974).
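The point about item coding can be made concrete. European grocery barcodes of this period generally carried EAN-13 numbers, whose final digit is a checksum that every trading partner computes by the same rule; the sketch below shows that shared rule as a minimal example of why a common coding standard lets one firm’s data be read reliably by another. It is a generic illustration, not a description of any particular retailer’s or data pool’s software.

```python
def ean13_check_digit(first_12: str) -> int:
    """Compute the EAN-13 check digit: digits in odd positions (1st, 3rd, ...)
    are weighted 1, digits in even positions are weighted 3, and the check
    digit brings the weighted sum up to a multiple of 10."""
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first_12))
    return (10 - total % 10) % 10

def is_valid_ean13(code: str) -> bool:
    """True if the 13-digit code is internally consistent."""
    return (
        len(code) == 13
        and code.isdigit()
        and ean13_check_digit(code[:12]) == int(code[12])
    )

# A code garbled by one digit fails the shared check, so a partner's system
# can reject it before it corrupts ordering, invoicing, or sales data.
print(is_valid_ean13("4006381333931"))  # True  (a commonly cited example code)
print(is_valid_ean13("4006381333932"))  # False (check digit does not match)
```

A shared check of this kind catches transmission errors, but, as the quotation above suggests, it does nothing to guarantee that partners agree on what the coded data mean.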
The reduction in information processing requirements through the introduction of uniformity is very important for supermarket management. When increased variety is introduced in the system, the problems of coordination increase. Here problems of information and implementation arise: how are managers to ensure that they have the information for coordination? How are they to enforce implementation? With expansion, the demands of direct supervision become too time-consuming, and rules and procedures are instituted based on what has worked in earlier experience. This should allow decisionmakers to concentrate on sorting out exceptions that do not fit the rules and procedures. But it requires considerable planning to specify all input and throughput requirements, including detailed analysis of input requirements, work tasks, and work flow specification. Though this may be possible in simple, stable conditions, under conditions of rapid change—such as the flux in the system caused by promotions and new product introductions—the information processing requirements for the management hierarchy can become overwhelming. Dealing with exceptions can absorb much of managers’ time. Implementation can become too complicated. How do the supermarkets overcome these problems? At the end of the 1990s, supermarkets had successfully begun to reduce the volume of information flow required for efficiency by reducing the supplier base. This enabled them to increase their capacity to process information by remaining close to the sources of information (customers, suppliers, and distributors) and by investing in new technologies and relational contracting. At the same time, supermarkets gained further standardization of business processes with all partners in the network (see figure 12-2). Below we look at the issues that information processing and standardization raise for the supermarket retailing system at the turn of the millennium. First, the concept of standardization within grocery retailing is complex: supermarkets deal with up to 30,000 different product lines, with sales dependent on time of year, time of day, geographical location, and marketing promotions by the manufacturer. Variety of product offering, coupled with accurate and effective physical distribution, is the supermarkets’ competitive advantage, along with product promotions and new product introductions. To enhance the retail offer by service provision, supermarkets standardize their procedures throughout the entire supply chain. Standardized integration within and between companies (food producers and distribution) has led to the replacement of product flows by
information flows enabled by EDI and the new technologies—with subsequent improved financial and risk flows.34 Second, supermarkets do not seek mutual accommodation with suppliers based on symmetrical information. This could challenge their ability to implement a market price minus system with suppliers. Nor do they want the unpredictability of market exchange relations. Instead, they have over the past thirty years developed relational contracting.35 Relational contracting is characterized by —long-term patterns of trading between suppliers and retailers; —continuing discussions and negotiations between retailers and suppliers over product characteristics; —continuing discussions and negotiations over the development of new products; —sales-based ordering and the absence of written contracts. The close relationship developed through relational contracting contains professional negotiation and organizational management between retailers and suppliers, but it does not change the asymmetrical information flow and power relationship, which remains firmly weighted in favor of the retailer. The supermarkets have instigated “along side the use of technology [the] creation of lateral relations to keep decision making close to the information source.”36 Third, supermarkets have sought to coordinate information flow by policing their suppliers through imposing rules and regulations to promote customer interests. The late 1980s and early 1990s saw supermarkets devote considerable resources to specifying all inputs and throughput requirements (see table 12-2), including detailed analysis of supplier performance with the onus on the supplier to deliver to specification, on time, every time. However, by the mid-1990s, the supermarkets were aware that it was costly to maintain a large number of close relationships with suppliers at the level of intensity outlined in table 12-2. From the early 1980s to the beginning of the 1990s, there had been a rationalization of suppliers based on the lowest total cost of order. Suppliers were evaluated on quality, delivery, flexibility, service, and price and allocated points for each. These points convert to a cost value for every order processed. By the mid-1990s, clear 34. Fernie and Sparks (1998). 35. Bowlby and Foord (1995). 36. Galbraith (1974).
Table 12-2. Supermarket Specifications for Fresh Produce Suppliers
Input requirements: Supermarket specification of —animal feed —type of seed —farming methods
Work tasks: Supermarket ethical audit of —materials sourcing —labor policies —hygiene of facilities
Work flow: Supermarket implementation of —EDI —quick response partnershipping —just-in-time production and delivery
Sanctions: May impose financial sanctions on or withdraw orders from suppliers falling short of any specification
pictures emerged as to which suppliers could deliver best value, and the retailer sought to single-source many items from the best suppliers. Preferred suppliers are allocated the responsibility for providing the capacity required. Structuring allocation in this way enables the supermarkets to sustain their control and influence over their suppliers. Single sourcing in grocery retailing does not make the supermarket vulnerable for three key reasons: —the supermarket has the technical knowledge and expertise to create new suppliers should the relationship break down; —the global market can always be called upon if there is a crisis in supply; —consumer loyalty is not undermined if the store fails to provide one item among those required. The customer will not go elsewhere for a shopping expedition simply because there was a stock-out in the mushroom section.
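The lowest-total-cost-of-order evaluation described above can be illustrated with a short sketch. The criteria (quality, delivery, flexibility, service, and price) come from the text; the point scale, penalty weights, base order cost, and supplier names are hypothetical assumptions rather than any retailer's actual figures.

# Illustrative sketch only: suppliers are scored on quality, delivery,
# flexibility, service, and price, and the points are converted into a cost
# value per processed order. All weights and figures are hypothetical.
from dataclasses import dataclass

@dataclass
class SupplierScore:
    name: str
    quality: float      # each criterion scored 0-10 by the retailer's buyers
    delivery: float
    flexibility: float
    service: float
    price: float

# Hypothetical penalty (pounds per order) for each point lost on a criterion.
PENALTY_PER_POINT = {"quality": 4.0, "delivery": 3.0, "flexibility": 1.5,
                     "service": 1.0, "price": 5.0}

def cost_of_order(supplier, base_order_cost=100.0):
    """Convert criterion points into a total cost value for one processed order."""
    penalty = sum(weight * (10.0 - getattr(supplier, criterion))
                  for criterion, weight in PENALTY_PER_POINT.items())
    return base_order_cost + penalty

def preferred_supplier(suppliers):
    """Single-source the item with the supplier showing the lowest total cost of order."""
    return min(suppliers, key=cost_of_order)

if __name__ == "__main__":
    candidates = [
        SupplierScore("Grower A", quality=9, delivery=8, flexibility=6, service=7, price=8),
        SupplierScore("Grower B", quality=7, delivery=9, flexibility=8, service=8, price=9),
    ]
    best = preferred_supplier(candidates)
    print(f"Preferred supplier: {best.name}, cost value {cost_of_order(best):.2f} per order")

On these made-up numbers the second grower wins despite lower quality, which is how a points-to-cost scheme trades the criteria off against one another.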
The Impacts of the Internet: Diffused and Distributed Information
The Internet offers the possibility for information to be deliberately routed on a self-organizing basis, setting off further developments.37 This suggests ways of avoiding the resource costs of hierarchically managed information.
37. The reorganization of information of increasing complexity can be achieved without requiring a central hierarchy, on a distributed network basis. Holland (1995); Kauffman (1993).
Can the Internet, with its capacity to allow information to route itself through diffused intelligence and distributed networks, help supermarkets to solve some critical problems of information management? Among other benefits, the Internet offers the possibility of spontaneous synchronization of the constituent processes of the major production process, so that there is entrainment within the network. That is, the various rhythms of the subproduction processes may synchronize themselves, stimulating the self-organization of a quasi-organic system in which information processing can bypass hierarchical control. This resolves the dilemma presented by Galbraith, whereby there is either reduced synchronization as a result of reduced information processing or an increased need to process information.38 To look at ways in which practical versions of these possibilities (if not conceptualized as such) are being considered by supermarkets and suppliers, we need to understand the grocery retailing system as it currently operates. How was a new level of synchronization to be achieved by the grocery sector in the United Kingdom? Table 12-3 summarizes the evolution of information synchronization in grocery retailing from 1970 to 2000. A pioneering efficient consumer response (ECR) project was developed in the United States between Wal-Mart and Procter & Gamble. It was described as a distress call by the grocery industry to replace inefficient and misdirected practices, particularly the failure of retailer-supplier relationships and the mismanagement of data. ECR took root in Europe in the mid-1990s with the establishment of a European executive board; it was defined as “a global movement in the grocery industry focusing on the total supply chain—suppliers, manufacturers, wholesalers and retailer working closer together to fulfil the changing demands of the grocery consumer better, faster, and at less cost.”39 In essence, ECR was an attempt to deal with asymmetrical information that limited suppliers’ knowledge of outcomes in the stores. It did this by adopting category management and providing a standardized management framework within which retailers and suppliers could more equally and effectively coordinate timely information and activities. Evidence from the early 1990s showed that partnerships were not working because of the adversarial nature of existing relationships (see
38. Galbraith (1974). 39. Fiddis (1997).
table 12-2).40 In particular, problems with suppliers arose concerning product promotions, the distancing of suppliers from data required to understand their customer base, and the supermarkets’ unwillingness to share EPOS data. New methods emerged to overcome these problems— one was category management.41 Category management can be defined as a retailer-supplier process for managing categories as strategic business units through enhanced customer value. A category is represented by its proponents as a manageable group of products and services that customers perceive to be interrelated and suitable in meeting a consumer need.42 Category management has been described as bringing about a transformation of the interface between retailer and supplier and the coordination of supply and demand information flow. The basis for forming a category is to maximize market share for bundles of goods and services based on consumers’ lifestyles and associated product requirements, as shown by Kinsey in this volume (chapter 11). For example, data mining has led retailers to understand that for consumers, ice cream can come into the same category as cookies, fruit, and yogurt— a dessert choice. In contrast, retail logic would divide these products across departments: frozen foods, cakes and pastry, fresh produce, and dairy. Management functions that disregard consumer lifestyle create tensions between consumer logic and retail logic, leaving consumer demands unfulfilled and stock unsold.43 The Internet is a technology that enables business partners in a network to operate across organizational boundaries in real time. This is evidently a means of facilitating category management. Category management is achieved through placing responsibility for a given category in the hands of a single supplier. This changes the fundamental role of selling. Instead of seeking to gain market share at the 40. O’Sullivan (1992); Hogarth-Scott and Parkinson (1993). 41. “The failure rate of new product introduction is increasing. In 1995, 16,000 new items were introduced in the grocery industry in the U.K., an eightfold increase in 20 years. The life expectancy of the products has declined from 5 years to 9 months in this time and 80 percent of the 16,000 items lasted less than a year” (Mitchell, 1997, p. 109). 42. ECR Europe Category Management Best Practices Report (1997). 43. Evidence from a category management project between a global manufacturer and a retailer showed that to establish categories and new forms of work organization took six months, equal to 9,000 hours of work (McGrath [1997]). The qualitative evidence based on a semistructured interview with a global manufacturer supported this evidence. The start of the category management process had been time-consuming, but the learning curve had been rapid and the process was now viewed as efficient. Organizing around categories had involved a shift in organizational culture.
Table 12-3. Evolution of Information Synchronization in Food Retailing in the United Kingdom, 1970–2000
1970s: —High street supermarkets —Era of price wars —Direct delivery from manufacturer to store —Weekly delivery —Lead time 7 days —Inventory controlled by store manager, stock control erratic —5 weeks stock held in store
Early 1980s: —Larger stores —Transfer of stock and stock control to regional distribution centers (RDCs) —Delivery by manufacturer to RDC —Weekly orders —Several deliveries each week —Introduction of computerized store replenishment systems —Stock levels reduced
Late 1980s (supermarkets implement enabling technologies in stores): —Edge-of-town superstores —Quick response partnershipping —Control of replenishment transferred from RDC to head office —Central visibility of RDC stocks —Bar codes introduced —Ordering based on EPOS data introduced —EDI introduced, reducing lead times —Daily deliveries —Stock levels fell to 1–3 weeks
Early 1990s (supermarkets require suppliers to harmonize enabling technologies and business processes): —Composite, multitemperature storage and distribution —JIT for perishable foods enabled by EPOS —EDI throughout the supply chain —Data mining/warehousing with analysis sold back to suppliers
Late 1990s–2000: —JIT for all foods, including packaged groceries —RDC stock reduced to minimum —Pick by line —Cross-docking —Composite networks —Daily deliveries —Very short lead times (< 24 hours) —Efficient consumer response (ECR) —Category management (CM) —Internet-enabled EDI (I-EDI)
Sources: Frances and Garnsey (1996); Whiteoak (1998).
expense of competitors’ brands, the category controller seeks to increase the size and performance of the category as a whole, so that all brands (including the controller’s) will benefit. Information on customer behavior is critical to the operation of category management. Through the Internet, this information can be shared among channel members. The focus is on the total system in order to reduce costs and inventories for all parties in the network. For the system to be efficient, it needs a transparent pipeline across organizational boundaries that operates in real time, with real-time data, enabled by real-time technologies such as the Internet.
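As an illustration of the category logic described above (ice cream, cookies, fruit, and yogurt regrouped as a "dessert" category rather than split across retail departments), the following minimal sketch aggregates EPOS scan lines by a consumer-defined category. The product names, prices, quantities, and category mapping are invented for the example and are not real data.

# Minimal sketch: regroup EPOS scan lines by consumer-logic category so that a
# single category controller can track the performance of the whole bundle.
# All products, prices, and quantities below are hypothetical.
from collections import defaultdict

# Hypothetical mapping from product to a consumer-logic category.
CATEGORY = {
    "vanilla ice cream": "dessert",
    "chocolate cookies": "dessert",
    "strawberries": "dessert",
    "fruit yogurt": "dessert",
    "washing powder": "household",
}

# Hypothetical EPOS scan lines: (product, units sold, unit price in pounds).
scans = [
    ("vanilla ice cream", 120, 1.89),
    ("chocolate cookies", 300, 0.99),
    ("strawberries", 210, 1.49),
    ("fruit yogurt", 450, 0.45),
    ("washing powder", 80, 3.99),
]

def category_sales(scan_lines):
    """Aggregate EPOS revenue by consumer category rather than by department."""
    totals = defaultdict(float)
    for product, units, price in scan_lines:
        totals[CATEGORY.get(product, "uncategorized")] += units * price
    return dict(totals)

if __name__ == "__main__":
    for category, revenue in sorted(category_sales(scans).items()):
        print(f"{category}: {revenue:,.2f}")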
Supermarkets as Facilitators of Information Flow
The Internet is used to post real-time data for suppliers to draw on, to enhance joint forecasting, and to monitor stock levels. Tesco developed Information Link and ASDA uses Retail Link, developed by Wal-Mart and based on extensive computing power, which allows suppliers to view competitor data and ascertain where in the system goods are currently located. The view is taken that the sharing of data does not constitute a danger to suppliers so long as information on profit margins and future promotions does not reach competitors. The data themselves are not useful except to those with the necessary competence. Quick response partnershipping in the late 1980s and early 1990s was the means to bring about a shakeout of suppliers and control information flow through rationalization. Category management is the means to rationalize and control information flow as regards product lines. The proliferation of product range to 30,000 items in a single store is confusing to the customer and resource-intensive for the supermarket; the trend is toward a reduction in excess product variation. Information technology makes it easier to identify slow-selling items within a category, and by trimming the range and the shelf space devoted to them, a supermarket can use that space for faster-selling goods. Moreover, the supermarkets do not want consumers to limit themselves to habitual purchases, which could limit customer spending. The lessons learned from EDI on the automation of the entire supply chain can now be amplified through I-EDI. U.K. food retailers achieved their success by entrainment within the supply chain network:44
44. Entrainment occurs as the parts of the self-organizing system move into dynamic synchrony.
In the case of the large multiple retailers in the U.K. their aim is very much to run continuous replenishment programs, a process in which they retain control of the replenishment (in order to have to deal with a single unified process) and move towards daily call off on very short lead times. . . . Cross docking is a technique in which goods arriving at a regional distribution center are unloaded from the inbound vehicle and moved from the goods receiving area “across the dock” for marshalling with other goods for onward dispatch without being put away into stock. This technique has long been a necessity for very short-life, perishable products. . . . [L]anes are set out containing roll-cages to be delivered to each store served by the regional distribution center. As the goods arrive they are broken down and the appropriate quantity of each product line is loaded into the roll-cages for each store. Full pallets of single products are no longer necessary.45 The United Kingdom is still ahead of the United States in these respects, and single-item tracing is well advanced and facilitated by new technologies. The competence of U.K. giant food retailers to configure small, discrete, product-mixed orders on a large scale to serve online grocery shoppers is in place.46
Shopping Online
The development of business-to-consumer (B2C) e-retailing in the U.K. food sector is expected to follow a different trajectory from that in the United States because of geography, demographics, regulatory systems, and familiarity with arm’s-length shopping modes such as mail order.47 The U.K. government aims to “switch off” analogue television between 2006 and 2010, by which time 95 percent of households will have transferred to digital TV. “The television will bring the Internet to the mass market and digital television is the key. Services offered through the television have a greater potential for attracting the types of advertising revenues needed to make the new Web-based services commercially viable.”48
45. Whiteoak (1998). 46. We may even see redundant superstores converted to online distribution centers. 47. Retail E-Commerce Task Force of the Retail + Consumer Services Foresight Panel (www.dti.gov.uk/foresight). 48. Iain Stevenson, head of New Media, Ovum, quoted in the Retail E-Commerce Task Force.
It is conceivable that brand leaders with a reduced product range and own-brand labels may come to dominate the B2C offerings with Internet home grocery shopping. Tesco and Sainsbury’s have online grocery shops offering only 2,500 product lines.49 At present, the online groceries have a unique feature; in the words of one manager, “managing the web based offering is like choosing goods for a small shop.”50 The e-retailer’s advantage of reduced product handling and reduced information flow is rewarded by an average online home delivery purchase of £90–£100, double the average spend from a normal store visit.51 Moreover, the assumption that online purchases would be for bulk goods and packaged low-margin groceries has not been substantiated; high-margin fresh foods have also been accepted. It is anticipated that a critical mass of online home delivery grocery consumers will emerge in the next five years. In fact, some believe that the use of Internet TV could even lead to superstore closures in the next ten years.
49. www.tesco.com; www.sainsbury.com. 50. Telephone interview with the category controller of a multinational food manufacturer, June 2000. 51. The 1985 company report for Sainsbury’s showed that the larger stores achieved on average a spend per visit three times higher than that of their smaller stores, a financial result that helped guarantee the growth of the superstore. The financial returns from online shopping (which shows similar behavior across all sections of the community) may likewise be one of the key determinants of the current slowdown in superstore expansion, as distinct from the tightening of planning regulations.
The Mushroom Industry, an Exemplar
What takes place at the leanest end of grocery retailing, in perishable produce, is a good indicator of what is to follow in the industry as a whole. Our evidence to date is based on —relational contracting; —quick response partnershipping; —cross docking; —shorter replenishment lead times; —the absence of brand identity; —the identity of product being solely associated with the supermarket. All arose within the fresh produce sector and subsequently were applied to chilled and ambient-temperature goods. It is for this reason that studying the impacts of the Internet on the edible mushroom industry is revealing.
Category management in the fresh produce market has compressed lead times. For example, in 1995 orders to mushroom growers were confirmed at 14:00 hours for next-day delivery. In 2000 orders were confirmed at 22:00 hours for next-day delivery. Forecasting using efficient consumer response and category management is more accurate, and sales-based ordering will soon be on a real-time basis using EDI intranets. Any supplier of mushrooms to U.K. supermarkets needs to operate within these time scales and therefore needs a U.K. base. In the past, mushroom growers at times had difficulty fulfilling orders to match the supermarket forecast and used networks of suppliers to supplement orders, taking responsibility for supermarket quality. The role of the Internet in these networks is now becoming central. The rollout of EDI to second- and third-tier suppliers had been inhibited by cost. In 2000, 200 small independent growers based in very rural areas of Ireland could turn on their networked PCs and look over requests for supply from one of the four major mushroom suppliers to the U.K. supermarkets. This is an example of the way individuals and smaller firms may in the future be able to develop a wider choice of trading partners. Whether they are able to negotiate trading terms more advantageous to them than those that prevailed before the use of the Internet is not yet known. Nor is it known whether there is a propensity for groups of sellers to unite and form online trading communities. The mushroom industry can be interpreted as an example of open access with easy entry and exit to trading, as envisaged by those who see the Internet as a means of democratizing the marketplace.
Summary
In the United Kingdom, centralization has led to a higher level of consolidation in food retailing than in the United States. Since the 1980s, U.K. food producers and manufacturers have accepted retailer-imposed proprietary EDI systems in their businesses and have taken on the financial responsibility for implementation and maintenance of the software. These were concessions they had to make in order to take part in a system of food retailing that conferred certain advantages on preferred suppliers. Indirectly, the use of EDI incurred costs for suppliers through the need to run multiple information systems. The migration to the Internet as the means of coordination
of the supply network offers the possibility of further reduction of waste. If suppliers can implement such savings, this may keep down their costs, though they may have to cede margins to the supermarkets that coordinate the system as a whole. However, new possibilities for niche activity are opened up by the Internet together with new channels of access to customers in the sector. Thus though business-to-business interactions over the Internet have so far represented continuity through path dependence, openings for business-to-consumer relations over the Internet offer future possibilities that have not yet been fully explored. Our concluding exemplar, drawn from a pilot study of the mushroom industry, reveals further possibilities for business-to-business relations through the Internet. We have shown that while the Internet greatly extends the possibilities for information management, new developments can threaten to overwhelm companies with costly information overload. It was formerly considered axiomatic that in order to deal with the challenge of information processing, companies must either improve their processing power or reduce the amount of information they have to manage, most commonly through various methods of standardization. However, a further strategy is to allow information to route itself through a distributed rather than a hierarchical network. This is made possible through the simultaneous broadcast of information to all points in the supply chain through the Internet, allowing more transparency and inter-unit interaction. The key function of the Internet has been to make available to all parties in the system accurate, real-time data, the weakest link in the proprietary EDI-enabled supplier networks in this sector. Moreover, the history of food retailing in the United Kingdom exemplifies the way in which open systems are subject to shifts with knock-on effects. As the relative strength of the major players shifts within the food retailing system, the dynamics of competition may alter in the system as a whole in ways that will be affected by new business-to-consumer channels. Such developments open up new possibilities for structure and performance, spurred on by supermarkets, which have been proactive in synchronizing the system as a whole. In the introduction, we outlined our aim to identify the impact of information and communication technologies, and in particular the Internet, on the food retailing sector in the United Kingdom. However, we have found that rather than a linear transmission of impact from technology to
business structure, there was an interaction between the two sets of factors, so that the system as a whole can be viewed as enacted by participating agents and open to change at multiple points.
References Angeles, Rebecca. 2000. “Revisiting the Role of Internet-EDI in the Electronic Commerce Scene.” Logistics Information Management 13 (1): 45–57. Bowlby, Sophie R., and Joanna Foord. 1995. “Relational Contracting between U.K. Retailers and Manufacturers.” International Review of Retail, Distribution, and Consumer Research 5 (3): 337–59. Corporate Intelligence on Retailing. 1998. Grocery Retailing in the U.K. London. Davies, Gary J., and J. M. Brooks. 1989. Positioning Strategy in Retailing. London: Chapman. ECR Europe Category Management Best Practices Report. 1997. “Europe’s Largest Composite Store.” 1991. Industrial Handling and Storage U.K. 13 (4): 15–18. Fernie, John, ed. 1990. Retail Distribution Management: A Strategic Guide to Developments and Trends. London: Kogan Page. Fernie, John, and Leigh Sparks, eds. 1998. Logistics and Retail Management. London: Kogan Page. Fiddis, Christine. 1997. Manufacturer-Retailer Relationships in the Food and Drink Industry: Strategies and Tactics in the Battle for Power. London: FT Retail and Consumer Publishing, Pearson Professional. Frances, Jennifer, and Elizabeth Garnsey. 1996. “Supermarkets and Suppliers in the U.K.: System Integration, Information, and Control.” Accounting, Organization, and Society 21 (6): 591–610. Galbraith, Jay R. 1974. “Organization Design: An Information Processing View.” Interfaces 4 (3): 28–36. Garnsey, Elizabeth. 1993. “Exploring a Critical Systems Perspective.” Innovation in Social Science Research 6 (2): 229–56 Helper, Susan R., and Mari Sako. 1994. “Supplier Relations in the Auto Industry: A Limited Japanese-US Convergence?” Sloan Management Review 36 (3). Hogarth-Scott, Sandra, and S. T. Parkinson. 1993. “Retailer Supplier Relationships in the Food Channel.” International Journal of Retail and Distribution Management 21 (8): 11–18. Holland, J. H. 1995. Hidden Order. Addison Wesley. Kauffman, Stuart A. 1993. The Origins of Order: Self Organization and Selection in Evolution. Oxford University Press. Leahy, Terry. 1993. “The Retailer as Supply Chain Innovator.” Grocery Market Bulletin (November): 1–4. Loasby, Brian J. 1991. Equilibrium and Economics. Manchester University Press. McCracken, Guy. 1995. Putting Customers First: Proceedings of the 49th Oxford Farming Conference. Oxford.
McGrath, M. 1997. A Guide to Category Management. Watford: Institute for Grocery Distribution. Mitchell, Alan. 1997. Efficient Consumer Response: A New Paradigm for the European FMCG Sector. London: FT Retail and Consumer Publishing, Pearson Professional. O’Sullivan, D. 1992. “Long-Term Partnership or Just Living Together?” Logistics Today (March–April): 24–26. Raven, Hugh, Tom Lang, and Caroline Dumonteil. 1995. Off Our Trolleys? London: Institute for Public Policy Research. Thompson, Keith. 1992. “The Serpent in the Supermarket’s Paradise.” European Management Journal 10 (March). Unitt, Mark, and Ian C. Jones. 1999. “EDI—the Grand-Daddy of Electronic Commerce.” BT Technology Journal 17 (3): 17–23. Whiteoak, Phil. 1993. “The Realities of Quick Response in the Grocery Sector.” International Journal of Retail and Distribution Management 21 (8): 3–10. ———. 1998. “Rethinking Efficient Replenishment in the Grocery Sector.” In Logistics and Retail Management, edited by John Fernie and Leigh Sparks. London: Kogan Page. Womack, James P., and Daniel T. Jones. 1996. Lean Thinking. London: Simon and Schuster. Womack, James P., Daniel T. Jones, and D. Roos. 1990. The Machine That Changed the World. New York: Rawson Associates. Wrigley, Neil. 1993. “Retail Concentration and the Internationalization of British Grocery Retailing.” In Retail Change, edited by Rosemary D. F. Bromley and Colin J. Thomas, 41–66. London: UCL Press.
13
E-Commerce in the Textile and Apparel Industries
This chapter provides an overview of e-commerce activities in the textile and apparel industries. We begin with a brief look at the current competitive landscape in the “bricks and mortar” apparel industry, highlighting the changes that have occurred over the past decade as retailers have adopted “lean retailing” business models in response to increased product proliferation and shorter product life cycles. With the advent of the Internet, apparel sales have started to move online. To understand how the growth pattern of online apparel sales might differ from that of other products, we outline some of the critical ways in which the apparel purchase decision differs from purchase decisions for other consumer products such as books and compact disks, which have experienced rapid growth in online sales. In view of these differences, we characterize some of the new technologies and business practices that are being developed to facilitate online apparel purchasing. The chapter then focuses on business-to-consumer (B2C) business models that have emerged to sell apparel online. We explore a range of B2C business models, from new “pure-play” Internet business models to online strategies developed by incumbent bricks and mortar retailers, catalog companies, and apparel manufacturers, highlighting some of the challenges relating to channel conflict and supply chain management that incumbent firms face as they enter the world of apparel e-commerce.
We then turn to an analysis of business-to-business (B2B) models that are beginning to surface, concentrating on the potential benefits these models offer to textile-apparel-retail supply chain operations. We also discuss some of the different models that are emerging and how they are related to disparities in channel power. The Internet has already affected the apparel and textile industries. Driven to provide consumer convenience, the majority of apparel manufacturers and retailers have created a virtual version of some aspects of their business. A few apparel manufacturers and retailers have used the Internet to go beyond their existing offerings, providing the consumer with a value-added Internet experience using, for example, customized online apparel catalogs or offering custom-fit clothing. The potential impact of the Internet on the consumer—and on the industry—lies in how retailers and manufacturers leverage the Internet to meet both expressed and latent consumer needs. The technology now exists to redesign the textile-apparel supply chain to provide consumers with what they want, when and where they want it. The barriers to implementation lie less in the technology than in the willingness of the members of the supply chain to redefine their policies and practices to take full advantage of the technology.
Industry Background
The apparel industry can be segmented in several ways that are useful for trying to make sense of its different business models.1 Cost is one basis for segmentation. A large segment of the apparel industry competes on cost. To achieve rock-bottom costs, manufacturers typically pursue production in low labor cost countries and endure the long lead times that usually result from the combination of distant suppliers, low-cost transportation, and attempts to gain operational efficiencies by manufacturing and shipping in large lot sizes. Lower-cost clothing is typically sold through mass merchants (such as Kmart and Wal-Mart) or lower-end specialty stores (such as the Limited). Other apparel firms choose to incur higher costs in
1. This section draws heavily from research in F. Abernathy, J. Dunlop, J. Hammond, and D. Weil, A Stitch in Time: Lean Retailing and the Future of Manufacturing: Lessons from the Textile and Apparel Industries (Oxford University Press, 1999).
Figure 13-1. The Fashion Triangle
[Triangle dividing apparel into fashion products (28 percent), fashion-basic products (27 percent), and basic products (45 percent).]
Source: Abernathy and others, A Stitch in Time.
order to obtain better quality (look, feel, fit, and durability) or more “fashionable” goods. These firms are more likely to sell through department stores or higher-end apparel specialty stores. The degree to which garments follow the latest trends and fashions (that is, how “fashion-forward” the garments are) is the basis for a second type of industry segmentation. Garments can be roughly classified as basic, fashion-basic, or fashion goods depending on the length of the product life cycle and the degree of demand unpredictability for the garment, with nearly half of garments sold falling into the “basic” product category (see figure 13-1). In recent years, fashion attributes have infused nearly all garment types: product life cycles are shortening and product proliferation is accelerating even for the most basic garments. These trends have created increasing demand uncertainty that has changed radically the basis of competition in the textile-apparel-retail channel. Increasing demand uncertainty has led to
Figure 13-2. Forces Driving Lean Retailing
[Product proliferation and short product life cycles create forecasting uncertainty and inventory risk, driving lean retailing (weekly replenishment, small order quantities) and the need for new supply channel management models, underpinned by technological advances: bar codes and scanners (product-level and shipping-container identification), EDI and the World Wide Web, and automated distribution centers.]
the advent of “lean retailing”: retailers that once purchased large quantities of each item far in advance of the selling season now avoid the risk of carrying inventory of increasingly unpredictable items by ordering smaller quantities of each product in advance and ordering replenishment quantities on a weekly basis of those products that have sold in the previous week. The forces driving lean retailing are summarized in figure 13-2. Lean retailing has caused changes in both information and product flow, resulting in the changes in manufacturing and logistics practices indicated in figures 13-3 and 13-4 below. Figure 13-3 shows the structure and dynamics of a more traditional channel, designed primarily to minimize production and distribution costs. Figure 13-4 depicts a channel associated with lean retailers, designed to lower the risk of delivering such a wide variety of apparel products to retail. Lean retailing practices have paved the way for e-commerce by requiring and exploiting the use of various critical technologies, streamlining the supply chain, promoting information exchange in the supply chain, and requiring smaller quantities of products to be manufactured and shipped in response to actual, rather than predicted, consumer preferences. Lean retailing has been facilitated greatly by the introduction and maturation of several key technologies:
Figure 13-3. Channel Structure: Traditional Retailer-Supplier Dynamics
[Apparel plants 1 through n feed the manufacturer’s warehouse; large bulk shipments move to the retailer’s warehouse in response to low-frequency retail orders; the retailer’s warehouse supplies retail stores 1 through m.]
—bar coding and scanning devices for product identification, used to provide real-time, accurate information on which products are selling at the point of sale; —electronic data interchange (EDI), used by the retailer to place quick, accurate replenishment orders and transmit sales information; —more sophisticated, often automated distribution centers that allow manufacturers to pick and pack small replenishment quantities based on EDI orders. These technologies and the business practices associated with them form the underpinnings for many of the critical technologies and practices required for effective implementation of e-commerce strategies.
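A minimal sketch of the weekly, sales-based replenishment logic that these technologies support may help make the mechanics concrete: point-of-sale scans drive a small order-up-to calculation that becomes the week's EDI order. The SKUs, stock figures, and target levels below are hypothetical, and real systems layer forecasting, pack-size, and lead-time constraints on top of this.

# Hedged illustration of weekly, sales-based replenishment: the retailer orders
# only what is needed to restore each SKU to a small target stock, based on
# what the point-of-sale scanners recorded. Figures are invented for the example.

def replenishment_order(on_hand, weekly_sales, target_stock):
    """Return an order-up-to quantity per SKU for this week's EDI order."""
    order = {}
    for sku, target in target_stock.items():
        remaining = on_hand.get(sku, 0) - weekly_sales.get(sku, 0)
        order[sku] = max(target - remaining, 0)
    return order

if __name__ == "__main__":
    on_hand = {"shirt-M-white": 40, "shirt-L-white": 35}      # units at start of week
    weekly_sales = {"shirt-M-white": 28, "shirt-L-white": 9}  # scanned at point of sale
    target_stock = {"shirt-M-white": 45, "shirt-L-white": 30}
    print(replenishment_order(on_hand, weekly_sales, target_stock))
    # -> {'shirt-M-white': 33, 'shirt-L-white': 4}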
Figure 13-4. Channel Structure: Lean Retailer-Supplier Dynamics
[Apparel plants 1 through 3 feed the manufacturer’s distribution center; small replenishment shipments move to the retailer’s distribution center in response to weekly orders; the retailer’s distribution center replenishes retail stores 1 through 3 on weekly orders.]
Distinctive Aspects of the Textile and Apparel Industries: Factors Affecting E-Commerce Adoption
Many distinctive aspects of the textile and apparel industries present challenges to implementing electronic commerce. First, and perhaps most important, is the difficulty of accurately characterizing the product online. Many of the characteristics of a garment that are pivotal to the consumer decisionmaking process—color, feel, and fit—are difficult, if not impossible, to communicate “virtually.” Moreover, unlike books, music, and consumer electronics, the difficulty in describing the product cannot be offset easily with customer reviews, reviews by industry experts, or comparisons based on independent performance evaluations. (Although for online purchases, as for catalog purchases, brand names help consumers infer certain aspects of quality or fit, especially for consumers making repeat purchases.) These obstacles likely will act more as a deterrent in the B2C segment of electronic commerce than in the B2B segment, since industry standards for characterizing color and fabric are more familiar forms of communication for business partners than for individual consumers. Compounding the difficulty in characterizing the product is the personal, often emotional nature of an apparel purchase. Apparel purchasing decisions are closely linked to individuals’ feelings about themselves: their body image and the image they wish to project. Clothing is the “skin” one chooses to wear to project one’s self-image to the public and hence is intimately tied to one’s sense of self. Thus the decision can be laden with emotional factors that are less important in decisions to buy books, music, groceries, and electronics. Ample evidence suggests that current B2C sites are unable to characterize their products adequately to allow consumers to make effective choices. Most compelling is the high return rate for apparel products purchased online, which mirrors the rate for catalog apparel purchases: by one estimate, returns for apparel bought from catalogs ranged from 12 to 35 percent, depending on the product’s style and how fashion-forward it was. Specifically, for casual apparel, such as from Eddie Bauer or Lands’ End, returns have been reported in the 12–18 percent range; for more fitted fashions, 20–28 percent; and for high fashion, as high as 35 percent.2
2. “Getting Less in Return,” Catalog Age, March 15, 1999, pp. 1, 18.
Another approach examined consumer propensity to buy certain product categories on the Internet. An analysis by Harris Interactive ecommercePulse computed the ratio of dollars consumers spent offline as a result of online shopping to dollars spent online. The greater the ratio, the more likely that online shoppers use Internet shopping sites to gather information about products rather than to make direct purchases. It is not surprising that apparel led the list: for every dollar spent on apparel online, consumers who visited online apparel sites spent $2.92 purchasing apparel from catalogs or bricks and mortar stores. Compare the results for products that are easier to specify: computer software (offline-to-online ratio $0.99); health and beauty products ($0.93); music/video products ($0.83); and books ($0.68).3 The accuracy of color on the web is of particular concern to consumers. A web-based survey conducted by InfoTrends Research Group indicated that 88 percent of consumers would prefer to shop at an Internet site that could guarantee “true and accurate” color.4 Most of the consumers polled in the survey already use the web to purchase nonapparel products that are not dependent on color. However, the respondents indicated that they rarely purchase apparel online, “largely because of their insecurity about getting what they expect.”5 The report indicated that many consumers who purchase apparel online refer to printed catalogs for more accurate depictions of color. The degree to which the difficulty in characterizing apparel products inhibits online consumer purchases differs by garment type. Basic products are selling well online, according to Forrester Research.6 These products have a number of characteristics that make them more amenable to online purchasing. First, they are fairly familiar products, making their descriptions easier to understand. The touch and feel of basic garments are quite familiar and are fairly similar across brands, which makes the buyer less hesitant to purchase them “sight unseen.” Purchases of basic products also create fewer “surprises” when the garment arrives. (One industry observer noted that you need to “sell consumers twice”—first when they buy the item online, and second when they open the box and compare the
3. “Clicks and Bricks,” Wall Street Journal, April 17, 2000, p. R8. 4. “Holiday Shoppers Wary of Color Online; Web-Based Survey Reveals Shoppers Wary about Purchasing Color-Sensitive Items Online,” Business Wire, December 23, 1999. 5. “Holiday Shoppers Wary of Color.” 6. “Apparel’s Online Makeover,” Forrester Research, Inc., May 1999.
product to what they remember seeing on their screen.)7 Similarly, for more basic items, the fit of different garment styles tends to be better understood, making it easier to purchase online. In some cases, the cut of a basic garment may be more forgiving in that it can fit a wide range of body types. Products like men’s dress shirts and women’s hosiery, which have consistent, known sizing, are also amenable to online buying. Basic garments are typically relatively inexpensive, further contributing to a low level of perceived risk in an online purchase. And since basic products are worn for “everyday” events, their purchase usually evokes little emotion. Consumers perceive more fashionable items to be more risky to purchase online: the decision is more significant because of the increased importance of touch and feel, color and cost, and the increased emotional element associated with more fashionable clothing, which is often purchased for special events. However, the Internet is expected to penetrate the fashion segments of the market, in part because it will provide exposure and access to unique or unusual products that are difficult for consumers to find locally. The ability to customize clothing for fit, fabric, or style should also provide an impetus to increase online sales of fashionable garments. Several initiatives are under way to improve the ability of online sites to characterize their products and thereby reduce both the hesitancy of consumers to purchase apparel online and the return rates of those products. Table 13-1 shows some of the capabilities online apparel sites offered in 1999. The key challenges are representing color, fit, and the details of design and style. J. Crew is testing E-Color’s new “Colorific” feature, designed to increase online color accuracy and consistency.8 E-Color offers server-based software called True Internet Color to increase the accuracy of colors depicted online. Recent reports suggest that Bloomingdales.com, Jcrew.com, and others plan to adopt True Internet Color on their websites.9 Detail can be difficult to discern online. HP Open Pix and Live Picture offer zoom technology. According to Forrester, Bloomingdale’s and J. Crew are starting to use these technologies on their sites. Most online apparel sites plan to introduce zoom technology, as shown in table 13-1. A range of options is under development to help consumers identify the right size for apparel products they are considering. Some sites offer 7. “Apparel’s Online Makeover.” 8. “Apparel’s Online Makeover.” 9. “Holiday Shoppers Wary of Color.”
Table 13-1. Site Features Offered by Online Apparel Retailers, 1999
Percent
Feature (companies offering / companies planning to offer / total)
Sizing information: 80 / 15 / 95
Fashion advice: 43 / 13 / 56
Lifestyle or entertainment content: 38 / 10 / 48
Outfit cross-sales: 25 / 18 / 43
Zoom technology: 23 / 55 / 78
Virtual model: 13 / 15 / 28
View items together: 8 / 10 / 18
Recommendations based on prior purchase: 5 / 28 / 33
Custom fit service: 5 / 5 / 10
Webcasting: 3 / 13 / 16
Source: “Apparel’s Online Makeover,” Forrester Research, Inc., May 1999.
“fit calculators” to help consumers translate their measurements into sizes. Others (for example, Public Technologies Multimedia) offer more sophisticated software to map consumers’ measurements to appropriate brands, styles, and sizes. Still others are using two- or three-dimensional models to help consumers predict product fit. A firm called TheRightSize recently announced technology called “The Rosetta Stone of Fit” to reduce the rate of size-related returns in the apparel industry. The company plans to offer the technology for use in Internet, catalog, and in-store shopping.10 BodyMetrics Ltd provides software that recommends the best fitting size for the garment selected and on-screen visualization of clothing on a customized mannequin. Body scanners for taking measurements have been developed, but Forrester Research suggests consumers may prefer to purchase products shown on an attractive model rather than seeing it draped over the consumer’s true (but imperfect) body dimensions. Additional problems with body scanning may also inhibit adoption and use, such as the tendency for people to strike unnatural poses in the scanning machine, producing measurements that will not lead to good fit. 10. “TheRightSize Will Shrink the High Cost of Size-Related Merchandise Returns; New Businessto-Business to Focus on Fit,” Business Wire, April 11, 2000.
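A "fit calculator" of the kind mentioned above can be sketched as a simple mapping from a customer's measurements onto a brand's size chart. The chart, measurement fields, and recommendation rule below are hypothetical illustrations; the commercial tools named in the text are considerably more sophisticated.

# Hypothetical fit calculator: map a customer's measurements onto a brand's
# size chart. The chart and rule are assumptions for illustration only.

# Hypothetical men's shirt size chart: size -> (max chest, max neck) in inches.
SIZE_CHART = {
    "S": (36.0, 14.5),
    "M": (40.0, 15.5),
    "L": (44.0, 16.5),
    "XL": (48.0, 17.5),
}

def recommend_size(chest, neck):
    """Pick the smallest size whose chart measurements cover the customer's own."""
    for size, (max_chest, max_neck) in SIZE_CHART.items():
        if chest <= max_chest and neck <= max_neck:
            return size
    return "custom fit recommended"

if __name__ == "__main__":
    print(recommend_size(chest=41.5, neck=15.75))  # -> "L"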
Table 13-2. Apparel-Accessories Sales by Channel
Channel (1998 sales, billions of dollars / 1998 percent / 1999 sales, billions of dollars / 1999 percent)
Discounters: 34.7 / 20.1 / 36.9 / 20.5
Specialty stores: 38.4 / 22.2 / 40.4 / 22.4
Department stores: 32.9 / 19.0 / 34.4 / 19.1
Major chains: 29.4 / 17.0 / 29.2 / 16.2
Off-price retailers: 11.2 / 6.5 / 11.4 / 6.3
Factory outlets: 6.6 / 3.8 / 6.7 / 3.7
Catalogs: 17.0 / 9.8 / 17.2 / 9.6
Online: 0.4 / 0.2 / 1.1 / 0.6
Unreported: 2.4 / 1.4 / 2.5 / 1.4
Total: 173.0 / 100 / 179.8 / 100
Source: NPD Reports, Women’s Wear Daily, February 22, 2000, p. 26.
Apparel Distribution Channels: Industry Trends and Current Status of Online Sales
During the past decade, apparel sales through discount and specialty stores have grown at the expense of department store sales: in 1998 apparel sales through department stores represented 19 percent of all apparel sales compared with 22 percent in 1990.11 Table 13-2 shows the volume of apparel distributed through major types of distribution channels in 1998 and 1999. Although online sales are the lowest volume channel, they totaled $1.1 billion in 1999, up by nearly a factor of three from the previous year. Estimates for future online sales of apparel are optimistic. One forecast puts online sales at over 7 percent of apparel sales in 2003.12 A more aggressive projection estimates that online apparel sales will account for 9 percent of apparel sales in 2000, and 18 percent of sales in 2010.13 Which rate of growth prevails will depend heavily on the implementation of some of the technologies discussed above.
11. NPD America Shoppers Panel (consulting report). 12. “Apparel’s Online Makeover.” 13. KSA Soft Goods Survey 2000 (consulting report).
Emerging B2C Business Models in the Apparel Industry
Apparel websites have been launched by established apparel retailers and manufacturers as well as by new entrants. Forrester divides the B2C firms into five categories: —catalog companies (retailers that derive the majority of their revenues from catalog sales); —bricks and mortar retailers (retailers that derive the majority of their revenues from physical stores); —pure manufacturers (apparel manufacturers that sell products only through stores owned by others); —hybrid manufacturers (manufacturers that sell both in their own stores and in stores owned by others); —pure-play firms (retailers that sell apparel only online).
Entry of Incumbents
Catalog companies have experienced the easiest transition to B2C apparel retailing since their “back-end” systems for inventory management and order fulfillment are well suited to selling and delivering products to individual consumers. The most successful of these have realized, however, that they must exploit the Internet’s capabilities to add value beyond simply putting their catalog online. For example, Lands’ End provides customized fashion models the customer can use to “try on” products virtually. Bricks and mortar retailers have varying levels of functionality on their sites. Most offer store locators, product displays, and product news; some offer online ordering. Apparel manufacturers have been slow to actually sell products online; most have sites displaying merchandise or referring consumers to retail stores or online sites that sell their products. Many have been hesitant to sell online due to fears about channel conflict, concerns about setting prices that differ from in-store prices, and lack of infrastructure to support direct sales. The experiences of Levi Strauss offer insight into some of the challenges manufacturers experience with online selling. In November 1998 Levi Strauss launched online sales of Levi and Dockers brands. According to a former Dockers marketing director, the top consumer request at dockers.com was for direct online sales of Dockers products.14 Potential channel conflict was foreseen at the time of launch:
14. “Commerce in Store for Levi’s,” Adweek, November 9, 1998.
Advertising Age noted that Levi was moving “into the potentially treacherous waters of channel conflict with this . . . launch of its first major online selling effort.”15 Within a few months of the launch, Levi declared exclusive rights to sell Dockers and Levis online, prohibiting other online retailers from selling the brands. In June 1999 Levi discontinued online advertising for its website, shifting advertising money into traditional media to drive traffic to the site. Levi claimed that the typical customer order of $56–$120 per customer was not sufficiently high to make its online advertising pay off.16 In November 1999 Levi announced that it would discontinue selling Dockers and Levis from its website, noting: “Right now the cost of running a world-class e-commerce business is unaffordable considering our competing priorities.”17 Industry observers cited channel conflict as a key reason for the decision. Currently, Levi uses its site as a merchandising vehicle with links to key retail partners’ sites, JCPenney.com and Macys.com, for consumers wishing to purchase online.
Order Fulfillment and Distribution
To date, most B2C players have not made significant changes in their distribution strategies to meet the needs of online consumer orders. As Levi Strauss learned during its foray into online selling, manufacturers incur excessive distribution costs in distributing products to individual consumers through channels designed for larger volumes. Catalog firms have an advantage in distribution, as their existing systems are better suited to e-commerce requirements. Third-party providers attracting B2C concerns include JCP Logistics, Keystone Fulfillment (founded in 1998 as a subsidiary of Hanover Direct), SubmitOrder.com, and UPS Logistics. UPS e-Logistics, a subsidiary of United Parcel Service, provides supply chain management services to both pure-play e-commerce businesses and e-commerce divisions of established companies. Its service offerings include “warehousing, inventory management, order fulfillment (pick, pack and ship), inbound and outbound transportation, returns management, customer call centers and management reporting.”18
15. “Levi Strauss Begins First Online Sales Effort,” Advertising Age, November 23, 1998. 16. “Levi’s Goes Offline to Plug Web Stores,” Advertising Age, June 14, 1999. 17. “Morning Briefcase,” Dallas Morning News, November 30, 1999, p. 2F. 18. UPS Logistics website (www.upslogistics.com [December 18, 2000]).
Whereas most third-party providers manage inventory and order fulfillment for their customers, there are now some that actually purchase the manufacturer’s stock for online fulfillment. For example, SureSource buys and stocks a manufacturer’s entire product line, including replacement parts and accessories, and takes delivery from manufacturers in full cases, like traditional wholesale customers. The company sets up open case inventory of the manufacturer’s products in its distribution center and asserts that it will ship to any consumer in North America within twenty-four hours of receipt of orders. Orders can be placed through a toll-free number or through a manufacturer-specific website. No apparel companies are currently using SureSource. The costs and complexities of merchandising, carrying inventory, and handling and distributing goods can be significant. In April 2001 Wal-Mart dropped apparel from its online store, citing low price points and high handling costs. Similarly, bluelight.com, relaunched in April by Kmart Corporation, decided not to include apparel in its online offering. A Kmart spokeswoman noted: “Apparel is really hard to merchandise. You have to have everyone’s size available, it’s hard to pack and ship, and you have to have special storage.” Overstock.com, which specializes in liquidation of excess merchandise, estimates that it costs between $12 and $15 per item for picking, shipping, and restocking returns. As a consequence, Overstock is paring its line and focusing on selling apparel in bulk.19
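The arithmetic behind these decisions is straightforward to sketch. Using the $12–$15 per-item picking, shipping, and returns estimate quoted above, and an assumed gross-margin rate (the 45 percent figure and the price points below are illustrative assumptions, not industry statistics), low-price garments can easily be underwater online:

# Back-of-the-envelope illustration of the economics cited above: per-item
# fulfillment and returns handling of roughly $12-$15 leaves little or no
# margin on low-price apparel. Margin rate and prices are assumptions.

def net_margin_per_item(price, gross_margin_rate, handling_cost):
    """Gross margin on the item minus per-item fulfillment and returns cost."""
    return price * gross_margin_rate - handling_cost

if __name__ == "__main__":
    handling = 13.50                      # midpoint of the $12-$15 estimate
    for price in (9.99, 24.99, 79.99):    # hypothetical price points
        margin = net_margin_per_item(price, gross_margin_rate=0.45, handling_cost=handling)
        print(f"{price:.2f} item -> net {margin:+.2f} per unit sold online")

On these assumptions an item has to retail for roughly $30 before the handling cost is even covered, which is consistent with the retreat from low-price online apparel described in the text.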
Mass Customization
Today’s consumers, offered a nearly endless variety of apparel options, have increasingly high expectations that their individual needs and preferences will be met. Customers are savvy about products and prices—and they have many choices for having their needs fulfilled. To date, the major impediment to consumers’ getting what they want has simply been finding the product they are looking for. The Internet has streamlined the search process, which further heightens customers’ expectations that they will find exactly the product they want. Increasingly, that expectation has come to include customized fit and design. In recent years, companies like Levi Strauss have started to use mass customization techniques to offer custom-fit products at a relatively low premium
19. “Retailers Find Web Apparel Unprofitable,” Wall Street Journal, June 25, 2001, p. B6.
over off-the-rack prices. Mass customization, a combination of “mass” production and “custom-made” production, is rapidly becoming a guiding business principle for certain firms. Although customized, tailor-made apparel products have existed since the advent of the industry, in recent times only an elite few could afford to pay the significant price premium to hire a seamstress or tailor to make customized clothes. In the mid-1990s, Levi began to offer custom-fit blue jeans for women. Women were required to go into a store, be measured by store personnel, and then try on a sample pair of jeans whose measurements were “close to” their own. Orders were then sent to a Levi plant where the individually sized jeans were cut and sewn and shipped to the customer. Similarly, Brooks Brothers customers can be measured in a store for a custom-made shirt. The measurements are sent to a Brooks Brothers factory where the product is made up and shipped to the customer within days. Today, the Internet provides a more convenient vehicle for mass customization. Brooks Brothers’ new custom suit program is called the “e-measure” initiative. The new program will use body-scanning techniques to take customers’ measurements and then transfer the data electronically to a suit manufacturer. The manufacturer (Pietrafeso) will be able to transmit those data directly to its internal systems to cut and sew the product, with heavy reliance on automation. Pietrafeso plans to get its delivery down to two weeks.20 Levi Strauss is also exploiting Internet technology. It recently expanded its custom jeans program. Customers in a Levi’s store are measured using a tape measure and then can go to an in-store interactive kiosk to select preferences such as color and fly styles.21 Both Brooks Brothers and Levi Strauss are also using the Internet as a back-end tool to facilitate collaboration and coordination throughout the supply chain. The Internet is also being used as a front-end tool for mass customization. Firms such as IC3D (Interactive Custom Clothes Company Design) and MyTailor.com provide customers with instructions on how to take their own measurements and give a menu of options for each garment. The customer interface allows consumers to “design” and designate their specific style preferences.
20. “Brooks Brothers 2000: Repositioned for Success,” Apparel Industry Magazine, December 1999, pp. AS16–AS24. 21. “Clothing That Fits—Concept of the Future? Body Scanners Could Make It Profitable Enough to Be Practical,” Kansas City Star, May 30, 1999.
Although the use of the Internet as a front-end tool for B2C apparel sales has great potential, most industry observers agree that the real power of e-commerce in the apparel industry lies in the opportunities for significant improvements in supply channel management through B2B initiatives.
Business-to-Business E-Commerce Models in the Textile-Apparel-Retail Channel
The potential benefits from successfully leveraging web-based B2B models in the textile and apparel industries are significant. With increasing product proliferation and shorter product life cycles, these industries incur significant excess costs in the form of inventory carrying costs, stockout costs, and markdown costs. As suggested in figure 13-2, the very factors that led to the implementation of lean retailing also compel the industry to adopt B2B models that facilitate supply channel integration. Indeed, we can interpret many aspects of certain web-based B2B models as extensions of supply channel management practices brought about by lean retailing.
B2B Exchanges in the Textile-Apparel-Retail Channel
We believe that the first link in the channel to be exploited by B2B firms will be the link between manufacturers and retailers, mirroring the implementation of EDI and other technologies required by lean retailing. Indeed, a number of B2B exchanges that focus on the apparel manufacturer-retailer interface have been launched. Leading retailers founded two major exchanges in 2000: GlobalNetXchange and WorldWide Retail Exchange. Sears Roebuck and the French hypermarket Carrefour launched GlobalNetXchange in late February 2000. The partnership now also includes Kroger Co. (United States), Metro AG (Germany), Coles Myer (Australia), Pinault-Printemps-Redoute (Europe), and J. Sainsbury (United Kingdom). The exchange has chosen Oracle as its software partner.22 WorldWide Retail Exchange, announced in April 2000, includes forty-one powerful retailers, representing over $700 billion in global sales. Members of the exchange include Albertson’s (United States), Auchan (France), Best Buy 22. “Sears Recruiting Another Retailer for GlobalNetXchange,” Dow Jones News Service, April 19, 2000.
(United States), Boots (United Kingdom), Casino (France), CVS (United States), Delhaize (Belgium), Dixon’s (United Kingdom), Gap (United States), H. E. Butt (United States), J. C. Penney (United States), Jusco (Japan), Kingfisher (United Kingdom), Kmart (United States), Longs Drugs (United States), Marks & Spencer (United Kingdom), Meijer (United States), Publix (United States), Radio Shack (United States), Rite Aid (United States), Royal Ahold (The Netherlands), Safeway, Inc. (United States), Safeway plc (United Kingdom), Seibu (Japan), Supervalu (United States), Target (United States), Tesco (United Kingdom), Toys ‘R’ Us (United States), Walgreens (United States), Wegmans (United States), and Winn-Dixie (United States).23 All eyes are on Wal-Mart, of course, which has yet to join either partnership. Wal-Mart has its own Internet-based purchasing system that it uses with its vendors. Several B2Bs focus on selling excess apparel inventory and overruns. These typically require a lower level of capability, since they tend to focus on one-time buys rather than ongoing replenishment. Thus the length and accuracy of lead times tend to be less important than in a standing relationship in which smaller, more frequent orders are placed. Firms focusing on apparel overruns include:
—Virtualrags.com, whose site says it competes on ease of search, quality of image, and accuracy of information. The site offers 1,800 items and asserts that it has 8,000 registered buyers;
—Tradeweave, working with QRS, Dillards, Donna Karan, and Leslie Fay, offers apparel overruns, providing “retail intelligence, authenticated trading, staged exchanges, trading tools, and integration with backend purchase orders and invoicing”;
—Apparelbids.com competes by offering large, one-of-a-kind purchases;
—BuyTextiles.com is an online auction site launched in September 1999 by the American Textile Manufacturers Institute, the national trade association representing the U.S. textile industry. BuyTextiles.com offers “fabric, yarn, home textiles, thread, apparel, textile machinery, and other textile goods.”24 23. “WorldWide Retail Exchange Announces Four New Members; with 41 Members on Board, Combined Sales Represent $596 Billion,” Business Wire, September 28, 2000. 24. Buytextiles.com website (www.ecompartners.com/buytextiles/index.htm [December 19, 2000]).
Two noteworthy B2B firms have been launched in Hong Kong. Texwatch.com offers products for the entire textile-apparel-retail supply chain. It has a marketplace for quota, garments, fabric, yarn, fiber, machinery, and accessories. Hong Kong–based apparel sourcing giant Li & Fung set up an e-commerce subsidiary, lifung.com, to open up a new market segment of overseas small- and medium-size enterprises.25 After announcing the new subsidiary, Li & Fung’s stock price jumped 14 percent. “It’s an empire-strikes-back story,” said Goldman Sachs’s head of investment banking Timothy Dattels.26 In mid-2000, Li & Fung rebranded lifung.com as StudioDirect.com; it began operations in 2001.27
Performance Impact of B2Bs
B2Bs offer the potential to extend and improve many facets of supply channel performance that have become critical to meeting lean retailers’ requirements. Internet-based communication allows companies to collaborate more closely on product design, inventory planning, and other value-added activities. The potential benefits of B2Bs in the textile-apparel-retail channel are many. One is decreasing the cost of communication in the channel. Electronic data interchange (EDI) has been used to improve the speed, accuracy, and cost of transferring data between channel partners. B2B models rely on web-based data exchange, which has significant advantages, such as using a hub-and-spoke type of system rather than pair-wise connections. With a hub-and-spoke design, only one additional link needs to be added when a new firm joins a network, rather than one link between the new firm and each of the established firms already in the network. By keeping the software, protocols, and so on centralized, changes or improvements can be made to processes in one main location rather than having to upgrade each link in the network. This can be a significant advantage, since maintenance and upgrading costs can be considerable for EDI systems. The use of centralized systems also improves standardization, which decreases costs by reducing or eliminating proprietary standards or requirements. 25. “Castling Group and Li & Fung Limited to Form Internet-Based Global Sourcing Play; Castling Group and Li & Fung Trading Announce Online B-to-B Play in Global Sourcing,” Business Wire, April 1, 2000. 26. “HK Shares End Lower; Li & Fung Placement Boosts Volume,” Dow Jones International News, March 29, 2000. 27. Studiodirect.com website (www.studiodirect.com [December 19, 2000]).
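The scaling advantage of the hub-and-spoke design can be made concrete with a small count of integration links. The sketch below is our own illustration, not drawn from the chapter's sources; it simply compares the number of connections required as firms join a network under point-to-point links versus a centralized exchange.

# A minimal sketch (our own illustration) of why a hub-and-spoke exchange scales
# better than pair-wise EDI connections.

def pairwise_links(n_firms: int) -> int:
    # Point-to-point: every pair of trading partners needs its own link.
    return n_firms * (n_firms - 1) // 2

def hub_and_spoke_links(n_firms: int) -> int:
    # Centralized exchange: each firm needs only one link, to the hub.
    return n_firms

for n in (5, 20, 100):
    print(n, pairwise_links(n), hub_and_spoke_links(n))
# 5 firms:    10 pair-wise links vs.   5 hub links
# 20 firms:  190 pair-wise links vs.  20 hub links
# 100 firms: 4,950 pair-wise links vs. 100 hub links

The same logic underlies the maintenance argument: an upgrade touches one hub rather than every pair-wise link.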
The magnitude of these savings can be huge. For example, the COO of Sears estimated that the cost of placing an average purchase order would drop from $100 (its cost using its EDI system) to $10 using the web-based exchange. As Sears handles about 100 million purchase orders annually, savings were expected to total roughly $9 billion per year.28 A further advantage is increased visibility in the supply chain, thereby improving order fulfillment, inventory management, forecasting, and customer service. The use of EDI has allowed rapid transfer of information among channel partners. The information currently provided, including point-of-sale (POS) data, orders, forecasts, and information about inventory levels, has improved visibility in the supply chain considerably. However, using web-based B2B models can improve this visibility substantially. Instead of having to adhere to a set schedule (for example, transmit POS data and inventory levels weekly) or requiring one party to request the transmission of data from another, web-based models allow continuous visibility among channel partners. A retailer can access a manufacturer’s inventory data when and as often as it wishes to rather than having to specify each transfer of data. Morgan Stanley describes an example of the “win-win” nature of having order status on the Internet. Before placing order status data on the Internet, Dell Corporation received an average of three order status calls or questions from each customer who had placed an order. By putting this information on the web, Dell decreased its own costs (eliminating the need to answer e-mails or phone calls about order status) and improved customer service. Specifically, after the information was put on the web, Dell received eight inquiries per customer, suggesting that previously the customer had been underserved in terms of information. In support of this hypothesis was the fact that order cancellations decreased after the web-based implementation. Thus Dell was able to cut costs, improve customer service, and increase sales at the same time.29 One apparel B2B firm, Fasturn.com, describes its online tracking system as a “glass pipeline.” Fasturn plans to be the application service provider (ASP) host for 1,500 apparel factories in twenty-five countries and mid- to high-end fashion labels and retailers. Using Fasturn.com, buyers can specify what they want in terms of color, style, size, or country of origin and 28. “Sears, French Giant in Online Venture,” Chicago Sun-Times, February 29, 2000. 29. Charles Philips and Mary Meeker, “The B2B Internet Report,” Equity Research Report, Morgan Stanley Dean Witter, 2000, p. 55.
then negotiate prices, order sample shipments, and close transactions. Fasturn’s i2 Technologies marketplace software then tracks inventory, shipping, and delivery. In addition, Fasturn will have online “showrooms,” an auction site, and an off-price market for liquidated goods.30 One of the greatest benefits of B2Bs in this industry will be the potential to increase forecasting capability, which in turn will allow the firms in the industry to better match supply and demand. Few industries are as notorious as the apparel industry for the difficulty of predicting demand; the industry therefore incurs significant costs associated with stockouts, markdowns, and carrying inventory. Channel visibility, in concert with good collaboration among channel partners, will allow a much more streamlined supply chain. Other significant benefits would be reduced channel inventories and improved design. Apparel and textile products currently have a high cost of carrying inventory due to rapid product obsolescence. In the area of design, as technology improves in representing products and components online, and as designers become more accustomed to “virtual” collaboration, the ability to design new products quickly should improve. In addition, access to a global supply network should increase apparel firms’ ability to locate the right fabric and components.
Expected Evolution of B2Bs in the Apparel Industry
In some ways, B2B businesses are a natural progression of EDI and automated ordering systems, so some of the impact of B2B business will simply be an extension of current trends, such as shorter and more reliable lead times and smaller, more frequent orders. Internet technology has opened the door for the restructuring of some aspects of the textile-apparel-retail channel. Some functions—especially those related to information—currently performed by agents and other intermediaries may be transferred to the web, although other roles of supply chain intermediaries will be difficult to duplicate. The apparel industry has a number of characteristics that make it a prime candidate for the rapid adoption of B2Bs. First and foremost is the fact that demand unpredictability in this industry is high. Thus the need for improved channel information is considerable, and the potential benefits are significant. In addition, the high fragmentation and global dispersion of plants create a need for greater supply chain transparency. 30. Vanessa Richardson, “Fasturn Fashions $10 Million Funding” (www.redherring.com [March 1, 2000]).
Currently, many smaller, regional players benefit from the general lack of good market information. In short, the “pain” in the industry is high; there is great room for improvement. Other factors, however, will slow the rate of adoption. One inhibiting factor is that many apparel plants are fairly small and relatively unsophisticated. In addition, as noted earlier, products and plant capabilities are difficult to specify. We expect these specification difficulties to decrease as standards for product characteristics are further developed and adopted. However, this is an industry that has often been characterized by a lack of trust, which suggests that buyers may prefer to continue to work with trusted suppliers or intermediaries rather than turn to unknown sources. These factors will make implementation challenging, especially in light of the relatively high complexity of interaction among channel partners. Communicating about product design, product quality, and plant capabilities involves significant subjectivity. These factors will make online communication of needs and capabilities difficult and will also make contracted agreements about certain product characteristics challenging to enforce. Intermediaries provide domain expertise and local knowledge that will be difficult to automate. Moreover, major intermediaries with reputations to maintain will continue to command a level of trust that cannot be imitated easily by new online exchanges. In general, we expect the strong suppliers to get stronger and the weak to get weaker as transparency in the global supply chain increases. Firms with relatively high prices or lower quality that have succeeded primarily due to buyers’ lack of market information will lose ground as market transparency improves. The rate of adoption in the industry should fall in the middle of the range: it will not be as swift as in some industries because of the complexity of interactions in the channel, but the motivation for improvement is high.
The Benefits of B2Bs: Depth versus Breadth of Coverage
Many observers have voiced high expectations for significant benefits from B2B implementations. However, it is our view that the short-term expectations for such benefits have been overrated and the difficulties of implementation underestimated. Specifically, many have envisioned retailers or designers readily sourcing product from any factory in the world using Internet exchanges. Although this may some day become a reality, the barriers identified above
Figure 13-5. Understanding the Value Proposition for Textile-Apparel-Retail B2Bs [A figure contrasting depth of relationships with breadth of relationships: B2B auctions, exchanges, and the like offer breadth ("where the hype is"), while tightly coordinated supply chains offer depth ("where the benefits lie").]
will make this a long-term reality at best. We believe that many industry observers fail to differentiate the advantages (and implementation challenges) of tightly coordinating a supply chain with a small number of suppliers from those advantages and challenges that arise from allowing broad access to a large number of suppliers. Companies may be able to purchase overruns opportunistically at a low price, but they will not be able to realize the more significant benefits of supply chain coordination with suppliers selected indiscriminately. We believe that the advantages of access to a wide range of suppliers cannot be realized until supply chain participants learn to coordinate and manage their existing supply chains much better. If a company in an existing supply chain cannot use the Internet to better share information about demand, forecasts, inventory, and production planning, then it certainly will not be able to do so with an abundance of new supply chain partners selected on a moment’s notice from a virtually unlimited set of firms listed on the web.
First, supply chains will need to establish new processes that work well. This is likely to be done best with one or two trusted supply chain partners who are willing to invest to learn how to derive deep benefits from Internet communication. As companies learn to develop and adopt new practices, standards must be developed to allow them to implement the more advanced practices with a wider range of supply chain partners. This will take time, experimentation, and investment. But this depth of supply chain coordination is where the real value lies (see figure 13-5). Only when these practices have been developed will companies be able to benefit fully from access to a wide range of suppliers.
14
E-Commerce and Competitive Change in the Trucking Industry
E-commerce is changing the competitive landscape of the trucking industry. Trucking firms are using the Internet’s strategic building blocks of distributed access to information, quick communication, and boundary-defying connectivity to exploit current resources and capabilities and to explore new Internet-enabled business opportunities.1 This chapter proceeds as follows. The first section outlines the trucking industry, concluding that it is highly competitive and volatile. The second section uses several examples to depict the impact of e-commerce on the industry, concluding that information availability is creating demands to exploit existing skills in order to increase efficiency as well as producing opportunities for exploration of new transportation services. The third section provides examples of changes in industry structure, concluding that alliance and acquisition activity will take place on horizontal and vertical dimensions as firms seek greater scale and coordination for both operating efficiency and service innovation. The fourth section reports a survey of firm-level Internet activities, concluding that the Internet is just beginning to become an important source of trucking activity, with intriguing impacts on services, organization, and performance.
We greatly appreciate comments from Steven Weber. 1. Sampler (1998); Rayport and Sviokla (1995).
Trucking Industry Background
The trucking industry is the primary mode for freight movement in the United States and is central to the health of the economy. Over 11 billion tons of freight were transported in the United States in 1997. Trucks accounted for 60 percent of this shipment volume and 81 percent of the associated revenue. The trucking industry has gone through a long cycle of regulation and deregulation, causing high industry volatility. The Motor Carrier Act of 1935 required that commercial interstate companies obtain operating authority from the Interstate Commerce Commission. Most carriers set prices through a collective rate-making process made legal by federal antitrust exemption. The federal Motor Carrier Act of 1980 initiated substantial changes in interstate services by allowing easier entry, providing greater pricing flexibility, eliminating restrictions on how many customers a contract carrier could serve, and reducing restrictions on private fleets. After the 1980 act, increased industry capacity quickly resulted from rapid expansion by entrants and incumbents. Competition has led to lower rates and high efficiency; typical operating ratios (operating expenses divided by operating revenues) average in the mid to high 90 percent range. About 48,000 carriers went out of business from 1980 to 1999, following deregulation, including seventy-four of the firms that ranked in the top 100 in 1984. The trucking industry is highly segmented and extremely fragmented. Within the industry, freight movement is distributed among truckload (TL), less than truckload (LTL), and private fleet segments. A reasonable approximation is that there were almost 500,000 interstate motor carriers in the United States in 1998; this number includes about 30,000 for-hire carriers, while the remainder were private fleets. Of the 30,000 for-hire carriers, about 21,000 (69 percent) were TL specialists, about 8,000 (28 percent) handled both LTL and TL shipments, and 1,000 (3 percent) were LTL specialists. Most firms are quite small. Over 70 percent of the interstate carriers operated six or fewer trucks, and nearly two-thirds of the 30,000 for-hire carriers had annual revenues of less than $1 million.2 TL carriers specialize in hauling large shipments (typically over 10,000 pounds) for long distances. An owner-operator or a driver employed by a 2. See, for example, American Trucking Associations (1999); Standard & Poor’s (1999); Newport Communications, “The Structure of the US Trucking Industry,” 1999 (www.heavytruck.com/newport/facts/structure [April 14, 2000]).
TL firm will pick up a load from a shipper and carry the load directly to the consignee without transferring the freight from one trailer to another at a terminal. The TL segment moves about 45 percent of primary shipment volume and 37 percent of revenue (1997 figures). LTL carriers haul shipments that tend to weigh between 150 and 10,000 pounds for moderate distances (about 250 to 650 miles). LTL carriers operate networks of consolidation centers and satellite terminals. A pickup and delivery truck will transport an LTL shipment from the shipper’s dock to the trucking firm’s local terminal, where dock workers will unload and recombine the shipment with other shipments that are going to destination terminals (transporting shipments from one terminal to another terminal is a part of what the industry calls “line haul” operations). Once the shipment arrives at its destination terminal, the load is processed and then hauled to the consignee as part of the “pickup and delivery” operations. LTL shipments account for about 3 percent of shipment volume and 16 percent of revenue.3 Private fleets operated by manufacturers or distributors account for about half of U.S. volume (52 percent) and revenue (47 percent) of general freight shipments. Private fleets focus on medium to short hauls, outsourcing lengthier hauls. The private trucking share has been declining recently due to the availability of low-price alternatives and to the complexity of the logistics process involving increased imports and exports. The growing availability of Internet-based commercial trucking services may lead to further outsourcing if the services facilitate communication between shippers and commercial trucking firms. In summary, the trucking industry is critically important to the economy, is highly competitive, faces high demands for efficiency, has frequent entry and exit, and consists of many small carriers with a few larger firms. This is the industry setting in which the opportunities and challenges of e-commerce arise.
3. In the past few years, largely as a result of the emergence of the Internet economy, package express (PX) companies such as UPS have become a highly visible part of the LTL segment, and many of the larger package express firms have merged with “traditional” LTL companies. There are several key differences between PX and the rest of LTL. One is the equipment needed to move goods; PX has more items to deliver but the goods are lighter and PX pickup and delivery vehicle (PUD) drivers do not need to operate forklifts. Another difference is the volume of material to pick up and deliver; PX usually has ten to fifteen pickups or deliveries an hour, while LTL has about two to five an hour. PX and the rest of LTL share key similarities, though, in aggregating shipments from multiple sources and then disaggregating them to multiple consignees.
The Impact of E-Commerce on the Trucking Industry
The Internet has both direct and indirect influences on the trucking industry. The direct impact centers on information brokers, which traditionally have managed the flow of shipment information in the industry. The indirect impact of the Internet arises because e-commerce is changing the competitive landscape in which the customers of trucking firms must operate, owing to the availability of greater information about goods and services, prices, and timing. The changes are causing pricing pressure in traditional transportation activities. Increased information is also leading to more fine-grained market segmentation as well as to demands for new goods and services from trucking companies. With the dual demands for greater efficiency and innovative services, the Internet is putting substantial pressure on the capabilities of trucking companies. Firms are able to respond to some demands through incremental expansion of their existing expertise.4 Many changes, though, require major changes in routines and resources.5 In some cases, the new skills will destroy firms’ existing competencies, as in the case of freight brokers (discussed later in this section), causing internal resistance to change.6 In other cases, the new skills require routines that the firms cannot create from their existing repertoires, as in the case of integrated logistics services (discussed in the next section).7 In either case, whether due to internal resistance or lack of existing routines, many changes are far beyond the ability of the firms to develop internally within the available time and cost constraints. As a result, acquisition and alliance activity is becoming increasingly common in the industry, with the interfirm activities arising from the need to gain access to new resources and coordinate the use of heterogeneous resources.8 Thus through combinations of internal development, alliances, and acquisitions, firms are attempting to exploit their existing skills while also exploring new business opportunities.9
4. Richardson (1972); Langlois and Robertson (1995). 5. Karim and Mitchell (2000). 6. Tushman and Anderson (1986). 7. Nelson and Winter (1982). 8. Capron, Mitchell, and Oxley (2000). 9. March (1991).
Changes Driven by the Availability of Shipment Information for Freight Brokerage
The key direct impact of electronic information involves load matching and volume discounts. The strongest challenges are arising for freight brokers, which traditionally have provided these services to trucking companies and their customers. Load-matching services provide information that matches available shipments with trucks that have available cargo space in order to increase trailer utilization and decrease waiting times. Load-matching information is valuable to small trucking firms and owner-operators as well as to large firms that are interested in increasing productivity by reducing empty back-hauls. Load matching traditionally has been the business of freight brokers (freight forwarders). New types of electronic brokers such as Transplace.com and Freightquote.com are threatening the future of traditional information brokers both from within the industry and through entry to the industry. Transplace.com is an example of how industry incumbents are gaining efficiencies by combining asset rationalization and information management. The company plans to create a high-volume freight network that will increase equipment utilization for fleets and reduce waiting time for drivers. Transplace.com serves as an information aggregator in the fragmented truckload sector by helping both shippers and carriers to match loads and rationalize capacity. Transplace.com was formed when six of the largest publicly held TL carriers agreed to combine their logistics operations into the new Internet-based transportation logistics marketplace (Covenant Transport, J.B. Hunt Transport Services, M.S. Carriers, Swift Transportation, US Xpress, and Werner Enterprises). In addition to providing logistics services, Transplace.com will negotiate discounts for fuel, equipment, maintenance and parts, insurance, credit, and other services for its equity partners and other carriers. The founding firms plan to leverage their experience, physical assets, industry-specific information technology expertise, brand equity, and customer relations in the electronic marketplace. Such moves in the trucking industry parallel attempts to create market-specific mega-marketplaces such as those that are being established by auto industry firms, consumer products companies, and retail stores. While Transplace.com leverages its traditional transportation asset base in the e-business environment, a new genre of information brokers is
emerging on the Internet. In early 2000 there were at least twelve online load-matching exchanges either in operation or in prototype stages, most planning to charge small monthly usage fees. These exchanges arise owing to the geographic dispersion of the industry and the small size of most carriers. The new exchanges challenge traditional freight brokers in managing the coordination of information and freight. At the annual convention of the Transportation Intermediaries Association in March 2000, the dominant topic of discussion was the threat posed by the Internet and new load-matching software. Freight brokers note that the Internet is challenging them in two ways. First, in brokerage substitution, the Internet enables many shippers to post loads and solicit competitive bids directly from carriers that use the Internet to identify back-hauls. This process combines load matching with competitive pricing. In the process, the shipper receives a low bid and the carrier increases productivity by reducing empty miles. In this scenario, the traditional freight broker has no role, because shippers function as their own brokers and deal directly with freight companies. Second, the Internet allows new intermediaries to aggregate loads and obtain volume discounts. Freightquote.com is an example of an Internet-based trucking industry info-mediary. Freightquote.com targets smaller shippers that do not have enough volume to negotiate discounts on their shipments. Freightquote.com leverages the volume generated from smaller shipments to gain discounted rates. On its Internet site, shippers can identify prices and order deliveries. Membership is free for shippers (membership information provides Freightquote.com with shipper and carrier data). Shippers pay a fee each time they use the service to ship goods. Freightquote.com handles arrangements for pickup, paperwork, and billing online. This scenario provides a critical role for freight brokers in the changing industry, unlike brokerage substitution, but requires brokers with new information technology skills and management abilities. To take advantage of the information-based brokerage opportunities, brokers and trucking firms require new skills. Key capabilities include information technology skills, such as the ability to use many types of hardware and to develop and deploy many types of software, and organizational skills, especially the ability to integrate information technology personnel and systems with other elements of the business. In parallel, trucking companies must develop skills that allow them to identify and negotiate shipments that are now available because of the increased information. The new information-based brokerages allow some incumbents to
exploit new opportunities with their existing capabilities and open the door for entrants that are exploring new business models.
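To make the load-matching service concrete, the following sketch (our own illustration; the cities, weights, and identifiers are hypothetical) pairs posted loads with trucks that have spare capacity on the same lane, which is the basic function that both traditional brokers and the new online exchanges perform.

# A minimal load-matching sketch (our illustration; all data fields are hypothetical).
# Each load and truck is described by origin, destination, and weight or capacity in pounds.

from dataclasses import dataclass

@dataclass
class Load:
    load_id: str
    origin: str
    destination: str
    weight_lbs: int

@dataclass
class Truck:
    truck_id: str
    origin: str
    destination: str   # planned back-haul lane
    capacity_lbs: int

def match_loads(loads, trucks):
    """Assign each load to the first truck on the same lane with enough spare capacity."""
    matches = []
    for load in loads:
        for truck in trucks:
            if (truck.origin == load.origin
                    and truck.destination == load.destination
                    and truck.capacity_lbs >= load.weight_lbs):
                matches.append((load.load_id, truck.truck_id))
                truck.capacity_lbs -= load.weight_lbs   # reduce remaining trailer space
                break
    return matches

loads = [Load("L1", "Chicago", "Atlanta", 12000), Load("L2", "Chicago", "Atlanta", 8000)]
trucks = [Truck("T1", "Chicago", "Atlanta", 15000), Truck("T2", "Chicago", "Atlanta", 20000)]
print(match_loads(loads, trucks))   # [('L1', 'T1'), ('L2', 'T2')]

A real exchange would layer competitive bidding, paperwork, and billing on top of this matching step, but the underlying economics is the reduction of empty miles described above.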
Responses to Changes in the Competitive Environments of Shippers and Consignees
Beyond changing the competitive environment of freight brokerage, the Internet is leading shippers and consignees to place new demands on trucking companies as they face changes in their own competitive environments. These new customer demands are arising in all customer industries as the increased availability of information reduces distance constraints on market reach and transforms processes for creating goods and services. More specifically, the emergence of electronic commerce in many consumer and commercial markets has created new shipment requirements for customers of the trucking industry. The impact of changes in customers’ competitive environments on trucking companies is twofold. First, shippers and consignees are demanding refinements of existing trucking services. For instance, manufacturing firms are adopting practices such as just-in-time delivery and production and electronic data interchange (EDI), which in turn require the timely movement of raw materials to the production location and to appropriate distribution areas as finished goods. Similarly, wholesale and retail distributors are increasingly demanding more frequent delivery of goods and services, often supported by extensive EDI. Trucking firms have had to alter existing practices to ship goods more quickly, more cheaply, and with increased service quality. Second, as an integral part of the supply chain, trucking firms are exploring new information-enabled opportunities to expand existing services and create new transportation services. Both types of changes require that trucking firms undertake a combination of exploitation and exploration as they improve their use of existing skills and acquire new capabilities. Arnold Industries and UPS are illustrative cases in which trucking companies are redefining the boundaries of the services they offer in the Internet-enabled economy. Arnold Industries has long been a profitable LTL company. Over the past decade, the company has expanded into the regional TL segment by acquiring TL firms. The company is now combining its trucking and warehouse operations to offer one-stop order fulfillment services for e-tailers and mail-order catalog companies. These services include order processing, inventory management, and small-package shipping.
In this process, the company has transformed its business to improve its ability to fill orders quickly and precisely. The firm has turned its warehouses into logistics hubs for the order fulfillment process: receiving goods from manufacturers or suppliers and processing, packaging, and delivering to customers. Arnold Logistics also provides value-added services by comparing freight rates and handling customer returns. Further, Arnold Logistics takes online orders on behalf of its shippers and also provides live chat and e-mail support for customers. The traditional LTL and TL segments of Arnold Industries have benefited from the new business activities, because shipments to the logistics warehouses use TL and LTL services. In addition, and at least as important, the company has substantially expanded into new transportation services that emphasize information management rather than physical handling of goods. Thus Arnold Industries has transformed its definition of the transportation business to extend far beyond movement of freight. It has leveraged its knowledge and expertise to become an information transfer point for transportation services.10 Several examples involving UPS further illustrate how trucking companies are integrating themselves into the web of Internet activities. UPS dominates shipping from Internet retailers. For instance, UPS delivered 55 percent of the goods ordered online in the 1998 Christmas season. UPS’s relationship with Nike demonstrates the basis of UPS’s success. In order to expedite the order-to-delivery process for Nike.com, UPS stocks Nike shoes and warm-ups in its Louisville warehouse and fulfills customer orders hourly. Indeed, UPS plays a direct role in the order process as well as in delivery, because a UPS call center in San Antonio handles Nike.com customer orders. Consequently, Nike saves on overhead costs and, most important, achieves quick sales turnaround. A second UPS example is the company’s relationship with the fashion website Boo.com, for which UPS handles batches of supplier shipments, inspects the merchandise, and packs it in Boo.com-branded boxes for shipment.11 An initiative with the Ford Motor Company provides a third UPS example. In early 2000, UPS and Ford announced an alliance in which 10. D. P. Bearth, “Arnold Industries Hits Growth Vein in Logistics,” Transport Topics, January 24, 2000, pp. 10–12. 11. K. Barron, “Logistics in Brown: As Power in the Economy Shifts from the Movement of Atoms to the Movement of Bits, the Ultimate Winner Is a Company That Moves Both, United Parcel Service,” Forbes, January 10, 2000, p. 78. Boo.com declared bankruptcy in spring 2000 and was partially resurrected in fall 2000 as a division of fashionmall.com.
UPS will oversee the delivery networks for Ford vehicles. A key element of the system that UPS will establish for Ford is a vehicle tracking system that will allow Ford to track the location of each vehicle from production through delivery. Eventually, customers may be able to use the system to track the vehicle that they have ordered. The alliance with UPS is an attempt by Ford to move from a mass distribution system to a virtual delivery plan for each vehicle. UPS will use its technological expertise and logistics capabilities to help create this transition. The two firms’ goal for the new delivery system is to reduce delivery times by about 40 percent while increasing reliability and reducing costs. In each of these examples, similar to the Arnold Industries case, UPS is extending its activities far beyond traditional movement of goods. Rather than simply being a package express shipper, UPS is undertaking many business processes: receiving customer orders, warehousing goods, and coordinating after-sales services. UPS, like other established package express carriers, is well positioned to take on additional outsourcing of activities that private fleets have traditionally handled themselves or to substitute for traditional LTL services. Such business transformations mark a redefinition of trucking industry boundaries. Until recently, most analysts omitted companies such as logistics providers and package express carriers from the trucking industry. As the boundaries between transportation services become increasingly blurred, firms that provide these services clearly are now central to the industry. At the same time, though, the nature and even the name of the industry have changed substantially. Rather than simply being “trucking” companies, many of the firms in the industry have become “asset-based transportation management” service providers. This new term means that the companies own and operate physical assets—such as trucks and other vehicles, warehouses, and information systems—but in addition they provide a broad range of information-based transportation management services that emphasize coordinating many steps in the production-to-customer value chain. These additional services range from warehousing goods to order taking to logistics management to after-sales services. Responses to the new demands of the changing competitive environment are arising both from entrants to the industry and from incumbents. New entrants are emphasizing relatively fine-grained services such as Internet-based information brokerage. By contrast, industry incumbents are taking the lead in the development of the more complicated set of asset-
based transportation management services. These early observations of industry dynamics are consistent with the finding that incumbents often lead the industry in developing and adopting new technologies as long as the technologies address customer needs within the value network in which they compete.12 As industry incumbents expand their services, enter new markets, and create new services, their diversification choices reflect the relative applicability of their resources in the new technological environment.13
Change in Industry Structure: Virtual Trucking and Consolidation
Major changes in structure are taking place in the trucking industry as a result of changes in the Internet information environment. For instance, some industry analysts predict that the share of primary freight shipments carried by for-hire operations will increase, substituting for private fleets, and that package express shipments will increase drastically, possibly substituting for LTL carriers.14 In the e-commerce environment, customers now often view transportation as a continuous value proposition with no regard to segments or length of haul. This view has led to at least three types of changes that are affecting industry structure. First, as we noted in our discussion of freight brokerage, some firms are exploiting the connectivity and access to information that the Internet allows by offering “virtual trucking” services, in which the transportation companies serve as system integrators.15 Second, faced with stringent demands for shipment time and quality, shippers would like to deal with one company for most or all of their inbound and outbound shipping needs. As we noted above, many incumbents in the trucking industry are restructuring to offer integrated transportation solutions by including logistics and other transportation options in their corporate portfolio of asset-based transportation management services. These firms are now offering suites of “one call, one carrier” services, including TL, LTL, logistics, package express, and intermodal services. Third, some incumbents are leveraging existing freight movement skills to explore new 12. See, for example, Penrose (1959); Singh and Mitchell (1993); and Christensen and Rosenbloom (1995). 13. Silverman (1999). 14. Standard & Poor’s (1999). 15. Sampler (1998).
opportunities in related industries. These transitions are leading to substantial changes in industry structure through concurrent waves of consolidation, expansion, and entry.
Virtual Trucking: A New Organizational Form
The Internet has given rise to a new breed of firm, “the virtual trucking company.” Such companies own no assets themselves; instead, they act as system integrators for asset-based companies. For example, FreightPro.com is an Internet venture started by a group of trucking executives backed by $3 million in venture capital. FreightPro.com uses the Internet to compete directly with asset-based transportation management service providers. The new firm contracts with independent carriers, warehouses, and drivers to provide LTL freight transportation while using web technology for services that range from rating shipments and scheduling pickups to tracking shipments and billing customers. The founders are veterans of the trucking industry and have extensive knowledge of the inefficiencies in the LTL environment; the new venture is attempting to exploit the inefficiencies of the traditional LTL business model. These inefficiencies arise because most LTL freight consolidation takes place during the night, leaving the terminal facilities nearly empty the rest of the time, while most LTL warehouse operators are busy only during the day. FreightPro.com proposes to execute an efficient virtual trucking business by using existing public warehousing space to consolidate shipments, using the Internet and load-planning software to put together pickup and delivery and line-haul routes, and subcontracting the shipments and routes on a per shipment basis. The Internet allows virtual truckers to communicate with shippers and subcontractors. In turn, the business proposition permits the firm to use embedded knowledge and current industry characteristics to advantage. “Assetless” logistics providers have been a part of the trucking environment for the past two decades—since the widespread availability of computing hardware and software. However, limitations on immediate and widespread communication prevented these firms from expanding beyond offering very limited services. The Internet provides the foundation for innovative integration of complex logistics algorithms with aggregation of fragmented information, thereby providing a more seamless transportation alternative.16 16. D. P. Bearth, “Virtual Truckers Vie for Freight Business,” Transport Topics, March 6, 2000, pp. 1, 5.
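The consolidation step at the heart of this model can also be sketched simply. The example below is our own illustration (the terminals, weights, and trailer capacity are hypothetical values chosen for the example); it groups shipments bound for the same destination terminal into trailer loads, the kind of task that the load-planning software described above automates.

# A minimal consolidation sketch (our illustration; terminals, weights, and the
# 40,000-pound trailer limit are hypothetical values chosen for the example).

from collections import defaultdict

TRAILER_CAPACITY_LBS = 40000

def build_line_haul_loads(shipments):
    """Group shipments by destination terminal and pack them into trailer loads."""
    by_destination = defaultdict(list)
    for dest, weight in shipments:
        by_destination[dest].append(weight)

    trailers = []   # list of (destination, [shipment weights])
    for dest, weights in by_destination.items():
        current, current_weight = [], 0
        for w in weights:
            if current_weight + w > TRAILER_CAPACITY_LBS:
                trailers.append((dest, current))
                current, current_weight = [], 0
            current.append(w)
            current_weight += w
        if current:
            trailers.append((dest, current))
    return trailers

shipments = [("Memphis", 18000), ("Memphis", 15000), ("Memphis", 12000), ("Dallas", 9000)]
print(build_line_haul_loads(shipments))
# [('Memphis', [18000, 15000]), ('Memphis', [12000]), ('Dallas', [9000])]

Subcontracting then amounts to assigning each assembled trailer load, and the corresponding pickup-and-delivery runs, to an independent carrier on a per shipment basis.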
Virtual trucking is one example of alliance activity in the trucking industry. Like acquisition activity, such alliance activity stems from needs for access to capabilities and coordination of activities.17 At the same time, the needs for coordination often will involve such complicated interactions among the firms that alliances will provide only partial solutions.18 Instead, it is likely that there will be increased reliance on acquisitions of complementary firms in order to undertake the substantial changes that Internet-based business will require.
Industry Consolidation and One-Stop Transportation Solutions
Several trucking firms have developed portfolios of asset-based transportation management services through mergers and acquisitions. Examples include CNF Transportation, Caliber Systems, USFreightways, and CRST International. The operating units of CNF Transportation include a package express firm (Emery Worldwide), an LTL firm (Con-Way Transportation Services), and a logistics provider (Menlo Logistics). Similarly, the operating portfolio of Caliber Systems includes a package express firm (RPS), an LTL firm (Viking Freight), and a logistics provider (Caliber Logistics). Caliber, in turn, was acquired in late 1997 by Federal Express. USFreightways acquired Glen Moore Transport in 1998 in order to provide truckload services nationwide. USFreightways has expanded its primary business of providing regional LTL by acquiring a domestic and international freight forwarder, a reverse logistics firm, and a regional truckload carrier. Notably, such acquisitions require substantial postacquisition change and integration of the businesses. For instance, UPS is now undertaking greater vertical integration of its activities to provide better coordination of the types of services that we described earlier. Similarly, CRST International has recently restructured itself into a single transportation services company by combining its six units into one operating unit. In the past, each unit served customers separately in its niche market. Through the restructuring, CRST International combines CRST for TL, Malone Freight lines and the Three 1 truck line for flatbed services, CRST Logistics for logistics services, and an express LTL service. According to the president of CRST, John Smith, “It didn’t take a genius to figure out it was better 17. Nagarajan and Mitchell (1998). 18. Capron, Mitchell, and Oxley (2000).
approaching this as one team of professionals totally focused on the customer and making transportation as easy as possible for our customers.”19 Competitively, these organizations have to contend with the challenges posed in each of the segments in which the firms participate. Many of the firms began restructuring and consolidating when logistics software became widely available and increasing cross-border shipments led shippers to require multiple services from their trucking vendors.20 As more firms present themselves as providing transportation management services, their formidable task is to achieve the close coordination that is required to capture the benefits of being a single entity. Many or most of the acquisitions in the industry involve vertical combinations, which merge transportation providers with firms that provide complementary services, such as logistics firms and freight forwarders, rather than horizontal combinations of direct competitors. The rationale for the complementary vertical combinations involves, first, gaining access to capabilities that the firms require to refine existing services and offer new services and, second, providing greater coordination of changing activities than the firms could achieve through arm’s-length negotiation between independent companies.21 Such vertical consolidation might be only a temporary stage in the industry but will provide critical coordination at this stage of learning about integrated opportunities. 19. Quoted in John Schulz, “A Transportation Solution,” Traffic World, November 17, 1997, p. 54. 20. Nagarajan, Bander, and White (2000). 21. Capron and Mitchell (1998).
Exploring New Frontiers
The Internet is leading to major changes in retailing and, in turn, presents an opportunity for the trucking industry. However, in order to exploit the growing potential of this environment, trucking firms have had to change their business practices in significant ways. These changes parallel our earlier discussion of asset-based transportation management. Bekins Van Lines, a subsidiary of third-party logistics provider GeoLogistics, created a new firm called HomeDirect USA to focus exclusively on services for online retailing of home furnishings. The company combines the trucks of Bekins Van Lines with those of 500 other LTL firms to deliver furniture from manufacturing sites to local markets. To
further customize its support for online retailers, HomeDirect offers a “white glove” service option in which the driver and a helper carry the furniture inside the home, place it in the desired location, and dispose of packaging debris. The extra service costs the customer about $215, which is 72 percent above normal shipping rates and adds significantly to the firm’s profitability. According to Jim Greiger, vice president of strategic marketing and development for HomeDirect USA, “It is no longer just logistics, trucking, and warehouse. Customer service is now a core focus.”22 In this model, the trucking firm, as the only live contact between the retailer and the customer, now takes the role of the retailer’s company service representative. Other trucking firms have chosen to expand beyond their traditional segments to exploit the geographic reach of the Internet. For example, Consolidated Freightways hopes to boost its customer base by moving beyond its traditional LTL services. The company has created a website, CFMovesyou.com, to provide household moving services. While moving household goods is significantly different from moving bulk freight and other LTL shipments, the Internet has provided an opportunity for Consolidated Freightways to leverage its nationwide assets and use its expertise in logistics and freight handling to diversify into a new business. The cases described above are instances in which firms in the trucking industry are value innovators.23 These firms are addressing the opportunities provided by the transformation of the traditional trucking industry structure. Their actions respond to demands that arise exogenously from their shippers and consignees as a result of technological change. In addition, the changes that the trucking firms undertake are leading to further changes in their competitive environments by creating demand for new services.
Firm-Level Activities: Information from a Survey of the Trucking Industry
E-commerce is leading trucking firms to transform their internal information-based operations. First, firms are attempting to align their structure, 22. Sean Kilcarr, “Delivering the Internet on Trucks,” Transport Topics, January 3, 2000, p. 10. 23. Kim and Mauborgne (1999).
systems, and people with the new competitive environment. Second, firms have changed and improved internal processes by adopting information technology that emphasizes mobility and connectivity. The data in this section are based on reviews of trucking company websites plus a mail survey concerning the use and impact of information technology in the trucking industry.24 We summarize results from 177 respondents to the survey, with information that applies to late 1999 and early 2000. The respondents represent a cross section of the trucking industry, including operators in the TL (40 percent), LTL (71 percent), logistics (20 percent), package express (9 percent), and private fleet (30 percent) segments. About 55 percent of the firms participate in two or more segments of the industry. About 65 percent of the firms operate 100 or fewer power units.
Organizational Change: Structure, Systems, and People
Trucking firms increasingly are adopting the Internet to accomplish their e-commerce initiatives and have transformed their business organizations to do so. Trucking firms are using the web for automating many of their exchanges with shippers and consignees, improving communications, acquiring new customers, and customizing services. Table 14-1, which we developed by reviewing the Internet sites of ten leading LTL carriers, provides a list of the ways in which firms use the web and an indication of the extent of use of these features by some prominent trucking firms. It is evident that the Internet offers the opportunity for trucking firms to improve processes, reduce paperwork, and reduce administrative overhead costs. At the same time, though, it is not yet clear whether the savings will be enough to offset the investment in new web-based technology and organizational change. We summarize five areas of change at trucking firms: the competitive environment, Internet use, sources of Internet technology, organizational changes, and the early impact of Internet activities. First, the firms report that the competitive environment has changed in terms of time demands from customers since 1996. More than a third of the firms (37 percent) reported that they had more time-sensitive deliveries in 1999 than in 1996, while
time-sensitivity rarely declined (9 percent). Indeed, about 55 percent of the respondents classified over half of their total dispatches as time-sensitive in 1999. Many customers now require fast, frequent deliveries. Second, the survey suggests that the Internet had become a part of the trucking firms’ business by early 2000. Among the respondents, 75 percent report at least minimal Internet activity. At the same time, however, the impact is at its very early stages of both investment and customer activity. The firms on average devoted only about 12 percent of their investment in new technology to Internet-related projects. Internet sales activity is even lower; on average, the firms with Internet activity procured only about 5 percent of their shipments via the Internet in 1999. Thus although Internet applications are diffusing widely among trucking firms, they still account for only small parts of the firms’ business activities. Third, the trucking firms most often developed the Internet applications internally (78 percent), sometimes in conjunction with consultants. Our conversations with trucking firm managers suggest that the nature of the trucking industry requires company-specific knowledge to develop effective applications, at least at this early stage of Internet diffusion. Such knowledge includes detailed information about customers, routes, services, and physical assets that would be difficult or impossible for independent software companies to develop as “off-the-shelf” applications. Fourth, the firms report substantial organizational changes since 1996, particularly for firms with the greatest Internet activity. Listed here are the most significant.
—More cross-functional work: cross-functional work is more common at many firms (67 percent). Such cross-functional work helps integrate the activities of asset-based transportation management.
—Increased span of control for dispatchers: dispatchers and other managers at many firms (65 percent) have greater span of control than they did three years ago. This increased span of control both increases productivity and provides more direct coordination of the complex delivery schedules and activities of asset-based transportation management.
—Addition of line-haul terminals: many of the firms (34 percent) increased the number of terminals they operate, although few firms (16 percent) reported a significant increase in the length of haul. The increased number of terminals provides denser line-haul networks and allows the firms to provide quicker responses for time-sensitive deliveries.
—Increase in hiring of drivers and IT professionals: employment changes reflected the changes in time-sensitivity and the growth of Internet
Table 14-1. Comparison of Websites of Leading Less than Truckload Carriers, June 1999
[The table records which web features each of the following carriers offered: ABF, RDWY, YSFY, CFWY, AFWY, USF, Watkins, OVINT, Estes, and Con-Way. Features compared include a PC toolbar for the web (ABF ToolKit); transparent links; creating bills of lading; pickup requests; generic and customized status reports; general and customer-specific rating; downloading base rates; tracking single and multiple shipments; e-mail notification of shipment delivery, status changes, delays, and delivery exceptions; document retrieval (bill of lading, proof of delivery, weight certificate, packing slip); freight bill review with e-mail reply; direct points listings; routing search by zip code; special service charge summaries; rules tariffs (view, search, download); terminal (service center) locators; contact information; transit time calculation; cargo claims filing and status checks; carrier forms and documents; sailing schedules; time-limited special volume pricing; live online customer service (chat); choice of Windows or DOS rating programs; general marketing information; specialty markets and services descriptions; general shipping information and tips; company history; frequently asked questions; year 2000 compliance information; a company store; a president’s message; a small business resource center; and online surveys, registration, or guest books.]
, , ,
applications. Increased employment was most common for line-haul drivers (66 percent of the firms) and pickup and delivery drivers (64 percent of the firms) in LTL operations, consistent with the finding that firms increased the number of terminals they operate. The need for more fine-grained, time-sensitive deliveries has led to a greater need for people in these activities. The number of programmers and other information technology personnel most often stayed about the same from 1996 to 1999 (53 percent of the firms), although a substantial minority of firms (39 percent) reported increased IT staff. Moreover, firms that invested in Internet technology were almost twice as likely (36 percent versus 20 percent) to hire more programmers and systems personnel than firms that did not invest in Internet technology. Clearly, firms often need to invest in new information technology personnel in order to develop Internet applications.
—Organizational rationalization: organizational change was common for firms offering Internet services. Many respondents added new departments (48 percent) or eliminated old departments (63 percent). Firms using Internet applications were the most likely to add or eliminate departments. These trends suggest that organizational rationalization helps firms avoid a mismatch between the new services that they offer and their old organizational structures.
—Increased alliance and acquisition activity: alliance and acquisition activity was common. Many firms created new alliances (51 percent), while a sizable minority acquired other businesses (28 percent). These interfirm changes are part of the process of offering integrated transportation management services, whether as “virtual” systems through alliances or integrated systems within single companies. In addition, many firms have ended old alliances (25 percent) or divested parts of their businesses (22 percent). Firms with Internet applications were slightly more likely to create new alliances or acquire new businesses. As in the case of departmental rationalization, interfirm rationalization activities are part of the process of changing the mix of services at the firms.
Fifth, although it is still far too early to assess the full impact of the Internet, several initial observations about how the Internet may influence business changes are notable.
—Overall, many firms (50 percent) reported that the Internet helped them manage change.
—Firms using the Internet found that it had a substantial impact on exploitation opportunities building directly on existing activities. Existing
activities that commonly benefited include improving internal process quality (56 percent) and process time (43 percent); improving external relationships with shippers (52 percent), consignees (45 percent), and third parties (49 percent); improving service speed (42 percent), timeliness (32 percent), and dependability (34 percent); reducing costs (33 percent); improving dedicated services (32 percent); and analyzing customers (33 percent). By contrast, the Internet had little impact on shipment length or size (5 percent to 12 percent). —At the same time, the Internet also contributed to exploration opportunities, including adding new customers (58 percent), new services (38 percent), and new markets (34 percent). —The comparative results show that exploitation and exploration are complementary activities rather than substitutes for each other. Firms that reported higher mean exploration usage of the Internet also tended to report higher mean exploitation activities. That is, firms rarely sought exploration opportunities without also attempting to exploit existing skills in more depth. This joint emphasis reflects the competitiveness of the industry, which leaves little room for future-focused strategies that do not also pay close heed to improving current operations. The results serve as an indication of firm-specific actions in response to the emergence of the Internet environment in the trucking industry. They consistently show that competitive conditions are changing and that many firms are seeking both exploitation and exploration opportunities in the turbulent world of e-commerce.
Adoption of Web-Based EDI and Mobile Communication
Many shippers are insisting that carriers provide visibility for the freight from the buy button on a retailer’s website to the customer’s door. Trucking companies have responded to this challenge in several ways, including mobile communications, Internet-based EDI, and information-based service quantification. Mobile communications products connect trucks with the office, thereby providing substantial transparency throughout the system. Schneider, the nation’s largest truckload company, was a pioneer in the use of Qualcomm’s mobile satellite systems that combine vehicle location with fleet management. The process improvements enabled by the information provided by the system and the extent to which shippers value carriers with
“connected” fleets have prompted widespread adoption of three types of mobile communication systems: (1) cellular communications; (2) specialized mobile radio; and (3) satellite systems. Results from our survey on information technology use in trucking show that these three types of mobile communications systems are now very common among trucking firms. We find that almost two-thirds of the respondents (61 percent) had adopted at least one, and about a quarter (23 percent) more than one kind of mobile communication system. Products that use the Internet as the communications vehicle for EDI transactions have dramatically decreased costs. Traditionally, EDI services such as load tendering, status reporting, and invoicing cost thousands to tens of thousands of dollars to set up and run, while also requiring substantial ongoing effort to maintain interfirm system compatibility. These costs and difficulties inhibited adoption of EDI systems by small carriers; in turn, the limited adoption hurt the capability of small carriers to work with large shippers that mandated EDI transactions. Now some shippers are using systems that allow EDI transactions over an extranet, which is a secured Internet location that reduces set-up costs. Still in their nascent phase, web-based EDI systems require manual entry and have not yet been widely adopted. However, the potential low cost and standardized accessibility of EDI over the Internet level the playing field for carriers that had previously been excluded from many freight opportunities. The higher demand for logistics, vehicle and freight tracking, and other information increases the amount of information the trucking firm must process. The data can be used to quantify the quality of service the carrier offers. Details concerning on-time delivery, service records during surge periods, and low damage records provide competitive advantage. As we noted, the survey suggests that companies that invest in Internet development value the easy acquisition and exchange of information with their shippers, consignees, and third parties. Moreover, Internet applications may diffuse more widely than traditional dedicated EDI systems have done. Fewer than half of the firms in the survey (44 percent) report using traditional EDI systems, while, as we noted earlier, about 75 percent of the firms offer Internet services. Further, about two-thirds (65 percent) of the firms that do not use traditional EDI have introduced Internet services. Early indications, therefore, suggest that Internet applications may prove more widely accepted than traditional EDI systems, which are often dedicated to particular customers.
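To make the contrast concrete, here is a minimal sketch, in Python, of the kind of lightweight, web-based status update described above. The endpoint URL, field names, and access token are hypothetical placeholders rather than any shipper’s actual interface; under traditional EDI the same information would travel as a formatted transaction set, such as an X12 214 shipment-status message, over a dedicated value-added network.

import json
import urllib.request

# Hypothetical extranet endpoint and token; a real shipper would publish its own.
EXTRANET_URL = "https://shipper.example.com/edi/shipment-status"
ACCESS_TOKEN = "carrier-demo-token"

def post_shipment_status(pro_number, status, city, state):
    """Send one shipment-status update as JSON over HTTPS.

    This stands in for what would otherwise be a formatted EDI transaction
    sent through a value-added network.
    """
    payload = json.dumps({
        "pro_number": pro_number,   # carrier's freight bill number
        "status": status,           # e.g., "picked up", "out for delivery"
        "location": {"city": city, "state": state},
    }).encode("utf-8")
    request = urllib.request.Request(
        EXTRANET_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + ACCESS_TOKEN,
        },
    )
    with urllib.request.urlopen(request) as response:
        return response.status  # 200 means the shipper accepted the update

# Example call (would fail here, because the endpoint is fictitious):
# post_shipment_status("123456789", "out for delivery", "Columbus", "OH")

The appeal for a small carrier is that nothing beyond an Internet connection and ordinary web tooling is required, which is precisely why web-based EDI lowers the setup costs described above.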
Conclusion
Rich Hardt, director of systems development at Yellow Corporation, said as he announced that Yellow had signed on to be the transportation provider for eChemicals, “One Internet year is seven business years and twenty transportation years.”25 Trucking firms are having to adapt to the “Free, Perfect, Now” e-commerce environment or go out of business. Many firms in the industry have adapted by making major changes in business practices involving both exploitation of existing skills and exploration of opportunities that require new capabilities. Trucking firms are participating in the new economy by expanding existing resources, adopting new technologies to enable Internet-based communication with their customers, and improving processes to promote service and efficiency. The freer flow of information, the connectivity, and the opportunity to aggregate dispersed information have spawned new web-enabled businesses, and these new entrants are challenging many traditional assumptions and business practices. At the same time, trucking industry incumbents are using alliances and acquisitions to redefine themselves as asset-based transportation management companies. In this process, the industry incumbents have had to modify their existing organizational structures, systems, and people to execute the e-commerce strategies. The industry is facing its greatest challenge since deregulation two decades ago. The actions of the trucking firms will affect far more than the performance of the trucking services industry alone. The response of trucking firms to Internet-based opportunities and challenges will have major influences on the economy as a whole. The future of e-commerce depends on how physical goods are transported within the constraints of time, cost, and quality. As a result, the response of established and new trucking firms will play a significant role in determining the extent to which the full potential of e-commerce will be fulfilled in the economy.
25. D. Bearth, “E-Commerce Chemistry Just Right for Yellow Corp.,” Transport Topics, June 7, 1999, p. 57.
III
What Comes Next? The Evolving Infrastructure
What Will the Next Generation of Tools, Networks, and Marketplaces Look Like?
Central to following the evolution of e-commerce is its emerging infrastructure—the sets of data networks and tools with which the new business practices and models operate and the possibilities they embed. As chapter 1 argues, the elements of the e-commerce system diffused much more rapidly than any comparable technology system in earlier eras.1 That was, in substantial measure, because the Internet and the addressing system of the WWW (which makes the range of new technologies widely accessible and applicable) could initially be built out over the existing telecommunications network designed for voice. The result was a tsunami of business experiments, and as is the case with experiments, most failed. The e-commerce revolution came upon us fast, both following and creating public awareness of the Internet. In those heady early days, the belief was that the new types of business, labeled “dot-coms,” were going to reframe business, displace industry leaders, and disrupt existing patterns of commerce. Those hopes, and the financing they induced, created new centimillionaires overnight.
1. See chapter 1 by Stephen Cohen, Bradford DeLong, Steven Weber, and John Zysman.
The business question was: Could onscreen malls and FedEx substitute for Main Street? The fashionable thought was that the net would
permit newcomers to shunt aside the established intermediaries that required something old-fashioned like stores to find their customers. The wealth of many faded as fast as it arrived as those cyberstores discovered that they had to deliver real goods, and that the economics of delivery were not so simple. As dot-coms closed, the paper wealth they created went to paper profit heaven.2 The dot-com story was, as suggested throughout this volume, just one business strategy for using new tools in innovative ways. The e-commerce era is just beginning. Since the technology continues to evolve, the question becomes: what experiments and business models will the next generation of tools bring? The evolving network infrastructures and application toolsets generate continual experimentation with new business models and organizational strategies. The business models popular at any one moment are the experiments of a new era; the conventional wisdom and the favored models will evolve and shift; some will fade and fail. It is the experimentation, the interplay between new business models and evolving infrastructure and tools, that is at the core of the story. Part 3 looks ahead. What will be the directions of network evolution? And can we envision their implications for e-commerce? Even the glimpse ahead offered by the chapters in this section indicates that we “ain’t seen nothin’ yet.” The precise character of next-generation networks is not yet evident; neither are the types of services and businesses they will support. Certainly, the networks will support both broadband and mobile functionality. Last-mile broadband access to the home will spread as a mix of cable, DSL, low-speed wireless, and even—in a not-too-distant future—high-speed wireless and fiber. The rapid emergence of optical networks will drive bandwidth prices toward zero while the radical transformation of network architecture permits new applications and innovative delivery of existing applications. But can we say much more than that? It is very difficult. Consider broadband. The present conventional wisdom is that an avalanche of network capacity coming online will drive bandwidth prices toward marginal cost, which is roughly zero. In that case, network providers will sell services that bundle bandwidth. The build-out of broadband services would then seem to be very fast, with the bandwidth available at very low prices.
2. Jerry Useem, “2000 Dot Coms: What Have We Learned?” Fortune, October 30, 2000, p. 82.
But the long-run story may be different; the industry may be highly cyclical. If prices stay low, then two consequences are likely. Demand and next-generation services will surge because prices are low, but with prices low it will be difficult to add capacity. The result will likely be a bandwidth shortage. Adding capacity will be slow, and a real surge in prices would then result. High prices will bring added capacity, and a new cycle will be launched. Hence the market for bandwidth is likely to alternate shortages and price surges with excess capacity and radical price wars. Effective futures markets in bandwidth will almost certainly result, but these will not be conventional commodities markets. Aggregate capacity is not the issue; capacity between particular points at particular times will be. More important to our story, in what form and on what terms will broadband capacity be available? In the last decade or so, there was a shift from voice-based networks to networks created for and optimized for data. Now the issue will be the characteristics of data networks themselves, built from the ground up and optimized for IP traffic, a story Kleeman explores in chapter 16. In building data networks from the ground up, network engineers can take into account the specific requirements of sophisticated e-commerce that are not well served by the existing infrastructure. Specifically, sophisticated e-commerce increasingly requires (1) a medium for secure transactions; (2) the capacity to deliver large electronic objects; (3) a high degree of quality to deliver streaming media and other data- and time-sensitive products and services; and (4) the ability to support even complex transactions with “nomadic” users—that is, users connected through wireless devices. The networks and the service providers will evolve in tandem. There is, similarly, considerable uncertainty in the dynamics of mobile functionality. We believe that, in the end, the network will evolve into integrated systems of landline access, mobile access, and simple messaging. There may be more than one “appliance” for connectivity, but there is likely to be only one “account.” Whether that overarching vision is correct or not, there are a number of rivalries along the way that will shape who dominates and controls those integrated networks. Let us consider some of those rivalries. One will be between the landline-based TCP/IP system dominant in the United States and the mobile systems emerging in Europe and Japan. The Internet emerged and diffused most rapidly in the United States around open standards and open access that encouraged innovation throughout the network and across applications. The American telecom deregulation, combined with public research
support, generated the original Internet (DARPA). Public research institutions generated the WWW (CERN in Geneva, Switzerland) and the foundations for the browser (the National Center for Supercomputing Applications at the University of Illinois). Together the WWW and the browser made possible commercial applications and wide popular use. Indeed, the web is now what most users see as the Internet. Importantly, the data network revolution has been led by sophisticated users who either built their own private or virtual private networks or forced traditional carriers to adapt to their demands. Lead developers of new network technologies such as Cisco and Nortel supported sophisticated users in their efforts to have networks reflect their preferences, not those of network providers. Provider-created data networks such as the French Minitel, by contrast, have not proven to be the route to success. But suddenly, the American system built on landline data innovation is being challenged by European and Japanese innovation that is both mobile and, in the case of DoCoMo’s i-mode in Japan, provider driven. Leadership in mobile telephony development, deployment, and usage has come out of Europe and secondarily Japan, not the United States. Innovative mobile data networks also appear to be coming from Europe and Japan. On the surface it would appear that European and Japanese firms will entrench positions of leadership in mobile data networks, mobile commerce or “m-commerce,” and even generate a challenge to American leadership in digital technologies more broadly. But not so fast. The present wireless data networks are narrow band with quite limited services. Will leadership in narrow-band services provide a foundation for leadership in broadband mobile services? Put differently, will mastery of mobile telephony or leadership in the terrestrial Internet be more conducive to establishing leadership in a world where the mobile Internet is a substantial feature? As the mobile companies migrate to broadband wireless data, they will have to develop more extensive service offerings to capture a client base that is, in the United States, attached to land-based ISPs and portals. Conversely, landline providers already comfortable with broadband may be able to create successful mobile offerings. A second rivalry to consider is that between the two dominant mobile Internet standards, i-mode and WAP. Jeffrey Funk, in chapter 15, tells us the story of DoCoMo’s introduction of the mobile Internet in Japan via i-mode and pays special attention to mobile applications. At this point, however, let us look specifically at the differences in network architecture
between i-mode and WAP and what these differences imply for the future of mobile Internet services.3 I-mode is a proprietary radio packet-switching technology owned by NTT DoCoMo that provides always-on service to compatible mobile handsets. Users are charged on a per-packet basis for data transferred; the current transfer rate of 9,600 baud is significantly less than that of conventional wireline dial-up modems, which operate about six times as fast. The DoCoMo innovation is fundamentally about new business models resting on the slow-speed packet-switch technology. The Wireless Application Protocol (WAP), by contrast, is an open standard developed by the WAP Forum, an industry group initiated by Motorola and Phone.com of the United States, Nokia of Finland, and Ericsson of Sweden. Working in consultation with the World Wide Web Consortium (W3C), the WAP Forum has outlined a set of standards not only to format data for display on mobile screens, but also to structure more fundamental handset-server communications. Their standards are designed to be independent of the underlying wireless transmission technologies—which makes WAP compatible with all 3G schemes. WAP is currently used in conjunction with GSM and other TDMA air-interfaces that, at present, do not support always-on service. Whereas i-mode provides packet-switched always-on connections similar to a fixed-line Ethernet connection to the Internet (albeit at much lower speed), WAP-based systems work through dial-up circuit-switched connections that closely resemble dial-up modem access to an Internet Service Provider in the fixed-line world. Although the data transmission rates of the underlying GSM and TDMA networks are generally higher than i-mode’s 9,600 baud, the requirement for users to dial into WAP servers in order to send and receive data adds an initial connection time of between twenty seconds and a minute per use, and, needless to say, pricing is based on airtime. With i-mode, service is instantaneous and pricing is based on the amount of data transmitted. There are third-generation packet-based protocols that do not need WAP and are always on. But the real story is not technical but commercial. The two systems also differ substantially in the way content is coded, provided, and accessed, and in the role that the underlying architectures award to operators. Content for i-mode is made available by approximately 1,200 official providers as well as approximately 30,000 unofficial ones.
3. Thanks to David Lancashire for this analysis.
These providers’ webpages use a special subset of HTML, c-HTML (compact HTML), to format the data for display on the small mobile phone LCDs. While c-HTML is not identical to HTML, the boundaries between the two are intuitive to those familiar with HTML. Due to the similarity to HTML, i-mode pages can be accessed with conventional HTML browsers. This same ease of development does not apply to WAP, which involves entirely new standards.4 WAP does not employ a version of HTML but rather relies on Wireless Markup Language (WML). Consequently, WAP-based mobile devices need to employ a special WAP browser. Both i-mode content and content for WAP systems can be hosted on conventional Internet servers with the important qualification that content for WAP access has to be hosted either in WML (alongside an HTML version for conventional fixed-line access) or it has to be converted from HTML to WML on its way to the user. What is the role of network operators? In the Japanese case, DoCoMo has used the network’s architecture to position itself in a gatekeeper role. When users request any URL via i-mode, DoCoMo servers translate the request, send and receive the material across traditional fixed-line networks, and pass the requested page back to the user across their wireless connection. All i-mode content must pass through the dedicated servers of the network operator. This configuration has in the past created reliability problems in Japan, where rapid growth has overstrained DoCoMo’s i-mode servers, resulting in occasional service blackouts. Furthermore, DoCoMo controls the start menu i-mode users view on their phones. Being listed on the official menu and being accessible through it is precisely what distinguishes the “official” from the “unofficial” i-mode content providers alluded to above. In the case of WAP, users can in principle access any WAP server directly via dial-up. Anybody can provide WAP-accessible content simply by sticking a WML page on an Internet server. If users wish to access sites that are available only in HTML, however, operators can provide conversion into WML, a nice choke point that can be used for revenue generation. Speculation puts the total number of currently accessible WAP sites worldwide below 30,000. Simply put, the architectural differences between i-mode and WAP can be summed up as closed-closed versus open-open, the former part of the 4. Another little-made point is that because cHTML is a subset of standard HTML, it is possible to access i-mode websites through traditional browsers (for example, www.town.chiran. kagoshima.jp/imode/).
respective pairs referring to the character of the standard and the second to the character of the network. I-mode is proprietary and content is selected and made available by the network operator. WAP is an open standard and anybody can provide content. Given that this volume has emphasized that the Internet’s stunning success is in large part because of its reliance on open standards and user-driven innovation, how can we make sense of the fact that i-mode, a proprietary network provider–driven system, has not only built critical mass in Japan but also seems to have become the tech darling of the moment while WAP has stagnated? Interest in i-mode has surged, and DoCoMo is partnering with foreign firms like AT&T and Telefonica to install i-mode networks abroad. There are several reasons to think that the competition between the systems is far from over and that the competition over leadership in an increasingly wireless world is just beginning. For one, as Funk points out in his analysis, DoCoMo has not taken steps to prevent “unofficial” content providers from offering their services, and he expects the network to become increasingly open for content that does not pass through a DoCoMo filter. In addition, the central role for operators awarded by i-mode comes in handy when it comes to e-commerce applications. DoCoMo, as Funk reports, is providing payment and settlement services for its “official” content providers (in exchange for a fee), a crucial function in a market that has a low degree of credit card diffusion. While mobile application providers like the fact that WAP enables them to offer services independent of the network operator, this freedom comes with the potential burden of arranging for payment and settlement mechanisms. Despite the underlying crucial architectural differences between i-mode and WAP, the most visible difference for consumers is certainly the always-on character of i-mode as compared to dial-up in the case of WAP. Dial-up and WAP need not go together, however, and the link between the two is about to be broken. General Packet Radio Service (GPRS) is now being rolled out over existing GSM networks. This will not only boost theoretical maximum data transmission speeds up to 171.2 kbps, but it will also provide for always-on data services.5 Furthermore, comparisons of the relative success of i-mode and WAP based on the number of users or the amount of data transferred miss a crucial phenomenon: the incredible popularity of Short Messaging Service (SMS), or G-mails, among GSM and
5. Simon Buckingham, “An Introduction to the General Packet Radio Service,” GSM World, January 2000 (www.gsmworld.com/technology/yes2gprs.html [April 16, 2000]).
now also TDMA users. As Funk reports, much of i-mode traffic (27 percent) consists of short messages sent between users. GSM users have enjoyed this very basic data service for years, long before WAP, and recent estimates of the number of SMS messages sent each month were projected to be 10 billion by December 2000. The real numbers for December 2000 seem to have exceeded that projection.6 It will be interesting to see how the introduction of always-on, packet-switched mobile services will affect the quantity and quality of data services offered and demanded by non–i-mode mobile Internet users. At the same time, however, DoCoMo’s success with i-mode has sped up the process of launching 3G services, and DoCoMo is currently planning on launching an i-mode compatible, W-CDMA–based 3G network in fall 2001, just as this volume is published.7 As indicated above, WAP will work with whatever 3G standard an operator chooses, as long as it is supported by the handset manufacturer. More sophisticated (universal) subscriber identification module (SIM/USIM) cards will also probably give handsets the capability of working with more than one system. It is also noteworthy in this context that WAP 2.0, which is currently under development, is reputed to be including c-HTML as an acceptable method of formatting data. If WAP 2.0 is indeed deployed in that way, we can imagine that it would strike a real blow to i-mode’s current “critical mass” of content. However, i-mode presently enjoys another critical-mass benefit resulting from the imposition of standards on Japanese handsets. Funk, using recent statistics available from the Ministry for Posts and Telecommunications, shows that there are over 30 million i-mode users in Japan. Consumer adoption has been pushed by NTT DoCoMo, which has ensured that all new PDC-compatible phones have built-in i-mode functionality. Simply put, i-mode access is not optional. While the competition between the two systems will undoubtedly continue for some time, the comparison highlights that it is not simply a question of which standard or what technology to use for a defined set of services. The future of networking and wireless’s place in it has to be seen in an even broader context. The question is fundamentally one of the degree of convergence of wireless services and broadband Internet. Technological convergence is one aspect, but ultimately the question becomes how much 6. Joanne Taaffe, “Mobile: Short Messaging Services—Mobile Ops Slow to Answer the Call of the PC,” Communications Week International, March 19, 2001. 7. Reed Stevenson, “DoCoMo Stays On Track for October 3G Launch,” Total Telecom, July 26, 2001.
service convergence users will desire. Will users continue to maintain one (virtual) presence in the broadband Internet world and one in the area of wireless? Put differently, will future users generally have one number (whether IP address or phone number) or will they have several? The stronger the trend toward one number, the more the future of mobile Internet services will be enmeshed in the broader context of the development of next-generation data networks, applications, and services. Many firms known for their “terrestrial orientation,” particularly in the United States, have begun to accommodate the “nomadic user”—offering services regardless of whether users connect via a fixed line or through the air and regardless of whether they connect with a PC, a PDA, or a mobile phone. AOL and Yahoo!, for example, have recently teamed up with AT&T Wireless and Sprint PCS to enable AOL and Yahoo! customers access to their e-mail accounts via wireless operators’ networks. For its part, AT&T’s own Excite portal offers services to users of both AT&T Cable and AT&T Wireless connections. Content and services are centrally provided and automatically reformatted in response to the customers’ access method. These developments are mirrored in the world of wireline Internet access; AOL’s most recent connection kit detects automatically whether a user has broadband or modem access and accordingly enables or disables streaming video options. On a more fundamental, network architectural level, engineers are designing next-generation networks such that they can accommodate the special needs of nomadic users. As Kleeman’s contribution illustrates, among the most important of these needs is network-internal caching to make sure a user connected via a wireless connection does not lose any data if the connection is suddenly disrupted in the middle of a transaction. In sum, as suggested at the beginning, one unasked question in all of this is how both WAP and i-mode stack up against more traditional communications mechanisms, including current developments in the world of broadband Internet services. One reason i-mode is said to be so successful in Japan is that terrestrial Internet access is exceptionally expensive. For many Japanese, i-mode offers the first affordable means of Internet access. A similar argument can be made for much of Europe, where local calling and hence most dial-up Internet access continues to be metered. Traditionally, separate types of services are already converging, and discontinuous technological change could even accelerate this trend. The second question is, will distinct mobile standards continue? The type of memory and data transmission limitations that have sparked the entire movement to develop these new “mobile” standards are arguably being lifted. Finally,
as all this illustrates, the mobile story is fundamentally one about services provided and the business strategies to do so. Or, put cynically, it is about who will control which choke points. The networks and application tools (which are not considered here) are evolving as we speak. Just as important, the process by which these technologies are created evolves as well, with even less clear consequences for e-business. One issue is that the innovative dynamic of the Internet has so far been fostered by the competition and innovation permitted by end-to-end open access. Will that continue? What are the implications for the network if it does not? Will the broadband system be built on open access and interconnection in the way the present Internet has been? Or will the networks, such as the AT&T@Home cable network, be principally separate, closed systems? The several authors of chapter 18 consider the critical issue of open access in their critique of the FCC’s unwillingness to apply to the world of broadband the logic of its successful policies during the early Internet boom. While that particular contribution focuses on terrestrial broadband access, much the same logic applies to the world of wireless. DoCoMo’s current mobile Internet strategy is remarkably similar to AT&T’s strategy of vertical integration in the area of cable. In this light, the recent alliance of the two to introduce i-mode over AT&T’s wireless network takes on additional significance. In December 2000, DoCoMo paid $10 billion to acquire 16 percent of AT&T Wireless in conjunction with an agreement between the two to roll out i-mode in the United States in late 2001.8 The FCC may therefore soon have to reckon with the question of open access across several second-generation Internet access technologies: cable, DSL, and wireless. Its policies and decisions are likely to significantly shape the trajectory of competition in Internet access technologies and services. Another rivalry of significant interest exists in the realm of software development, in the tension between the proprietary and largely company-managed development of software products from WebTools to Windows and the open source world of shared individual efforts and access to the underlying intellectual property. Although the Internet itself has grown up on open standards and was created fundamentally by the open source community, the success of Linux in recent years has raised questions about how the course of network, tool, and application development will proceed. 8. Michiyo Nakamoto, “DoCoMo Sets Deadlines for New Service,” Financial Times, March 23, 2001, p. 34.
Critically, the shape of the networks, and the pace of the build-out, will turn on public policy. Will there be legislation in the United States that ensures open access? If so, the biggest implications may be for DoCoMo, not simply for AOL–Time Warner. Will intellectual property rules be biased to favor proprietary or open sources, or will the two approaches, which represent different development models and are consequently reflected in different business models, be allowed to compete openly? Many questions about the future of networks and tools remain. Critically, however, the future of networking and the next generation of tools will not be determined by technological advances alone. Current usage of networks and tools generates particular demands for new networks and new tools. User-driven innovation will favor some tools and not others. Network and service convergence depends not solely on technological convergence and common standards but, importantly, on the preferences of users. Current networks and tools as well as their future development occur in a particular policy and business context. Despite much uncertainty, the chapters contained in this last section of the volume provide some reference points for the future. We are convinced: we ain’t seen nothin’ yet. Jeffrey Funk’s chapter, “The Mobile Internet Market: Lessons from Japan’s I-Mode System,” is summarized as a bridge to this section. In expectation of the future arrival of mobile Internet in much of the world, Funk’s study seeks to preview the developments likely to go along with the mobile Internet revolution by analyzing the case of the Japanese i-mode system. NTT DoCoMo’s i-mode is the most popular of several Japanese wireless Internet services, with about 20 million users in February 2001. Funk seeks to preview (1) which applications are likely to succeed in the mobile Internet world; (2) how service providers should select and present content; and (3) which business models will be successful in mobile Internet and how they differ from fixed-line Internet models. Funk borrows from Wurster and Evan’s Blown to Bits and uses their idea of a trade-off of richness (quality of information as defined by user) versus reach (number of people who participate in information sharing) to situate i-mode–style mobile Internet vis-à-vis other communication technologies. Funk argues that the mobile Internet will increase reach due to the high diffusion rates of wireless technology, particularly in Europe and parts of Asia, but will lower richness due to small screens and keyboards that make information-intensive applications difficult. Since preferences over richness versus reach are said to differ by age (young people do not care as much about richness as about reach, argues Funk), mobile Internet service
providers should aim content and applications at young people. In addition, the new richness versus reach trade-off in the case of mobile Internet predestines these services to provide low-information, location-specific data, such as finding restaurants, making reservations, and buying tickets. Turning to the question of content and the tightly linked issue of business models, Funk emphasizes that Japanese service providers have been very careful about how to select content and which content to link directly to the service menu provided by the service provider. Whereas one of the current Internet’s cornerstones has been to largely separate content provision from access provision, DoCoMo’s i-mode business model awards it a central role in both domains. This “walled garden” strategy has worked well for i-mode despite running counter to conventional Internet wisdom. Funk attributes this to the different trade-off between richness and reach found in the mobile Internet as compared to the fixed Internet environment. In the long run, however, DoCoMo will have to open up its service to third-party content providers in response to pressure from consumers and competition from other service providers. In Japan, early “killer applications” for mobile Internet have to accommodate bandwidth limitations. Consequently, they are e-mail and entertainment services that are not data-intensive, such as games, horoscopes, and cartoons. Payment for value-added services is currently handled by DoCoMo and other operators on behalf of service providers through the regular monthly mobile phone bill. In the long run, Funk suggests, providers of value-added mobile Internet services will have to develop micropayment systems to acquire some independence from operators. In addition, mirroring the development in the fixed Internet, content producers are financing services by selling advertising space on their sites. Funk’s chapter provides an interesting overview of the Japanese mobile Internet market, the first such market to have developed, and highlights important differences between the fixed-line and mobile Internet and their significance for business. Importantly, Funk notes in his conclusion that mobile Internet, rather than third-generation (3G) wireless technology, is the key wireless innovation and that mobile Internet may be accomplished even without 3G. As a result, he suggests, 3G networks will return the considerable investments in equipment and licenses only if they build off previous mobile Internet strategies. And with respect to the latter, Japan is at present ahead of the curve.
15
Jeffrey L. Funk
The Mobile Internet Market: Lessons from Japan’s i-Mode System
The rapid and relatively parallel growth of the Internet and mobile phones has caused many people to believe there will be a convergence between the two. It is expected that mobile phones will become a major tool for accessing the Internet; some people predict that they will soon be used more than personal computers (PCs) to access the Internet. Many questions remain as to how different the mobile Internet is from the fixed-line Internet. First, what kind of applications and contents will succeed in the mobile Internet, and how are they different from the successful applications in the fixed-line Internet? In the fixed-line Internet, shopping for material products, making travel reservations, and searching for general information are major applications in the business-to-consumer market. In the business-to-business market, the Internet supports online purchasing and other business-to-business activities. In the consumer-to-consumer market, online chat groups and the auctioning of products are popular applications. Will these also be the most successful applications in the mobile Internet? Second, how should service providers manage their service menus? In the fixed-line Internet, service providers such as America Online (AOL) have adopted a very liberal policy toward contents, attempting to maximize the amount of contents that can be accessed from their service menus. But
in the mobile Internet, should service providers control or even restrict the number of sites that can be accessed? Should they restrict the types of links that can be made between different websites? Third, what kind of business models will be successful in the mobile Internet, and how are they different from the successful business models in the fixed-line Internet? For successful online shopping sites on the fixed-line Internet, greater economies of scale, lower inventories, and more efficient sales processing are key components of the business models. However, it is generally perceived that advertising revenues play an equally large role in these business models, particularly the models used by portals, search engines, and other websites that do not sell products. Further, many successful Internet shopping sites are trying to become Internet portals and thus reap the advertising revenues that come with portals. Will shopping and advertising play as large a role in the mobile Internet as in the fixed-line Internet? This chapter uses the Japanese mobile Internet market to address these issues. Japan is the first country in the world to experience rapid growth in the mobile Internet market and thus provides a number of data points concerning the appropriate contents and applications, service menus, and business models for the mobile Internet. This chapter finds that different applications and users are driving the mobile Internet in Japan than are driving the fixed-line Internet in Japan and elsewhere. Different approaches to service menus are being used in Japan’s mobile Internet market since a slightly different set of issues is important in the mobile Internet than in the fixed-line Internet. Further, different business models are also being used in the Japanese mobile Internet than in the fixed-line Internet both in Japan and elsewhere. This chapter first summarizes the growth in the Japanese mobile Internet market. This is followed by discussions of fixed-line versus mobile Internet applications and managing the service menu. The chapter then summarizes mobile Internet traffic in the Japanese market and discusses business models for service and content providers, future trends, and implications and recommendations for U.S. and European service providers.
The Status of the Japanese Mobile Internet Market
As of the end of February 2001, there were more than 30 million mobile Internet subscribers in Japan as compared to less than 10 million WAP (wireless application protocol) subscribers in the rest of the world. Further, while the average Japanese subscriber to the leading service was spending 2,300 yen a month, the average Western subscriber was barely using the phone.

Table 15-1. Comparison of Internet Microbrowser Services, August 1, 2000
Item                   DoCoMo            KDDI              Tsuka Cellular     J-Phone
Name                   i-mode            EZ Web            EZ Web             J-Sky Web
Start date             February 1999     April 1999        November 1999      December 1999
Monthly charge (yen)   300               200–400           200–300            None
Access charge (yen)    0.3 per packet    0.27 per packet   3–10 per minute    2 per download
Markup language        c-HTML            WML               WML                MML
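Because table 15-1 mixes three pricing bases (per packet, per minute of airtime, and per download), a rough sketch helps show how the cost of fetching a single page would be computed under each. The figures below are illustrative assumptions, not carrier tariffs: the 128-byte packet size, the 30-second session, and the 2,000-byte page are chosen only to make the arithmetic concrete.

# Illustrative comparison of the three access-charge bases in table 15-1.
# Assumptions (not from the chapter): 128-byte packets and a 30-second
# session to fetch one page on the per-minute tariff.

PACKET_BYTES = 128          # assumed packet size
SESSION_MINUTES = 0.5       # assumed airtime to fetch one page

def per_packet_cost(page_bytes, yen_per_packet):
    """Cost under a per-packet tariff (i-mode, EZ Web style)."""
    packets = -(-page_bytes // PACKET_BYTES)   # ceiling division
    return packets * yen_per_packet

def per_minute_cost(yen_per_minute, minutes=SESSION_MINUTES):
    """Cost under an airtime tariff (Tsuka Cellular style)."""
    return yen_per_minute * minutes

def per_download_cost(yen_per_download):
    """Cost under a per-download tariff (J-Sky style)."""
    return yen_per_download

if __name__ == "__main__":
    page = 2_000  # bytes in a small text-only page (illustrative)
    print("per packet, 0.3 yen/packet:  %.1f yen" % per_packet_cost(page, 0.3))
    print("airtime, 5 yen/minute:       %.1f yen" % per_minute_cost(5))
    print("per download, 2 yen:         %.1f yen" % per_download_cost(2))

The point is only that per-packet pricing scales with data volume, airtime pricing scales with connection time, and per-download pricing is flat per object; the actual charges depend on each carrier’s tariff.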
A Comparison of Services
DoCoMo’s i-mode service had the largest number of subscribers at the end of February 2001 (19.9 million), followed by EZ Web and Sky Message (J-Phone).1 EZ Web had about 5.9 million and Sky Message had 5.4 million subscribers. As shown in table 15-1, NTT DoCoMo started i-mode in February 1999, followed by EZ Web (KDDI) in April 1999, Tsuka Cellular’s EZ Web in November 1999, and J-Phone’s J-Sky in December 1999. NTT DoCoMo’s i-mode service has succeeded far more than the other mobile Internet services because it has moved faster than the others in introducing the appropriate handsets, packet services, clearinghouses, and contents. In DoCoMo’s clearinghouse service, it collects information or content charges through the monthly bills sent to subscribers, and it takes a small percentage of these monies as a handling charge. NTT DoCoMo has also created an advantage in contents. Initially, this was through its choice of c-HTML (a compact form of hypertext markup language in which only text is used), its large share of the Japanese mobile phone market (57 percent), and its faster introduction of handsets, packet services, and clearinghouse services. NTT DoCoMo’s early advantage in contents and subscribers has created a positive feedback loop between the number of subscribers and the number of contents: content providers want to create contents for i-mode before they create them for the other service providers since i-mode has more subscribers than the other services, and users want to subscribe to i-mode since it has more contents than the other services.
1. DDI, IDO, and KDD became KDDI as of October 1, 2000.

The Short-Term Effect of Mobile Internet Services on Revenues: The Case of I-Mode
Table 15-2 summarizes i-mode’s positive effect on monthly revenues. The per-person i-mode revenues per month include a 300-yen monthly charge, packet charges of 2,000 yen a month, and about 27 yen in handling charges for the paid information services (110 yen to the U.S. dollar). As discussed in the section on business models, DoCoMo collects the fees for the paid content services and extracts a 9 percent handling charge. Thus DoCoMo makes more than 2,300 yen a month in additional revenues from i-mode on top of its approximately 8,000 yen a month in voice revenues (monthly and airtime charges), for about a 30 percent increase in revenues.

Table 15-2. i-Mode–Related Revenues per Subscriber Month
Yen
Item               DoCoMo    Content providers    Total
Monthly charge     300       ...                  300
Packet charges     2,000     ...                  2,000
Content charges    27        273                  300
Total              2,327     273                  2,600
Source: Natsuno (2000).
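Stated as a back-of-the-envelope calculation, the figures in table 15-2 imply

\[
\frac{\text{incremental i-mode revenue}}{\text{voice revenue}}
= \frac{300 + 2{,}000 + 27}{8{,}000}
= \frac{2{,}327}{8{,}000}
\approx 0.29,
\]

that is, roughly a 30 percent increase in monthly revenue per subscriber (a little over 21 dollars at 110 yen to the U.S. dollar).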
Fixed-Line versus Mobile Internet Applications and Contents
The basic trade-off between fixed-line and mobile Internet access can best be understood by analyzing the trade-off between “richness” and “reach.” This is shown in figure 15-1. Richness refers to the quality of information, as defined by the user. This includes a number of factors, of which the most important is probably the depth and bandwidth of the information. Reach refers to the number of people who participate in the sharing of that information.2
2. Wurster and Evans (2000, ch. 3).

Figure 15-1. The Traditional Trade-Off between Richness and Reach
[A downward-sloping trade-off curve between richness (vertical axis) and reach (horizontal axis). Source: Wurster and Evans (2000). a. Richness is the quality of information; reach is the number of people sharing that information.]
The Traditional Trade-off between Reach and Richness
There is a strong trade-off between richness and reach in both the “traditional” and “new” economies. For example, general-interest newspapers reach a large number of people and thus enable a large number of people to share in that experience. However, the information contained in newspapers is clearly not as “rich” as the information contained in special-purpose magazines or journals that do not have as large a “reach” as the general-purpose newspapers. Many of these special-purpose magazines have not only been difficult to obtain in the past; it has often been difficult
even to identify them due to their limited circulation. Traditionally, only specialists are familiar with these special-purpose magazines and journals, and identifying and contacting the relevant specialists has also been difficult. Thus traditionally, there has been a strong trade-off between richness and reach. Many people argue that the Internet enables firms to provide both more reach and more richness, thus causing the trade-off between reach and richness to change and perhaps disappear. As the number of Internet users increases, the reach of information provided over the Internet also increases. The reach of the Internet will probably surpass the reach of general-purpose newspapers within a few years, because the number of Internet users will soon exceed the number of newspaper readers in the United States and elsewhere. Further, the links between home pages make it possible for users to easily obtain rich information on the Internet from accessing just one site; for example, many general-purpose newspapers that operate on the Internet provide links to related articles, corporate home pages, and other sites, thus providing easy access to both rich and high reach information.3
The New Trade-off between Reach and Richness The new trade-off between reach and richness involves the trade-off between fixed-line and mobile Internet applications. As shown in figure 15-2, mobile devices provide lower richness but have higher reach than desktop computers. Mobile phones have smaller screens and keyboards and thus cannot access the level of rich information that can be accessed with a desktop computer. The larger reach of mobile phones comes from their greater diffusion, greater mobility, and faster power-up as compared to desktop computers. There are more mobile phones being used than desktop computers in many countries in the world, including Japan, and it is expected that the number of mobile phone subscribers will exceed the number of installed desktop computers within a few years. Naturally, mobile phones are easier to carry than desktop computers and personal digital assistants (PDAs). Further, mobile phones can be used within seconds of turning them on versus several minutes for desktop computers. Figure 15-2 suggests that desktop computers, mobile phones, and PDAs will coexist, with each device occupying a different place in the trade-off 3. Wurster and Evans (2000, ch. 3).
Figure 15-2. The New Trade-Off between Richness and Reach
[Figure: on the richness-reach plane, fixed devices (larger screen and keyboard) offer more richness and mobile devices (easier access) offer more reach; PDAs and phones lie along a new trade-off curve (the Internet) that sits outside the traditional trade-off.]
Many people will use these devices to complement each other; rich information will be handled on desktop computers and less rich information will be handled on PDAs and mobile phones. Content providers and other firms will provide services that make it easier to use these devices as complements.

However, there will also be competition between these devices. Mobile phones will always have a larger reach than desktop computers and even PDAs because of their lighter weight and lower costs. The challenge for phone manufacturers is to increase the capability of phones to access rich information. Manufacturers of mobile phones and even PDAs are attempting to do this by reducing costs, increasing display size, and improving input methods—either through larger keyboards or new technologies like voice recognition. Further, successful content providers are making rich information more accessible to mobile phones through search capabilities and the storage of user characteristics and other information.

There are two other critical implications of figure 15-2. First, contents that are location-dependent will be major applications for the mobile Internet because of the large reach provided by mobile phones. These location-dependent services include navigation services, travel services, and information on local stores, restaurants,
and bars. Although these types of services do not yet represent a large market in Japan, there are many efforts under way to create them. The challenge is to create the standards necessary to integrate the location and information services.

The trade-off between reach and richness is a critical aspect of location-dependent services. Location-specific information can just as easily be acquired from the fixed-line Internet as from the mobile Internet. Airline tickets, hotels, and rental cars can be reserved using a desktop computer, and information on local bars, restaurants, stores, and trains can be obtained from a desktop computer. The difference is that people's plans often change while they are in the specific location. Therefore, a high-reach device such as a mobile phone is needed to acquire new information or make new reservations. Thus the unique aspect of location-specific information is that it requires high reach.

A second implication of the trade-off between reach and richness is that it is a function of age. This is why more than 35 percent of i-mode subscribers are under twenty-five and almost 70 percent are under thirty-five. Further, the people who run up the biggest bills are under twenty-five. The reason that this trade-off is a function of age is that, in most countries, young people place greater importance on reach and less on richness than older people do. Younger people are more mobile than older people; those under twenty-five generally spend a much larger share of their time away from home and the office (if they have one) and use public transportation (buses and trains) more than older people do. Young people also place less emphasis on richness than older people do because they have less experience and thus less specialization.
Managing the Service Menu

The term "managing the service menu" refers to a broad set of issues concerning the way in which information is presented on the mobile Internet's service menu and who is allowed on this service menu. For example, AOL's home page contains an e-mail entry area, links to various shopping and information categories, a search function, links to information about AOL, and—of course—advertisements. The choice of these particular items is a critical decision for AOL.
This section addresses only a few of the issues that are important in understanding the key differences between the mobile and fixed-line Internet. Clearly, the smaller screen presents a challenge, and it is not just a matter of copying contents that were created for the fixed-line Internet into a WAP or c-HTML environment. Service providers must reduce the amount of information that is shown on a single page. They can do this by placing the information on multiple pages or by excluding some of the information that ordinarily appears on a fixed-line Internet home page. The number of advertisements that can or should be placed on the service menus is also an important issue.

A key issue is a service provider's policy regarding who can present information on its service menu. NTT DoCoMo has adopted a somewhat restrictive service menu in two ways. First, DoCoMo uses a very detailed screening process to determine whether firms are allowed onto its official menu. DoCoMo allows only firms that it believes have a new and interesting service and will maintain high quality. This is, of course, quite different from most fixed-line Internet service providers, which have attempted to distance themselves from content providers for legal reasons. Second, DoCoMo has restricted the links between official and nonofficial content sites.

DoCoMo has done this because of two critical and related differences between the mobile and fixed-line Internet. First, the small screens (lack of rich information) and the short time periods in which the Internet is accessed make a set menu very effective. People do not want to spend much time looking for sophisticated information. Second, if the service provider is going to collect content charges for the content providers, it needs to be concerned with the quality of the contents. The set menu gives the chosen content providers a lot of power and enables them to charge for contents. And the service providers can collect these micro-payments through the monthly bills that they send to users. As we shall see later, this is the most profitable business model in the Japanese mobile Internet.

On the positive side, DoCoMo's restrictive policy may enable its content providers to make more money than they could under a less restrictive policy. This is certainly important in attracting content providers; without content providers, any mobile Internet service will fail. And the low income generated by the paid services, as compared with DoCoMo's packet income, suggests that the content providers are not making windfall profits even with this restrictive policy. Thus, in the short run, DoCoMo's restrictive policy may be contributing to its success.
In the long run, however, it is not clear whether this restrictive policy will be beneficial to Japan or even to DoCoMo. On the one hand, DoCoMo's restrictive policy has created an additional barrier to entry for firms—particularly new firms trying to enter a new industry. This barrier is in addition to the many entry barriers that already exist in Japanese industry and that are a significant reason for Japan's current economic problems. Just as Japan's Ministry of International Trade and Industry (MITI) has not been very successful in choosing winners, it is doubtful that DoCoMo will be much better. On the other hand, unofficial sites are growing rapidly and thus provide an alternative to the official set menu. These unofficial sites can be easily accessed by entering a site's uniform resource locator (URL). As of early August 2000, there were more than 18,700 independent i-mode-compatible sites, up from 5,000 in early 2000.
Mobile Internet Traffic in the Japanese Market

Young people's preference for reach over richness is reflected in the market for i-mode services. A different set of applications is driving the mobile Internet market than the fixed-line Internet market (both in Japan and elsewhere), in part because of the large number of young users. The Japanese mobile Internet is driven by e-mail and simple entertainment sites. In December 1999 e-mail represented 27 percent of i-mode traffic; this figure has not changed much since then. Access to official sites represented 34 percent of traffic; access to unofficial sites, 14 percent; automatic messages, 16 percent; and access to the menu, 9 percent.4 As discussed later, entertainment represents more than half of the traffic to official sites and an even higher percentage of the traffic to unofficial sites. This includes automatic messages—messages that are sent automatically by the content provider to users, consisting primarily of the daily delivery of animated characters and cartoons.
E-Mail

The difference between fixed-line and mobile Internet e-mail contents is even more interesting than e-mail's current large share of i-mode traffic.
4. DoCoMo Kansai (2000).
E-mail with mobile phones tends to concern what people are doing at the immediate moment or what they did the night before, because the higher reach of mobile phones makes this kind of immediacy possible. Messages such as "it's pretty hot today, how's work, when do you get off, how was your date last night, what are you doing tomorrow night" are common during the day. At night, "what are you doing right now, where are you, who are you with" are common messages. These types of messages are less common in fixed-line e-mail—by the time people receive a response to such a message, it has lost much of its relevance.
The Low Richness of Contents

Mobile Internet traffic also demonstrates the importance of young people and entertainment. As of early 2000, 55 percent of accesses to the main categories in DoCoMo's official sites were related to entertainment. It should be pointed out that the percentage of accesses is somewhat different from the percentage of traffic because automatic messages are not counted as accesses. As discussed earlier, 16 percent of overall traffic was accounted for by these automatic messages, and most of them go to entertainment-related sites. Thus the figure for accesses alone underestimates the importance of entertainment. The most popular entertainment categories are downloading ringing melodies, playing games, downloading animated characters and other pictures (as screen savers), horoscopes, and information about music. News (14 percent) is second to entertainment, followed by tickets-living (11), financial (6), local information (5), dictionaries and business tools (5), travel (3), and restaurants and recipes (1).5

Entertainment is the most successful i-mode category, and the entertainment services clearly tend toward high reach and low richness. They require fairly few clicks to obtain the desired information, and mistakes are not serious in terms of time or money. Further, the most successful sites have created effective search routines and other functions that enable users to obtain the desired information more easily. As expected, the users of the entertainment services are primarily young people.

News is the second most popular category. These services also tend toward high reach and low richness. There is far less information available in these mobile phone services than there is in newspapers, and it will be a long time before there are links to other home pages where more detailed information can be found.
5. DoCoMo Kansai (2000).
But for people who are commuting by train or bus, this type of simple news information is very useful. It is much easier to read the news on a small mobile phone screen than from a newspaper while standing on a crowded train or bus. In Japan it is often difficult to find a seat on a train or bus, or while waiting for one, and there is typically also insufficient space to open a newspaper under these conditions.

The information provided in the tickets and living category is similar in that the contents do not contain rich information, and the most successful services are those in which rich information is not needed. For example, rich information is not needed in the tickets service, and the major users of this service are young people, who favor reach over richness. Young people under twenty-five are also more likely to use the other services in this category, such as information on rentals, part-time employment, games, CDs, used cars, and education.

Although the current financial services are clearly not aimed at young people, they are aimed at low-richness, high-reach applications. The most popular i-mode financial transactions are relatively simple ones, such as balance checks and money transfers (mobile banking). While these transactions can clearly be done on the fixed-line Internet, many people tend to remember the need to do them when they are not at home. And even when they are at home, the simplicity of the tasks may make it faster to do them on a mobile phone than on a desktop computer, which takes longer to power up. Although these transactions represent less than 6 percent of total traffic on i-mode's official sites, they have probably played an even more important role in generating interest in i-mode than that low figure suggests. By convincing some of Japan's most conservative financial institutions to participate in i-mode, DoCoMo was able to convince the media and the public that i-mode was going to succeed.

In the long run, however, richer information is needed before the market for news, tickets-living, and finance will grow to the levels seen in the United States for fixed-line Internet services. It is not just a matter of improving the small screens and other technical limitations; the richer information will require changes in DoCoMo's policies. Unless DoCoMo changes its restrictive policy toward linkages, it will be difficult for news services to provide links with corporate and other home pages and for megasites to appear that allow the easy comparison and purchase of hotel reservations, airfares, rentals, new and used cars, books, CDs, and financial services. This is because DoCoMo cannot screen applications from every firm that has a home page.
Figure 15-3. Typical Billing Scheme in the Japanese Mobile Phone Market
[Figure: phone manufacturers sell phones to retail outlets for less than $300; service providers pay the retail outlets activation commissions of $300–$400; subscribers buy phones from the retail outlets for less than $50 and pay the service providers monthly and airtime charges averaging $40.]
This suggests that the United States and Europe may be able to move much faster in these areas of the mobile Internet than Japan.
Overall Business Model Used by Japanese Service Providers

Figure 15-3 summarizes the basic business model used by Japanese service providers. Like most non-Japanese service providers, they pay activation commissions to the retail outlets in return for the acquisition of subscribers. The Japanese service providers currently pay activation commissions of about 35,000 yen a subscriber. The retail outlets use this money to subsidize the price of phones and to advertise the phones and accessories. The service providers make their money on monthly and airtime charges. The major difference between this and the business models used by non-Japanese service providers is the larger activation commissions and the higher monthly and airtime charges. Further, unlike their counterparts in other countries, the Japanese service providers pay activation commissions not only to sign up new subscribers but also, to some extent, for exchanges of phones. Depending on the length of the subscription
and the age of the phone, many subscribers can obtain a new phone for less than 10,000 yen without even changing service providers or phone numbers. For new subscribers, the price of a phone is typically less than 5,000 yen.

These high activation commissions are one reason why the Japanese use the smallest phones in the world. Until recently there has been essentially one mobile phone segment in Japan, in which the manufacturers compete primarily on the basis of weight. With voice-only phones now weighing less than fifty grams and most phones now containing a micro-browser, the competition is shifting to display size and probably keyboard size (or another input method), since larger displays and better input methods enable users to obtain richer information. And the large activation commissions have kept the prices of i-mode and other browser phones similar to the prices of regular phones.

These high activation commissions also help the service providers move subscribers to new services such as i-mode. In August 2000 DoCoMo was attracting more than one million new subscribers a month to its i-mode service and only about half that many new subscribers overall. Thus roughly half of its new i-mode subscribers are existing voice subscribers who have traded in their old phone for a new i-mode phone (for less than 10,000 yen).

The adoption of such a business model would clearly be a major change for many U.S. and European service providers. The higher activation commissions would require much higher monthly and airtime charges, and these higher charges would probably be unacceptable to most users. Nevertheless, the lower activation commissions in the United States and Europe will slow their move to the mobile Internet and to the phones that can most easily access it (that is, phones with large screens that can access richer information).
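The arithmetic behind these commissions can be made concrete with a back-of-the-envelope sketch. The calculation below, which is only illustrative, uses the chapter's figures of roughly 35,000 yen per subscriber in activation commissions and average monthly and airtime charges of about $40; the yen-dollar exchange rate is an assumption, not a figure from the chapter.

```python
# Minimal payback sketch for activation commissions, using the chapter's
# figures (about 35,000 yen per subscriber; monthly and airtime charges
# averaging $40). The exchange rate is an assumed, illustrative value.

YEN_PER_DOLLAR = 120  # assumption, roughly the rate around 2000-01


def payback_months(commission_yen: float, monthly_revenue_usd: float) -> float:
    """Months of monthly and airtime revenue needed to recover the commission."""
    monthly_revenue_yen = monthly_revenue_usd * YEN_PER_DOLLAR
    return commission_yen / monthly_revenue_yen


if __name__ == "__main__":
    months = payback_months(commission_yen=35_000, monthly_revenue_usd=40)
    print(f"Approximate payback: {months:.1f} months")  # roughly 7-8 months
```

Under these assumptions a service provider recovers the commission in well under a year of billing, which helps explain why such large subsidies have been sustainable in Japan.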
General Billing Models Used in Japan's Mobile Internet Market

Figure 15-4 summarizes a number of billing methods or business models that are being used by Japanese service and content providers in the mobile Internet market. Currently, the most widely used billing method in terms of revenues is the "clearinghouse" model, in which the service providers collect the information charges for the content providers. For example, when an NTT DoCoMo user subscribes to a content service that is on the NTT DoCoMo official menu, NTT DoCoMo includes these fees on the subscriber's monthly bill.
Figure 15-4. Billing Schemes in the Japanese Mobile Internet Market
[Figure: flows of money and of information or products among service providers, content providers, other firms, existing businesses, consumers, and other payment schemes (credit cards, debit accounts) under four models: 1 = clearinghouses; 2 = pay to have contents loaded; 3 = support existing business; 4 = advertisement-related model.]
NTT DoCoMo takes 9 percent of these fees as a handling charge, and the remaining monies are delivered to the content providers. Most of the entertainment and news services use this business model. Some third-party firms are also offering these types of clearinghouse services.

A second business model is the "pay to have contents loaded" model. Many websites and content providers that provide information in the tickets-living and town information category use this. This is partly a carryover from the practice of the traditional print media in Japan, where firms also pay to have information about their bars, restaurants, employment needs, rentals, and cars described in positive terms. These payments are probably less common in the United States or Europe, where a stronger distinction is made between advertising and analysis.

A third business model is used heavily by financial firms, concert ticket providers, airlines, government offices, rental car agencies, TV and radio
stations, and manufacturers. These firms provide free information or purchasing services to their customers in order to support their existing businesses. A variation on this model is the "shopping" business model. Shopping is still in its infancy on both the mobile and the fixed-line Internet in Japan—credit cards are uncommon and costly, and there is a lack of confidence in using them on the Internet, although this problem will be solved in the near future. It is estimated that there were about 100 million yen in sales over the mobile Internet in July 2000.

A fourth business model, which is just beginning to take off in Japan, is both similar to and different from the advertising model used on the fixed-line Internet. It is similar in that it includes firms paying to have their advertisements loaded onto service provider and content provider menus. It is different in that mobile phone advertisements are actually experiencing higher viewing and click rates than are seen on the Japanese fixed-line Internet—either because of the smaller screens or because of the tendency of users to use the phone to kill time. It also appears that many users will be paid to view advertisements, particularly in the form of discount coupons. Some of these monies will flow directly from the advertiser to the user, and other monies will flow through a payment provider.
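The revenue split in the clearinghouse model described above is simple but worth making explicit. The sketch below, with hypothetical fee and subscriber figures chosen purely for illustration, shows how a monthly content charge divides between the content provider and the service provider under the 9 percent handling charge mentioned earlier.

```python
# Minimal sketch of the clearinghouse split: the service provider bills the
# subscriber, keeps a 9 percent handling charge, and passes the remainder to
# the content provider. The fee level and subscriber count are hypothetical.

HANDLING_RATE = 0.09


def clearinghouse_split(monthly_fee_yen: int, subscribers: int) -> dict:
    gross = monthly_fee_yen * subscribers
    handling = gross * HANDLING_RATE
    return {
        "gross_fees_yen": gross,
        "service_provider_share_yen": handling,
        "content_provider_share_yen": gross - handling,
    }


if __name__ == "__main__":
    # e.g., a 300 yen/month site with 100,000 subscribers (illustrative only)
    print(clearinghouse_split(monthly_fee_yen=300, subscribers=100_000))
```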
Discussion

The mobile Internet is clearly an important market and an important part of the overall Internet revolution. Japan had more than 30 million mobile Internet subscribers at the end of February 2001. In February 2001 alone, the market for mobile Internet services, including monthly and packet charges, was about 50 billion yen, and mobile contents and commerce amounted to 7 billion yen. Japan expects to have about 50 million mobile Internet subscribers by the end of 2001, and the markets for mobile contents and for mobile services (packet charges) are expected to exceed $1 billion and $7 billion, respectively, in 2001. Contrast this with the rest of the world, where the number of mobile Internet subscribers (that is, WAP subscribers) totaled 4 million as of mid-August 2000.6 Further, it is generally agreed that few WAP subscribers actually use the service, thus placing the market for WAP contents and services outside Japan at almost zero in late 2000.7
6. "China and Japan Seen as Key Drivers for New-Generation Internet Access Source," South China Morning Post, September 19, 2000.
Figure 15-5. Japanese versus U.S. and European Approaches to the Mobile Internet
[Figure: on the richness-reach plane, the European and U.S. approach starts from the high-richness fixed-line Internet, while the Japanese approach starts from the high-reach mobile Internet (phones); both move along the current Internet trade-off, through the mobile Internet (PDAs), toward a common goal.]
Source: Author's calculations.
Part of the problem is technical, since few service providers outside Japan had implemented a clearinghouse function or packet communication capability, and there were still problems with the WAP specifications in late 2000. But the different results have more to do with differing perceptions of the concepts of reach and richness. While the West tries to move the high richness of the fixed-line Internet onto the small screens of mobile phones,8 the i-mode experience shows us that the most successful applications are simple contents like entertainment (see figure 15-5). This is because of the technical limitations of current phones (small screens and keypads and low data rates) and the fact that young people favor reach over richness. Thus
7. K. Chan, "'Mobile Internet' Hits Bumps but Next Year Looks Better—Despite Flaws WAP Is Set," Dow Jones Newswires, September 25, 2000.
8. U.S. and European firms have made a large number of announcements concerning their plans for the mobile Internet. Most of these suggest they are focusing on contents that are currently successful on the fixed-line Internet. For example, see Chan, "'Mobile Internet'"; J. Dodge, "The Wireless Web Is Going Nowhere Fast," eWEEK (Dow Jones Newswires), September 25, 2000; and R. Tiernan, "Moving at WAP Speed," Dow Jones Newswires, October 12, 2000.
the initial applications and users of the mobile Internet are substantially different from the current applications and users of the fixed-line Internet.

In their focus on business users and business-related contents, Western service providers have not created the necessary positive feedback among contents, users, and phones. In fact, they have created a negative feedback loop in which users' poor reception of WAP phones and contents has led to lower investments by content providers and phone manufacturers in WAP contents and phones. This negative feedback will be difficult to change, and it requires that Western service providers rethink their entire business model for the mobile Internet. Only by focusing on the contents and users that are most suited to the mobile Internet will they be able to create positive feedback among contents, users, and phones and thus create a successful mobile Internet. This would be particularly problematic for those service providers that are investing in third-generation licenses and technologies. Third-generation services will probably succeed only if the mobile Internet is in place to some extent when they are started: in 2001 in Japan, 2002 in Europe, and probably later in the United States. This is because third-generation services must build on the mobile Internet.

Interestingly, the factor that explains the different results seen in Japan and the rest of the world is not openness or, conversely, the so-called walled garden effect. NTT DoCoMo, the other Japanese service providers, and many Western service providers have created somewhat closed menus, which many people believed would doom an Internet business. But this has not prevented growth in the Japanese market—unofficial sites have grown rapidly, and many firms have created technologies, including clearinghouses, to support these unofficial sites.

If the mobile Internet were only about simple contents like entertainment, these issues of reach versus richness, closed versus open menus, and different types of business models would be relatively unimportant. But the mobile Internet is more than mere entertainment. The contents, business models, and menus will evolve because the trade-off between fixed-line and mobile Internet access will change. It was argued earlier that the trade-off between reach and richness is an important concept for understanding the trade-off between fixed-line and mobile Internet access. And several trends in Japan and elsewhere will cause this trade-off to change, as shown in figure 15-6. Java-based phones, phones with larger and better displays (most displays are already color), and better input methods will enable mobile phones to access richer information.
Figure 15-6. The Future of the Trade-Off between Richness and Reach
[Figure: the current Internet trade-off curve running from fixed devices through PDAs to phones rotates outward into a new trade-off offering more richness at every level of reach.]
a. The new trade-off is the result of new phones, better data services, and new PDAs.
Faster data services, such as those available in third-generation systems, will also enable phones to acquire richer information. Bluetooth (a method of providing short-range communication between PDAs and phones) and the subsidization of personal digital assistants by service providers will increase the reach of PDAs and thus provide another method of obtaining richer information through the mobile Internet. All of these changes will cause the curve that represents the trade-off between the fixed-line and mobile Internet to rotate in a counterclockwise direction, as shown in figure 15-6.

The changes in this trade-off will enable richer contents to succeed and more interesting applications to appear on the mobile Internet. It is highly likely that traffic to sites concerned with such areas as news, finance, rentals, employment, new and used cars, CDs, education, airlines, and hotels will increase at the expense of entertainment as the trade-off changes over the next few years. Further, better presentation of contents, including the synergistic use of fixed and mobile contents, will also cause these richer
contents to become more widely used. Already firms are integrating the best aspects of fixed and mobile contents, including the addition of dynamic pricing in Japan.

Further, Japanese firms are already beginning to create mobile intranets that use i-mode phones. These mobile intranets provide employees with mail and access to a variety of internal databases such as schedules, customer data, and inventory data. Examples of applications currently under development for various industries in Japan include creating reports for in-home medical care, job scheduling for construction firms, bulletin boards for universities, inventory management for vending machine and beer distribution firms, auctions for used car dealers, and fleet management for package delivery services and freight transporters. For example, Sagawa Kyubin had provided 25,000 of its employees with i-mode phones by January 2001, which will help them manage the delivery of parcels. The new phones and PDAs mentioned earlier, along with the creation of general-purpose software, will accelerate the emergence of these applications. Many Japanese and foreign firms, including Oracle, Sun, and IBM, are currently developing this type of general-purpose software.

The emergence of these richer contents and applications will eventually make the mobile Internet an important part of the future. But without investments in the initially appropriate contents and users, the positive feedback needed to create the new phones and PDAs, and thus cause an evolution in the trade-off between reach and richness, will not be generated. Japan has created positive feedback among contents, users, and phones and thus will move toward these new applications and richer contents much earlier than the West.
References

DoCoMo Kansai. 2000. "Kontentsu Bijinesu no Tenkai (The evolution of the contents business)." Presentation at Kobe University, June 28.
Natsuno, Takeshi. 2000. i-mode Strategy. Tokyo: Nikkei BP.
Wurster, Thomas, and Philip Evans. 2000. Blown to Bits. Harvard Business School Press.
16
E-Commerce and Network Architecture: New Perspectives
Originally, roads and bridges were designed to carry people, horses, and carriages. When automobiles first appeared on the roads and their use began to spread, many existing bridges had to be reinforced and otherwise upgraded to accommodate the different needs of this new form of traffic. Roads had to be widened and hardened. The inadequacy of the existing infrastructure put limits on the size and weight of the cars that could travel on them. And a single insufficient bridge or narrow road was often enough to cut off entire regions or neighborhoods from the changes automobiles brought. It was quite some time before a truly adequate system of new roads and bridges, optimized for automobiles rather than people and horses, had been built. Until then, traffic congestion around the "relics from the past" was a common feature of the modern transportation era, and anybody wanting to use automobiles for business had to reckon with a slew of technical limitations.

The story of e-commerce and the Internet to date has much in common with the scenario sketched above. The Internet is a packet-switched data network that was built over a physical infrastructure designed and deployed for the circuit-switched world of voice telephony.1
1. In a packet-switched network, a stream of data is divided into smaller packets that are placed in electronic envelopes, labeled with the address of the addressee, numbered, and sent off over shared
Despite the massive amounts of money spent over the last few years on upgrading and expanding capacity in all parts of the network, this effort has for the most part gone toward reinforcing old roads and bridges. Fundamentally new bridges, optimized for new forms of traffic, are only now being built. The first generation of e-commerce applications, much like the first automobiles, has used an infrastructure conceived, built, and optimized for a different world. Our current data infrastructure is an awkward hybrid of legacy networks inherited from the era of analog voice telephony, partially upgraded and adjusted for the task of carrying digital data.2 The limitations and inefficiencies inherent in hybrids of this kind constrain current and future e-commerce applications. Upgrades, fixes, and workarounds allowed a very rapid deployment of a first-generation Internet. The next-generation Internet and its accompanying e-commerce applications will be built on a new network that is designed from the ground up for digital high-speed transmission. This will not only make current applications run "better," but also enable applications we cannot even imagine today. Shaped by the demands of sophisticated commercial applications, network evolution is approaching a dramatic shift.

This chapter previews the future of communications networking. It identifies how one crucial factor in the Internet's dramatic success—its ability to diffuse over an existing infrastructure—is increasingly becoming its main limitation. Business needs are the principal drivers of next-generation networks. These networks will be much simpler in their architecture than existing technologies. Paradoxically, the simpler the network, the more sophisticated the applications it supports. Next-generation networks, no longer optimized for operating on a historically voice platform as the old ARPANET did, will be easier to control.
circuits. Special computers called "routers" read the address labels and route the packets toward their destination. No two packets need take the exact same route through the network. The data are simply reassembled at the destination once all the packets have been received. In a circuit-switched network, by contrast, data are exchanged over a dedicated end-to-end connection. Switches establish a unique connection between two users. All data exchanged between the users travel over the same circuit, and the circuit does not carry any other users' data while the connection is in effect. For a useful pair of diagrams illustrating the distinction between circuit and packet switching, see Cerf (1991, p. 76).
2. Analog electronic technology transmits information as a wave whose varying frequency and amplitude contain the data being transmitted. Data are affixed to a carrier wave—a process called modulation—sent, then decoupled from the carrier wave at the receiver—a process called demodulation (hence the name "modem"). Digital technology, by contrast, codes information directly in binary form as a sequence of zeros and ones. It can then be sent over electronic links (the presence or absence of an electric pulse indicating zero or one) or optical links (the presence or absence of a flash of light indicating zero or one).
Control will make the network more efficient, robust, scalable, and flexible. But because the network architecture and information transport (internally and between cooperating networks) are more explicit and capable of being monitored, the changes will also make the network more "regulable," raising important policy questions.3

Assessing next-generation networks requires a brief review of the development and current state of the Internet. We then illustrate how early e-commerce applications have turned the network into a set of patches, fixes, and workarounds. Subsequent sections describe the architectural principles of next-generation networks and how they meet the demands of current and future e-commerce applications. A final section considers the implications for network control and regulation.
Network Development and the Current State of the Internet

In essence, the Internet is a set of open network and data transmission protocols deployed on a hybrid of underlying facilities. In other words, what makes the Internet the Internet is not a physical network but a common set of public and open technical standards that enable the seamless exchange of information across data networks. Hence the name "inter"net.4 The goal was a "network of networks" that would guarantee communication links between distant locations even if parts of the physical infrastructure were unavailable. In short, our current Internet is a set of standards that enables machine communication in a decentralized, nonhierarchical, distributed network of networks.

The Internet's architecture differs dramatically from that of the voice telephone system. The telephone system is circuit-switched. A single-application terminal—the telephone—is connected to an intelligent switching system that establishes end-to-end connections among terminals. For the duration of a call, a dedicated circuit remains open between the parties. The Internet, by contrast, is packet-switched. Intelligent, multiple-application terminals—computers—break data into little packets, number them, and add an address to each one before sending them away.
3. For the idea of architecture and code determining a network's "regulability," a term that is admittedly slightly awkward but useful, see Lessig (1999).
4. The word Internet is actually a contraction of "Intergalactic Network," the name given it by the head of the program office at ARPA that funded it.
The network's routing system then moves each data packet, according to the address label affixed to it, closer to its destination. The computer at the other end of the transmission receives the packets, requests that individual packets be resent in case they are lost, reassembles the data into their original form, and presents them to the user.

David Isenberg describes the difference between the two architectures as that of an "intelligent" versus a "stupid network."5 Telephone networks have low information-processing capability at the terminals and an intelligent core. The Internet has intelligent terminals and a "stupid" core. This architectural difference translates into a different relationship between services and applications. The telephone system is designed for a single application—opening and closing a circuit for a voice call. Consequently, the transportation service is coupled with its application. The Internet, by contrast, is designed in a layered fashion, with its transport of digital packets effectively indifferent to the use of the data in the packets. This was a conscious effort to decouple transportation and applications. The Internet's core does not differentiate between different kinds of data; instead, it follows the simple doctrine of "route the packet, stupid!" To the largest extent possible, applications are pushed "up and out," away from the core data transportation layers and to the end points of the network. This means that a common transport infrastructure can be used to support multiple functional environments, applications, and systems. This design principle, known as the "end-to-end argument in system design,"6 gives the Internet the flexibility and versatility it is known for, as new applications do not necessitate changes to the network core.
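The packetize-and-reassemble cycle described above can be illustrated with a short sketch. The toy code below is a conceptual illustration, not a real IP implementation: a sender splits a message into numbered, addressed packets, the network may deliver them in any order, and the receiver reorders and reassembles them. All names and values are illustrative.

```python
# Toy illustration of packet switching: numbered, addressed packets that can
# arrive out of order and still be reassembled at the destination.
# This is a conceptual sketch, not an implementation of IP.

import random
from dataclasses import dataclass


@dataclass
class Packet:
    destination: str
    sequence: int
    payload: bytes


def packetize(message: bytes, destination: str, size: int = 8) -> list[Packet]:
    """Break a message into small, numbered packets bound for one address."""
    return [Packet(destination, i, message[off:off + size])
            for i, off in enumerate(range(0, len(message), size))]


def reassemble(packets: list[Packet]) -> bytes:
    """Put packets back in sequence order, regardless of arrival order."""
    return b"".join(p.payload for p in sorted(packets, key=lambda p: p.sequence))


if __name__ == "__main__":
    msg = b"route the packet, stupid!"
    packets = packetize(msg, destination="198.51.100.7")  # illustrative address
    random.shuffle(packets)            # packets may take different routes
    assert reassemble(packets) == msg  # the receiver restores the original data
```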
Rapid Diffusion over Existing Infrastructure The Internet’s dramatic success as a communications technology is in large part due to its core design principles of unbundling transportation and applications, interoperability of network layers, and intelligence at the network end points. Its rapid diffusion rate and its success as a medium for commercial applications are due to the fact that the technology was rolled 5. Isenberg (1997, p. 16). See also www.hyperorg.com/misc/stupidnet.html (April 10, 2001). 6. The original end-to-end argument in network design was formulated in Saltzer, Reed, and Clark (1984). For a recent elaboration with respect to the future of the Internet, see Clark and Blumenthal (2000).
out over the previously deployed infrastructure of the old telephone system.7 This allowed the Internet to economically leverage the largely paid-for voice infrastructure, in effect standing on the shoulders of the prior network technology. An example of this is flat-rate local phone service, designed initially for three- to ten-minute phone calls, supporting one- and two-hour modem calls at no additional cost to the customer. Additionally, the Internet—perceived by U.S. regulators as an "enhanced service"—was not subject to many of the tariffs and fees that had grown up around the "basic service" telephone network. These fees, while historically rooted in true underlying costs, have grown into a complex system of subsidies and economic distortions layered on voice telephony. The Internet avoided much of this cost distortion and thus enabled service provision at a lower price than much of voice telephony service. Private users connected through dial-up modems via residential phone lines, and institutional users tended to be connected via local area networks (LANs) through leased lines to a network gateway. The Internet's initial backbone was simply the backbone that had long sustained long-distance telephony, now enabled for data traffic.

Using existing infrastructure was certainly a source of strength for the early Internet. Given that this infrastructure was designed and optimized for a network with entirely different service and application profiles, however, the prevailing architecture operates with many inefficiencies. Our current networks are a hybrid—data communications networks constructed on the foundation of the old voice networks. The physical technology layers of the two are different, leading to the inevitable compromises that occur when one tries to fit two worlds together. In addition, there is the level of complexity generated by a series of local workarounds to basic problems.
Commercial Applications and Technological Fixes

The commercialization of the Internet in 1994 and the introduction of the first web browser opened the floodgates. Whether one chooses to look at the number of users, number of sites, or the growth in data traffic, every indicator suggests that the Internet is the most rapidly diffused communications
7. Examples of Internet applications are e-mail, audio, video, web pages, file transfers and downloads, and peer-to-peer computing such as SETI-Online and Napster.
technology in history. User-driven innovation has generated a vast number of applications, commercial and other, unimaginable even to the Internet's creators. From the beginning, however, sophisticated commercial applications in particular have encountered the constraints imposed by the current networks' hybrid character. In fact, much network technology innovation and many exorbitant stock valuations over the past few years have had their origin in the need to work around or stretch current network limitations. The current networks are essentially a set of patches, fixes, and workarounds rather than a coherent design tailored to the needs of a wide array of users. This increased complexity leads to higher costs, less flexibility and robustness, and limited capabilities for expanded services. Let us consider a few examples of how current network technology forces fixes and workarounds or simply constrains what can be done on the net.

First, let us examine security. The need for sophisticated encryption technology arises from the fact that confidential information and general information travel alongside one another through the same pipes and routers. Encoding data at one end of a connection and decoding it at the other is a software-based fix for the lack of network-provided security. An alternative is the deployment of physically secure networks, an approach being adopted by many governments and, increasingly, by commercial entities. Given the substantial infrastructure investment, however, this option may be economical only for users with very high data volumes until the prices for dedicated networks fall closer to underlying costs.

Second, let us look at the impact of architecture on the transport of different types of information and the equipment needed to support this transport. The large number of routers involved in carrying traffic over the Internet has made Cisco Systems, an early provider of router equipment, one of the most powerful players of the New Economy.8 Put simply, the high demand for Cisco's current line of products is due to the lack of linear, engineered connections between major data centers. Sending a large file from coast to coast commonly entails millions of switching and routing processes in which each data packet is routed and rerouted until the packets, passing through multiple routers and often multiple carrier networks, are ultimately reassembled at the addressee. Often the traffic is carried
8. It has also enabled many of Cisco's competitors, such as Juniper (with investors such as Ericsson, Nortel [Northern Telecom], the Siemens/Newbridge Alliance, and 3Com), to grow in this standards-based environment.
over less-than-optimal routes, and typically the messages are split across multiple different routes. This is fine for asynchronous traffic like e-mail (although there is the cost of the inefficiency involved) but wreaks havoc on broadband and streaming traffic.

Third, caching, the deployment of computer-based storage of frequently used data at the edges of the network (as provided by Akamai), is another example of an elegant but complex workaround. Given the problems associated with delivering high-quality streaming media through existing networks, Akamai offers its customers local "storage space" to shorten the path length—that is, the distance to users—and to minimize the number of router hops along the way. This improves the quality of the delivered product and the response time, but it radically increases the complexity and lowers the scale efficiencies of the overall network.

Finally, consider the current obstacles widespread streaming video encounters in the Internet's core. In October 1999 Cisco demonstrated that the existing public Internet could not handle even 10,000 simultaneous high-quality video streams. The "Net Aid" project, sponsored by Cisco, used the Internet to send a large number of video feeds of concerts. But while Cisco's video servers were capable of handling the large number of user requests, many users were not able to get their video feeds because their requests were stifled by the chaotic nature of the Internet's infrastructure.
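The edge-caching workaround discussed above amounts to keeping popular objects near users so that repeated requests avoid the long, many-hop path back to the origin server. The sketch below is a minimal, generic illustration of that idea using a least-recently-used cache; it is not Akamai's actual system, and the fetch_from_origin callback is a hypothetical stand-in for a retrieval over the network core.

```python
# Minimal sketch of an edge cache: popular content is served locally (short
# path), and only misses travel the long, multi-hop path to the origin.
# Illustrative only; not a model of any particular vendor's system.

from collections import OrderedDict


class EdgeCache:
    def __init__(self, capacity: int = 1000):
        self.capacity = capacity
        self.store: OrderedDict[str, bytes] = OrderedDict()

    def get(self, url: str, fetch_from_origin) -> bytes:
        if url in self.store:                  # hit: served from the edge
            self.store.move_to_end(url)
            return self.store[url]
        content = fetch_from_origin(url)       # miss: long path through the core
        self.store[url] = content
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)     # evict least recently used item
        return content


if __name__ == "__main__":
    cache = EdgeCache(capacity=2)
    origin = lambda url: f"<content of {url}>".encode()  # hypothetical origin fetch
    cache.get("http://example.com/stream1", origin)      # first request: miss
    cache.get("http://example.com/stream1", origin)      # repeat request: hit
```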
Building Next-Generation Networks from the Bottom Up

Given the commercialization of the web, it is natural for next-generation network architecture to be shaped by the demands of current and anticipated future e-commerce and related emerging commercial applications. It is commonplace to identify network evolution with the debate over residential broadband Internet access and the dramatic increase in backbone capacity to accommodate the associated increase in data traffic. However, as the previous section illustrates, many current constraints on commercial applications result from the architecture of the network's core, not its overall backbone capacity or the access bottlenecks in the local loop. In other words, an Internet with ubiquitous broadband access to the home and a backbone with capacity several orders of magnitude greater would still require considerable fixes and workarounds to reckon with the architectural limitations.
Given that existing networks are already a set of fixes, patches, and workarounds, it should come as no surprise that evolutionary improvements of existing networks will be insufficient to meet the demands of future applications. Entirely new networks designed for broadband-quality data traffic are needed. These next-generation networks will be qualitatively different in addition to being significantly larger and faster. It is not simply a story of bigger pipes and faster switches but one of entirely new components, new ways of connecting pieces, and new processes. It is a story of building an information network from the bottom up, avoiding all the flaws that rolling out a data network over a voice infrastructure naturally entailed. Next-generation networks will remove these compromises and be the first to be built for a new generation of traffic that is broadband in nature and highly distributed in its origin.

So what will a next-generation network, designed from the bottom up and optimized for Internet Protocol data traffic, look like? The short answer is: architecturally, as simple as possible. Making networks simpler in essence means reducing the number of active components in the traffic path. Active components tend to introduce delay and raise the potential for failure and information loss. Two trouble spots in particular can be identified. The first concerns the large number of routers, and the second has to do with the conversion of electronic signals into optical signals and vice versa.

With respect to routers, the simple contention is that the fewer routers, the better. Routers, the computers located at network hubs that read the address labels on data packets and send them in the proper direction, introduce delay and in times of congestion can lose data (dropped packets). While multiple routers were essential to making early packet-switched networks run over infrastructure optimized for circuit-switched voice traffic, the large number of these core building blocks of the current Internet has to be considered a workaround as far as the requirements for new high-speed products are concerned. As the price of fiber comes down and fiber bandwidth capacity scales in a fashion quite similar to Moore's Law for transistor density on semiconductors, deploying larger-capacity optical transmission pipes between fewer hubs and routers becomes not only desirable from an engineering perspective but economically sensible.

To put developments into perspective, more information could be sent over a single state-of-the-art fiber optic cable in the year 2000 in a second than all the information that was sent over the entire Internet in 1997 in a month.9
As fiber transmission costs drop, using dedicated optical channels for certain traffic makes much more technical and economic sense than routing it.

But using fewer routers and bigger pipes between fewer hubs is only part of the story of next-generation networking. Just as important is the fact that optical parts will do much more in next-generation networks than merely provide tremendous transmission capacity. Most of the current backbone already consists of fiber optic cables, and the capacity keeps increasing. Inefficiencies arise, however, because signals are converted too many times from electronic to optical and vice versa. Next-generation networks will be characterized by what are known as ultra-long-haul (ULH) transmission systems that avoid this OEO10 conversion, coupled with optical switches and optical routers.

Creating very high capacity on a fiber system requires two different technologies. The first involves transmission speed increases; production systems will soon be capable of sending 40 Gbps (40 billion bits per second) over a single optical channel. But a single fiber can carry not just one stream of optical flashes but upwards of 200 in some systems. This bit of magic, the second technology, is created by sending different colors of light down the same fiber, using a system called wavelength division multiplexing (WDM; high-channel-count WDM is referred to as dense wavelength division multiplexing, or DWDM). Differentiating streams of data by color not only dramatically increases overall capacity (as a large number of streams can be transmitted), it also in essence divides a pipe into a large number of functionally separate pipes. Lasers will send signals over the same fiber optic cable in hundreds of colors simultaneously, creating hundreds of separate data channels on the same physical medium that can be separated at the end, as in WDM systems, or separately routed or switched. The frequency or color division of signals in next-generation optical networks is not intended to enable users to tune in to different data streams; rather, the purposes are to provide a large number of dedicated channels, with advantages that will be discussed below, and to enable pure optical routers. Whereas current-generation electronic routers read a
9. Gilder (2000, p. 10). Gilder suggests that available bandwidth doubles every six months, three times as fast as Moore's Law for the number of transistors on a chip, and (modestly) coins this relationship "Gilder's Law."
10. Optical-electronic-optical: optical in, electronic internally, and optical out. All-optical active devices are often referred to as OOO devices.
packet's address label and direct it according to particular algorithms, the beauty of frequency-divided optical signals is that the address is part of the message.11 Networks become possible in which optical data streams can be directed on the basis of their frequency. For example, two signals could leave San Francisco over the same fiber but with different frequencies, and an optical router along the way could direct the red stream to Chicago and the blue stream to New York. The frequency indicates the destination.

The design characteristics of next-generation networks are expressed by two basic principles: fewer hubs and maximizing the optical part of the network. One U.S. carrier is building network hub centers close to locations where data production and data consumption are highest. These "content centers"—twelve in all—are connected to a total of 300 markets without a single router, with each of the markets connected to at least two centers through independent optical connections and gigabit switches at the end. Naturally, the twelve centers are in turn interconnected to support redundancy and reliability in case of a local failure.

Traffic patterns on a network with these characteristics will be dramatically different from those of existing networks. Whereas typically 80 percent of current network traffic is located in the backbone and 20 percent in tributary networks, the breakdown in next-generation networks will be almost uniform. Data-intensive two-way communication such as long-distance video conferencing, for example, will be transmitted through the backbone, whereas most streaming content (video, audio, and so on) will be regional—that is, sent from the regional content center to the end-consumer. Using regionally managed copies of most media applications makes sense from a technical perspective, as it shortens and simplifies transmission to consumers. It also makes sense from a legal perspective. Storing copies regionally will lessen potential problems associated with cross-border flows of streaming data, since each copy can be made to conform to applicable law. Furthermore, regional caching reduces the need to cache content locally, that is, on computers at the network edge, thereby avoiding potentially problematic and complex legal questions about proper use of proprietary content.
11. There is a fixed relationship between frequency (f) and wavelength (λ): f · λ = c. Thus a wavelength system is also frequency based. Other switching approaches use digital addresses or time slots, for example, Time Division Switching (TDS).
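The San Francisco example above can be sketched in a few lines: in a wavelength-division system the color of a stream can itself encode its destination, so an optical router can forward traffic without reading packet headers. The wavelength-to-city table below is purely illustrative, not an actual carrier's channel plan.

```python
# Toy model of wavelength-based routing: the wavelength (color) of a stream
# implies its destination, so no per-packet address lookup is needed.
# The channel assignments are hypothetical, for illustration only.

WAVELENGTH_PLAN = {
    "red channel (e.g., 1550.12 nm)": "Chicago",
    "blue channel (e.g., 1552.52 nm)": "New York",
}


def optical_route(streams: dict[str, str]) -> dict[str, list[str]]:
    """Group incoming streams by the destination implied by their wavelength."""
    routed: dict[str, list[str]] = {}
    for stream_id, wavelength in streams.items():
        destination = WAVELENGTH_PLAN[wavelength]
        routed.setdefault(destination, []).append(stream_id)
    return routed


if __name__ == "__main__":
    incoming = {
        "video-feed-A": "red channel (e.g., 1550.12 nm)",
        "backup-traffic-B": "blue channel (e.g., 1552.52 nm)",
    }
    print(optical_route(incoming))
    # {'Chicago': ['video-feed-A'], 'New York': ['backup-traffic-B']}
```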
Meeting the Needs of Future E-Commerce Applications

Paradoxically, by making the network simpler, more complex tasks can be performed, and they can be done faster, more reliably, and more efficiently than with current networks. If simplifying the network is understood as making it more "stupid" in the terminology introduced above, then the counterintuitive result of simplification is likely to be a network capable of supporting applications far more advanced than the hybrid patches and fixes we are accustomed to today. Let us examine how simpler networks enable more sophisticated applications.

Advanced e-commerce increasingly requires (1) the capacity to deliver large information and content objects;12 (2) a predictable and high degree of network quality to deliver streaming media and other data- and time-sensitive products and services; (3) a medium for secure transactions; and (4) the ability to support even complex transactions with "nomadic" users—that is, users connected via wireless devices.

It should be obvious that networks designed according to the principles identified in the previous section speak clearly to one problem of current networks: the lack of the reliable, fast, high-quality data transmission that is a prerequisite for most streaming media applications. Regional management will simplify and economize the transmission of streaming data. Furthermore, clearing the backbone of traffic that can be confined to regional tributaries, and further improving backbone capacity, greatly increase the ability of the user to transmit large units of data (such as a massive file or video stream) efficiently through the network. This satisfies a second demand on future networks that has arisen as a consequence of surging commercial activity.

This kind of network architecture also provides for greatly enhanced data security in the network itself, rather than security provided solely by the addition of encryption software. Eliminating as many routers as possible and using wavelength division to create separate channels on single fibers make dedicated connections feasible. Large-bandwidth users and users who transmit sensitive data can lease dedicated channels, connections that will transmit only their packets and nobody else's. These "optical virtual private networks" (OVPNs) are inherently more secure due to their isolation of
12. Today this is typically the case in media-rich web pages.
traffic. If pipe sharing is eliminated, and a degree of physical security is maintained, packets need not be encrypted in order to be secure, reducing cost and transmission time. Finally, let us consider wireless. To say the least, the growth in wireless network services has been rapid over the past few years. At the same time, wireless follows the earlier development of wireline networks as traffic patterns shift from pure voice to an increasing share of data. What began with short messaging and two-way paging is increasingly moving toward sophisticated wireless data transactions. Japan’s i-mode is already providing (narrowband) “always-on” packet-switched data services, and the introduction first of 2.5G systems such as General Packet Radio Service (GPRS) and later of 3G networks will greatly enhance the diffusion of wireless data. Current network architecture, however, requires a user to be in continuous contact with the network to carry out transactions. Disrupting a connection generally entails starting anew when it comes to online applications. Clearly, a (technological) requirement to maintain a continuous connection to the network imposes limits on nomadic users and their applications. In order to free mobile users from these constraints, a messaging layer that is interconnected with the wireline network needs to be built in so that the network itself is capable of storing information until the connection with the user is reestablished. “Always-on” connections thus in essence become “never-lost” connections. Local data storage is again of crucial importance, and a network designed as described above therefore promises to accommodate the special needs of mobile users. Next-generation networks will meet many demands of advanced e-commerce applications. In the first round of user adoption, existing tasks will be done better and more efficiently. In the second round, new business models and business practices will emerge that are impossible with current technology. At the same time, however, next-generation networks will generate demand for the management of these new technologies. The flexibility and scalability of next-generation networks will benefit all users, whether private or corporate. They are also certain to make the world of information transmission more complex as a whole range of new services and service packages become available. Given this complexity, many users are likely to outsource the matching of network services to data needs. A whole new market for value-added services in the area of real-time negotiation, set-up, and settlement of data transmission services will therefore emerge. An average user may never know how the connection was provided and
configured for his or her particular needs or even which provider supplied the service.
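A small sketch may clarify the store-and-forward messaging layer described earlier in this section. It is only a schematic illustration under assumed class and method names; no actual wireless standard such as GPRS or i-mode is being modeled. The network-side layer simply holds a nomadic user's messages while the radio link is down and replays them once the user reattaches, which is what turns an "always-on" connection into a "never-lost" one.

# Schematic sketch of a network-side store-and-forward messaging layer.
# All class and method names are assumptions for illustration; no actual
# wireless standard (GPRS, i-mode, 3G) is being modeled here.

from collections import deque

class MessagingLayer:
    """Holds a user's messages while the user is unreachable."""

    def __init__(self):
        self.connected = False
        self.pending = deque()
        self.delivered = []

    def send(self, message: str) -> None:
        # If the radio link is up, deliver immediately; otherwise store.
        if self.connected:
            self.delivered.append(message)
        else:
            self.pending.append(message)

    def reconnect(self) -> None:
        # When the user reattaches, flush everything stored in the meantime,
        # so an interrupted session resumes rather than starting anew.
        self.connected = True
        while self.pending:
            self.delivered.append(self.pending.popleft())

# Toy usage: the connection drops mid-transaction, yet nothing is lost.
layer = MessagingLayer()
layer.send("order step 1")      # stored: user is out of coverage
layer.send("order step 2")      # stored
layer.reconnect()               # both steps delivered on reattachment
print(layer.delivered)          # ['order step 1', 'order step 2']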
How Distant Are Next-Generation Networks? Next-generation networks designed in the way sketched above promise to solve many of the problems users encounter with current networks. How far away are these networks? Not as far as one would think. In fact, all components have been developed, and it is now a matter of putting the pieces together in a network built from the bottom up. We should expect networks of this next generation to become operational for commercial use around 2003, pending adequate financing. Needless to say, the enhanced features and capacity of the next-generation network core will maximize consumer benefit only if the diffusion of consumer broadband access continues. DSL, cable, and GPRS offer “always-on” residential access, and they will continue to provide the bulk of residential broadband access for the short- to medium-run future. By 2005 or so, however, fiber access from the home will become an increasing reality. Most large residential construction projects already anticipate this development and include fiber links in the set of essential infrastructure.
Location of Control and Implications for Policy and Regulation Next-generation networks will optimize data flows. Optimizing data flows entails consciously organizing and managing them. Organizing data flows, in turn, requires some hierarchy in the physical and logical structures of the network. Hierarchy provides a central access point in the network, which in principle can be subject to government regulation or outside manipulation. Simply put, greater data transport efficiency requires an architecture that entails a greater degree of potential control over data flows. Let us examine this trade-off and its implications in more detail. The existing Internet’s design principles of separating transportation and application layers (and concentrating network intelligence at the edges) have given the Internet much of its decentralized character. These design principles reflect an architectural priority for redundancy and survivability rather than an emphasis on speed and efficiency. Future networks will be
built to meet different needs. A network optimized for commercial applications will still be unbundled. The separation of physical infrastructure, data transportation, and applications will remain. Compared to existing networks, however, future networks will be more centralized. Next-generation networks will contain an independent network control layer: an element of hierarchy that the current Internet does not have. This control layer is necessary to make data flows more secure, reliable, and predictable when traffic flows across multiple networks. Current networks do not differentiate among the types of data contained in data packets. Next-generation networks, by contrast, can be configured to provide secure, direct connections for some content and more conventional connections for others. Matching attributes of the connection to attributes of the data certainly makes sense from an efficiency perspective. It also makes sense from a business perspective, as network providers add differentiated service levels to their product (and pricing) portfolio. While next-generation networks will thus be far more flexible and scalable—and as a result more efficient—than current networks, the enabling architectural features will result in increased network “regulability” and a higher risk of failure. An independent network control layer can be used to control different things. By their very nature, the algorithms governing data flows over the new networks are potentially regulatory loci.13 These algorithms provide a central access point for regulation. Similarly, technical difficulties with the network control layer pose a threat to the network as a whole. Efficient algorithms are likely to be more brittle. And if next-generation networks of the kind described in this chapter manage the flow of information across many existing networks, a sophisticated next-generation network’s control layer creates a vulnerability for the communications infrastructure at large that was less present in the first-generation Internet. Given our overall dependence on communications technology today, the robustness of the network will be an end in itself. Operators of next-generation networks and governments will have to work together to ensure network integrity and robustness. The current Internet uses a network paradigm that was designed for redundancy and survivability. This had the effect of making control over data flows difficult (hence John Gilmore’s famous saying, “the Internet 13. For instance, with the ability to monitor flows and the quality of flows, different forms of contract can be defined and enforced, or certain types of information can be deemed inappropriate to carry across certain networks, and so on.
interprets censorship as damage and routes around it”). The Internet of the future will rely on networks that are optimized for new applications. Optimization of this sort will almost certainly mean introducing into either the physical or the logical structure of the network a greater level of hierarchy. This, in turn, will make control over data flows easier. The simple fact is that decisions about how to use this control capability will be made. These decisions should be made self-consciously, and decision makers—whether they be located in governments, industry, or somewhere else—should be aware of what is at stake. What kinds of questions are likely to arise, and where should we look for answers? A seemingly counterintuitive place to start is the old world of circuit-switched voice telephony. The old telephone network’s control layer and resulting element of hierarchy are not unlike those we are moving to in the world of data networking. The use of this control capability was organized around a set of principles. These principles provide at least a good starting point for thinking about the future of network regulation. Unfortunately, however, the Internet has developed its own paradigm. The Internet has its own language, its own publications, and—above all—its own particular way of thinking about how to move information from point A to point B. As with all paradigms, those caught within become inward-focused, and insights from the old world are likely to be disregarded. We must be careful not to ignore the lessons learned from the “traditional” communications networks when looking at the Internet simply because the Internet is different or newer. For example, there is still a tremendous benefit to enabling open interconnection among all users. This universality of service is the major attribute of today’s networks, so much so that we take it for granted. As we have seen, an environment of open standards and open access regulation for voice telephony guaranteed it. Furthermore, a sophisticated communications infrastructure is of benefit to all societies and economies and thus justifies some form of subsidy. Both of these tenets of the prior networks, and others, are ignored or refuted by many commercial entities now involved in Internet services. Sophisticated next-generation data networks are built and operated by businesses in a highly competitive environment. The incentives to construct rent-generating bottlenecks are high. For next-generation e-commerce to approach its potential, competition policy will therefore have to ensure the greatest possible extent of interconnection in an immensely competitive market. So far, the debate over “open access” has
largely focused on the so-called “last mile” (as discussed in Bar and others in chapter 18 in this volume). Future networks will raise the stakes for open access and interconnection throughout the network. And if the public (on a global basis) is to realize the greatest benefits of future networks, this tapestry of networks will have to be closely woven and seamlessly interconnected. This interconnection will need to occur on both a technical and a business level. Network operators should thus adopt a long-term view of their interest and keep the lessons of the old world in mind when determining their relations with government and market competitors. Succumbing instead to purely near-term economic interest could undermine the very foundation on which this revolution rests.
Conclusion The Internet has been described as a “network of networks.” This term is somewhat misleading, as today’s Internet is primarily a suite of protocols and standards that enable interoperability and interconnectivity of terminals and different carrier networks and their constituent network layers. Next-generation networks, by contrast, will be a lot closer to the idea of a physical “network of networks.” Just as the Signaling System 7 (SS7) organized and coordinated the actions of switches in the old telephone system, next-generation networks will organize data flows according to specified criteria. Coupled with the introduction of the next version of the Internet Protocol (IPv6) and application developments on the highest network level, next-generation networks will be able to guarantee privacy standards, data security standards, and other data attributes across network components. Originally developed as a set of protocols that would make machine communication possible across a nonhierarchical existing network infrastructure, driven by fundamental economic principles and the benefits of competition, the Internet has become the most rapidly diffused technology in history. However, in the process, the network has become a web of patches, technical fixes, and workarounds as the demands of sophisticated applications have strained against the constraints imposed by an architecture optimized for circuit-switched voice traffic. The first few years of electronic commerce and the prospects of future application innovation have generated sufficient demand for a fundamental reorganization of the network from the bottom up. Next-generation networks will primarily be simpler, and as a result will be more flexible, more reliable, more efficient, and
consequently ultimately (and paradoxically) more sophisticated than existing networks. These networks will unleash a new wave of user-driven innovation, sustaining a transformation that promises to be profound. At the same time, however, optimization for commercial applications will make future networks more susceptible to regulation and, taken as a whole, more vulnerable to internal and external manipulation due to the introduction of a layer of control. Fewer routers, direct high-capacity links between major data centers, and a gradual move toward pure optical networking are the building blocks of future networks. The pieces are here, and the assembly is under way.
References
Cerf, V. G. 1991. “Networks.” Scientific American 265 (September).
Clark, David D., and Marjory S. Blumenthal. 2000. “Rethinking the Design of the Internet: The End to End Arguments vs. the Brave New World.” Version for TPRC submission (August 10, 2000).
Gilder, George. 2000. Telecosm. New York: Free Press.
Isenberg, David S. 1997. “The Rise of the Stupid Network.” Computer Telephony (August): 16–26.
Lessig, Lawrence. 1999. Code and Other Laws of Cyberspace. Basic Books.
Saltzer, J., D. Reed, and D. Clark. 1984. “End-to-End Arguments in System Design.” ACM Transactions on Computer Systems 2 (November): 277–88.
17
The Political Economy of Open Source Software
The idea of “free” software is not new. In the 1960s and 1970s, the idea of making source code freely available was standard research practice. It was mostly taken for granted in leading computer science departments (such as at the Massachusetts Institute of Technology [MIT] and the University of California at Berkeley) and corporate research facilities (particularly Bell Labs and Xerox PARC). Today, however, the vast majority of software production is organized under the economic logic imposed by an intellectual property rights system. Patents, copyrights, licensing schemes, and other means of “protecting” computer software ensure that users cannot reverse-engineer, modify, or resell code developed by others. Maintaining control over source code forms the cornerstone of profitability in this model. Indeed, source code is probably the most valuable asset of a firm like Microsoft. Open Source software is fundamentally different, being by definition “free”—that is, public and nonproprietary. The Open Source Definition specifies that software must share three essential characteristics to be considered “Open Source”: it must permit the free redistribution of the software, require that the full source code be distributed with any binaries,
and allow anyone to modify and redistribute their own versions under these same terms.1 Today there exist several thousand Open Source “projects,” ranging from small utilities and device drivers to more robust programs such as the e-mail transfer program Sendmail, the HTTP Server Apache, and even the operating system Linux. These projects are driven forward by contributions from hundreds, sometimes thousands, of developers, who work around the world in a seemingly unorganized fashion and receive neither direct pay nor other compensation for their contributions. Thwarting conventional economic logic, these collaborative Open Source projects demonstrate empirically that large, complex systems of code can be built, maintained, developed, and extended in nonproprietary settings in which many developers work in highly parallel, relatively unstructured ways and without direct monetary compensation. Perhaps because the strength of this movement is so counterintuitive, there remains tremendous uncertainty about what drives the “Open Source” model. Some observers have thought of the phenomenon in broadly political or sociological terms, trying to understand the internal logic and external consequences of a geographically widespread community capable of producing excellent software without direct monetary compensation. In early writings and analyses, mostly done by computer “hackers” who are part of one or another Open Source project (and are often “true believers”), Open Source has been characterized variously as —a methodology for research and development; —a new business model (requiring new mechanisms for compensation and profit); —the “defining nexus” of a community geared toward the development of common goods; —a new “production structure” unique to “knowledge economies”; —a political philosophy. In part as a result, Open Source software has suddenly become the repository of extraordinarily diverse hopes and fears about the social and economic consequences of the information revolution. Libertarians see in Open Source a tool to emancipate individuals from governmental and corporate tyranny. Proponents of free markets see Open Source as the ultimate low barrier to entry market where only quality counts. Communitarians 1. Most Open Source licenses also require that the software itself be made available to others for no more than the cost of distribution. The terms of this definition originated in the Debian Social Contract developed in the mid-1990s by Bruce Perens (www.debian.org).
visualize a cross-national, cross-ethnic, and cross-just about every other traditional boundary community that is working together to advance a shared agenda. Economists see a market in reputation evolving naturally and almost automatically in a space with massively reduced transaction costs. The question is, why has Open Source software taken on the mantle of the Internet era’s Rorschach test? The answer is that Open Source challenges much of what economists, lawyers, and businesspeople believe they know about how intellectual property rights, production, and value added together are transformed into profit in a modern economy. At a minimum, the arguments and theories that explain why firms exist, why some knowledge is kept private and sold for a price, why some people earn higher salaries than others, and why groups of people often find it hard to work together to produce something that will serve the common good need to be reinterpreted in light of the success of Open Source. Some of these arguments may need to be substantially rewritten. This chapter takes a step in that direction.
The Economic Foundations—Traditional Approaches to Open Source Building an explanation for Open Source requires a compound argument capable of reconciling the microfoundations of traditional economic logic with the social and political structures that replace “property rights” as the ordering constraints on the organization of software production—and possibly other kinds of knowledge goods as well.
Macroeconomic Approaches The starting point for most economic analyses of Open Source rests on a macroeconomic argument and is a standard collective action type of approach.2 In this context the economic puzzle is straightforward. For well-known reasons, nonexcludable public goods tend to be underprovided in nonauthoritative social settings. Open Source products such as Linux ought to exist at the worst end of this spectrum since they also depend on “collective provision.” Recognizing this, Marc Smith and Peter Kollock
2. See the summary and intelligent if sometimes polemical critique in Moglen (1999).
have gone so far as to call Linux “the impossible public good.”3 While projects like it require contributions from a large number of developers, each developer has little incentive to contribute—voluntarily—to a good that he or she can partake of unchecked as a free rider. Simple logic dictates that the system ought to unravel backwards, ensuring that no one makes any contributions, and there is no public good to begin with. Previous attempts to grapple with this paradox have focused on redefining the structural logic of economic exchange. Rishab Aiyer Ghosh, for example, introduces the notion of “nonrival” goods in order to circumvent the “free rider” trap. Using the image of a cooking pot capable of “cloning” all food placed in it, Ghosh suggests that trade in nonrival goods is not plagued by the free rider problem, as the supply of these goods is inexhaustible. His analogy is of course to the digital Internet, where—once uploaded—software can be downloaded and copied an infinite number of times at essentially zero cost. The individual in this setting faces a different cost calculus. As Ghosh explains, “you never lose from letting your product free in the cooking pot, as long as you are compensated for its creation.”4 As long as even one other person contributes an item of some value, trade becomes utility enhancing for all actors in the system. As Ghosh puts it, “if a sufficient number of people put in free goods, the cooking pot clones them for everyone, so that everyone gets far more value than was put in.” The problem with this argument is that it does not actually explain the “trade.” What underlying story accounts for the exchange relationship? Strictly speaking, it is still a narrowly rational act for any single individual to take from the pot without contributing—and ride free on the contributions of others. The collective action dilemma remains unsolved. In its traditional form, after all, the system unravels not because free riders use up the stock of the collective good or somehow devalue it, but because there is no real incentive to contribute to that stock in the first place. The cooking pot starts—and remains—empty. The solution to this paradox lies in pushing the concept of nonrivalness one step further. Software in some circumstances is more than simply nonrival. Most software, and particularly complex interdependent programs such as operating systems, is actually subject to positive network externalities. Whether it be called a network good or an antirival good (an awkward 3. Smith and Kollock (1999, p. 230). 4. Ghosh (1998, p. 16).
but nicely descriptive term), the point is that the value of any software increases as more people download and use it. The traditional benefits of standardization and network compatibility provide one explanation for why this is so. As more computers in the world run Linux, for example, it becomes easier for all users of that operating system to share applications and files. Perhaps more crucially, Open Source software makes an additional and very important use of network externalities in altering the development process itself. The more individuals actively use a piece of software, the easier debugging becomes, as errors are quickly found and eliminated. Software development also speeds up as the user base grows. Individuals have more incentive to expend time building plug-ins and coding new features. In practice, most software development takes place in precisely this way—people inside organizations write code to do things and solve problems that need to be solved within their own organizations. The Open Source process essentially leverages this huge untapped energy that is usually closeted within organizations by creating an outlet where it can be shared in a coordinated fashion across organizational boundaries. This is the key point recognized by a high-level Microsoft memorandum of summer 1998. Known as the “Halloween memo,” this memo pointed to Open Source software as a direct threat to Microsoft’s revenues and to its quasi-monopolistic position in some markets. As the author recognized, Open Source software represents a long-term strategic threat to Microsoft because “the intrinsic parallelism and free idea exchange in OSS [Open Source software] has benefits that are not replicable with our current licensing model.” The point is not that Open Source software is simply able to accommodate free riders. It is actually antirival in the sense that the system positively benefits from what are typically thought of as free riders in a collective good. Some small percentage of users will provide something of value to the system, even if it is just reporting a bug out of frustration or encouraging greater commercial support for the platform in general.
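A toy model may help make the distinction between a merely nonrival good and an antirival one concrete. The numbers and the linear value function below are assumptions chosen purely for illustration, not estimates drawn from this chapter: because each user's value from the software rises with the total number of users, even a small fraction of contributors can leave every participant, free riders included, better off.

# Toy model of an antirival good. The parameter values and the linear
# value function are illustrative assumptions, not empirical estimates.

def value_per_user(n_users: int, benefit_per_user: float = 0.01) -> float:
    """Each user's value grows with the total user base (network externality)."""
    return benefit_per_user * n_users

def community_outcome(n_users: int, contributor_share: float,
                      contribution_cost: float = 5.0) -> dict:
    """Compare payoffs for contributors and free riders."""
    v = value_per_user(n_users)
    return {
        "value_to_each_user": v,
        "contributor_payoff": v - contribution_cost,
        "free_rider_payoff": v,
        "contributors": int(n_users * contributor_share),
    }

# With 100,000 users and only 1 percent contributing, everyone still gains:
print(community_outcome(n_users=100_000, contributor_share=0.01))
# {'value_to_each_user': 1000.0, 'contributor_payoff': 995.0,
#  'free_rider_payoff': 1000.0, 'contributors': 1000}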
Microeconomic Approaches The logic outlined above constitutes a structural explanation for the success of Open Source projects. The problem is that it provides no explanation of why “core” groups bear the initial development costs. It remains unclear why these groups arise and which projects are likely to succeed. A
closer look at microeconomic incentives helps to address these questions. Lerner and Tirole make what is probably the most forceful argument.5 They portray individual programmers, regardless of whether they work in Open Source or as employees of a proprietary software firm, as rational actors engaged in straightforward cost-benefit analysis. The immediate benefits to a programmer are private: creating a fix for the specific problem that the programmer faces or leading to direct monetary benefit. The primary cost is the opportunity cost of the time and effort that the programmer expends on the project. Open Source modifies this standard cost-benefit calculus in two significant ways. First, the “alumni effect” should lower the cost of working on Open Source relative to proprietary code.6 Since Unix syntax and Open Source tools are a standard part of most programmers’ educational training, the costs of simply extending the functionality of these existing tools should be lower than building proprietary solutions from scratch. Second, there are “delayed” benefits to developing Open Source programs that create a strong “signaling incentive.”7 These benefits accrue to the programmer’s career in ways that are ultimately transformed—or can be transformed—into money. The logic is as follows: ego gratification for solving difficult programming problems is important because it stems from peer recognition. Peer recognition is important because it creates a reputation. And a reputation as a great programmer is monetizable—in the form of job offers, privileged access to venture capital, and the like. The key point in this story is the signaling argument. As is true in many technical and artistic disciplines, the quality of a programmer’s mind and work is not easy to judge in standardized and easily comparable metrics. To be able to assess the talent of a particular programmer takes a reasonable investment of time. The best programmers have a clear incentive to reduce the energy that it takes for others to see and understand just how good they are. Hence the importance of signaling. The programmer participates in an Open Source project as a strategic act of credentialism—to demonstrate the quality of his or her work. Reputation within a well-informed, committed, and self-critical community is one proxy measure for that quality. Lerner and Tirole argue that 5. Lerner and Tirole (2000). This is an important paper that draws usefully on others’ analyses while recognizing its own limitations as a “preliminary exploration” that invites further research. 6. Lerner and Tirole (2000, p. 11). 7. Lerner and Tirole (2000, p. 15).
the signaling incentive will be stronger when the performance is visible to the audience; when effort expended has a high impact on performance; and when performance yields good information about talent. Open Source projects maximize the incentive along these dimensions in several ways. With Open Source, software users can see more than how well a program performs. They can also look to see how clever and elegant is the underlying code—a much more fine-grained measure of the quality of the programmer. And since no one is forcing anyone to work on any particular problem in Open Source, the performance can be assumed to represent a voluntary act on the part of the programmer, which makes it all that much more informative about that programmer. The signaling incentive should be strongest in settings with sophisticated users, tough bugs, and an audience that can appreciate effort and artistry, and thus distinguish between merely good and excellent solutions to problems. As Lerner and Tirole note, this argument seems consistent with the observation that Open Source has flourished (at least first) in more technical settings like operating systems and not in end-user applications. Alternate micro-level arguments exist that paint individual incentives as a product of existing social institutions. One of the more interesting is that proposed by Ko Kuwabara.8 Kuwabara uses a metaphor of complex adaptive systems and evolutionary change to describe the software development process. His account boils down to a series of causal steps. Programmers are motivated by a “reputation game” similar to what Lerner and Tirole depict. But he argues that the social structure alters individual incentives, not vice versa. Because online communities live in a situation of abundance, not scarcity, Kuwabara suggests that they are apt to develop “gift cultures” where social status depends on what you give away, rather than what you control.9 Expand this into an evolutionary setting over time, and the community will self-organize a set of ownership customs along lines that resemble a Lockean regime of property rights. These ownership customs constitute a sufficient framework for successful and productive collaboration, even if they do not involve explicit legal control over property. The gift culture idea is an important hypothesis. Gift economies— where social status depends more on what you give away than what you keep—are reasonable adaptations to conditions of abundance. They are often seen among aboriginal cultures living in mild climates and ecosys8. Kuwabara (2000). 9. The “gift culture” argument is taken principally from Raymond’s “Homesteading the Noosphere,” in Raymond (1999). See also Baird (1997).
tems with abundant food as well as among the extremely wealthy in modern industrial societies.10 And the culture of gift economies shares some notable characteristics with that of Open Source communities: gifts bind people together, encourage diffuse reciprocity, and support a concept of property that resembles “stewardship” more than “ownership” per se. Interestingly, this cultural argument is strongly evident in the writing of Eric S. Raymond, the unofficial ethnographer of the Open Source movement. In his piece “Homesteading the Noosphere,” Raymond suggests that the gift culture logic works particularly well in software, since the value of the gift (in this case a complex technical artifact) cannot be easily measured except by other members of the software community, who have the expertise to evaluate its technological sophistication. Naturally, therefore, “the success of a giver’s bid for status is delicately dependent on the critical judgement of peers.”11 The culture of Open Source communities shares some of the characteristics of a gift economy. But there is a key flaw in focusing exclusively on social constraints when attempting to define individual incentives. Doing this makes it difficult to analyze these incentives in common terms. The gift culture hypothesis misses the point about the nature of abundance in this setting. Of course, the physical tools of programming—bandwidth, disk space, and processing power—are plentiful and cheap. From a technological standpoint, it is certain that each of these will grow more abundant and less expensive over time. Yet when anyone can have a supercomputer on his or her desk, there is little status associated with that “property”—the very abundance of computing power should devalue it. The things that add value in this setting depend on human mind space and the commitment of time and intellectual energy by very smart people to a creative enterprise. It is the time and brainpower of smart, creative people that is scarce, and probably becoming more scarce as demand for their talents increases in proportion to the computing power available. Great programming skills are extremely rare. Nor is a reputation for greatness typically abundant, because only a certain number of people can really maintain a reputation for being “great writers” at any given point in time.12 10. Raymond (1999, p. 99). 11. Raymond (1999, p. 103). 12. I say this because standards of “greatness” are themselves endogenous to the quality of work that is produced in a particular population. If there is a normal distribution of quality, and the bell curve shifts to the right, what would have been thought “excellent” in the past is now merely good. The tails of the distribution define excellence in any setting, and they remain small.
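Before turning to case studies, a toy calculation can make the individual cost-benefit logic running through this subsection concrete. Every number below is invented for illustration and is not drawn from Lerner and Tirole or from any survey; the point is simply that once delayed reputational returns and the alumni-effect discount on costs are counted, contributing to a visible Open Source project can dominate writing the same fix as invisible proprietary code.

# Toy cost-benefit comparison for an individual programmer. Every number
# here is an invented illustration, not an estimate from the literature.

def net_benefit(private_use_value: float,
                reputation_value: float,
                signal_visibility: float,   # 0..1: how visible the work is
                hours: float,
                opportunity_cost_per_hour: float,
                alumni_discount: float = 0.0):  # 0..1: familiar tools cut cost
    """Private benefit + expected signaling benefit - (discounted) time cost."""
    cost = hours * opportunity_cost_per_hour * (1.0 - alumni_discount)
    return private_use_value + signal_visibility * reputation_value - cost

# Contributing a patch to a visible Open Source project...
open_source = net_benefit(private_use_value=200, reputation_value=3000,
                          signal_visibility=0.8, hours=40,
                          opportunity_cost_per_hour=50, alumni_discount=0.3)

# ...versus writing the same fix as invisible in-house proprietary code.
in_house = net_benefit(private_use_value=200, reputation_value=3000,
                       signal_visibility=0.1, hours=40,
                       opportunity_cost_per_hour=50, alumni_discount=0.0)

print(open_source, in_house)  # 1200.0 vs -1500.0 under these assumed numbers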
The Search for New Institutions: Case Studies in Open Source Macroeconomic approaches do not explain the motivations of individual programmers. Micro-level arguments about utility functions do not follow directly from exogenous social structures. A static conception of property rights has traditionally allowed analysts to bridge this gap. Under its logic, the level of analysis problem can be sidestepped because pressures on both levels are expressed in the terms of a common independent variable (money). But because Open Source development is not conditioned by a traditional logic of property rights, bridging the gap between macro- and micro-level approaches is no longer automatic. Any successful explanation must therefore identify the logic of the particular software licenses and other social constraints that effectively replace standard systems of property rights as the fundamental ordering principles. It is this task that this section of the chapter adopts by examining in greater depth how two Open Source communities actually work, the Free Software Foundation and the Linux development community, and then drawing general conclusions about the nature of Open Source development.13
The Free Software Foundation Steven Levy’s book Hackers gives a compelling account of the impact the growing importance of intellectual property rights in software production had on the programming community, particularly at MIT. With the unbundling of software from hardware in the mid-1970s, many of the best programmers at MIT were hired away into lucrative positions in spin-off software firms. Simultaneously, MIT began to demand that its employees sign nondisclosure agreements in order to use university computing facilities. The newest mainframes, such as the VAX or the 68020, came with operating systems that did not distribute source code—in fact, researchers had to sign nondisclosure agreements simply to get an executable copy. MIT researcher Richard Stallman led the backlash. Driven by moral fervor as well as simple frustration at not being able easily to modify software for his particular needs (such as fixing a printer driver), Stallman in 1984 founded a project to revive the “hacker ethic” by creating a complete set of 13. These two examples attract a great deal of public attention but are by no means the only important examples; my forthcoming book, The Success of Open Source, examines these and others in much more detail.
“free software” utilities and programming tools.14 Called the Free Software Foundation (FSF), this project aimed to develop and distribute software under what he called the General Public License (GPL), also known in a clever wordplay as “copyleft.” The central idea of GPL is to prevent cooperatively developed software or any part of that software from being turned into proprietary software. Users are permitted to run the program, copy the program, modify the program through its source code, and distribute modified versions to others. What they may not do is add restrictions of their own. This is the “viral clause” of GPL—it compels anyone releasing software that incorporates copylefted code to use the GPL in their new release. The Free Software Foundation says: “you must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program [any program covered by this license] or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this license.”15 Stallman and the Free Software Foundation have created some of the most widely used pieces of Unix software, including the text editor EMACS, the GCC compiler, and the GDB debugger. As these popular programs were adapted to run on almost every version of Unix, their availability and efficiency helped to cement Unix as the operating system of choice for “free software” advocates. But the FSF’s success was in some sense self-limiting. Partly this is because of the moral fervor underlying Stallman’s approach—not all programmers found his strident libertarian attitude to be practical or helpful. Partly it was a marketing problem. “Free software” turned out to be an unfortunate label, despite FSF’s vehement attempts to convey the message that free was about freedom, not price— as in the slogan, “think free speech, not free beer.” But there was also a deeper problem in the all-encompassing nature of the GPL and its “viral” clause. Stallman’s moral stance against proprietary software clashed with the utilitarian view of many programmers, who wanted to use pieces of proprietary code along with free code when it made sense to do that, simply because the proprietary code was technically good. 14. In Stallman’s view, “the sharing of recipes is as old as cooking,” but proprietary software meant “that the first step in using a computer was a promise not to help your neighbor.” He saw this as “dividing the public and keeping users helpless” (1999, p. 54). For a fuller statement see www.gnu.org/ philosophy/why-free.html. 15. Free Software Foundation, “GNU General Public License, v. 2.0,” 1991 (www.gnu.org/ copyleft/gpl.html). Emphasis added. There are several different modifications to these specific provisions, but the general principle is clear.
The GPL did not permit this kind of flexibility and thus imposed difficult constraints on developers looking for pragmatic solutions to problems.
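The viral character of the GPL, and the flexibility problem it created for pragmatic developers, can be sketched as a simple propagation rule over a program's components. This is only a schematic model of the idea described above, not a legal analysis and not the actual text or semantics of any license; the component names and license labels are assumptions.

# Schematic sketch of copyleft propagation. Component names and license
# labels are invented for illustration; this is not a legal analysis.

def combined_work_must_be_gpl(components: dict) -> bool:
    """If any incorporated component is GPL, the combined work must be GPL."""
    return any(lic == "GPL" for lic in components.values())

# A hypothetical program that mixes copylefted and proprietary pieces.
program = {
    "network_stack": "GPL",          # copylefted code pulled in by a developer
    "ui_toolkit": "proprietary",     # technically attractive, but closed
    "in_house_logic": "proprietary",
}

if combined_work_must_be_gpl(program):
    # Under the rule modeled here, the whole release must carry the GPL,
    # which is exactly the constraint pragmatic developers chafed against.
    print("Release as a whole must be licensed under the GPL.")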
The Linux Operating System The history of Linux provides more insight into this phenomenon. Linus Torvalds, a twenty-one-year-old computer science student at the University of Helsinki, strongly preferred the technical approach of Unix-style operating systems over the DOS system commercialized by Microsoft.16 But Torvalds did not like waiting in long lines for access to a limited number of university machines that ran Unix for student use. And it simply was not practical to run a commercial version of Unix on a personal computer— the software was too expensive and also much too demanding for the limited PCs of the time. In late 1990 Torvalds came across Minix, a simplified Unix clone that was being used for teaching purposes at Vrije University in Amsterdam. Minix ran on PCs, and the source code was available. Torvalds installed this system on his IBM AT, a machine with a 386 processor and 4 MB of memory, and went to work building the kernel of a Unix-like operating system with Minix as the scaffolding. In autumn 1991 Torvalds let go of the Minix scaffold and released the source code for the kernel of his new operating system, which he called Linux, onto an Internet newsgroup, along with the following note: I’m working on a free version of a Minix look-alike for AT-386 computers. It has finally reached the stage where it’s even usable (though it may not be, depending on what you want), and I am willing to put out the sources for wider distribution. . . . This is a program for hackers by a hacker. I’ve enjoyed doing it, and somebody might enjoy looking at it and even modifying it for their own needs. It is still small enough to understand, use and modify, and I’m looking forward to any comments you might have. I’m also interested in hearing from anybody who has written any of the utilities/library functions for minix. If your efforts are freely distributable (under copyright or even public domain) I’d like to hear from you so I can add them to the system.17 16. “Task-switching” is one major difference between the two systems that was of interest to Torvalds. Unix allows the computer to switch between multiple processes running simultaneously. 17. Linus Torvalds, “Linux History,” 1999 (www.li.org/li/linuxhistory.html).
Figure 17-1. Growth of Linux Operating System, 1990–99. Vertical axis: millions of bytes; horizontal axis: dates from September 19, 1990, to December 6, 1999. Source: Author’s calculations.
The response was extraordinary (and according to Torvalds, mostly unexpected). By the end of the year nearly 100 people worldwide had joined the Linux newsgroup. Through 1992 and 1993 the community grew slowly. New users downloaded it, played with it, tested it in various settings, and attempted to extend and refine it. Flaws surfaced in the form of bugs and security holes, while new features were continually added. Users submitted reports of problems they found or proposed a fix and sent a patch on to Torvalds. Gradually, the process iterated and scaled up to a degree that just about everyone, including its ardent proponents, found startling. In 1994 Torvalds finally released the first official version of Linux (version 1.0). Thereafter, the pace of development accelerated, with updates to the system being released on a weekly (or sometimes even a daily) basis. Figure 17-1 illustrates the dramatic growth of the operating system, which today in its core consists of more than three million lines of code. This rapid growth is attributable to an extremely large and geographically far-flung community. Indeed, the credits file for the original release names contributors from at least thirty-one different countries. In both the Free Software Foundation and Linux circles, as in most Open Source communities, there exist a large number of moderately committed individuals who contribute relatively modest amounts of work and participate irregularly, as well as a smaller but much more highly committed group that forms an informal core. A July 2000 survey of the Open Source
community identified approximately 12,000 developers working on Open Source projects. Although the survey recognizes difficulties with measurement, it reports that the top 10 percent of the developers are credited with about 72 percent of the code—loosely parallel to the apocryphal “80-20 rule” (where 80 percent of the work is done by 20 percent of the people).18 Linux users and developers come from all walks of life: hobbyists, people who use Linux or related software tools in their work, committed “hackers”—some with full-time jobs and some without. The logic behind the process is both functional and behavioral. Development occurred largely through a game of trial and error by people embedded in the culture of a self-aware community. Over time, observers and participants (particularly Eric Raymond) analyzed the emergent process and tried to characterize (inductively for the most part) the key features that made the process work. Drawing largely on Raymond’s analysis as well as my own set of interviews, I propose seven key features common to development in successful Open Source projects.
Open Source user-developers tend to work on projects that they judge to be important, significant additions to software. There is also a premium for what in the computer science vernacular is called “cool,” which roughly means creating a new and exciting function or doing something in a newly elegant way. There seems to be an important and somewhat delicate balance around how much and what kind of work is done up front by the project leader(s). User-developers look for signals that any particular project will actually generate a significant product, not turn out to be an evolutionary dead end—and also contain interesting challenges along the way.
Raymond emphasizes that since there is no central authority or formal division of labor, Open Source developers are free to pick and choose exactly what it is they want to work on. This means that they will tend to focus on an immediate and tangible problem (the “itch that needs to be scratched”)—a problem that they themselves want to solve. The Cisco enterprise printing system (an older Open Source–style project)
evolved directly out of an immediate problem—system administrators at Cisco were spending an inordinate amount of time (in some cases half their time) working on printing problems. Torvalds (and others as well) sometimes put out a form of request, as in “isn’t there somebody out there who wants to work on ‘X’ or try to fix ‘Y’ problem?” The underlying notion is that in a large community, there will be people who will find any particular problem of this sort to be an itch they actually do want to scratch.
Open Source user-developers are always searching for efficiencies: put simply, because they are not paid directly for contributions, there is a strong incentive never to reinvent the wheel. An important point is that there is less need for them to do so. This is simply because under the Open Source rubric, they know with certainty that they will always have access to the source code and thus do not need to recreate any tools or modules that are already available in Open Source.
If there is an important problem in the project, a significant bug, or a feature that has become widely desired, many different people or perhaps teams of people will be working on it—in many different places at the same time. They will likely produce a number of different potential solutions. It is then possible for Linux to incorporate the best solution and refine it further. Is this inefficient and wasteful? That depends. The relative efficiency of parallel problem solving depends on many parameters, most of which cannot be measured in a realistic fashion. Evolution is messy, and this process recapitulates much of what happens in an evolutionary setting. What is clear is that the stark alternative—nearly omniscient authority that can predict what route is the most promising to take toward a solution without actually traveling some distance down at least some of those routes—is not a realistic counterfactual for complex systems.
The Linux process relies on a kind of law of large numbers to generate and identify software bugs and then to fix them. Software testing is a messy process. Even a moderately complex program has a functionally infinite number of paths through the code. Only some tiny proportion of these paths will be generated by any particular user or testing program. As Paul Vixie puts it, “the essence of field testing is lack of rigor.”19 The key is to generate patterns of use—the real-world experiences of real users—that are inherently unpredictable by developers. In the
19. Vixie (1999, p. 98). Emphasis added.
Linux process, a huge group of users constitutes what is essentially an ongoing pool of beta testers. Eric Raymond says that “given enough eyeballs, all bugs are shallow.”20 Implied in this often-quoted aphorism is a prior point: given enough eyeballs and hands doing different things with a piece of software, more bugs will appear, and that is a good thing, because a bug must appear and be characterized before it can be fixed. Torvalds observes, reflecting on his experience over time, that the person who runs into and characterizes a particular bug and the person who later fixes it are usually not the same person—an observational piece of evidence for the efficacy of a parallel debugging process.
In a sufficiently complex program, even Open Source code may not necessarily be transparent in terms of precisely what the writer was trying to achieve and why. The Linux process depends on making these intentions and meanings clear so that future user-developers understand (without having to reverse-engineer) what function a particular piece of code plays in the larger scheme of things. Documentation is a time-consuming and sometimes boring process. But it is considered essential in any scientific research enterprise, in part because replicability is a key criterion.
User-developers need to see and work with iterations of software code in order to leverage their debugging potential. Commercial software developers understand just as well as do Open Source developers that users are often the best testers, so the principle of release early, release often makes sense in that setting as well. The countervailing force in the commercial setting is one of expectations: customers who are paying a great deal of money for software may not like buggy beta releases, and may like even less the process of updating or installing new releases on a frequent basis. Open Source user-developers have a very different set of expectations. In this setting, bugs are more an opportunity and less a nuisance. Working with new releases is more an experiment and less a burden.21 The result is that Open Source projects typically have a feedback and update cycle that is an order of magnitude faster than commercial projects. In the early days of Linux, there were often new releases of the kernel on a weekly basis—and sometimes even daily. 20. Raymond (1999, p. 41). 21. Linux kernel releases are typically divided into “stable” and “developmental” paths. This gives users a clear choice: download a stable release that is more reliable or a developmental release where new features and functionality are being introduced and tested.
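The "given enough eyeballs" claim rests on a simple statistical intuition that can be sketched numerically. The per-user probability used below is an assumption, not a measured property of any Linux release: if each additional user independently has only a small chance of exercising the path that triggers a given bug, the probability that the bug surfaces somewhere, 1 - (1 - p)^n, rises quickly with the size of the user base.

# Illustrative calculation only: the per-user probability p is an assumption,
# not a measured property of any real project or bug.

def prob_bug_surfaces(p_per_user: float, n_users: int) -> float:
    """Chance that at least one user hits the path that triggers the bug."""
    return 1.0 - (1.0 - p_per_user) ** n_users

# Suppose a given bug sits on a path only 1 in 10,000 users ever exercises.
for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} users: {prob_bug_surfaces(0.0001, n):.3f}")
# Roughly 0.010, 0.095, 0.632, 1.000: a rare path becomes near-certain
# to be exercised once the tester pool is large enough.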
Building a New Economic Logic The Open Source process is precisely that—a process of production. The software it generates, useful and elegant as it may be, is ultimately a technical artifact that is an outcome of the production process. To understand the new logic of production, this section examines how Open Source communities grapple with two fundamental problems in this setting: how to solve coordination problems and how to manage complexity. On a less abstract level, this touches on a series of issues that range from who has the right to make decisions about the development of code, to who gets credited for what work, to how conflicts are resolved when they arise.
Coordination Problems Authority within a firm and the price mechanism across firms are standard ways to efficiently coordinate specialized knowledge in a conventional system of property rights. Neither exists in an Open Source community, where legal ownership is extremely fluid. A simple analogy from ecology suggests what might happen over time as modifications accumulate along different branching chains of software. Speciation—or what computer scientists call code-forking—seems likely. Lacking any constraints of formal ownership or copyright, and given the explicit freedom to modify software code in any way that a user finds desirable, the software should evolve into incompatible versions, and synergies in development should be lost over time. Interestingly, this is very much what happened to UNIX in the 1980s (see below). And if ego is a primary determinant of individual behavior, this coordination problem is made even more acute. When egos get damaged, why don’t the owners of those egos walk away from—or even worse, try to undermine—the collective project? The explanation is not exclusively cultural or structural. Macroeconomic incentives connected to positive network externalities are part of the answer. If developers think of themselves as trading innovation for others’ innovation, they will want to do their trading in the most liquid market possible. Forking would only reduce the size and thus the liquidity of the market. Viewing software as an “antirival” good creates a similar dynamic: the more open a project is and the larger the existing community of developers, the less tendency there will be to fork. The potential forker faces a difficult problem: it becomes very hard for the renegade to credibly claim that he or she could accumulate a more talented and effective base of
developers than already exists in the main code base. Operating with diminished resources, this forked development community could also never promise credibly to match the rate of innovation taking place in the primary code base. It could not use, test, and debug software as quickly. And as a result it could not provide as attractive a payoff in reputation to its developers, even if reputation were shared out more evenly within the forked community.22 Cultural and social norms do play a key role in influencing how these macro- and microeconomic pressures play out.23 A prevalent norm assigns decisionmaking authority within the community. The key element of this norm is that authority follows and derives from responsibility. The more an individual contributes to a project and takes responsibility for pieces of software, the more decisionmaking authority that individual is granted by the community. In the case of Linux, Torvalds typically validates the grant of authority to “lieutenants” by consulting closely with them on an ongoing basis, particularly when it comes to key decisions on how subsystems are to work together in the software package. While relatively high levels of trust may reduce the amount of conflict in the system, complicated and informal arrangements of this kind are certain to generate disagreements. There is an additional, auxiliary norm that gets called into play: seniority rules. As Raymond explains: “if two contributors or groups of contributors have a dispute, and the dispute cannot be resolved objectively, and neither owns the territory of the dispute, the side that has put the most work into the project as a whole . . . wins.”24 But what does it mean to resolve a dispute “objectively”? The notion of objectivity draws on its own deep normative base. The Open Source developer community shares a general conception of technical rationality. Like all technical rationalities, this one exists inside a cultural frame. The cultural frame is based on shared experience in Unix programming. Unix was born in the notion of compatibility between platforms, ease of networking, 22. Clearly there are parameters within which this argument is true. Outside of those parameters it could be false. It would be possible to construct a simple model to capture the logic, but it is hard to know—other than by observing the behavior of developers in the Open Source community—how to attach values to those parameters. 23. Ellickson (1991, p. 270) provides a compelling argument about the falsifiability of normative explanations. 24. Raymond (1999, p. 127). One interesting additional piece of evidence for these norms is what has happened when the two norms pointed in different directions. Raymond (p. 128) recalls one such fight of this kind and says, “it was ugly, painful, protracted, only resolved when all parties became exhausted enough. . . . I devoutly hope I am never anywhere near anything of the kind again.”
and positive network effects.25 Unix programmers have a set of common standards for what is “good code” and what is not-so-good code.26 These standards draw on pragmatism and experience—the Unix “philosophy” is a philosophy of what works and what has been shown to work in practical settings over time. The Open Source Initiative codified this cultural frame by establishing a clear priority for pragmatic technical excellence over ideology or zealotry (more characteristic of the Free Software Foundation). A cultural frame based in engineering principles (not anticommercial ideology) and focused on high-reliability and high-performance products gained much wider traction within the developer community. It also underscored the rationality of technical decisions driven at least in part by the need to sustain successful collaboration—hence legitimating concerns about “maintainability” of code, “clean-ness” of interfaces, and clear and distinct modularity. The primacy of technical rationality in this setting is evident in the creed that developers say they rely on—“let the code decide.” Leadership matters in setting a focal point and maintaining coordination on it. Torvalds started the Linux process by providing a core piece of code. This was the original focal point. It functioned that way because—simplistic and imperfect as it was—it established a plausible promise of creativity and productivity: that it could develop into something elegant and useful. The code contained interesting challenges and programming puzzles to be solved. Together, these characteristics attracted developers, who by investing time and effort on this project placed a smart bet that their contributions would be efficacious and that there would eventually be a valuable outcome. In the longer term leadership matters by reinforcing the cultural norms. Torvalds does, in fact, have many characteristics of a charismatic leader in the Weberian sense. Importantly, he provides a convincing example of how to manage the potential for ego-driven conflicts among very smart developers. Torvalds downplays his own importance in the story of Linux: while he acknowledges that his decision to release the code was an important one, he does not claim to have planned the whole thing or to have foreseen the significance of what he was doing or what would happen: “The act of 25. Indeed, Unix was developed in part to replace ITS (incompatible time sharing system). The idea in 1969 was that hardware and compiler technology were getting good enough that it would now be possible to write portable software—to create a common software environment for many different types of machines. 26. Gancarz (1995).
making Linux freely available wasn’t some agonizing decision that I took from thinking long and hard on it; it was a natural decision within the community that I felt I wanted to be a part of.”27 When it comes to reputation and fame, Torvalds is not shy and does not deny his status in any way. But he does make a compelling case that he was not motivated by fame and reputation—these are things that simply came his way as a result of doing what he believed in.28 He continues to emphasize the fun and opportunities for self-expression in the context of “the feeling of belonging to a group that does something interesting” as his principal motivation. And he continues to invest huge effort in maintaining his reputation as a fair, capable, and thoughtful manager. It is striking how much effort Torvalds puts into justifying to the community his decisions about Linux and documenting the reasons for decisions—in the language of technical rationality, that is currency for this community. Would a different leader with a more imperious attitude who took advantage of his or her status to make decisions by fiat have undermined the Linux community? Many in the community believe so (or believe that developers would exit and create a new community along more favorable lines).29 The logic of the argument to this point supports that belief. There do exist sanctioning mechanisms to support the nexus of incentives, cultural norms, and leadership roles that maintain coordination. In principle, the GPL and other licenses could be enforced through legal remedies (this of course may lurk and constrain behavior even if it is not invoked). In practice, precisely how enforceable in the courts some aspects of these licenses are remains unclear.30 The sanctioning mechanisms that are visibly practiced within the Open Source community are two: “flaming” and “shunning.”31 Flaming is “public” condemnation (usually over email lists) of people who violate norms. “Flamefests” can be quite fierce in language and intensity but tend ultimately to be self-limiting.32 Shunning is the more functionally important sanction. To shun people—refusing to cooperate with them after they have broken a norm—cuts them off from the benefits that the community offers. It is not the same as exclusion: 27. Torvalds (1998). 28. The documented history, particularly the archived e-mail lists, supports Torvalds on this point. 29. Examples of this process are given in my forthcoming book (2002). 30. McGowan (2000); see also Merges (1997). 31. Raymond (1999, p. 129). 32. The intensity seems to be self-limiting in part because developers understand very well the old adage about sticks and stones.
someone who is shunned can still use Linux. But that person will suffer substantial reputational costs. He or she will find it hard to gain cooperation from others. The threat is to be left on your own to solve problems, while the community can and does draw on its collective experience and knowledge to do the same. This is clearly a strong disincentive to strategic forking, for example, but it also constrains other, less egregious forms of counternormative behavior (such as aggressive self-promotion).
Problems of Complexity Designing robust, complex software is a gargantuan task. Testing, debugging, and maintaining code are generally even harder. As this is a task that needs to be divided, the standard industrial response to increasing complexity has been to organize labor within a centralized, hierarchical structure—that is, that of a firm. The firm then manages complexity through formal organization and explicit decisional authority.33 Certainly, with complex knowledge goods in particular, this is a very imperfect solution. In The Mythical Man-Month, a classic study of the social and industrial organization of programming, Frederick Brooks noted that when large organizations add manpower to a software project that is behind schedule, the project typically falls even further behind schedule.34 He explained this with an argument that is now known colloquially as Brooks’s Law. As you raise the number of programmers on a project, work performed scales linearly (by a factor of n), but complexity, communication costs, and vulnerability to error scale quadratically (by a factor of n squared). This is inherent in the logic of the division of labor—the number of potential communications paths and interfaces between developers increases quadratically as the number of developers increases linearly: a team of five developers has ten potential communication paths, while a team of fifty has 1,225. How does the Open Source process manage the implications of this “law” among a geographically dispersed community that is not subject to hierarchical command and control? Eric Raymond draws a too-stark contrast between “cathedrals” and “bazaars” as icons of organizational structure. Cathedrals are designed from the top down, built by coordinated teams who are tasked by and answer to a central authority. Open Source projects seem to confound 33. Of course, organization theorists know that a lot of management goes on in the interstices of this structure, but the structure is still there to make it possible. 34. Brooks (1975).
this hierarchical model. Linux appears, at least at first glance, to be much more like a “great babbling bazaar of different agendas and approaches.”35 But there has evolved in Linux a clear hierarchy of decisionmaking authority, where a decision pyramid leads from the dispersed developer base up through trusted lieutenants who have authority over particular parts of the code and ultimately to Linus Torvalds, whose decisions are in a sense “final.” This hierarchy was put in place in the mid-1990s, precisely in response to the growth of the project beyond the point where Torvalds could realistically manage the complexity on his own. Programmers explain this with the sly phrase, “Linus doesn’t scale.” In practice, some of his authority is now devolved down rungs of the hierarchy, and decisions made at those levels in effect bear his imprimatur. There is more hierarchical authority here than the popular image of a bazaar captures, although it remains authority that rests on something other than corporate command and control or the power of money. The Linux decisionmaking system is just one example of pragmatic, experimental adaptations to this problem. In fact, Open Source communities manage complexity in diverse ways. Consider the case of the Berkeley Software Distribution (BSD) model.36 In BSD distributions, typically, a relatively small committed team of developers writes code. Users may modify the source code for their own purposes, but the development team does not generally take “check-ins” from the public users, and there is no regularized process for doing that. Apache takes in contributions from a wider swath of developers who rely on a decisionmaking committee that is constituted according to formal rules, a de facto constitution. The Perl scripting language relies on a “rotating dictatorship” in which control of the core software is passed from one member to another inside an inner circle of key developers. These cases differ from Linux, where the public or general user base can and does propose “check-ins,” modifications, bug fixes, new features, and so on. There is no formal distinction between users and developers (a fact aptly symbolized by the many Linux archive sites that take submissions from literally anyone). There are low barriers to entry to the debugging and development process. This is true in part because of a common debugging methodology and in part because when a user installs Linux, the debugging and developing environment comes with it (along with the source code, of 35. Raymond (1999, p. 30). 36. There are now several BSD projects, which I discuss in detail in Weber (forthcoming).
course). Some users engage in “impulsive debugging”—fixing a little problem (shallow bug) that they encounter in daily use; others make debugging and developing Linux a hobby or vocation. The key to managing the level of complexity within the software itself is modular design. A major tenet of the Unix philosophy, passed down to Linux, is to keep programs small and unifunctional (“do one thing simply and well”). A small program will have far fewer features than a large one, but small programs are easy to understand, easy to maintain, consume fewer hardware system resources, and—most important—can be combined with other small programs to enable more complex functionalities. The technical term for this development strategy is “source code modularization.” A large program works by calling on relatively small and relatively self-contained modules. Good design and engineering is about limiting the interdependencies and interactions between modules. Programmers working on one module know two things: that the output of their module must communicate successfully with other modules, and that (ideally) they can make changes in their own module to debug it or improve its functionality without requiring changes in other modules, as long as they get the communication interfaces right. This reduces the complexity of the system overall because it limits the reverberations that might spread out from a code change. Obviously, it is a powerful way to facilitate working in parallel on many different parts of the software at once, since a programmer can control the development of a specific module of code without creating problems for other programmers working on other modules. It is notable that one of the major improvements in Linux release 2.0 was moving from a monolithic kernel to one made up of independently loadable modules. The advantage, according to Torvalds, was that “programmers could work on different modules without risk of interference. . . . Managing people and managing code led to the same design decision. To keep the number of people working on Linux coordinated, we needed something like kernel modules.”37 Torvalds’s implicit point is simple: these engineering principles are important because they reduce organizational demands on the social-political structure. In no case, however, are those demands reduced to zero. This is simply another way of saying that libertarian and self-organization accounts of Open Source software are frankly naive. The formal organization of authority is quite structured for larger Open Source projects. 37. Torvalds (1999, p. 108).
Torvalds, as noted, sits atop a decision pyramid as a de facto benevolent dictator. Apache is governed by a committee.
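To make the modular design strategy described above concrete, the short sketch below shows one common C idiom for source code modularization: callers depend only on a small, explicitly declared interface (here a struct of function pointers, loosely echoing the “operations” structures used inside the Linux kernel), so a module’s internals can be rewritten without touching the code that relies on it. The names and the toy storage module are purely illustrative assumptions, not drawn from any actual code base.

```c
/* A minimal sketch of source code modularization in C.
 * The "storage" module exposes only this interface; callers never
 * see its internals, so the module can be reimplemented freely as
 * long as the interface (the communication contract) is unchanged. */
#include <stdio.h>
#include <string.h>

struct storage_ops {
    int (*put)(const char *key, int value);  /* store a value              */
    int (*get)(const char *key);             /* retrieve it, -1 if missing */
};

/* --- One possible implementation: a fixed-size in-memory table. --- */
#define SLOTS 16
static char keys[SLOTS][32];
static int  values[SLOTS];
static int  used;

static int mem_put(const char *key, int value)
{
    if (used >= SLOTS)
        return -1;                           /* table full */
    strncpy(keys[used], key, sizeof keys[used] - 1);
    values[used++] = value;
    return 0;
}

static int mem_get(const char *key)
{
    for (int i = 0; i < used; i++)
        if (strcmp(keys[i], key) == 0)
            return values[i];
    return -1;                               /* not found */
}

/* The only thing exported to the rest of the program. */
static const struct storage_ops memory_storage = { mem_put, mem_get };

/* --- A caller that knows nothing about the implementation. --- */
int main(void)
{
    const struct storage_ops *store = &memory_storage;

    store->put("builds", 42);
    printf("builds = %d\n", store->get("builds"));
    return 0;
}
```

In a real project the interface would live in a header file and the implementation in a separate source file; the point is simply that, as the chapter argues, getting the communication interfaces right is what allows many programmers to work on different modules in parallel.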
How Do They Resolve Conflicts? Anyone who has dabbled in the software community recognizes that a large number of very smart, highly motivated, self-confident, and deeply committed developers trying to work together create an explosive mix. Conflict is common, even customary in a sense. It is not the lack of conflict in the Open Source process but rather the successful management of substantial conflict that needs to be explained—conflict that is sometimes highly personal and emotional as well as intellectual and organizational.38 Eric Raymond observes that conflicts center for the most part on three kinds of issues: who makes the final decision if there is a disagreement about a piece of code; who gets credited for precisely what contributions to the software; who can credibly and defensibly choose to “fork” the code.39 Similar issues, of course, arise when software development is organized in a corporate setting. Standard theories of the firm explain various ways in which potential conflicts are settled or at least managed by formal, authoritative organizations. The Open Source community prefers to settle major conflicts through a “battle to consensus.” Programmers devote an extraordinary amount of time and energy to this process, trying to convince each other that there are firm technical grounds for preferring one solution or development path to another. It does not always succeed—in part because the technical criteria often are not definitive, and in part because personalities get in the way. At that point, leadership takes on a much more important role in conflict resolution. Of course, it is a style of leadership that has to justify itself and its decisions to skeptical, independent-minded followers who are free to break away if they so choose. Linux, in its earliest days, was run unilaterally by Linus Torvalds. Torvalds’s decisions were essentially authoritative. As the program and the 38. Indeed, this has been true from the earliest days of Linux. See, for example, the e-mail debate between Linus Torvalds and Andrew Tanenbaum from 1992, reprinted in DiBona, Ockman, and Stone (1999, pp. 221–51). Torvalds opens the discussion by telling Tanenbaum, “you lose,” “linux still beats the pants off minix in almost all areas,” “your job is being a professor and a researcher: That’s one hell of a good excuse for some of the brain-damages of minix.” 39. Raymond (1999, pp. 79–137).
community of developers grew, Torvalds delegated responsibility for subsystems and components to other developers, who are known as “lieutenants.” Some of the lieutenants onward-delegate to “area” owners who have smaller regions of responsibility. The organic result is what looks and functions very much like a hierarchical organization in which decisionmaking follows fairly structured lines of communication. Torvalds sits atop the hierarchy as a “benevolent dictator” with final responsibility for managing conflicts that cannot be resolved at lower levels. Torvalds’s authority rests on a complicated mix. History is a part of this—as the originator of Linux, Torvalds has a presumptive claim to leadership that is deeply respected by others. Charisma in the Weberian sense is also important. It is notably limited in the sense that Torvalds goes to great lengths to document and justify his decisions about controversial matters. He admits when he is wrong. It is a kind of charisma that has to be continuously re-created through consistent patterns of behavior. Linux developers will also say that Torvalds’s authority rests on “evolutionary success.” The fact is, the “system” that has grown up under his leadership worked to produce a first-class outcome, and this in itself is a strong incentive not to fix what is clearly not broken. Ultimately, decisions to accept Torvalds’s authority can be traced back to definable incentives—but the incentives themselves depend heavily on the social structure created by the GPL license and by the constructed authority of the leader. Conflict is expected; indeed, it is normatively sanctioned. When an argument ends, as it is also expected to do at some point, the loser has essentially three options: accept the decision and move on; drop involvement in the project; or fork the code. If the loser drops out of the project, he or she loses the opportunity to accrue reputation and affect future decisions about the evolution of the project. The community may lose the involvement of a particular individual but not more than that (if it is an important individual, obviously the leader has strong incentives to try to heal the wound). The central decision between the other alternatives, to accept the decision or to fork the code, depends in some final sense upon the calculations discussed under the subheading “Coordination Problems” above. The Open Source development process builds momentum as it grows. The larger and more open a project, the higher the threshold for a rational decision to fork the code. The network externalities in the technology have essentially been implanted into the social structure that surrounds it.
The Commercialization and Spread of Open Source? The success and increased visibility of Open Source software over the last several years has brought with it new pressures for formal organization. The process by which this has happened looks very much like a (limited) version of standard sociological accounts of institutional isomorphism. DiMaggio and Powell argue that institutions that interface frequently and deeply with each other will tend to adopt similar organizational structures as a means of improving communication and reducing a broad range of transaction costs.40 As Linux increased in popularity at the end of the 1990s and was adopted by large commercial interests, key developers within the community began to argue that it was important to create an impression of organizational credibility for Open Source that would appeal to and reassure commercial users. Although it is attractive to offer software that is good enough that it needs less support, commercial users still worry a great deal about service and need reassurance that product support— even if it is informal in some sense—will be there: for the long term, and when they need it. In February of 1998, a core group of Open Source developers joined together to create the Open Source Initiative. OSI is quite explicit about its goal: to establish a firm public relations base for Open Source software that is deeply credible to standard commercial users and that lies outside the realm of morality and politics (particularly as those messages were associated with the Free Software Foundation). The organization’s manifesto states, “we think the economic self interest arguments for Open Source are strong enough that nobody needs to go on any moral crusades about it.”41 The Apache Software Foundation is now formally incorporated as nonprofit and led by a board of directors.42 Meanwhile, major IT companies, including Hewlett Packard, Sun Microsystems, Motorola, and—most decisively—IBM, have made major commitments to Linux, Apache, and other Open Source software projects for large computing systems, supercomputer equivalents, and new, small, handheld or household computing devices. At the start of 2001, Open Source software is on 40. DiMaggio and Powell (1991). 41. opensource.org/for-hackers.html#marketing. 42. Directors are elected by members. Members are selected by existing members on the basis of “meritocracy, meaning that contributions and skills are the factors used to judge worthiness, candidates are expected to have proven themselves by contributing to one or more of the Foundation’s projects” (www.apache.org/foundation/members.html).
the verge of becoming a mainstream part of corporate information technology systems. Some degree of institutional isomorphism reduces the complexity of relationships between the Open Source process and the increasing range of organizations that use Open Source software. How far this process can and will go is an important question. It is particularly important given the huge financial stakes that now exist in for-profit companies like Red Hat and VA Linux, which are attempting to make money by packaging, marketing, supporting, assembling, refining, and ultimately creating Open Source–based “solutions” for the mainstream market. Arguments about innovative business models in Open Source software are interesting and relevant, as are the various legal issues surrounding intellectual property rights and the licensing regimes.43 The analytic risk is that by focusing too closely on these intriguing and immediate problems, we lose sight of a much bigger and ultimately more significant story about what Open Source represents in the emerging information economy. Open Source is an important, pragmatic demonstration of the possibility of a production process within the digital economy that is really quite distinct from modes of production characteristic of the predigital era. The difference is not one of degree but one of kind. The software, in this light, is a by-product of a new production process that will turn out to be more significant than its immediate output. When Womack, Jones, and Roos wrote The Machine That Changed the World in 1990, they were not really talking about the cars that Toyota built—even though those cars were cheaper, more reliable, and faster to evolve. They were referring to a process known as “lean production”—a different way of building cars—and, ultimately, many other things as well. The Open Source production process depends on and uses the Internet to enable not just a more finely grained division of labor but truly distributed innovation, a revolutionary way of thinking about economic production. In a division of labor, no matter how finely grained, there is still a value chain (even if part of it is located in Cupertino and part in Bangalore). The production challenge is still about getting the weakest link in that chain to deliver. But in parallel distributed innovation, at the limit there is no weak link because there is no chain. Coordination costs may still constrain to some extent the functional distribution of innovation, and
43. I take up these arguments in Weber (forthcoming).
thus constrain in turn the maturation of parallel processing as a mode of economic production. But that is now a function of getting the social organization “right,” not technology per se.44 And the Open Source community is at the forefront of practical experimentation in that social organization. A fundamentally new production process will pose new and surprising challenges for existing economic and legal structures, across the national-international divide. An immediate issue concerns the politics and particularly the international politics of standard setting. These politics are typically analyzed in terms of bargaining power between and among firms, national governments, and international organizations. Open Source adds an interesting twist. The Open Source community is not a firm (although there are firms that may try to represent some of the interests of the community). And the community is not represented by a state, nor do its interests align particularly with any individual state. It bears no allegiance to any kind of international organization. The technological community that produces Open Source software was international from the start and remains highly international in scope. We simply do not know how this community will interact with formal standards processes, which are embedded deeply in national, international, and global politics. Certainly, existing national and international institutions will try to promote and manipulate the dynamic in ways that yield advantage to particular players; yet it is difficult to see right now a viable strategy by which governments could achieve lasting advantage in this way. New production processes bring new possibilities as well. The Open Source process intertwines community and commerce more tightly than almost any current e-business model has yet dared to consider. It offers possibilities for economic bootstrapping in developing countries that could otherwise be locked out of an information economy with much more expensive tools—and thus much higher barriers to entry. It may accelerate the trend toward “ubiquitous computing”—the incorporation of embedded “smart” systems into a broad range of products and processes in human life. These are just some of the more obvious possibilities.45 Ultimately, the most intriguing question about Open Source is how this process of knowledge production and coordination will extend to other realms of production in the twenty-first-century economy. The key concepts—user-driven innovation that takes place in a parallel distributed set44. The underlying argument is Hayek (1945). 45. I expand on these and others in Weber (forthcoming).
ting, distinct forms and mechanisms of cooperative behavior, and the economic logic of “antirival” goods—are generic enough to suggest that software is not the only place where the Open Source process could flourish. The process of annotating the human genome, which is really just a complex piece of code, an “operating system” for a biological, carbon-based “processor,” is an obvious possible application where a new knowledge production process could trump government-sponsored and commercial alternatives. If the Open Source production process is indeed a window into the revolutionary potential of Internet technology, then this kind of application is likely to be only the beginning.
References Baird, David. 1997. “Scientific Instrument Making, Epistemology, and the Conflict between Gift and Commodity Economies.” Philosophy and Technology 2 (Spring– Summer): 25–45. Brooks, Frederick P. 1975. The Mythical Man-Month: Essays on Software Engineering. Reading, Mass.: Addison Wesley. DiBona, Chris, Sam Ockman, and Mark Stone, eds. 1999. Open Sources: Voices from the Open Source Revolution. Sebastopol, Calif.: O’Reilly. DiMaggio, Paul J., and Walter W. Powell. 1991. “The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields.” In The New Institutionalism in Organizational Analysis, edited by Paul J. DiMaggio and Walter W. Powell, 63–82. University of Chicago Press. Ellickson, Robert C. 1991. Order without Law: How Neighbors Settle Disputes. Harvard University Press. Gancarz, Mike. 1995. The Unix Philosophy. Boston: Digital Press. Ghosh, Rishab. 1998. “Cooking Pot Markets: An Economic Model for the Trade in Free Goods and Services on the Internet.” First Monday (March). Ghosh, Rishab, and Vipul Ved Prakash. 2000. “The Orbiten Free Software Survey.” First Monday (July). Hayek, F. A. 1945.“The Use of Knowledge in Society.” American Economic Review 35 (September): 519–30. Kuwabara, Ko. 2000. “Linux: A Bazaar at the Edge of Chaos.” First Monday (March). Lerner, Josh, and Jean Tirole. 2000. “The Simple Economics of Open Source.” Working Paper W7600. Cambridge, Mass.: National Bureau of Economic Research (February). Levy, Steven. 1984. Hackers. Dell Publishing. McGowan, David. 2000. “Copyleft and the Theory of the Firm.” University of Michigan Law School. Merges, Robert P. 1997. “The End of Friction? Property Rights and Contract in the ‘Newtonian’ World of On-Line Commerce.” Berkeley Technology Law Journal 12 (1): 115–36.
Moglen, Eben. 1999. “Anarchism Triumphant: Free Software and the Death of Copyright.” First Monday (August). Raymond, Eric. 1999. The Cathedral and the Bazaar: Musings on Linux and Open Source by an Accidental Revolutionary. Sebastopol, Calif.: O’Reilly. Smith, Marc A., and Peter Kollock, eds. 1999. Communities in Cyberspace. London: Routledge. Stallman, Richard. 1999. “The GNU Operating System and the Free Software Movement.” In Open Sources: Voices from the Open Source Revolution, edited by Chris DiBona, Sam Ockman, and Mark Stone, 53–70. Sebastopol, Calif.: O’Reilly. Torvalds, Linus. 1998. “What Motivates Free Software Developers?” Interview. First Monday (March). ———. 1999. “The Linux Edge.” In Open Sources: Voices from the Open Source Revolution, edited by Chris DiBona, Sam Ockman, and Mark Stone, 101–12. Sebastopol, Calif.: O’Reilly. Vixie, Paul. 1999. “Software Engineering.” In Open Sources: Voices from the Open Source Revolution, edited by Chris DiBona, Sam Ockman, and Mark Stone, 91–100. Sebastopol, Calif.: O’Reilly. Weber, Steven. Forthcoming. The Success of Open Source. Harvard University Press. Womack, James P., Daniel T. Jones, and Daniel Roos. 1990. The Machine That Changed the World. New York: Rawson Associates.
18
The Next-Generation Internet: Promoting Innovation and User-Experimentation
A new Internet infrastructure is rapidly emerging in which broadband, always-on access no longer remains the privilege of business users, but becomes available to all. This emerging infrastructure promises to create profoundly new possibilities for e-commerce: it will support the creation of new content marketplaces, enable the invention of new communication applications and services, and broaden the reach of corporate networking to off-site locations and employees’ homes. The current round of Internet reinvention, like previous rounds, ought to be driven by the cumulative creativity of multiple users, service providers, and equipment makers leveraging this new broadband platform to develop and implement their ideas. However, this will require freedom of access to the residential broadband network for users, programmers, application creators, and the myriad companies selling Internet services. Limiting access to the third-generation Internet platform can only stunt this innovative process. The Internet’s success to date owes much to its open end-to-end architecture.1 Applications reside in the end-nodes of the network (that is, computers connected to the network) rather than in its core. Thus the information superhighway has no single control point. Network ownership is no longer required to control network configuration and evolution, to
1. Saltzer, Reed, and Clark (1981).
unleash innovation and profits. As a result, the creation of an open network, applications, and end-to-end commerce services was the prime force behind the success of the Internet even as the Internet’s new logic has forced a revolution in telecom networks and policy. Over the coming decade residential broadband services providing “always-on” access will drive e-commerce development. The question is whether market forces will suffice to ensure the open access environment that has prevailed on telecom-based infrastructure for the first two generations. The struggle to ensure “open access” on the broadband Internet is thus a battle to shape not merely the cable or telecom industries but the very forms of network-based commerce, community, and creativity. Yet the cable industry has been skeptical of creating an “open access” model for its network. The cable network, which did not participate in the first Internet generation, retained a broadcast model in which ownership of the physical network itself has been the key to programming control and profits. As cable moved from “broadcast” to “broadband,” policymakers were thus faced with an important choice. Should the open access requirements developed in the telecom world for previous-generation Internet be extended to the new cable broadband access infrastructures, or would competition among third-generation access networks serve as a substitute for open access and continue to sustain wide-ranging innovation? The early broadband debate focused on issues of customer choice and investment incentives as well as arguments about the proper level of policymaking, federal or local. While these are important issues, a critical dimension has been missing from this discussion: the impact that the resulting architecture will have on shaping the very nature of the third-generation Internet and its innovation dynamics. The U.S. government has an urgent responsibility to ensure open access over residential broadband Internet in order to drive an innovative developmental trajectory. The government’s responses to the recent merger of AOL with Time Warner have begun to address this problem much more constructively. Contrary to critics of excessive government intervention,2 we believe that some initiatives to encourage open access were essential. There is no assurance that the current measures will be adequate, although the government
2. See, for example, James V. DeLong, “AOL & Time Warner: Meet the New Broadband Access Regulatory Authority (Formerly Known as the FTC),” Competitive Enterprise Institute, Washington, December 15, 2000.
is right to try for light-touch regulation in such a technologically dynamic market. We have structured our analysis of residential broadband regulation in three main stages. First, we recount how past FCC policy, with its steady promotion of open access to the telecommunication infrastructure, made the Internet possible. We emphasize that the third generation is a distinct market, and as in the past, the practices concerning its network architecture are vital for competition and innovation. Second, we analyze the state of competition in the delivery of broadband access infrastructure at the inception of the merger. Our conclusion is that competition remains severely restricted and offers only a poor substitute for open access. Third, we examine the repercussions of the FCC’s decision on the AOL–Time Warner merger, investigate the positions taken by competitors in the cable market (especially AT&T), and analyze the impact of open access on innovation. Finally, we conclude with suggestions about the possible implementation of such a regulatory policy, drawing on regulatory policy in Britain, Canada, and the EU.
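Before turning to that history, it may help to make the end-to-end architecture invoked above concrete. The sketch below, which assumes a POSIX system and uses a placeholder host name and a trivial HTTP request, shows that all of the application intelligence sits in a program running on an end host; the network in between simply forwards packets and needs to know nothing about the application.

```c
/* A minimal sketch of the end-to-end principle: the "application"
 * is nothing more than bytes written and read by programs at the
 * two ends of a connection.  Host name and request are placeholders. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netdb.h>

int main(void)
{
    const char *host = "www.example.org";           /* illustrative host */
    const char *request =
        "GET / HTTP/1.0\r\nHost: www.example.org\r\n\r\n";

    struct addrinfo hints, *res;
    memset(&hints, 0, sizeof hints);
    hints.ai_family   = AF_UNSPEC;                   /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;                 /* TCP */

    if (getaddrinfo(host, "80", &hints, &res) != 0) {
        fprintf(stderr, "could not resolve %s\n", host);
        return 1;
    }

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) {
        perror("connect");
        return 1;
    }

    /* All application logic lives here, at the end node. */
    send(fd, request, strlen(request), 0);

    char buf[1024];
    ssize_t n;
    while ((n = recv(fd, buf, sizeof buf, 0)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    close(fd);
    freeaddrinfo(res);
    return 0;
}
```

Nothing in this program depends on permission from, or coordination with, the owner of the transport network, which is the property that open access policy is meant to preserve.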
Network Openness, Internet Evolution, and User-Driven Innovation Since its emergence about thirty years ago, the Internet has undergone constant transformation. We distinguish three successive generations. From the late 1960s to the early 1990s, the first-generation Internet was a network and social engineering prototype of interest to military and research organizations.3 From the early 1990s until the commercial availability of broadband access around 1997, the second-generation Internet saw the mass adoption and commercialization of narrowband access, largely through dial-up modems providing intermittent, low-bandwidth connections. We have now entered a third phase of the Internet’s history, when a critical mass of users is beginning to experience “always-on” high-speed access to the Internet from the home. Beyond the radical jump in transfer speeds, the functions to which a full-time connected broadband network can be turned and the ways it can be used represent a drastic change that will distinguish the “always-on” broadband Internet from its intermittent, narrowband precursor. 3. Hart, Bar, and Reed (1992).
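As a rough sense of scale for that jump in transfer speeds, the sketch below computes nominal transfer times for a single file at access rates typical of the second and third generations. The file size and line rates are illustrative round numbers, and the calculation ignores protocol overhead, shared media, and congestion.

```c
/* Nominal time to move one file at several access speeds.
 * Figures are illustrative; real throughput is lower because of
 * protocol overhead, shared media, and congestion. */
#include <stdio.h>

int main(void)
{
    const double file_megabytes = 5.0;            /* e.g., a short video clip */
    const double file_kilobits  = file_megabytes * 8.0 * 1000.0;

    const struct { const char *name; double kbps; } links[] = {
        { "28.8 kbps dial-up modem",      28.8 },
        { "128 kbps ISDN",               128.0 },
        { "1.5 Mbps cable modem or DSL", 1500.0 },
    };

    for (size_t i = 0; i < sizeof links / sizeof links[0]; i++) {
        double seconds = file_kilobits / links[i].kbps;
        printf("%-28s %8.0f seconds (~%.1f minutes)\n",
               links[i].name, seconds, seconds / 60.0);
    }
    return 0;
}
```

Even on these generous assumptions, the dial-up case runs to more than twenty minutes, against under half a minute for an entry-level broadband connection.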
In 1990, at the dawn of the second phase of the Internet revolution, nobody had quite envisioned the web or the influence it would have. Similarly today, no one can tell how the third phase will unfold, but one thing is already obvious: narrowband access will no more provide access to the services and functions of the broadband world than the monochrome, text-only computer displays in use throughout the Internet’s first phase could have done justice to the second-phase web. If our analysis of the first two phases teaches us one thing, it is that the applications and services that will blossom during the third phase will come as a surprise. They will emerge through experimentation by users4 and through competition among those providing users with the necessary tools. Experimentation will include broadband content, tele-work, music, movies, video, interactive games, and multimedia extensions of Internet telephony and instant messaging—some forms of which a monopolist provider (or an owner of massive libraries of copyrighted content) might prefer to inhibit. Some important innovations may involve interaction between web functions and conventional broadcast programming over broadband networks or the integration of programming and interactive communication within digital set-top boxes. A market and network structure that continues to promote extensive competition throughout the Internet is therefore clearly required.
Network Openness and Internet Success America’s remarkable success in promoting the Internet revolution owes a major debt to determined regulatory action that encouraged all aspects of network openness and interconnection.5 Throughout the first two phases of the Internet’s evolution, a large variety of service and content providers could share existing infrastructure: the basic phone network. America Online and other Internet service providers, not the Regional Bell Operating Companies, popularized mass subscriptions to the Internet. Personal computers, the Netscape browser, and Cisco, not AT&T, drove the architecture of data networking and the web. All these innovations were possible because the Federal Communications Commission decided in the 1960s that the emerging world of data networking should not be treated like telecom services. Therefore it exempted all forms of computer net4. Napster, Gnutella, Scour, and the like offer good illustrations of users’ innovative energy in the emerging broadband environment. See Kuptz (2000). 5. Oxman (1999).
working from much of telecom’s regulatory baggage, including fees to fund various cross-subsidies for telephone services, and it prevented telephone companies from dictating the architecture of data networks. Policy intervention, not “unregulation,” forced network incumbents with market power (and the incentive to use it) to open their networks to these new entrants.6 Promoting ever greater openness of the U.S. telecommunications infrastructure has been a significant theme of U.S. regulatory policy and an important factor in the Internet’s success.7 In today’s policy language, the FCC chose to foster cost-based access to unbundled “network elements,” the functional elements of the network, rather than to regulate end services. Whereas data services regulation would have frozen experimentation, this policy allowed a variety of actors to take basic network building blocks and combine them in diverse and unpredictable ways. Forty years of regulatory decisions taken by the FCC have progressively opened the phone network and shifted the impetus for telecommunications innovation from incumbent carriers to network users, alternative equipment suppliers, and new entrants.8 Crucially, they protected the competitive space for new entrants to develop into viable commercial firms against entrenched incumbents by mandating interconnection to essential facilities and constraining the incumbents’ use of market power.9 These decisions in turn fostered user-driven innovation by giving leading-edge users—like financial services and energy and manufacturing firms—broader access to enhanced facilities and communication capabilities. 6. For specific details on these policies, see Bar and others (2000). 7. Oxman (1999). 8. Policies and proceedings like the Specialized Common Carrier, Carterfone, Execunet and Open Skies decisions, and the First and Second Computer Inquiries, permitted new entry into equipment, network, and service provision. 9. “Established carriers with exchange facilities should, upon request, permit interconnection or leased channel arrangements on reasonable terms and conditions to be negotiated with the new carriers, and also afford their customers the option of obtaining local distribution service under reasonable terms set forth in the tariff schedules of the local carrier.” Moreover, “where a carrier has monopoly control over essential facilities we will not condone any policy or practice whereby such carrier would discriminate in favor of an affiliated carrier or show favoritism among competitors.” See Federal Communications Commission, 29 F.C.C.2d 870 (1971), para. 157. See also “In the Matter of Use of the Carterfone Device in Message Toll Telephone Service,” Docket 16942, 13 F.C.C.2d 420 (June 26, 1968); MCI v. FCC (Execunet I), 561 F.2d 365 (D.D.C. 1977), cert. denied, 434 U.S. 1041 (1978); MCI v. FCC (Execunet II), 580 F.2d 590 (D.D.C.), cert. denied, 439 U.S. 980 (1978); Computer I, 28 F.C.C.2d 267 (1971); Computer II, 77 F.C.C.2d 384 (1980); Computer III: Notice of Proposed Rulemaking, F.C.C. 85–397 (August 16, 1985).
A critical group of innovations involved “network performance features.” Examples of such features include higher speed connections, variable bandwidth, error rate correction, tailored data services, and a diverse and growing array of network management, configuration, and billing capabilities. None of these were necessary to provide plain old telephone service, and they were therefore largely unavailable from dominant carriers. As it unfolded, the FCC’s open network policy contributed to the development of these features and made them broadly available to network users and competitive service providers alike. More recently, the FCC policy of openness has moved to further enhance user-driven innovation and to broaden the possibilities for extended user choice by enabling deeper access into the incumbent local network. This created the necessary preconditions for the success of Digital Subscriber Lines (DSL) and the rapid funding by the public markets of numerous competitors to the Incumbent Local Exchange Carriers (ILECs) for high-speed data services. Throughout this history, the monopoly owners of the communications infrastructure strongly resisted opening their network to other service providers. Yet policy persistence paid off, gradually forcing open access to the infrastructure resources the incumbents monopolized. This was the key to the flourishing of a dynamic communications market and the emergence of the Internet. Consistently throughout this history, the FCC rejected claims that networks had to be closed to generate enough investment incentives.10 In each case, the innovative development of the industry with new uses and new suppliers would have suffered had it been forced to develop in a “closed access” environment. Network openness has in fact radically stimulated the use of incumbents’ telecom assets such as second lines. Indeed, U.S. policy has moved gradually and consistently, though not always intentionally and still incompletely, toward support of the new userdriven innovation paradigm. This steady policy set in motion and sustained a virtuous cycle of cumulative, user-driven innovation, new services, and infrastructure development, increasing network usage—with evident economic benefits for the U.S. economy.11 Perhaps the most dramatic single example is the emergence and evolution of the World Wide Web, driven almost entirely by Internet users who pioneered all of its applications. 10. For example, the FCC consistently argued that long-run incremental cost (LRIC) allowed the sharing of network functions on terms that provided for a competitive return on capital. The furious debate over LRIC for unbundled network elements had this discussion as a critical feature. 11. Bar and Borrus (1997).
The World Wide Web in turn facilitated a new surge of innovation that has ushered in Internet-based e-commerce. Furthermore, in an unexpected collateral benefit, the virtuous circle of policy and market innovation came to be recognized by the rest of the world as the right template for network competition and the growth of the Internet. It thus gave the United States a voice in global policy that went far beyond its political and market power. This network openness and the user-driven innovation it encouraged were a distinct departure from the prevailing supply-centric, provider-dominated, traditional network model. In that traditional model, a dominant carrier or broadcaster offered a limited menu of service options to subscribers; experimentation was limited to small-scale trials with the options circumscribed and dictated by the supplier. By contrast, open access to the network led to rich experimentation by many actors whose ideas had previously been excluded from shaping network evolution. It is a safe bet that few people, back in the days of 300-baud modems, ever thought that 28.8K data communications would flow over ordinary voice phone lines. Even speeds of 9600 bits per second were seen as reachable only with expensive, cleaned, better-than-voice lines—ISDN or some similar special service. Diversity of experimentation and competition on an increasingly open network was key, since nobody could foresee what would eventually emerge as successful applications. Openness allowed many paths to be explored, not only those that phone companies, the infrastructure’s monopoly owners, would have favored. Absent policy-mandated openness, the Regional Bell Operating Companies (RBOCs) and monopoly franchise CATV networks would certainly have explored only the paths of direct benefit to them. It is doubtful that without such policy-mandated openness the Internet revolution would have occurred. Throughout this process, the most successful innovation paths challenged the very core of the phone monopoly business as well as the industry’s technology and business assumptions. Yet as the Internet ushered in the “creative destruction” of the old network model, it led to deeper economic change and greater business opportunities than anyone could have envisioned.
Who Ought to Shape the Internet’s Third Phase? Assessing Competitive Provision of Broadband Access As we enter this third phase of Internet evolution, the widespread diffusion and adoption of broadband technologies, we face again a similar situation.
Locally, one provider—the monopoly cable franchise, with significant market power in key market segments, broadband interactive multichannel video service to homes,12 and broadband Internet access to homes outside the DSL circle—finds itself in a position to prevent open access to the Internet. Nationally, the dominant cable firms have slowly accepted limited access policies. Based on the history of telecoms sketched above, this should not come as a surprise. The question is obvious. The successful policy trend of the past thirty years has been to force competition and ensure open access to the incumbent infrastructure when there are significant problems pertaining to market power. Why, now, reverse that successful policy? As stated, there is both a local and national dimension to cable’s power in the market for Internet access. At the local level, cable providers have substantial market power in the broadband access and broadband service provision, because the cable franchisee, whether it be AT&T, Time Warner, or anyone else, typically has a local monopoly over the cable infrastructure.13 As a result of recent acquisitions, two firms, AT&T and Time Warner, now control the majority of the U.S. cable television infrastructure. These vast corporations now have substantial market power over large sections of the present and future broadband Internet and consequently find themselves in a position to have a profound impact on the Internet’s third phase. This share gives them significant influence—well beyond the sheer market power indicated by the number of homes passed by a cable system in which they have an ownership stake. Indeed, it allows the companies to coordinate the activities of many local monopolists and shape the overall network architecture and standards. The cable operators’ current strategies lead them to integrate vertically with Internet service providers (ISPs): AT&T with Excite@Home, Time Warner with AOL. Even if the cable providers let in only one or a few additional ISP partners, there may be insufficient conditions for vigorous competition. In addition, the cable owners will decide which additional ISP(s) they let in and may prefer to pick the least threatening to their own strategy. While it is important to be
12. We do not focus in this piece on interactive television over broadband, although it loomed large in the FTC’s reasoning about the AOL–Time Warner merger, because cable has no monopoly on broadband television services. See Owen (1999). 13. Local franchises, moreover, come up for renegotiations only episodically or with a change of ownership, further reinforcing cable’s local monopoly power. A limited number of regions in America have two systems, but they are very much anomalies.
sensitive to the costs of regulation, a sound policy has to go beyond ensuring access to one or two rivals to the integrated ISP. Clearly, all telecom industry players recognize the importance of this turning point. They have undertaken massive efforts to upgrade existing local telephone and cable infrastructures and to develop new broadband wireless access. In that respect, the current competitive situation is different from the previous generations, where there clearly was no alternative to Ma Bell’s dominant access infrastructure. Yet this does not mean that broadband provision is fully competitive or competitive enough for access not to be an issue: deployment patterns, different regulatory heritage, lead time of cable, and switching costs result in cable dominance over broadband delivery infrastructure in the short to medium term. Cable providers, which have monopoly franchises in most markets, are achieving substantial market power over broadband Internet access. In our analysis, the relevant market for this policy discussion is the residential broadband access, distinct from narrowband dial-up access. We distinguish broadband access from narrowband and residential access from commercial. Regarding the differences in bandwidth, it should be noted that although there is overlap between the services, broadband is much more than a faster version of the old narrowband Internet. Rather, it enables previously impossible bandwidth-intensive services like broadcast quality video streaming and IP-based videoconferencing. Therefore, the relevant market for our analysis is the market for broadband access, separate from the overall Internet access market. The FCC at first rejected the distinction between broadband and narrowband access but has accepted it during the course of the AOL–Time Warner hearings, recognizing that narrowband technologies are not true substitutes for broadband. A further distinction about relevant market rests on the classes of end-users, where the FCC’s traditional distinction between residential and business markets makes sense. “Always-on” broadband access allows home networks to be permanently connected to the Internet, with access appliances or screens in several rooms. What really distinguishes this phase is the final convergence of TV and PC, of entertainment, education, and work at home: the seamless linking of the home into the larger electronic community. The architecture of the integration point, whether a digital set-top box, a new DSL consumer device, or a home wireless hub, will determine which industry players participate in creating these applications and shaping their character. Sustained development of this next generation of applications will require a critical mass of broadband-enabled users. Closing off key segments
of the broadband infrastructure to a monopoly provider or a cartel of dominant providers would inevitably choke off the very innovation that has created value from today’s Internet. Thus the residential broadband access market is relevant not only in terms of the economic analysis of market power but also in terms of its broader policy importance. This section argues, first, that in the first five years of the twenty-first century, cable has the ability and incentive to exercise market power in regard to a very significant part of the residential broadband access.14 Second, even when residential consumers have a choice of broadband access provider, significant switching costs blunt competitive dynamics, reinforcing cable’s lead. This lead is likely to endure through the near term, marking the first five years of broadband access deployment. This initial period is particularly critical because patterns get set early.
The Deployment of Broadband Access Alternatives The pace of broadband access infrastructure deployment is picking up dramatically. Both CATV operators and ILECs are working hard to upgrade their networks so they can offer broadband Internet access. In addition, a number of wireless technologies are now emerging as broadband alternatives, ranging from “wireless cable” approaches, such as Multichannel Multipoint Distribution Services (MMDS) and Local Multipoint Distribution Services (LMDS), to “High Data Rate” (HDR), Satellite (Tachyon, Spaceway, Teledesic), or “fiberless optics” (Terabeam). Yet the availability of “last mile” competitive broadband network infrastructure for residential customers remains limited. For all practical purposes, cable and DSL are currently the only broadband options available in the residential market, and cable has a substantial lead over DSL, as the FTC decision on Time Warner conceded. For example, the FCC reported June 30, 2000, figures showing 2.2 million cable subscribers versus just under one million DSL subscribers.15 While more data must be gathered, we can reasonably accept the most conservative industry estimates, reflecting a 2-1 lead for cable.16 14. This chapter focuses on access to broadband Internet services rather than cable provision of interactive TV. While the latter is an important sector of possible cable monopoly, we concentrate on the provision of more flexible Internet services where more fundamental end-to-end innovation will occur. 15. See FCC news releases, 2000. 16. An extensive array of data about cable’s early lead over DSL is available in Bar and others (2000).
Predictions about the future of broadband access competition are more dispersed, although most reports agree that cable’s lead probably will endure through 2003. There is much evidence to support predictions that cable will continue to dominate. In particular, only 23 percent of U.S. households are within 12,000 feet of an upgraded central office, without Digital Loop Carrier (DLC), and therefore can technically receive DSL service, while 52 percent of U.S. households are passed by upgraded two-way cable plants that can technically deliver broadband access.17 Ironically, ILECs are handicapped by their recent upgrades because the DLC equipment they deployed to connect new suburbs makes these lines unfit for DSL and will have to be replaced—at a substantial cost. By contrast, cable companies have aggressively deployed digital video services to compete with Direct Broadcast, reaping substantial revenues from that deployment. That investment brings them ever closer to offering broadband data services. While there are certainly additional costs to make digital cable interactive, less than 5–8 percent of the total bandwidth on a digital cable system is used for high-speed data services; the rest remains available for profitable digital video services. Holding a franchise monopoly for cable TV thus creates a solid foundation for cable to enter the market for broadband access. Overall national figures, whether market share or addressability, provide a misleading picture of the competitive situation. Indeed, in the short to medium term, broadband cable and DSL deployments are taking place along two distinct paths with relatively limited overlap. The cable modem path generally covers only residential areas and clearly dominates in many suburbs.18 While it is to be expected that eventually most homes will have a choice between two broadband wires, cable and DSL, in the near term most will have only one option, and in most cases that option will be cable.19 In its staff report on broadband deployment, the FCC’s Cable Services Bureau noted that in addition to these wired approaches, a number of 17. McKinsey and Bernstein (2000, p. 9). 18. Les Freed, PC Magazine, March 9, 1999, p. 172. 19. It should also be noted that a few U.S. cities, notably Palo Alto, California, and Dunwoody, Georgia, have undertaken fiber-to-the-home trials. At this point, however, these remain pricey (Palo Alto’s costs $1,200 for the connection fee and $92 a month for 10 Mbps service, or twice those rates for 100 Mbps), and their availability is likely to remain quite limited in the near term. See Hecht (2000) and “Fiber to the Home (FTTH) Trial” (www.city.palo-alto.ca.us/utilities/fth/index.html [March 2001]).
broadband wireless technologies will be offered within a few years.20 Sprint plans to deploy one such technology, Multichannel Multipoint Distribution Services (MMDS), in eighty-three U.S. markets before 2003, offering data rates and prices roughly similar to today’s cable modem and DSL solutions.21 Like cable, MMDS is a shared solution (in fact, the technology started out as a “wireless-cable” approach to deliver CATV programming). In addition, it suffers from technical limitations, such as the requirement for line-of-sight connections and susceptibility to bad weather. Others in this general category are a variety of “wireless competitive local exchange carriers (CLECs).”22 However, analysts see MMDS and fixed wireless as niche plays, estimating they will take respectively 8 percent and 7 percent of the broadband access market by 2004, primarily in areas where neither cable nor DSL is available.23 Also on the horizon is an array of other high-bandwidth wireless technologies; for example, Qualcomm’s High Data Rate (HDR) wireless technology is expected to offer up to 2.4 Mbps, but it is several years away from extensive commercial deployment.24 In summary, the competitive landscape that emerges from current technology deployment and announcements is one in which, until 2004, cable and DSL will jointly dominate the provision of residential broadband access. This timeframe provides a useful horizon: by then, broadband residential access will have been available for about five years, a period roughly comparable to the existence of the second-generation Internet.25 Throughout the period, all indications are that cable will enjoy the lead—a vast initial head start, progressively decreasing to rough parity over the five-year period, assuming that ILECs carry through the substantial network upgrades required. In addition, national market share numbers will likely overstate the amount of real competition between cable and DSL networks, as many individual households will not be technically addressable by both systems. In fact, cable operators and telecom companies (telcos) often are not really competing head on, having essentially partitioned the broadband access market: cable modems for residences, DSL for small- and medium-size businesses.26 20. FCC Cable Service Bureau (2000, p. 29). 21. “Sprint Rolls Out Wireless Cable: Ubiquitous Broadband Coverage Planned,” Boardwatch, February 2000. 22. These include Advanced Radio Telecom (ART), NextLink, Teligent, and WinStar, which generally plan to focus on providing broadband service to buildings in urban areas that are not served by existing fiber or CLECs. 23. McKinsey and Bernstein (2000, p. 31). 24. See (www.qualcomm.com/cda/tech/hdr/whatis.html). There is great controversy over the feasibility of satellite systems like Spaceway and Teledesic. Another major development is the advent of the wireless local area network, symbolized by the 802.11 systems. However, these local area networks in homes (or uniting clusters of homes) still require a high-speed pipe to feed them. 25. Netscape 1.0 was released on December 14, 1994, providing a convenient marker for the start of the second-generation Internet.
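To see why national addressability figures say little about head-to-head competition, it may help to make the arithmetic explicit. The short sketch below is illustrative only: the 23 percent and 52 percent figures come from the estimates cited above, while the overlap bounds and the independence benchmark are our own simple calculations, not reported data.

```python
# Illustrative arithmetic only: the 23 percent (DSL-addressable) and 52 percent
# (cable-addressable) shares come from the estimates cited in the text; the
# bounds below are simple set arithmetic, and the "independence" benchmark is
# our own assumption, not a reported figure.

dsl_share = 0.23      # households close enough to an upgraded central office for DSL
cable_share = 0.52    # households passed by upgraded two-way cable plant

max_overlap = min(dsl_share, cable_share)              # DSL areas nested entirely inside cable areas
min_overlap = max(0.0, dsl_share + cable_share - 1.0)  # the two deployments avoid each other
independent = dsl_share * cable_share                  # if the two rollouts were unrelated

print(f"households able to choose between the two wires: "
      f"{min_overlap:.0%} to {max_overlap:.0%} (independence benchmark: {independent:.0%})")
print(f"households with at least one broadband option: "
      f"{dsl_share + cable_share - max_overlap:.0%} to {dsl_share + cable_share - min_overlap:.0%}")
```

Even in the most favorable case, fewer than a quarter of households could choose between the two wires, and the distinct deployment paths described above suggest that the true overlap sits toward the low end of that range.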
The High Costs of Switching At one point, the FCC suggested that cable’s initial success had created competitive opportunities and “spurred the deployment of Digital Subscriber Lines (DSL).”27 Still, for competition to serve as a check on broadband providers’ behavior, it needs to be easy for residential consumers to switch from one provider to another. In areas where both broadband cable and DSL are available, competitive discipline only works if the costs of switching from one technology to the other are low enough that consumers do not feel “trapped” by the provider they happened to choose initially. These switching costs are substantial, however, and combined with broadband cable’s early deployment lead, they undermine competitive discipline and leave providers room to exercise market power. The switching costs have several sources: the network’s physical architecture, its logical architecture, and the “stickiness” that results from structuring one’s activities around specific network services. The physical architecture of the network creates substantial switching costs. Different requirements for inside wiring, different terminal equipment, nonrefundable connection charges, and, in many cases, different computer setups are among the factors that can easily push the physical cost of switching between cable and DSL up to several hundred dollars.28 There is much variability in these costs: some cable operators allow their customers to buy cable modems while others include a rental charge in the service fee, different operators and telcos charge different set-up fees, and in these early stages, carriers occasionally waive sign-up fees.29 An additional cost— 26. McKinsey and Bernstein (2000, pp. 10–11). See also “Give Peace a Chance,” Boardwatch, April 21, 2000. 27. FCC Cable Service Bureau (2000, p. 9). 28. Bar and others (2000, p. 20) provide a detailed itemization of these costs as of mid-1999. They tend to change often, as the broadband providers refine their marketing plans, and various incentives and pricing plans make a side-by-side comparison difficult. 29. For example, SBC waived installation charges and equipment fees during the first half of 2000 in exchange for a one-year commitment.
inconvenience or lost work hours—comes from the fact that today, both DSL and cable installation require a service call by a technician during business hours (and sometimes, in these early days of the technology’s development, several service calls). These costs also decline as both cable and DSL technologies become more robust and as new technology implementation, such as splitterless “G.lite” DSL, eliminates the need for a technician visit. At this point, however, these various costs and inconveniences add up to substantial hurdles for residential customers, making the switch between broadband access methods much more costly and cumbersome than either switching from one DSL provider to another or switching among narrowband ISPs. As a result, broadband cable providers that are not required to offer fully open ISP choice may well have several hundred dollars’ worth of room to maneuver before their customers look somewhere else. The logical architecture of the network and the associated software also create important switching hurdles. Information access and transmission systems become embedded with one’s current provider. This is in contrast to narrowband Internet service provision, where customers can switch relatively easily between ISPs and still have equally convenient access to various kinds of content. Let us consider these several costs of switching from one broadband system to another. First, many everyday communication activities are tightly entangled with one’s Internet provider, so that shifting providers may range from the inconvenient to the truly burdensome. With narrowband Internet access, the inconvenience is typically limited to getting a new e-mail address and modifying a few dial-up settings. Already, the absence of an “e-mail portability” equivalent to telephony’s number portability represents a nonnegligible switching cost. However, switching among broadband access providers would be much more cumbersome because broadband Internet supports an increasingly wide range of new communication activities. For example, for customers who elect to use their “always-on” broadband connection to run web servers from their home, the switch would require a modification of the DNS tables to link their domain name to the new IP address they would receive.30 Additional inconvenience would include the
30. Obviously, at this time, this is a “problem” only DSL customers face, since broadband cable customers are prohibited from running any kind of server from their home through their cable modem service according to the terms of their service agreement. The cost of that operation depends on the ISP providing the DNS service. For example, Pacific Bell Internet charges $100 for its DSL customers to link their IP address to a domain name (or to change such link).
loss of adaptive set-ups that provide ease of access or access to special services.31 Second, if arguments about bundling are correct, competition is all the more stifled. Some market analysts estimate that merely the prospect of bundled services creates approximately $150 in new value per subscriber for a cable system, irrespective of value created by the anticipated revenue from each individual service offering.32 There may be competitive advantages in the package of services created, advantages in pricing those services, and advantages in a single bill. Indeed, the consumer’s preference for one bill is believed to be strong enough to reduce switching, even without price reduction for the services in a bundle.33 Consider only the geographic monopolies noted above. In those areas, cable’s competitors cannot create equivalent packages. The ability to include television offerings in its bundles, whatever the rules on control of program content may be, certainly makes it easier for AT&T and AOL–Time Warner to create distinctive packages. AT&T could, and apparently intends to, offer integrated bundles of phone service (both local and long distance), cable TV, mobile services, and ISP. The new AOL–Time Warner company probably will offer different service bundles, but these would raise similar concerns. For example, these bundles may not include telephone services but would include instant messaging (and its upcoming multimedia extensions), which many see as a significant alternative to IP telephony. As the FTC and FCC both argue, AOL is a dominant ISP. Some aspects of its ISP service, the Network Presence Database (NPD) function-service for instant messaging, can create market power in specific applications such as instant messaging. As a consequence, the AOL–Time Warner merger results in the combination of entities with upstream (NPD) and downstream (cable network) market power, further increasing their ability to define unique service bundles that include features over which they exert substantial market power. More broadly, the vertical nature of the AOL-TW merger led the FTC to raise concerns over vertical foreclosure, including control of programming 31. This category of switching cost, it should be noted, is not specific to cable but affects users switching from DSL to cable, or from cable to DSL, or even among different DSL providers. Their dampening effect on competition might be mitigated, though not eliminated, by rules addressing email portability or IP address portability. 32. John M. Higgins, “All For Just $5,000,” Broadcasting and Cable, May 10, 1999, pp. 16–18. 33. This represents $49.5 million of the value of @Home’s present subscriber base of 330,000. The estimate is from Kinetic Research, cited in Alex Lash, “Surfing the Skies,” Industry Standard, February 1, 1999, p. 30.
content. In particular, the FTC has pointed out that AOL-TW would have incentives to interfere with content provided by nonaffiliated companies, especially when this competes directly with AOL’s recently introduced interactive TV offerings.34 However, the flip side of this position is that AOL does have bargaining power with AT&T cable et al. Indeed, the FTC went so far as to impose conditions designed to make sure that an integrated AOL system could not use its market power to discourage build-out by rival DSL networks. Time Warner’s merger with AOL has provided the media giant with AOL’s massively successful Instant Messenger (IM) service. Bundling broadband video streaming IM with cable service could mean AOL–Time Warner dominance over a potential third-generation Internet’s “killer app.”35 If competitors cannot create equivalent bundles, the resistance to switching one component of the bundle—broadband access—to an alternate supplier obviously increases. The anticompetitive effect of such bundling strategies will be further amplified through cable players’ efforts to leverage control of the set-top box and capture an increasing share of upside services.36 Finally, and more fundamentally, consumers may never find out what they are missing by being denied open access and thus may never be in a position to decide whether switching broadband provider is worth the costs just described. With traditional products, we tend to think of switching costs as part of a rational decision between two well-known alternatives. For example, customers switching from one brand of cereal to another have all the information they need to make a rational choice: they know the prices, they see the packaging, and they can easily compare objective nutritional value and subjective taste. Not so when picking between two alternative broadband access services. Prices are not always what they seem, with countless hidden costs ranging from rewiring to domain name resetting, and packaging is less than transparent when broadband services come as part of complicated and hard-to-compare bundles.
34. See FTC, “AOL Analysis” (www.ftc.gov/os/2000/12/aolanalysis.pdf). 35. In permitting the merger, the FCC decided that AOL’s dominant market position in IM had been earned. However, it also decided that the IM name and “presence” database (how you know someone else is online) had become dominant due to tipping effects in the market, and that the merger itself would not change the degree of market power. The remaining issue was the potential to leverage dominance in current IM into new advanced video streaming IM services. The FCC consequently decided to insert a condition that, for advanced video IM, AOL will be required to share the name and presence databases with rivals, albeit after AOL has improved security and privacy safeguards. 36. Galperin and Bar (1999).
More insidious is the difficulty of assessing real-life performance (the service’s objective “nutritional value”) or of really understanding the difference between “open-access” and “closed-access” communication experiences (the service’s subjective “taste”). Just as with cereals, customers cannot know what they are missing until they buy the competitor’s product and try it out. But unlike the case with cereals, where it is easy to buy two different boxes and give them a taste trial over breakfast, few customers will subscribe to both cable service and DSL and benchmark them against one another before deciding which one they like best. The good news is that whichever they choose, it is likely to be much better than the analog modem it replaces. The bad news is that they will probably never know how much better it could have been had they picked the other one. Until 1998, when France Telecom finally decided to take a real stab at offering mass-market Internet access, French citizens thought that second-generation Minitel was very cool. As they marveled at their new Minitel terminals displaying alpha-mosaic images faster than ever before, they never suspected that across the Atlantic (and across the Channel), the web had vastly overtaken their once-pioneering télématique. In such cases, when first-hand information is hard to obtain, users typically rely on others to help them choose. They follow the lead of neighbors or read Consumer Reports. Operationally, for broadband consumers, comparative shopping will generally mean comparing notes with friends and neighbors who have an alternative. There is clear evidence for this behavior from the PC world. PC users, Austan Goolsbee and Peter Klenow have shown, are strongly influenced by their local social network.37 But neighbors will not be much help if the broadband access service available to them depends on which cable provider controls the local monopoly. French customers certainly could not count on their French neighbors to tell them about the Internet. Even trade magazine benchmarking reports may be of limited use because in the short term, until full-fledged third-generation services emerge, the differences between various flavors of broadband Internet access will seem subtle to the residential consumer. Indeed, the average household does not directly experience “open broadband Internet-access” or “dynamic caching” but rather the services delivered over broadband access infrastructure—web pages loading faster or smoother streaming video. But even when delivered over a third-generation infrastructure, these still remain second-generation applications. 37. Goolsbee and Klenow (1999).
The Nature of Cable’s Dominance The combination of cable’s early and continuing lead with high switching costs strongly suggests that cable owners will hold considerable power over the broadband residential access market during the formative stages of the third-generation Internet. The precise form of market power may vary according to local market conditions. However the structure of a local market unfolds, it is unlikely to be fully competitive. In one set of local markets—presumably a significant set given the technical limitations of DSL—cable will be the only broadband option. There, absent regulatory safeguards, consumers are likely to be harmed: they will pay the access fees a cartelized industry can charge and they will suffer from limitations on the kinds of services offered and the degree of experimentation allowed by the single access provider. In other local markets, the typical residence will possess two active wires capable of carrying broadband video services subsidizing high-speed data services. Consumers will then be faced with an asymmetric duopoly, where one player’s network is fully open and the other much less open. They will have a choice between the limited set of cable-blessed access providers allowed to operate over the cable line and the full set of ISPs and local exchange carriers buying access over the telephone line from the local incumbent phone company. Is there reason to think that consumers with the potential for dual access would then be worse off than if ISPs could themselves offer access over either wire? We believe there are three sources of concern. First, cable’s early lead in deployment, coupled with substantial physical and logical switching costs, will give cable operators substantial advantage even in potential dual access local markets. Second, a cable provider’s ability to deny access to certain ISPs changes the dynamics of the market in which ISPs and CLECs face the RBOC. ISPs and CLECs purchase broadband access and collocate equipment at a regulated price, but regulators cannot fully specify the quality and reliability of service they receive, or the incumbent’s responsiveness to ISP requests for assistance and accommodation. A credible threat on the part of ISPs to vote with their feet and desert telephone wire for cable wire would provide significant competitive discipline on the RBOC, enhancing its incentives to provide high-quality and flexible service for ISPs and CLECs. But as long as the cable owner tightly controls access to its wire, all but a few competitive DSL access providers will face a monopolist in their RBOC. In the end, residential customers
would be better served if there were real market competition, with cable and telcos each vying for ISPs’ business. Third, as the FTC complaint has pointed out, AOL’s merger with Time Warner will significantly reduce its incentives to market and promote broadband access to its services over DSL, particularly in Time Warner areas. The same would also be true of Excite@Home but is especially significant because AOL is by far the largest residential provider of Internet services and content. As a result of the merger, AOL has incentives to prefer marketing its high-speed services to users who buy Time Warner’s cable broadband access rather than to DSL customers. The FTC has expressed concern that this could limit DSL rollout, especially in Time Warner’s areas of market dominance, but also nationally. The result would be to further increase cable dominance. The FTC’s order that AOL promote DSL access in Time Warner’s areas to the same level as it promotes them in other regions goes some way toward mitigating these concerns. However, it cannot prevent AOL from scaling down its overall nationwide support of DSL if the company finds it more profitable to focus on promoting cable access in the smaller but more lucrative regions covered by Time Warner. Thus in markets where cable and DSL compete, we should not assume that the cable company would then be forced to open its system fully in order to attract customers. Indeed, by keeping tight control over access, the cable owner would strengthen the ILECs’ bargaining position vis-à-vis ISPs, thereby decreasing competitive pressure on its own integrated ISP and its few “favored” ISPs.38 Limited access cable and open access ILEC would in effect have a common interest in keeping cable restricted, thus creating the basis for implicit collusion that would strengthen their respective positions over nonaffiliated ISPs. By contrast, if both network providers were open, ISPs could then negotiate with the owners of both wires to the home and give their business to the one with the best terms and conditions. Perhaps both network owners would prefer not to cooperate with the ISPs, but if both were open that would be a much harder implicit bargain to strike. So even where cable and DSL are in a position to effectively compete with one another, one can imagine scenarios under which this would not necessarily result in forcing cable to open access to its infrastructure or in fair competitive terms for all ISPs. 38. Indeed, if the cable system also had upstream power in the ISP, as at AOL, it might hurt both rival ISPs and DSL.
The merger between AOL and Time Warner underscores this point and magnifies the concern that competition alone might not be a sufficient source of discipline to yield open access. Despite its considerable premerger clout, AOL had vehemently protested against @Home’s closed access, suggesting that other smaller ISPs may be even more vulnerable. If open access was so critical to AOL as an unaffiliated ISP, it must be equally critical for smaller ISPs that will find themselves unable to merge with a cable operator. The merged AOL–Time Warner combines the world’s largest ISP and America’s second cable operator with 20 million cable households, 85 percent of which are broadband addressable.39 Before the merger clearance, nods to “open access” were made by the two corporations. The U.S. government’s actions on AOL have spelled out a cautious policy for open access that goes beyond the corporate offers, but a decision on conditions for a specific merger falls well short of a general mandate for open access. The consequences for the innovative dynamic of the Internet will be quite different in these three cases: effective monopoly, asymmetric duopoly with one side closed and the other open, and real competition between network owners and among ISPs. In all three cases, however, we have strong suspicions that competition alone would fail to guarantee open access throughout the emerging broadband infrastructure. As the British regulator OFTEL argued, there must be “rules to deal with market power exercise by firms with control over capacity constrained systems.”40 Such capacity constrained systems can create “joint dominance,” a situation with a very limited number of competing suppliers. In that case, OFTEL argued that it might be necessary to apply the same rules that govern individual firms with market power.41
Nurturing Third-Generation Innovation To encourage the successful deployment of third-generation Internet access infrastructure and the promotion of the accompanying wave of innovation, policymakers need to pursue two goals simultaneously. First, they must ensure that sufficient incentives exist for industry to invest in upgrading existing access infrastructures—cable, phone, and wireless—and to pursue the development of new ones. Second, they must shape a governance framework for this access infrastructure that stimulates innovative competition not simply between alternative access infrastructures but also among the service providers (ISPs and others) and the end-users who will take advantage of broadband access to invent and deliver third-generation communication applications. Much of today’s access debate views those two goals as substitutes in a zero-sum game where we must choose between setting up the right incentives to generate infrastructure investment and creating the right framework to foster broad-based competition in services. Following that dichotomous vision, the cable industry warns that universal open access requirements would destroy its incentives to invest in modernizing the cable infrastructure. It further argues that infrastructure competition is a fine substitute for service competition. ISPs conversely claim that absent open access to cable and phone infrastructures, innovation would be smothered by dominant infrastructure owners. In our analysis, by contrast, the paramount policy objective should be to balance both goals because they are equally important to the success of the third-generation Internet. Without incentives to invest in upgrading existing access infrastructures, there will be no platform to explore and leverage innovative service ideas; and without vibrant competition among alternative uses of upgraded infrastructures, we would explore only a limited set of innovative ideas—those of the infrastructure owners. This section analyzes this argument in three steps. First, we argue that open access requirements would not eliminate the cable industry’s incentive to invest in the deployment of third-generation access infrastructure. Second, we show how a closed access infrastructure could channel innovation along the sole interests of the infrastructure owners, using a case study of AT&T’s strategy for Excite@Home. Third, we show how the merger of AOL and Time Warner has changed the dynamics underlying the cable industry’s argument. With the previous section’s assessment of the competitive situation, this lays the groundwork for our concluding section exploring further possible policy approaches to escaping this false trade-off between infrastructure investment and service innovation. 39. McKinsey and Bernstein (2000, p. 12). 40. U.K. Office of Telecommunications (1999, p. 4, para. 13). 41. U.K. Office of Telecommunications (1998, p. 59). It defines an “open state” as a market where “there is universal access control (that is, all consumers can enter into a direct commercial relationship with the suppliers of electronic information delivered over electronic networks) and no scarcity of transmission capacity” (p. 9, para. 2.6).
Sustaining Investment in Third-Generation Access Infrastructure The cable industry argues that if it cannot limit the ISPs operating over cable broadband access, its network upgrades will be too risky and unprofitable to
warrant the large investment needed. The consequence, it is implied, would be to stall the deployment of a digital cable infrastructure, holding back not only the wide diffusion of broadband Internet access and digital television but also the emergence of a nationwide facilities-based competitor for residential telephony. This argument initially resonated strongly with the FCC, whose preliminary findings on broadband access supported the industry. Separately from the broadband access debate, the FCC is quite eager to encourage facilities-based local telephony competition, and AT&T’s suggestion that open access requirements might slow that as well appeared to carry weight. This line of argument was first and most extensively laid out in a December 1998 filing by the National Cable Television Association (NCTA).42 On this issue of investment incentives, our view differs from that of the NCTA in a number of respects. We use the AT&T investment in cable to illustrate our logic. First, the argument omits to point out that a great deal of investment to upgrade cable facilities has already been undertaken within a very protected environment. Indeed, cable networks are franchise monopolies in most markets and they are built, capitalized, and largely upgraded under a monopoly market operation. For example, cable operators deployed more fiber in 1997 than all the RBOCs combined.43 When it acquired TCI, AT&T did not buy companies in competitive markets, but rather bought a set of video distribution monopolies. These monopolies had, arguably, largely made the decision to upgrade their networks to digital video in order to compete with direct broadcast and, perhaps most important, to offer cable-based phone service. Second, these investments, and the large sums AT&T spent to acquire these companies, were predicated on more than simply broadband Internet. In particular, upgraded local cable plant would allow AT&T to save considerable sums in access and interconnection fees, estimated to run as high as $15 billion in 1998, about a third of its domestic wireline revenues.44 Cut those charges in half, and AT&T’s net income doubles. Some estimates suggest that AT&T plans to have extensive and exclusive cable-phone penetration by 2005. In that case, gains from video services, let alone Internet access, are just gravy.45 42. Bruce M. Owen and Gregory L. Rosston, “Cable Modems, Access and Investment Incentives,” filed on behalf of the National Cable Television Association, December 1998. 43. David MacKie-Mason (1999), citing the Telecommunications Industry Association’s “1998 Multimedia Telecommunications Market Review and Forecast,” p. 46. 44. Larry Darby, “Open Access: The AT&T Internet Business Case?” Last Mile Telecom Report, August 12, 1999.
Seen that way, AT&T will obtain Internet access for a small marginal cost, since the modifications required to add Internet capacity to an existing digital cable system are much lower than the estimates of the costs required for upgrade of the digital network itself.46 Third, the cable industry claims that open access regulation would reduce its revenues and its incentives to invest. The FCC initially backed these claims, reporting that “there was near unanimous agreement among the cable and investment panelists that government regulation of the terms and conditions of third-party access to cable systems would cast a cloud over investment.”47 Several analysts, however, including Merrill Lynch and Jupiter Communications, believe on the contrary that open access would be profitable for cable operators48 because it would create additional wholesale revenues. MacKie-Mason’s own detailed economic modeling of this question on behalf of the Open Access Coalition shows in fact that open access would yield substantial revenues for cable operators.49 Such economic models, just like the less quantitative claims of the NCTA economists, are obviously always subject to argument. MacKie-Mason, however, also points to compelling additional evidence in what he calls a “controlled experiment”: the Canadian CRTC’s 1996 announcement that it would require open access did not stop investment, and in fact, the major Canadian cable operators are ahead of their U.S. counterparts in deploying broadband facilities.50 In summary, we believe there is ample reason to strongly question cable’s claim that an open access requirement would stop the deployment of broadband cable access. Moreover, the waves of mergers undertaken by both AT&T and Time Warner demonstrate that the companies have adequately readied themselves for the challenge of rapidly acquiring a broad market share: AT&T through its purchase of smaller cable companies, Time Warner through its merger with AOL’s enormous user base. 45. MacKie-Mason (1999, p. 12). Owen (1999, pp. 120–25) points out that cable is an industry with economics similar to real estate. It takes heavy depreciation to gain tax advantages and rarely runs an accounting profit. Nonetheless, it can sustain major investments with substantial positive cash flows. Its biggest risks include modification of tax laws influencing its depreciation strategies and the possibility that rival technologies will create alternative ways to deliver video-related services. The cable plant is a capital-intensive system with few alternative uses. 46. Providing broadband Internet access via cable modem is estimated by the FCC to cost the cable operator $800–1,000 per subscriber. Federal Communications Commission (1999a, chart 2); Federal Communications Commission (1998, para. 40); DePompa-Reimer (1999). 47. FCC Cable Service Bureau (2000, p. 34). 48. MacKie-Mason (1999, p. 35). 49. MacKie-Mason (1999). 50. MacKie-Mason (1999, p. 27).
We might also add that, if open access requirements were such an obstacle to broadband deployment, it would be appropriate to call for lifting such requirements from the ILECs. But continuing regulatory requirements that they open their networks to all ISPs appear not to stop the telcos from carrying out ambitious DSL deployment. Perhaps they would race to deploy DSL even faster, were it not for these constraints. But in their case, policymakers have apparently decided that deployment speed is not the only value at stake; fostering an open innovation environment is an equally worthwhile goal, even at the cost of a hypothetical deployment slowdown.51
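The access-charge arithmetic invoked earlier in this subsection can also be made explicit. The sketch below is a rough, hypothetical illustration: only the $15 billion estimate and the “about a third of domestic wireline revenues” ratio come from the text; the implied revenue and net income figures are back-of-the-envelope inferences, not AT&T’s reported results.

```python
# Back-of-the-envelope unpacking of the access-charge argument in the text.
# Only the $15 billion in 1998 access and interconnection fees and the
# "about a third of domestic wireline revenues" ratio come from the text;
# the other quantities are inferred for illustration, not reported figures.

access_charges = 15e9                  # estimated 1998 access and interconnection fees
wireline_revenue = access_charges * 3  # implied by the "about a third" ratio
savings = access_charges / 2           # "cut those charges in half"

# "Net income doubles" is consistent with net income roughly equal to the savings.
implied_net_income = savings

print(f"implied domestic wireline revenue: ~${wireline_revenue / 1e9:.0f} billion")
print(f"savings from halving access charges: ~${savings / 1e9:.1f} billion")
print(f"net income consistent with a doubling: ~${implied_net_income / 1e9:.1f} billion")
```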
Fostering Innovation in Third-Generation Applications: The “Closed Access” of AT&T and Excite@Home Closed access control would allow cable owners to pursue only the exploration and deployment of those third-generation services that directly benefit them. This is not to say that no innovation would take place, simply that only the technology trajectories that line up with their interest would be pursued. As a result, the kind of wide-ranging, open innovation and experimentation that has been central to previous generations of Internet explosion would be stifled. We examine here the early experience with the Excite@Home broadband offering as an illustration of the implications of such an incentive structure. While the practices of Excite@Home are perfectly understandable and legal, they create concerns when consumers have no alternative. We separate two categories of consequences: first, the restrictions imposed on end use and second, the upstream implications of closed network architecture for electronic communication and commerce. First, Excite@Home imposes a number of restrictions on its customers’ usage patterns. Of course, any network owner, left unconstrained, will logically attempt to shape network uses along patterns that best serve its own interests, and Excite@Home understandably configured its service to force usage that fits the specific patterns that generate the most profits. Excite@Home’s limits on what its users do are spelled out in the “acceptable use policies” they agree to when they subscribe to the service. The overall Internet usage pattern encouraged by Excite@Home is strongly
51. For a similar argument, see Lemley and Lessig (1999).
aligned with a vision of the third-generation Internet as an extension of a broadcast network: a communication model in which traffic patterns are asymmetrical, in which users download much more than they send, and in which users are passive consumers rather than publishers of multimedia content. The practices involve a number of elements:52
—limits on upstream traffic, which curtail consumers’ ability to experiment with their own uses of the network, including Internet telephony and interactive video teleconferencing;53
—prohibitions on setting up any kind of server;54
—technical biases against, and performance limits on, nonpartner content, which will structure the cyber marketplace and limit experimentation and innovation;
—prohibitions on using Excite@Home for work-related activities, for which customers are expected to purchase the more expensive (and DSL-based) “@Work” service. That means it will be difficult to hook up to corporate LANs from home, which will limit the diffusion of innovative forms of work at home.
In order to enforce these rules, Excite@Home must constantly monitor its customers’ data traffic, raising serious privacy concerns.55 Arguably, these restrictions flow from the limitations of cable technology. They represent, however, Excite@Home’s own approach to dealing with these limitations, encouraging communication patterns that happen to fit well with Excite@Home’s business strategy. It would certainly be interesting to see how innovative nonaffiliated ISPs might explore alternative ways 52. See At Home Corporation, “@Home Acceptable Use Policy” (www.home.com/support/aup/ [February 5, 2001]), “@Home User Guide” (www.home.com/support/ [February 5, 2001]), and “@Home Frequently Asked Questions” (www.home.com/qa.html [February 5, 2001]). 53. Corey Grice, “Excite@Home Speed Caps Draw Fire, Prompt New Plans,” CNET News.com, June 28, 1999 (available at www.news.com/News/Item/0,4,38479,00.html). 54. “Examples of prohibited uses include, but are not limited to, running servers for mail, http, ftp, irc, and dhcp, and multi-user interactive forums” (see www.home.com/support/aup/). 55. See Karen J. Bannan, “Excite@Home: Protection Or Invasion?” Inter@ctive Week, June 21, 1999 (www.zdnet.com/intweek/stories/news/0,4164,2279510,00.html): “One percent of the subscriber base is responsible for 80 percent of the traffic flow. We’re just watching to make sure this group of users that are trying to use a $40 product like a $1,200 T1 [1.5-megabit-per-second] line don’t spoil it for the rest of the users,” said Milo Medin, the company’s chief technology officer. The company not only tracks how much traffic is going and coming into a specific household, but it also tracks where the traffic goes once it leaves the home and what kind of data is being sent and received, he said. Don Hutchinson, senior vice president of the company’s @Work division, said Excite@Home tracks a customer’s data destination in order to pinpoint where it might need to improve connections to its backbone. In addition, the company said, monitoring individual usage helps the company upgrade its services.
around these limitations.56 However, while it will still be possible to receive Internet service from other ISPs (though customers must still pay for the Excite@Home ISP service), alternative service providers will be denied access to key network performance features of the Excite@Home infrastructure, such as dynamic caching and collocation on the Excite@Home network. Closure and usage limits thus preclude experimentation with a range of alternative patterns of use, in a provider-dominated context reminiscent of telephony’s pre-deregulation, pre-Internet era. By contrast, open access to cable would allow dynamic network innovation in the broadband era to unfold with the force, pace, and innovative imagination of the narrowband era. The development logic that has characterized the Internet to date could continue. Second, whoever owns the network, absent competitive or regulatory constraints, will also logically try to extend its infrastructure ownership into control of the services and content it carries. There is clearly a range of strategies available for the provider of a large cable modem network to bias Internet access to the advantage of some content providers over others. Though some may be intelligent ways to speed up the Internet experience for customers (dynamic caching is a good example), these practices could easily become abuses of dominant position if applied differentially to different service and content providers. Indeed, if a single ISP has sole access to these strategies, it can then at its discretion, and at its discretion alone, systematically shape what content and services get to the end-users under optimal conditions. Further, it could shape the very terms of innovation on the Internet, deciding who gets to experiment and who can capture the resulting benefits. Open access, by contrast, would ensure that other ISPs could use the cable infrastructure to pursue similar approaches, where appropriate, and would foster healthy competition in network applications, programming, and architecture. In the present case, AT&T-@Home strives to leverage its cable access monopoly into e-markets that ride on top of cable access, well beyond the bundling of Internet service provision with other AT&T services. The @Home 1998 annual report57 was very clear on these strategic practices and included details of how @Home offered speedier service to Internet content providers that agreed to become “content partners” and share their 56. As a comparison, the open DSL market is starting to spur innovative ways to exploit DSL technical characteristics—for example, the provision of multiple voice lines over a single DSL line. 57. The 1999 annual report is much more vague about the specifics of these practices. There are, however, no indications that they have been abandoned.
revenue stream.58 Under the sole control of a broadband access monopoly, the potential for serious abuse is evident. Consider in particular: The @Media group offers a series of technologies to assist advertisers and content providers in delivering compelling multimedia advertising and premium services, including replication and co-location. Replication enables our content partners to place copies of their content and applications locally on the @Home broadband network, thereby reducing the possibility of Internet bottlenecks at the interconnect points. Co-location allows content providers to co-locate their content servers directly on the @Home broadband network. Content providers can then serve their content to @Home subscribers without traversing the congested Internet.59 Further, the report noted: We have established relationships with certain of our interactive shopping and gaming partners whereby we participate in the revenues or profits for certain transactions on the @Home portal. We also allow certain of our content partners to sponsor certain content channels for a fee.60 These quotes describe two strategies aimed at shaping the architecture of the cyber marketplace. The first is “collocation,” the second is “replication.” Both functioned to allow Excite@Home to privilege partners and handicap competitors—they differ only slightly in their implementation. Excite@Home developed partnerships with noncompeting firms in each of several content areas (interactive shopping, gaming, digital audio, digital photography, and search services) and collected “fees relating to content partnering arrangement.”61 In keeping with its cable origins, Excite@Home saw these practices as “programming,” and it viewed itself as “programming the Internet.”62 Excite@Home also offered collocation service to bring better performance to Excite@Home customers (merchants as
58. At Home Corporation (1999). 59. At Home Corporation (1999, p. 8). 60. At Home Corporation (1999, p. 9). 61. At Home Corporation (1999, p. 9). 62. At Home Corporation (1999, p. 8).
well as end-users), but the term “collocation” is not meant in the nondiscriminatory sense that those familiar with telecommunications are wont to use. Rather, each partnership appeared to be exclusive to a particular area of content. A collocated partner has faster access to Excite@Home consumers because of a presence on the same network. In 1999 Excite@Home had already collocated at least one partner (SegaSoft) and was planning to collocate others. Replication is manipulation of the caching system to favor partners. It essentially speeds requests for certain content by preloading it at sites that are close and well-connected to subscribers. As of 1999, Excite@Home replicated news feeds from CNN and Bloomberg. Excite@Home then promoted these replicated and collocated partners on its portal and with its “wizards,” making competitors harder to get to. The result was the creation of a cyber marketplace that systematically favored the providers of content, services, or transactions who have a privileged financial relationship with the monopoly owner of the infrastructure that supports that cyber marketplace. If customers had a real choice of broadband access infrastructure, this would matter less, but when they became customers of Excite@Home’s access infrastructure, they automatically and unknowingly received access to a cyber marketplace biased to favor Excite@Home’s financial partners. As of 1999, Excite@Home had such agreements with partners including Amazon.com, BuyDirect.com, AutoConnect, N2K, PC Connection, QVC, Realtor.com, Reel.Com, Travelocity, Bloomberg Radio, CNET Radio, Net Radio, SportsLine, and Spinner.com.63 In addition, it certainly is possible to manipulate the caching architecture in many other ways to favor partners. Excite@Home had the incentive, given its relationship with content providers, to further use the caching system to actually slow requests to competitors’ “programming” rather than merely speeding up access to its own brands.64 Excite@Home’s annual report also noted that “local caching servers can compile far more comprehensive usage data than is normally attainable on the Internet.”65 If 63. See the amicus curiae brief of Excite@Home, Re: AT&T v. Portland (August 16, 1999), especially notes 17, 18, 19, and 20 (techlawjournal.com/courts/portland/19990816exc.htm). 64. In their joint letter to FCC chairman Kennard, dated July 29, 1999 (tap.epn.org/cme/ kennard.html), the Consumer Federation of America, Consumers Union, Media Access Project, and the Center for Media Education have documented a variety of such possible manipulations. The technical basis for their claims is laid out in “Controlling Your Network: A Must for Cable Operators,” Cisco White Paper (1999). 65. At Home Corporation (1999, p. 10).
this data were shared with partners, this would create a further barrier to competition from nonpartner content providers. Not only could an Excite@Home partner know detailed information about Excite@Home subscribers using its service, it would also be possible to know the same detailed information about who was using a competitor’s service or to restrict access to a competitor’s service while substituting its own. In summary, Excite@Home proposed in its own materials to structure a cyber marketplace that would steer Excite@Home customers, unknowingly, toward merchants who partner with Excite@Home. Excite@Home was able to structure the cyber marketplace in various ways—for example, through the advantageous positioning and access of partners or through the organization of its site around devices such as “How Do I” wizards.66 Excite@Home’s own reports explained how it would provide superior-quality performance to partnering merchants on its network. If you were a merchant, either you were on Excite@Home’s service network or the majority of broadband customers (those that use AT&T@Home cable service) would not have been able to access your site as you intended. Opponents of open access requirements argue that market forces will naturally bring cable operators to open their networks because they will want to maximize the amount and diversity of content available to their subscribers. Jim Speta explains that, while telecommunications networks derive value from connecting people to each other and thrive on direct network externalities (the more connections, the greater the value of each connection), cable networks derive value from bringing content to people and benefit from indirect network externalities (the more content, the greater the value of each connection). Therefore, he argues, “a broadband access provider has the incentive not to restrict the market for information services and the availability of those services to its subscribers even if it has a monopoly in the provision of broadband access.”67 This view overlooks strategies such as those just documented in Excite@Home’s case. Indeed, as Excite@Home argued to its investors in its annual report, a cable operator clearly benefits from using its control over network architecture to design a biased cyber marketplace, favoring affiliated content and network services, especially if it has a monopoly in the provision of broadband access. In this respect, Excite@Home was trying to act very much like Microsoft,
66. @Home describes the “wizards” at (www.home.com/howdoi.html). 67. Speta (2000, p. 84).
using its control of the operating system’s architecture to favor some applications over others—with similar anticompetitive implications. These capacities to structure the cyber marketplace are of startling significance, especially when customers are unaware of the marketplace’s structured biases. They are particularly important if a single ISP has a local monopoly and of broad significance if a single ISP holds stakes in enough local monopolies or dominant positions locally to influence the very structure of the cyber marketplace. And, it should be noted, even allowing the choice of another ISP for no additional fee (for example, if customers could choose to substitute EarthLink for Excite@Home as the default ISP over their broadband cable access) would not correct the competitive problems created by a broadband access architecture that rewarded Excite@Home with performance advantages over all rivals. There are at least two reasons. First, electronic commerce is certainly one of the—if not the only—killer applications of the broadband era. The unfolding of e-commerce will drive innovation throughout all segments and elements of a competitive network. Yet suddenly the competition across segments and elements that has driven the evolution will be squeezed into and captured by a vertical structure with a single buyer, the ISP. Second, business-to-business e-commerce has dominated until recently. Broadband facilitates the full-fledged emergence of retail e-commerce. Closed access would, as a matter of policy, permit Excite@Home to structure the cyber marketplace for a significant portion of the American consumer population. With control of the broadband service provision, Excite@Home would become a truly dominant influence in American retail. Even if Excite@Home’s control of the broadband market were more limited, it would nonetheless structure the cyber marketplace used by a substantial number of American consumers. The biases would not be immediately obvious, and they would not necessarily be brought to the attention of the consumer. The competitive possibilities of e-commerce, ease of entry, and experimentation producing new business strategies and new business organization would be wiped away. Broad gains to the American economy would be lost. In the absence of a policy requiring open access, the suppliers of network components and services, the merchants seeking to reach consumers through the cyber marketplace, and the users of the network will confront AT&T/@Home’s market power. The Internet and e-commerce will then evolve as the result of strategy choices made by AT&T and @Home alone, not as a result of market competition. Is this the “digital economy” we really want?
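The replication and collocation strategies described in this section can be summarized in a few lines of code. The sketch below is hypothetical: it is not Excite@Home’s actual software, and the partner names, cache policy, and latency labels are invented solely to illustrate how an access provider’s edge cache can quietly privilege partner content over content fetched across the public Internet.

```python
# Hypothetical sketch of a biased edge cache; not Excite@Home's actual software.
# The partner list, function names, and latency labels are invented for illustration.

PARTNER_SITES = {"partner-shopping.example", "partner-news.example"}

edge_cache: dict[str, bytes] = {}  # content replicated onto servers near subscribers


def replicate_partner_content(site: str, content: bytes) -> None:
    """Pre-load ("replicate") a partner's pages onto the edge cache ahead of demand."""
    if site in PARTNER_SITES:
        edge_cache[site] = content


def fetch_over_public_internet(site: str) -> bytes:
    """Stand-in for an ordinary fetch across congested public interconnect points."""
    return f"content of {site}".encode()


def serve_request(site: str) -> tuple[bytes, str]:
    """Serve a subscriber request; partner and nonpartner content take different paths."""
    if site in edge_cache:
        return edge_cache[site], "served from local replica (fast)"
    # Nonpartner content crosses the public Internet on every cache miss, and the
    # network owner could further de-prioritize this path if it chose to.
    return fetch_over_public_internet(site), "fetched across the public Internet (slower)"


replicate_partner_content("partner-shopping.example", b"<html>storefront</html>")
print(serve_request("partner-shopping.example")[1])
print(serve_request("independent-merchant.example")[1])
```

The point of the sketch is the asymmetry of the two code paths: the subscriber simply experiences partner sites as “faster,” with no indication that the difference reflects a commercial arrangement rather than a property of the sites themselves.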
The AOL–Time Warner Merger: A Halfway House? The FTC and FCC decisions to approve the merger of AOL and Time Warner were concluded by January 12, 2001. Together they marked a new chapter in access policy for the cable industry. The two agencies approved the merger subject to a number of conditions that, as it stands, will provide the regulatory framework for broadband access for the first phase of nationwide broadband deployment. As regards enabling ISP access to Time Warner cable, the FTC adopted AOL’s offer to provide the ISP Earthlink (the largest ISP in the United States after AOL) effective access over Time Warner cables before AOL broadband can be released over the network. In addition, within ninety days of AOL’s service commencing, two other ISPs must be provided with effective access within major urban areas. Other ISPs are permitted on the Time Warner network provided this causes no further technical problems. The FTC has appointed a permanent monitor to oversee this transition and access issues. While this settlement clearly prevents AOL from securing sole access to Time Warner cable (mirroring Excite@Home’s situation), it is a compromise built on several policy gambles. In a first gamble, the decision still limits the number of ISPs likely to operate on the Time Warner network. While EarthLink has been given a hand up, other ISPs will have to wait three months to secure access rights, by which point AOL’s powerful marketing machine could have siphoned up much of the potential market. The FTC has essentially given the two fastest ISPs a head start. The promise of later additions of two other ISPs is further bolstered by the equivalent of a “most favored nation” (MFN) clause that grants any other ISP the right to get the same terms as AOL. However, the FTC does recognize the possibility of technical constraints on access. The self-regulatory nuances of determining “technical feasibility” of future ISPs beyond the first four operating on the Time Warner network may mean that access becomes even further constrained. The FTC remedy thus rests on hopes that the phased introduction of four other ISPs on the Time Warner cable network, along with a very general MFN policy, will suffice to curb AOL–Time Warner’s exercise of market power. The FCC bolstered the FTC safeguards by further insisting that rival ISPs control the users’ “first screen” and billing arrangements. It also warned AOL not to engage in technical discrimination against other ISPs. However, these rules do not match the detailed FCC guidelines that open up local phone networks for DSL competitors.
The resulting “limited access” environment set for the AOL–Time Warner cable network falls short of the “open access” model set for the telephone network. With respect to our concern about the innovation dynamics this sets for the third-generation Internet, this arrangement misses the mark in two important respects. First, it entrusts the cable owner with the selection of the lucky few ISPs that will be allowed alongside AOL. AOL–Time Warner may, for example, be inclined to favor ISPs that share its vision of where the third-generation Internet should be heading (perhaps a network that resembles AOL’s Interactive TV project, probably not one that encourages Gnutella-inspired swapping of Time Warner media content) and to think more kindly of the ISPs whose strategy does not directly challenge AOL’s. Second, even assuming that AOL–Time Warner’s selection of ISPs will not be biased, the “limited access” policy vision assumes that ISPs constitute an adequate proxy for other network users, that ISPs will explore the full range of possible network services and applications. This is far from obvious. Consider the contrasting situation in the telephone network. There, open access does not simply mean that nonaffiliated ISPs can get access on equal terms with the telecom-affiliated ISP, but that any network user can get cost-based access to unbundled network elements, thus creating conditions for much broader experimentation. Indeed, the telephone network supports not only ISPs who offer alternative ways to take advantage of telecom-developed DSL, but allows the implementation of other flavors of DSL and lets providers offer various service level agreements and quality of service guarantees. It allows corporate users to tap into the phone network to extend their local networks and make it possible for their employees to get secure intranet access from home. The current architecture of the cable network does not support that level of unbundling. Neither did the architecture of the telephone network until the 1980s, when the FCC progressively established unbundling requirements. In another gamble, the merger decisions propose regulations that are limited to the partners in this particular merger, Time Warner and AOL. No parallel conditions have been imposed for AT&T’s cable network, where Excite@Home retains exclusivity. By neglecting to draw up a comprehensive access policy, the FTC and FCC have thus passed the buck. They have declined to address the more generic problem of ensuring open access and end-to-end innovation over evolving networks, preferring an incremental approach. Missing this opportunity to outline broader goals, regulators have created uncertainty as to when and how exactly they might
rule in the future. Clearly, not every “open access” issue will result in a merger review. In the short term, though, the FCC and FTC decisions are a significant step forward. They will settle at least one critical question: whether “open access,” however limited, is technically feasible over the cable networks. The stalemate has finally been cleared and an opportunity to move on has been created. Now that they have ruled, the FCC and FTC will need to monitor the performance of AOL–Time Warner vis-à-vis AT&T closely. The limitations imposed on the former may become handicaps in its battle against AT&T, whose bundled packages and vertical integration could provide a competitive edge. This may push regulators eventually to explore a more comprehensive cable broadband policy for all industry participants, not just those engaging in controversial mergers. Conversely, AOL and Time Warner may discover that open access serves them well, fostering greater traffic and innovation on their network. Perhaps AT&T might then be inspired to follow suit and open its own cable network. At the least, the creation of a limited open access environment on one of the cable networks will open a window to observe the unfolding of competitive dynamics which, even if limited, will inform the next stages of policy debate.
Conclusion: Dealing with Joint Dominance Broadband third-generation services for households represent a new market. For the very large percentage of buildings unlikely to obtain fiber optic access in the next fifteen years, there is a fundamental question about how competitive the market for supplying broadband access to these households will be. And, of equal import, does limited competition produce other adverse effects, such as the ability to restrict innovation in third-generation services in ways that harm consumers in the long term? This is a technologically and economically volatile market. We are very sympathetic to fears that regulators will find it difficult to chart rules with more benefits than costs in such a market. However, we are equally concerned that technological upheaval is being equated with the emergence of effective market competition in a timely manner. At best, for the next several years it appears that most homes will be served by a monopoly or a duopoly for broadband services. Joint dominance in broadband access, even monopoly power over broadband access in many cases, raises serious threats to the public interest. If the joint
dominance continues, the resulting vertical integration and closed access defeat the fundamental innovation dynamics that have made the Internet successful. Open standards, open access, a clear set of competitive principles and prohibitions against leveraging access control into control of service architecture, cyber marketplace, communication patterns, and content will all wane. Vertical disintegration has traditionally led to real competition and innovation in each segment as well as competition and innovation in alternative ways to package combinations of services. The policy problem arises at the moment at which the cable television “broadcast” system, built up with local monopolies and successfully built out because of the appeal of cable TV offerings, is being transformed into a broadband digital system and integrated into the national communications network. The current debate stems from the collision of the policy legacy of cable’s monopoly and restricted access origins with the evolving open access thrust of telecommunication policy that has enabled the successful explosion of competition throughout the telecom network segments, ushering in user-driven innovation and the Internet revolution. Reversing the set of policy innovations that have led to broad American communications leadership would be unwise, at best. But what can be done? The most important point is to recognize that the situation is ripe for an explicit set of policy decisions, not a wait-and-see approach. By including access requirements in their merger rulings, the FCC and FTC have finally recognized this need, even if they stopped short of calling for rules that would apply to AOL–Time Warner’s competitors as well. The question as to the right prescription is not one that we wish to resolve here. But we would offer some observations about how to proceed. To begin, in the access debate some believe that the main policy issue is that consumers should not have to pay twice for use of an ISP other than that bundled with cable service. This emphasis on nondiscriminatory access to the broadband cable network for all ISPs, they suggest, requires only a light regulatory touch. But, however light, the touch may be essential. The FCC and FTC could have, and probably should have, written the requirement into decisions on the AT&T–Media One merger, as they indirectly did on the AOL–Time Warner merger. Other countries would have to find appropriate policy instruments, as we discuss shortly. Just as important, a nondiscrimination rule in itself would not solve the underlying problems that we have described. For example, suppose that the rule simply said that nonaffiliated ISPs will pay the same as Excite@Home for access to the AT&T cable broadband network. This would not prevent
The critical issue is the creation of an open architecture for broadband services that supports widespread innovation. Policymakers should aim to stimulate, or at least not to stifle, innovative designs and uses of the network. But the vertical arrangement between the AT&T–TCI broadband network and Excite@Home, as well as AOL–Time Warner's integration, may defeat this goal. The infrastructure owner will have strong incentives to configure its network to give superior performance to the preferred ISP and superior service to that ISP's favored partners. Nothing will prevent such bias in AT&T's case, and we will have to see how effectively FTC and FCC monitoring prevents abuses in AOL–Time Warner's case. As we have stressed throughout this chapter, the problem is not just the adverse effect on competition in the markets for Internet service provision. The closed architecture of the underlying broadband network will also restrict access to the "network performance features" that are so vital to innovation. In its decision on the AT&T purchase of TCI, the FCC rightly expressed concerns about some matters of network architecture but settled for rather toothless promises by AT&T in its filings to the commission.69

The right question is whether there are policy options that are lighter-handed than the regulatory regime for DSL imposed on the ILECs and yet responsive to the issues posed by broadband cable networks. We have noted how the combined approach of the FTC and the FCC opened the way to a limited open access policy that tried to create enough competition in the ISP market to remove AOL–Time Warner's incentive to exercise market power. It combined a mandate for a certain number of competitors, a loose MFN rule for others, and some rules guiding other network capabilities.

68. In effect, it is like the first U.S. Department of Justice consent decree with Microsoft, whereby Microsoft ended its licensing agreement provision that charged OEMs for Windows on every system that they shipped (even if the OEM had installed Unix or OS/2 on the computer instead of Windows).
69. Federal Communications Commission (1999b).
In addition, the FCC laid down a guideline for the important IM market. It declared that AOL had market power in regard to the NPD but said that it would not regulate this market unless AOL tried to launch broadband IM services, such as videoconferencing, that uniquely tapped the capabilities created by the merger. Some have claimed that the FCC action on IM was either a needless intrusion (a view reflected in the dissent by current FCC Chairman Powell)70 or ineffective because it did not touch the current IM market. The FCC staff clearly thought that broadband applications of IM were likely, so the trigger requiring AOL to open access to its NPD would be tripped. From our viewpoint the FCC was, albeit tentatively, tackling precisely the question of when network architecture poses a serious risk to flourishing competition. This was precisely the conclusion reached by Britain's OFTEL and the European Commission: not to spell out the technical characteristics of an unbundled network architecture for the future, but to lay down a process for considering situations in which architectural features could create a problem.

Regulation is costly, but so is neglect. OFTEL and the European Commission have rightly sought to create a more technology-neutral view of services and regulations. But at times they seem excessively focused on pricing, and on the ability of those with market power to raise prices to consumers, at the expense of addressing the manipulation of the network's technical architecture in ways that slow innovation and restrict competition. Both authorities have nonetheless recognized that such issues, if significant for competition, are of concern. For example, the European Commission has extended its analysis of digital television to the question of the application programming interfaces (APIs) that are crucial to interactive services. It has noted the possibility that regulators may need to impose "compulsory licensing and publication" of the interfaces and require "functional interoperability."71 This is analogous to the issues raised in this chapter about broadband services. The European Commission has also suggested that "it would be appropriate for Member States to place an 'obligation to negotiate access' on a cable TV operator with significant market power for delivery of broadband services with the possibility of NRA [National Regulatory Authority] intervention if commercial negotiation fails."72
70. “Statement of Commissioner Michael K. Powell, Concurring in Part and Dissenting in Part,” February 11, 2001 (re: “Memorandum Opinion and Order, Applications for Consent to the Transfer of Control of Licenses by Time Warner Inc. and America Online, Inc., Transferors, to AOL Time Warner, Inc., Transferee,” FCC CS Docket 00-30). 71. European Commission (1999, para. 4.2.5).
The FCC has emphasized its hope that technological innovation may resolve competition issues about broadband access faster than any regulatory intervention could, thus avoiding the inevitable downsides of regulation. Perhaps. But in its anxiety not to stifle investment in cable television upgrades, the FCC may proceed too cautiously. It needs to examine the competitive implications of the architecture of broadband systems as carefully as it worked out the logic of open network architectures in the telephone networks. Even a biennial, detailed public inquiry into these issues might deter some forms of anticompetitive behavior by sending a powerful signal that the government is aware of the potential risks and might intervene. The FCC also ought to consider laying out a policy statement on technology-neutral principles for assessing network architecture and its impact on market power. Such a statement would be akin to the Justice Department's antitrust guidelines. Wielded by a chairman intent on modernizing regulation for the next generation of Internet services, it could guide analysis of the deluge of day-to-day cases that the FCC considers, while leaving the details, including the ability to forbear from acting, to the individual case, without creating sweeping new rules.

In closing, it would be highly desirable if the United States again established itself as the international leader for broadband Internet policy. Policy silence in the United States forfeits America's significant global advantage in shaping policy for the next generation of global Internet services. The key to future success is acting with urgency now. Problems of how to ensure competitive network infrastructure for broadband access exist everywhere in the world, and in an increasingly competitive world market, neglecting to ensure domestic competition imposes an unnecessary burden. Moreover, the global economy will not itself "wait and see"; action must be taken now. The trajectory of broadband will be set today, not by future FCC reregulation. The FCC's hesitation leaves a leadership vacuum in the global policy arena that others will surely fill, perhaps with results that the United States may not like.
72. European Commission (1999, para. 4.24).
References

At Home Corporation. 1999. "1998 Annual Report." February 29.
Bar, François, and Michael Borrus. 1997. "The Path Not Yet Taken: User-Driven Innovation and U.S. Telecommunications Policy." University of California, Berkeley Roundtable on the International Economy.
Bar, François, and others. 2000. "Access and Innovation Policy for the Third Generation Internet." Telecommunications Policy 24 (July–August): 489–518.
DePompa-Reimer, Barbara. 1999. "Cable Modems, Wireless Networks Slow to Spark Interest." Internet Week 34 (1): 34.
European Commission. 1999. "Towards a New Framework for Electronic Communications Infrastructure and Associated Services: The 1999 Communications Review." COM 539. Brussels.
Federal Communications Commission (FCC). 1998. "Annual Assessment of the Status of Competition in Markets for the Delivery of Video Programming." CS Docket 98-102 (December 23).
———. 1999a. "Deployment of Advanced Telecommunications Capability to All Americans in a Reasonable and Timely Fashion, and Possible Steps to Accelerate Such Deployment Pursuant to Section 706 of the Telecommunications Act of 1996." CC Docket 98-146 (February 2).
———. 1999b. "Memorandum Opinion and Order Approving the AT&T–TCI Merger." 99-24 (February 18).
FCC Cable Services Bureau. 2000. "Broadband Today."
Galperin, Hernan, and François Bar. 1999. "Reforming TV Regulation for the Digital Era: An International/Cross-Industry Perspective." Paper prepared for the Twenty-Eighth Telecommunication Policy Research Conference. Alexandria, Va., September 25–27.
Goolsbee, Austan, and Peter Klenow. 1999. "Evidence on Learning and Network Externalities in the Diffusion of Home Computers." Working paper. University of Chicago (July).
Hart, Jeffrey, François Bar, and Robert Reed. 1992. "The Building of the Internet: Implications for the Future of Broadband Networks." Telecommunications Policy (November).
Hecht, J. 2000. "Fiber to the Home." Technology Review (March–April).
Kuptz, Jerome. 2000. "The Peer-to-Peer Network Explosion." Wired (October).
Lemley, Mark, and Lawrence Lessig. 1999. "Written Ex Parte in the Matter of the Application for Consent to the Transfer of Control of Licenses, MediaOne Group, Inc. to AT&T Corp." FCC CS Docket 99-251.
MacKie-Mason, Jeffrey K. 1999. "Investment in Cable Broadband Infrastructure: Open Access Is Not an Obstacle." University of Michigan.
McKinsey and Co. and Sanford C. Bernstein and Co. 2000. "Broadband." (January).
Owen, Bruce. 1999. The Internet Challenge to Television. Harvard University Press.
Oxman, Jason. 1999. "The FCC and the Unregulation of the Internet." OPP Working Paper 31. Federal Communications Commission (July).
Saltzer, Jerome H., David P. Reed, and David D. Clark. 1981. "End-to-End Arguments in System Design." Paper prepared for the Second International Conference on Distributed Computing Systems. April.
Speta, J. 2000. "Handicapping the Race for the Last Mile? A Critique of Open Access Rules for Broadband Platforms." Yale Journal on Regulation 17 (1): 39–91.
U.K. Office of Telecommunications (OFTEL). 1998. "Beyond the Telephone, the Television, and the PC—III." London (March).
———. 1999. "OFTEL's Response to the UK Green Paper—Regulating Communications: Approaching Convergence in the Information Age." London (January).
Contributors
David Bach, University of California, Berkeley
François Bar, Stanford University
Aleksander Berentsen, University of Basel
Enrique Canessa, University of Michigan Business School
Eric K. Clemons, Wharton School, University of Pennsylvania
Stephen S. Cohen, University of California, Berkeley
Peter Cowhey, University of California, San Diego
David C. Croson, Wharton School, University of Pennsylvania
James Curry, El Colegio de la Frontera Norte, Mexico
J. Bradford DeLong, University of California, Berkeley
Jennifer Frances, University of Cambridge
Jeffrey L. Funk, Kobe University
Elizabeth Garnsey, University of Cambridge
Jan Hammond, Harvard Business School
John Hawkins, Bank for International Settlements
Susan Helper, Case-Western Reserve University
Lorin M. Hitt, Wharton School, University of Pennsylvania
Martin Kenney, University of California, Davis
Jean Kinsey, University of Minnesota
Michael J. Kleeman, University of California, Berkeley, and Catenas, Inc.
Stefan Klein, University of Muenster
Kristin Kohler, Harvard Business School
Chien H. Leachman, University of California, Berkeley
Robert C. Leachman, University of California, Berkeley
Claudia Loebbecke, University of Cologne
Peter Lotz, Copenhagen Business School
John Paul MacDuffie, Wharton School, University of Pennsylvania
Will Mitchell, Fuqua School of Business, Duke University
Anuradha Nagarajan, University of Michigan Business School
Jonathan Potter, Digital Media Association
Setsuya Sato, Bank for International Settlements
Steven Weber, University of California, Berkeley
C. C. White III, School of Industrial and Operations Engineering, University of Michigan
John Zysman, University of California, Berkeley
Index
A.C. Nielson Market Research, 7 Access asymmetries, 45–46 Accompany, 62, 63 Acer, 157, 219 Agriculture. See Farmers; Food industry Ahold USA, 258 Air defense system, 6–7 Air France, 122 Air travel industry, 62–63, 112–27; airline alliances, 43, 122–26; antitrust issues, 47, 48, 123; auctions of tickets, 116–17, 120, 123, 125; characteristics of tickets, 115; computer reservation systems, 7, 34, 47, 48, 113, 116, 118, 119; consumer behavior, 114–15; cybermediaries, 120–22; demand collection systems, 121; destination management organizations, 114; disintermediation, 93, 97–100, 116–17; global distribution systems, 113, 116, 118, 119; Internet-based business, 115–16; online travel supermarkets,
119, 124, 125; product features, 114; structure with web distribution channels, 124; trends, 116–23; webbased intermediaries, 118–25 Albertsons, 269 Always-on Internet access, 363, 400, 401, 436, 437, 443 Amazon, 106, 462 AMD, 221, 226 American Airlines, 7, 119, 122 American Express, 96, 97, 120 American Society of Composers, Authors and Publishers (ASCAP), 129, 133, 135 American Textile Manufacturers Institute, 325 America Online, 119, 159, 174, 256, 365, 369, 376; Time Warner merger, 436, 442, 449–50, 453, 458, 465–67, 469 Anam, 219–20 Antitrust issues, 47–48, 57; airline alliances, 123; food industry, 276; music industry, 134. See also Monopoly
Apache, 407, 426, 428, 430 Apache Software Foundation, 430 Apparel. See Textile and apparel industries Apparelbids.com, 325 Apple, 144 Aptiva, 168 Ariba, 195 Arnold Industries, 338–39 Arnold Logistics, 339 ARPANET, 11 ASDA, 283, 303 Asia and e-finance, 69 Asia Pacific region’s semiconductor foundries, 217, 220, 226, 227 AST Research/Samsung, 166 AT&T, 18, 108, 243–44, 365; bundled services, 449; DoCoMo alliance, 366; investment considerations, 456–58; ISP integration, 442, 459; TCI purchase, 469 AT&T@Home, 366, 463, 464 Auctions: airline tickets, 116–17, 120, 123, 125; automobile parts, 196, 203 Australia, 69 AutoConnect, 462 Automobile industry, 145–47, 178–213; add-on products and services, 211; auctions of commodity parts, 196, 203; automakers, 201–03, 211; automating purchasing steps, 196; B2B relationships, 194–98, 207; B2C applications, 191–94, 205, 207; B2V applications, 198–201; build-to-order cars, 180, 182–91, 209; buying services, 192; collaborative mechanisms, 197–98, 203; consortium for procurement, 43, 56–57, 181, 194–98, 211; consumer choice, 146, 182–83, 208–09, 212; design of parts, 140–41, 146; disintermediation of, 183–84, 193; embedded microprocessors and, 9–10; employees
and unions, 206–08; Internet’s effect, 137, 191–98, 201–10; Japanese challenge, 19–20; modularity and product design, 184–87, 210, 211; navigation systems, 200; “online car,” 198–201; open architecture for e-procurement, 195–96; outsourcing of modules, 203; repair work, 207; retailers, 192, 205–06; seats, 203–04; supplier relationships, 194–98, 203–05, 210. See also OEMs in auto industry Automobile insurance, 109 AutoWeb, 192 Avnet, 168 Bandwidth, 21, 358–59; broadband services, 435–73; i-mode limitations, 368; next-generation network, 399. See also Next-generation access Banking, 67, 72–73; barriers to entry, 76; central bank issues, 88–89. See also E-finance Bar codes, 263, 314. See also Point of sale systems Barnes and Noble, 106 Basel Committee on Banking Supervision, 89 Battenberg, J. T., 180 Bekins Van Lines, 344 Belleflamme, Paul, 267 Bell Labs, 406 Beltone, 232 Berkeley Software Distribution (BSD) model, 426 Biztravel.com, 118, 119 Bloomberg, 462 Bloomingdale’s, 317 Bluetooth, 387 BodyMetrics Ltd, 318 Boo.com, 339 Bricklin, Dan, 8 Britain. See United Kingdom
British Airways, 122 Broadcast Music, Inc. (BMI), 130, 133, 135 Brokerage, 60, 66–67, 75–78; electronic communication networks, 75; pricing strategies, 108; retail financial services, 92–111; straight-through processing, 75. See also Finance industry Brooks, Frederick, 425 Brooks Brothers, 323 Brooks’s Law, 425 Build-to-order systems: automobiles, 180, 182–91, 209; PC, 139, 144, 158, 168, 182, 184 Bundling, 158, 449, 450 Business-to-business (B2B) commerce, 16, 43, 47–48; automobile industry, 195–98; food industry, 242, 260–68, 274; hearing aid industry, 234; PC industry, 171; textile and apparel industry, 242, 247, 324–31 Business-to-consumer (B2C) commerce: automobile industry, 191–94, 205, 207; food industry, 242, 250, 256–57, 259–60, 275, 304–05, 307; textile and apparel industry, 242, 246–47, 315, 320–24 Business-to-vehicle (B2V) applications, 198–201 BuyDirect, 462 BuyTextiles.com, 325 Cable, 435–73; broadband access via, 173, 444–47; collocated partnerships, 462–63; costs of switching to DSL, 447–48; dominance over DSL, 444, 446, 452–54; European Commission regulation, 470–71; investment considerations, 455–58; ownership and open access, 436, 441, 442, 457; speed, 12 Cablevision Systems Corporation, 174 Caching, 395, 462
Caliber Systems, 343 Capital One, 105, 107–08, 110 Cargill, 261 Carrefour, 324 CarsDirect, 193, 206 Cell phones, 44 CFMovesyou.com, 345 Chandler, Alfred, 247 Chang, Morris, 218–19 “Channel stuffing,” 160 Chase, 108 Chemconnect, 43 Chichilnisky, Graciela, 15 Chrysler. See DaimlerChrysler CHS, 168, 169 c-HTML, 362, 371 Cisco, 244, 360, 394, 395, 418–19, 438 CitiBank, 108 CLECs. See Competitive local exchange carriers CNET, 165, 462 CNF Transportation, 343 CNPS. See Cross-National Production Systems Code-forking, 421–22, 429 Coles Myer, 324 Collaborative Planning, Forecasting, and Replenishment (CPFR), 267–68, 270, 274 Commercialization of Internet, 393–95 Communications infrastructure, 32–33 Compaq: competition in PC industry, 144, 145, 152, 158; distribution partners, 169; history of PC sales and, 154–55, 157, 160; Internet appliances, 174; Internet’s effect, 166–68 Competitive local exchange carriers (CLECs), 446, 452 Competitive Semiconductor Manufacturing (CSM) Program (U. of Calif., Berkeley), 216, 222, 223, 226 CompuServe, 159
Consolidated Freightways, 345 Consortia-controlled marketplaces, 43, 56–57. See also Covisint Consumer package goods and electronic retailing, 100–02 Consumers: air travel industry, 114–15; automobile industry, 146, 182–83, 208–09, 212; food industry, 256–60, 289, 299; hearing aid industry, 233–34, 236. See also Business-to-consumer commerce Continental Airlines, 122 Copyleft, 415 Covisint, 43, 57, 181, 194–98, 211 CPFR. See Collaborative Planning, Forecasting, and Replenishment Credit cards, 67–68; attracting profitable customers, 109, 110; pricing strategies of, 108 Cross-border finance transactions, 79 Cross-National Production Systems (CNPS), 22 CRST International, 343 Customer loyalty and retail financial services, 107 Customer-product-retailer relationship, 139–41 Cypress Semiconductor, 220 Daimler-Benz, 203 DaimlerChrysler, 194, 202 Danavox, 232 Data-mining tools, 60–61, 111, 301 Data networks, 359 Data sharing. See Electronic data interchange Dattels, Timothy, 326 David, Paul, 16 Defense Calculator, 6 Defense Department, Advanced Research and Projects Administration (ARPA), 11
Deliverables, 34–36 Dell Computer: build-to-order business model, 137, 138, 139, 144–45, 182, 184; history of PC sales, 151–52, 155, 157–58, 160; integrated marketing and production, 22, 23; Internet appliances, 174; Internet-enabled sales, 161–63, 244; order status calls, 327; supplier relationships, 43, 57 Delta Airlines, 117, 122 Digital Equipment Corporation, 158 Digital subscriber line (DSL) technology, 440, 444–45; broadband access, 173; costs of switching to cable, 447–48; dominance of cable over, 444, 446, 452–54; speed, 12 DiMaggio, Paul J., 430 Disintermediation, 29, 30–31; air travel industry, 93, 97–100, 116–17; analysis, 98–102; automobile industry, 183–84, 193; benefits, 96–97; finance industry, 61, 68, 70, 72; food industry, 269–70; retail financial services, 92–93, 96–103; risks of threatening, 97–98; trucking industry, 337 Distributors. See specific industry DNA and architecture, 8–9 DoCoMo, 44, 360, 361, 364; alliance with AT&T, 366; billing by, 368, 382–84; managing the service menu, 377–78; strategy, 366. See also i-mode system Dong-bu, 220 Dramatic random-access memory (DRAM) companies, 219, 226, 227 DSL. See Digital subscriber line technology E-Bay, 35 ECNs. See Electronic communication networks
E-Color, 246, 247, 317 E-commerce: analysis, 54–55; direct, 35; future of, 16; indirect, 34–35; industrial structure and, 55–56; innovations in organization and business practice, 17–23; integration into regular commerce, 4; map, 32–36; transformation, 23–24 ECR. See Efficient consumer response EDA. See Electronic design automation software EDI. See Electronic data interchange Edmunds.com, 192 Efficiency, 35–38 Efficient consumer response (ECR), 263–66, 296, 300 E-finance, 64–91; brokerage, 75–78; central bank issues, 88–89; change in individual institutions, 70–75; conceptual framework, 69–78; conflicting forces, 74–75; cross-border transactions, 79; disintermediation, 61, 68, 70, 72; financial stability and, 78–85; fraud, 80; future size of market, 66; hybrid services, 73, 80; importance of existing relationship, 72; information technology alliances, 74; interrelationship of issues, 85, 86; legal issues, 84–85; macroeconomic indicators, 87–88; monetary stability, 85, 87–88; nonbanks and, 80; operating procedures, 85, 87; payments system risks, 81–82; privacy issues, 80; retail financial services, 92–111; safety net for, 83–84; structure, 71; supervision, 78–81, 88–89; systemic stability, 82–83; transmission mechanism, 87; U.S. vs. other countries, 69 EFS Network, 261 E-hubs, 261–62 Elderly and hearing aid industry, 233–34; use of Internet, 235
Electronic benefits system, 277 Electronic communication networks (ECNs) and stock trading, 75 Electronic data interchange (EDI): Internet’s effect on, 139; market-related communication via, 34, 36, 57; PC industry, 160; semiconductor industry, 148; textile and apparel industries, 246–47, 314, 324, 326–27; trucking industry, 338, 351–52; U.K. food retailing, 247, 248, 283, 285–87, 290, 293–96, 303; U.S. food industry, 250, 264, 272 Electronic design automation (EDA) software, 148–49, 224–25 Electronic payment systems, 58, 67–68 Ellison, Larry, 173 E-machines, 159 E-mail: access from wireless networks, 365; mobile phones, 378–79; portability issues, 448 E-money, 67 Encryption, 394, 400 Entertainment services, 379, 387 Entry, ease of, 29, 30; automobile industry, 195–96, 203, 211; finance industry, 72; Linux debugging and development process, 426; mobile Internet access, 378; PC industry, 163–66; retail finance industry, 98–99; semiconductor industry, 221, 227 EPOS systems. See Point of sale systems Ericsson, 361 E*Trade, 60 Europe: affordable Internet access via wireless, 365; alliance of airlines for online sales, 122; automobile industry, 208; e-finance, 69; hearing instruments, 233; PC market, 155 European Commission regulation of cable, 470–71
Evan, Philip, 367 Excite@Home, 44, 442, 453, 454, 458–64 Expedia, 119, 125 EZ Web, 371 “Fabless” semiconductor firms, 55, 148–49, 214, 216, 225 Fabrication plants (“fabs”), 55–56, 148–49, 215–16, 221 Farmers: agricultural dot-coms, 261; cooperatives, 255; U.K., 291–92. See also Food industry Fasturn, 327–28 Federal Communications Commission (FCC) regulation of Internet access technologies, 366; AOL–Time Warner merger, 437–40, 443, 449, 465–67, 468; cable and broadband systems, 456, 457, 471; Cable Services Bureau report on broadband deployment, 445–46 Federal Trade Commission (FTC), 449–50, 453, 465–67, 468 Finance industry, 7, 60–62, 64–91; common interface, 58; retail financial services, 92–111. See also E-finance Financial services, retail, 92–111; activitybased costing, 110; attractive to attack, 98, 99; consumer package goods, 100–02; cost and revenue differences, 104–06; customer loyalty, 107; data mining, 111; difficult to defend, 98, 99; disintermediation and, 92–93, 96–103; ease of entry, 98, 99; jobbers, 93–94; London’s Big Bang, 93–94; predictions, 102–03; pricing strategies, 104–11; product design to attract profitable accounts, 108–10; profitability, 107–11; skill-based competitive strategies, 110–11; transparency, 92, 93–96, 106. See also Air travel industry Financial Stability Forum, 89
Fisher, Marshal L., 277 Fixed-line vs. mobile Internet applications, 372–76; architectural differences from WAP, 362–63 Flaming, 424 Food industry, 241–42, 247–51, 253–79; alliances, 274, 275; antitrust issues, 276; B2B commerce, 260–68, 274; B2C firms, 250, 260, 275; consumers, 256–60; costs, 257–59; demand pull system, 255; disintermediation of, 269–70; Efficient Consumer Response, 250, 263–66; e-hubs, 261–62; electronic data interchange, 247, 248, 250, 264, 272; farmers’ cooperatives, 255; Internet’s effect on, 274–76; justin-time delivery, 275–76; manufacturers, 249–50, 254; mergers, 269; POS data, 247, 250, 263–64, 267, 271, 273–75, 277–78; pre-Internet, 254–55; pricing, 249, 257; pure-play food retailers, 250–51; retailers, 249–50, 254–55, 269–71; shift of power to retailers, 272, 274; supply push system, 254; trust and cooperation in, 272, 273; value-added network, 264; wholesalers, 254, 269–70, 272. See also United Kingdom, food retailing Food Marketing Institute, 263, 264 Food stamps, 277 Ford, 194, 199, 202, 204, 275, 339–40 Ford Retail Network, 205 Forking, 421–22, 429 Foundry firms in semiconductor industry, 214. See also “Fabless” semiconductor firms France, 69, 360, 451 Fraser, Charles, 27 Fred Meyer, 269 Free software. See Open Source software Free Software Foundation (FSF), 414–16 Freight. See Trucking industry
FreightPro.com, 342 Freightquote.com, 336, 337 Galbraith, Jay R., 296 Game consoles, 174 Gateway, 137, 145, 155, 174 GEMA (Germany), 133 General Instruments, 174 General Motors (GM), 190, 194, 199, 200, 204, 210 General Packet Radio Service (GPRS), 363, 400 General Public License (GPL), 415–16, 424, 429 Ghosh, Rishab Aiyer, 409 Global NetXchange, 246, 267, 324 G-mails, 363 GN ReSound, 232–33 Goods: definition, 53–54; logistics management, 243–44; research on e-commerce, 271–73. See also Food industry; Textile and apparel industries Goolsbee, Austan, 451 Gopher, 160 Governance of Internet, 28 Government: common interface, 58; G2C (government-to-consumer) web-based services, 58; regulation as barrier, 57. See also specific federal agencies GPL. See General Public License GPRS. See General Packet Radio Service GreenLight.com, 193, 206 Greiger, Jim, 345 Grocery Manufacturers of America, 263 Groceryworks, 258, 260 Grove, Andrew, 268 H. E. Butts, 266 Hackers (Levy), 414 Hardt, Rich, 353 Harry Fox Agency, 130, 133, 135 HDR. See High Data Rate
HDTV. See High definition television Health care sector, 58–59, 108 Health insurance: hearing aid coverage, 233; product design, 109–10 Hearing aid industry, 139–40, 142–43, 229–39; behind-the-ear devices, 230; chains of dispensers, 233, 236; companies remaining after mergers, 232–33; consumer and, 233–34, 236; distribution, 233; hearing instruments, 230–31; Internet’s effect on, 234–39; in-the-ear devices, 230; retailers, 235–37; supplier relationships, 234–35; value chain, 234; wholesalers, 237–39 Heim, Gregory R., 260 Hewlett Packard (HP), 20, 144, 145, 152, 168, 220, 246, 430 High Data Rate (HDR), 444, 446 High definition television (HDTV), 174 HomeDirect USA, 344–45 Home shopping, 256–57, 259–60. See also Business-to-consumer commerce Hong Kong, 69; banking, 79, 108; textileapparel-retail suppliers, 326 Hong Kong Shanghai Bank, 108 Hotwire, 122–23 HTML, 362 Human genome project, 433 IBM: Internet’s effect on, 18, 168; mobile Internet access, 388; Open Source and, 430; PC industry and, 6, 144, 145, 152, 154–55, 157, 158, 160; process technology, 221, 226 IC3D, 323 Ignatius, David, 258 ILECs. See Incumbent Local Exchange Carriers i-mode system, 360–88, 400; always-on character of, 363, 400; architectural differences from WAP, 362–63; bandwidth limitations, 368; closed vs. open
menus, 386; defined, 361; entertainment services, 379; future developments, 388; linkage restrictions, 380; managing the service menu, 377–78; news services, 379–80; pervasiveness, 364; phone and activation costs, 381–82; revenues, 372; richness vs. reach, 367–68, 372–76, 385, 386–87; shopping via, 384; success, 365, 367, 370–71, 384; young people’s use, 379–80 Inacom, 167 Incumbent Local Exchange Carriers (ILECs), 440, 444, 453, 458 InfoFlyway, 116 Information technology (IT) revolution, 3–5, 244 Infrastructure trends, 355–68; competition and open access, 438–44; software development, 366–68; wireless vs. landline access, 360–66 Ingram Micro, 163, 165, 168, 169, 170, 171 Instant messaging, 449, 450 Insurance industry, 7, 60; automobiles, 109; contracts, 81; life, 68 Intel Corporation: affiliate sales programs, 166; manufacturing, 55; PC industry and, 144, 145, 152, 153, 154; process technology, 221, 226 Intelligent Transportation System (ITS), 199 Interchangeability of PC components, 138–39 Intermediaries: air travel, 118–25; apparel industry, 329. See also Disintermediation Internet: development, 11–14, 27–28, 389–92, 404, 437–41; early hopes vs. reality, 29–32; openness, 435–536; penetration by country, 70; World Wide Web evolution, 440–41. See also
Network architecture; Next-generation access Internet appliances, 174 Internet protocol (IP), 28; IPv6, 404 Internet service providers (ISPs), 442, 452–54, 455, 458; AOL–Time Warner merger provisions and, 465–66; local monopoly, 464 Internet 2, 13 Isenberg, David, 392 ISPs. See Internet service providers iSyndicate, 44 Italy, 69 J. Crew, 317 J. Sainsbury, 324 Japan: automobile industry, 188, 189, 208; e-finance, 69; e-mail use, 378–79; manufacturing innovations, 19; mobile services, 44, 359, 369–88; PC market, 155; semiconductor industry, 220, 222. See also i-mode system JCP Logistics, 321 Jobbers, 93–94 Johnson & Johnson, 237 Johnson Controls, 203–04 Jones, Daniel T., 431 Jordan, Peter, 296 Joy, Bill, 8 J-Phone, 371 J-Sky, 371 Jurica, Ed, 151 Just-in-time delivery: food industry, 275–76; manufacturing, 338 Kaplan, Steven, 261 Keystone Fulfillment, 321 Kinecta, 44 Klenow, Peter, 451 Kmart, 57, 311, 352 Knowles, 232
Kochersperger, Richard H., 270 Kollock, Peter, 408 Kroger, 264, 266, 269, 324 Kurt Salmon Associates, 284 Kuwabara, Ko, 412 Lands’ End, 320 Latin America, 69 Leahy, Terry, 284 Legacy information systems, 46 Legislation as barrier, 57 Lerner, Josh, 411–12 Levi Strauss, 320–21, 323 Levy, Steven, 414 Li & Fung, 326 Linux operating system, 416–20; commercialization, 430–31; growth, 417; hierarchy of decisionmaking, 422, 426–29; Intel and, 153; kernel modules, 427; leadership role of Torvalds, 423–24; sanctioning mechanisms in community, 424–25; small program philosophy, 427; voluntary programming and sharing, 407, 408, 409, 410 Loasby, Brian J., 284 Logistics management, 243–44 LSI Logic, 221 Lucent, 221 Lufthansa, 62, 63, 116–17, 120, 122, 123, 125 Machalaba, Daniel, 275–76 The Machine That Changed the World (Womack, Jones, and Roos), 431 MacKie-Mason analysis of open access and cable revenues, 457 Malaysia’s semiconductor foundries, 219 Marketing in PC industry, 161–63 Marketplace: adaptability, 47; commonalities, 33; difficult markets to penetrate, 46–47; longer-term
relationships favored, 46; modern architecture, 42–45; network as, 39–42; new structuring, 38–42; transformation, 45–47 Marks and Spencer, 290 “Mask” manufacturers, 55 Massachusetts Institute of Technology (MIT), 406, 414 McClelland, Anna Sheen, 277 M-commerce, 13 Medical devices, intelligence in, 10 Medical records online, 59 Memory chip industry and Japanese, 19–20 Mergers: food industry, 269; hearing aid industry, 232–33; Internet access providers and, 436, 457. See also America Online Merrill Lynch, 108 Metcalfe’s Law, 11 Metro AG, 324 MicroAge, 165, 169 Microsoft, 18, 144, 145, 152, 154, 406; compatible microprocessors, 153; Internet appliances, 174; view on Open Source software, 410 Microtronic, 232 Minitel, 11, 451 Minix, 416 Mitsubishi, 219 MMDS. See Multichannel Multipoint Distribution Services Mobile services: billing methods in Japan, 381, 382–84; business model, 381–82; e-finance, 69; fixed-line vs. mobile Internet applications, 372–76; intranets, 388; Japan and, 44, 359, 369–88; managing the service menu, 376–78; penetration by country, 70; phone and activation costs, 381–82; shortcomings of Western service providers, 386;
trucking industry, 351–52. See also i-mode system; Wireless networks Modularity: automobiles, 146, 184–87, 210, 211; PCs, 138–39 Monopoly: cable operators, 442; local ISPs, 464. See also Antitrust issues Moore, Gordon, 5 Moore’s Law, 5, 8, 153 Mortgages, 60, 68. See also Finance industry Mosaic, 160 Motor Carrier Acts (1935, 1980), 333 Motor carriers. See Trucking industry Motorola, 18, 220, 361, 430 MP3, 131, 134 Multichannel Multipoint Distribution Services (MMDS), 444, 446 Mushroom growers, 248, 305–06 Music, Etc., 134 Music industry, 12–13, 44, 128–36; change to fight digital distribution, 133–36; consumer empowerment and, 131; copyright royalty administrators, 135–36; decentralized manufacturing, 134; digital technologies and, 130–33; download formats and security standards, 134; historic functions, 129–30; labor union possiblity, 131; licensing rights, 130, 133, 135; “lockers,” 132; resale pricing, 134–35; royalty distribution, 132–33; weaker vs. dominant market participants, 46 MyTailor, 323 The Mythical Man-Month (Brooks), 425 N2K, 462 Nakayama, Makoto, 273 Napster, 12, 44, 128, 131 Narrowband access, 437–38 National Cable Television Association (NCTA), 456 National Semiconductor, 220
Navigation systems, 200 NEC, 55 Negroponte, Nicholas, 14 Net Radio, 462 Netscape, 438 Network architecture, 389–405; caching, 395; commercialization, 393–95; current state of, 391–92; e-commerce’s needs, 399–400; encryption, 394, 400; fiber optic transmission, 396–98; rapid diffusion over existing infrastructure, 392–93, 438, 440; routers, 394–95; streaming video and, 395; vs. telephone, 391–92, 403, 404. See also Nextgeneration access Network Computer (NC), 173 Networks, 10–11; end-to-end argument in system design, 392; internal caching, 365; as marketplace, 39–42; rivalries emerging, 359–65; transformation of organizations, 14. See also Data networks; Network architecture; Nextgeneration access News services, 379–80, 387, 462 Next-generation access, 365, 389–405, 435–73; architecture, 395–98; broadband access alternatives, 444–47; broadband vs. narrowband access, 443; bundled services, 449, 450; closed access, 458–64; collocated partnerships, 462–63; competition, 403–04, 438–44, 464, 468, 469; content center hubs, 398; continuous connection, 400; control and regulation, 401–04; e-commerce needs, 399–400; e-mail portability, 448; experimentation and development, 438; implementation time, 401; investment in infrastructure, 455–58; joint dominance in broadband access, 467–71; nurturing innovation, 454–67, 469; openness, 437–41, 454, 464, 468; routers, 396, 399; streaming video and, 398, 399;
switching costs, 447–51; traffic patterns in, 398; ultra long haul transmission systems, 397; user-driven innovation, 441; wavelength division multiplexing, 397 Nike, 251, 339 Nippon Steel, 220 Nokia, 361 Nortel, 360 Northwest Airlines, 122 NTT DoCoMo. See DoCoMo OEMs in auto industry: build to order and, 184, 187–88; design tasks and, 147, 189–90; Internet’s effect on, 146–47, 201–02, 210; modular design and, 146, 185, 211; pressure on, 205, 206; unionized, 207 OFTEL, 454, 470 Open architecture, 435–44; automobile industry and e-procurement, 195–96; cable operators and, 436, 441, 442, 457; infrastructure trends, 438–44; next-generation access, 437–41, 454, 464, 468 Open Source Definition, 406, 430 Open Source software, 366–68, 406–34; case studies in, 414–18; characterization, 407; code-forking, 421–22, 429; commercialization, 430–33; complexity problems, 425–28; conflict resolution, 428–29; coordination problems, 421–25; credentialing for programmers provided by, 411–12; gift culture idea and, 412–13; macroeconomic approaches, 408–10, 421; microeconomic approaches, 410–13; production processes, 430–33; sanctioning mechanisms in community, 424–25; source code modularization, 427; successful project features, 418–20. See also Linux operating system
Optical virtual private networks (OVPNs), 400 Oracle, 388 Orbitz, 43, 122 Oticon, 230, 232, 233 Otopenia, 122 Outsourcing strategy, 20–21; automobile industry, 203; finance industry, 79 Overstock, 322 Packard Bell/NEC, 157, 166 Palm Pilots, 172 Payment systems, electronic, 58, 67–68 PC Connection, 462 PC industry, 143–45, 151–77; affiliate sales programs, 165–66; build-to-order systems, 139, 144, 158, 182, 184; channel assembly, 169; “channel stuffing,” 160; competition from other platforms, 172–75; demand-chain management, 169; direct marketing, 161–63; distributors, 168–70; ease of entry, 163–66; EDI systems and, 160; functionality and reliability of, 140; interchangeability of components, 138–39; Internet-enabled sales, 161–63; Internet’s effect on, 137, 160–63, 175–76; Internet services inclusion, 158–59; low end of market, 159; modularity, 138–39; online sales, 140; overview, 152–54; post-PC era, 172–75; pre-Internet value chain, 154–60; referral system from portals, 165; sales channels, 155–60; start-ups, 166; traditional assemblers, 166–68; value chain solutions, 170–72; web’s effect on, 160–61; “white boxes,” 152, 160, 166 pcOrder.com, 171 PDAs. See Personal digital assistants Peapod, 250, 258, 260, 275 Peer-to-peer technologies, 44–45
Perl, 426 Personal computers. See PC industry Personal digital assistants, 44, 374–75, 387, 388 Pharmaceuticals: custom interventions, 15; prescriptions online, 59; as services, 54 Philips, 232 Phonak, 232 Phone.com, 361 Pietrafeso, 323 Pinault-Printemps-Redoute, 324 Point of sale (POS) systems: U.K., 247, 283, 284, 286, 289; U.S., 250, 263–64, 267, 271, 273–75, 277–78 Pontoise’s place du marché, 38–39 Powell, Walter W., 430 Power Chip, 219 Priceline, 62, 63, 93, 119, 120–25, 260, 262 Pricing: air travel, 125–26; automobile sales, 208; bandwidth, 358–59; “death spiral,” 107; food industry, 249; mobile Internet, 387; retail financial services, 92, 104–11; switching between Internet access providers, 450 Privacy: e-finance, 80; medical records, 59; POS data, 271, 276 Proctor and Gamble, 263–64, 300 Procurement: automobile industry, 188–91; government, 58 ProcureNet, 195 Product design: air travel industry, 114; retail finance sector, 108–10 Production chains, 54 Production challenge, 19–23 Productivity, 3–4, 276 Profitability and skill-based competition, 107–11 Prudential Insurance Company, 7 Quaker Oats Company, 261–62 Qualcomm, 351, 446
Quisp, 261 QVC, 462 Raman, Ananth, 277 Raymond, Eric S., 413, 418, 420, 422, 425, 428 Realtor.com, 462 Red Dot Network, 134 Red Hat, 431 Reel.Com, 462 Regional Bell Operating Companies (RBOCs), 441, 452 Replication, 462 Residential broadband service, 436–37 ReSound, 232 Retail Food Industry Center (U. of Minn.), 265 Rion, 232 Roos, Daniel, 431 Rooster, 261 Rosenbluth, 62, 96, 97, 118 RosettaNet, 171 Routers, 394–95, 396, 399 SABRE automated reservation system, 7, 34, 119 SACEM (France), 133 Safeway, 258, 283 Sagawa Kyubin, 388 SAGE, 6 Sainsbury’s, 283, 287, 291, 305 Salomon Smith Barney, 108 Sarnoff Corporation, 237 Sawhney, Mohanbir, 261 Schneider, 351 Screen space, placement fees for, 44 Sears, 57, 246, 324, 327 Semiconductor Equipment and Materials International (SEMI), 216 Semiconductor industry, 55, 141, 148–50, 214–28; advantages of foundry-fabless
partnership, 216, 221–22, 226, 227; costs of fabs, 221, 227; e-commerce tools and reorganization, 224–25; economic forces, 221–24; electronic design automation software, 148–49, 224–25; evolution of business strategies, 216–17; “fabless” semiconductor firms, 55, 148–49, 214, 216, 225; fabrication plants, 55–56, 148–49, 215–16, 221; Internet’s effect on, 137, 214–16, 226, 227–28; processing power and, 6; pureplay foundries, 218–21, 226; regional distribution of foundry capacity, 218; standardization, 56; supply chain management tools, 225; time to market, 222; trends by factory type, 217 Sendmail, 407 Services, 53–63; definition of, 53–54 Shell Oil and PC purchases, 162 Shoplink, 260 Short Messaging Service (SMS), 363–64 Shulman, Richard, 268 Shunning, 424–25 SIAE, 133 Siemens, 232–33 Silicon Valley System, 19 Singapore, 69, 79, 219 Sinha, Kingshuk K., 260 Sky Message, 371 Smart appliances, 9 Smart cards, 67 Smart highways, 199 Smith, John, 343 Smith, Marc, 408 SMS. See Short Messaging Service Society of Automotive Engineers (SAE), 204 Software development, 366–68. See also Open source software Somerfield, 283 Songbird Medical, 237 Sonic Innovation, 237
Sonus, 238 Sony PlayStations, 174–75 Source code modularization, 427 South Korea, 69, 219 Southwest Airlines, 116, 117 Speta, Jim, 463 Spinner.com, 462 SportsLine, 462 Sprint, 446 Sprint PCS, 365 Stallman, Richard, 414–15 Standardization, 56–50 Starkey, 232, 233 Stickiness of sites, 29, 31 Stigler, George, 54 Stock trading. See Brokerage Streaming video and data, 395, 398, 399, 443, 451 Streamline, 260 Structure, 35–36 Sturgeon, Tim John, 157 SubmitOrder.com, 321 Sun Microsystems, 11, 173, 388, 430 Supermarkets. See Food industry Supervalu, 269, 270 Suppliers. See specific industry SureSource, 322 Taiwan: PC industry in, 156, 157; semiconductor industry in, 218–20, 223 Taiwan Semiconductor Manufacturing Company (TSMC), 55, 148, 218–19, 223, 227 Target, 57 Tax filings, 58 TCP/IP system versus mobile systems, 359–65 Tech Data, 168, 171 Telecommunications industry: attracting profitable customers, 110; Internet’s use of telephone infrastructure, 393, 438, 440; telephone vs. Internet, 391–92,
403, 404, 440. See also Next-generation access Tesco, 283, 287, 291, 303, 305 Texas Instruments, 220 Textile and apparel industries, 241–42, 245–47, 310–31; B2B commerce, 315, 324–31; B2C commerce, 242, 246–47, 315, 320–24; catalog sales online, 320, 321; color concerns, 316–17; depth vs. breadth of coverage, 329–31; distribution channels, 319, 321–22; electronic data interchange, 246–47, 314, 324, 326–27; exchanges founded by retailers, 324–26; factors affecting ecommerce adoption, 315–18; fashion triangle, 312; forecasting capability improvements, 328; incumbents’ entry into e-commerce, 320–21; intermediaries, 329; Internet’s effect on, 311; lean retailing, 313–14; mass customization, 322–24; order fulfillment and distribution, 321–22; performance impact of B2Bs, 326–28; physical and customized nature of products for sale, 246, 315; structure of, 311–14; trend expected for B2Bs, 328–29; websites, 316–17, 320–21 Texwatch.com, 326 TheRightSize, 318 Third-generation (3G) wireless technology, 364, 368, 386 Third-generation Internet. See Nextgeneration access Time Warner and AOL merger. See America Online Tirole, Jean, 411–12 Tomoyama, Shigeki, 200 Torvalds, Linus, 416, 417, 419–24, 426–29 Toshiba, 219, 220 Tourism: growth of, 112; view of market, 112–13
Toyota, 190, 200, 202 Tracking: online inquiries, 57, 327–28; vehicle tracking by manufacturer, 340 Tradeweave, 325 Transaction mechanisms, 34 Transparency: air travel industry, 125; apparel industry, 329; automobile industry, 195–96; retail financial services, 92, 93–96, 106; U.K. food retailing, 289 Transplace.com, 336 Transportation companies. See Trucking industry Transportation Intermediaries Association, 337 Travel agents, 113, 118–19. See also Air travel industry; Tourism TravelBids, 62, 63, 120, 123, 125 Travelocity, 119, 124, 125, 462 Trucking industry, 251–52, 332–54; acquisition and alliance activity within, 335, 353; adoption of web-based EDI and mobile communication, 351–52; background, 333–34; changed customer demands, 338–41; consolidation, 343–44; disintermediation, 337; freight brokers, 335, 336–38; Internet’s effect on, 335–41, 344–45, 353; loadmatching services, 336–38; logistics management, 243–44; organizational change, 346–51; private fleets, 333–34; restructuring of internal informationbased operations, 345–52; shipment information, 336–38; small firms’ transformation, 245; survey, 345–52; transformation to asset-based transportation management, 340; vertical combinations of firms, 344; virtual trucking, 341–43 TRW, 204, 205 UCCNet, 266–67, 273, 277
Ultra long haul (UHL) transmission systems, 397 Uniform Code Council (UCC), 266 United Airlines, 122, 126 United Kingdom, 69, 98, 454, 470 United Kingdom, food retailing, 247–49, 280–309; B2C firms, 242, 304–05, 307; category management, 301, 303; consumers, 289, 299; coordination of supply chain performance, 293–94; distribution, 248; efficient consumer response, 296; electronic data interchange, 247, 248, 283, 285–87, 290, 293–96, 303; EPOS systems, 247, 283, 284, 286, 289, 293; facilitating information flow, 303–04; fresh produce suppliers, 285–89, 292, 299, 305–06; global markets, 290–91; headquarters, 289; information systems in Internet age, 294–99, 307; Internet’s effect on, 299–305; labor, 291–92; major players, 283; niche activity opportunities, 307; operation of market, 293; preferred suppliers, 289–90, 299; pre-Internet, 281–94; quick response partnershipping, 281–82, 284–85, 286–87, 290, 303; relational contracting, 298; retailers, 248; shopping online, 304–05; standardization and coordination of variety, 297–99; supplier selforganization, 248–49; suppliers’ loss of power, 284, 293; vertical integration without ownership, 292 United Microelectronics Company (UMC Group), 218–20, 223, 227 United Parcel Service (UPS), 57, 244, 338–40 UNIVAC computer, 7 Unix, 421, 422–23 UPS e-Logistics, 321 USFreightways, 343
VA Linux, 431 Value added resellers (VARs), 144, 163 Viacore, 171 Video feeds, 395, 398, 399, 443, 451 Virtualrags.com, 325 Visicalc, 8 Vital records, order system, 58 Vixie, Paul, 419 Volkswagen, 195 Walled-garden effect, 386 Wal-Mart, 43, 57, 244, 245; apparel sales online, 322; Collaborative Planning, Forecasting, and Replenishment, 267–68; efficient consumer response project, 300; food industry entry, 250, 255, 265, 266; Internet-based purchasing system, 325; POS data, 263 WAP. See Wireless Application Protocol WAP Forum, 361 Wavelength division multiplexing (WDM), 397 Websites: airlines, 116–17; apparel retailers, 316–17, 320–21; PC firms, 161; trucking industry, 345 Webvan, 248, 250, 275 Whirlwind, 6 Wholesalers: food industry, 254, 269–70, 272; hearing aid industry, 237–39 Widex, 230, 232 Wireless Application Protocol (WAP), 361–65; architectural differences from i-mode, 362–63; dial-up character of, 363; i-mode use compared, 370–72, 384; WAP2 development, 364 Wireless networks: next-generation, 400; voice, 13. See also i-mode system; Mobile services; Wireless Application Protocol Womack, James P., 431 WorldWide Retail Exchange, 246, 267, 324
World Wide Web evolution, 440–41 WSMC, 219 Wurster, Thomas, 367 Xerox PARC, 18, 406 XML, 56, 195, 267
Yahoo, 165, 365, 469 Young people: fixed-line vs. mobile Internet applications, 376; mobile Internet access and, 379 Zoom technology, 246, 317