THE INTERNET ENCYCLOPEDIA
Volume 1, A–F
Hossein Bidgoli, Editor-in-Chief
California State University, Bakersfield, California
∞ This book is printed on acid-free paper.
Copyright © 2004 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, e-mail:
[email protected]. Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. The publisher is not engaged in rendering professional services, and you should consult a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages. For general information on our other products and services please contact our Customer Care Department within the U.S. at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books. For more information about Wiley products, visit our web site at www.Wiley.com.
Library of Congress Cataloging-in-Publication Data:
The Internet encyclopedia / edited by Hossein Bidgoli.
p. cm.
Includes bibliographical references and index.
ISBN 0-471-22202-X (CLOTH VOL 1 : alk. paper) – ISBN 0-471-22204-6 (CLOTH VOL 2 : alk. paper) – ISBN 0-471-22203-8 (CLOTH VOL 3 : alk. paper) – ISBN 0-471-22201-1 (CLOTH SET : alk. paper)
1. Internet–Encyclopedias. I. Bidgoli, Hossein.
TK5105.875.I57I5466 2003
004.67 8 03–dc21 2002155552
Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
To so many fine memories of my brother, Mohsen, for his uncompromising belief in the power of education.
About the Editor-in-Chief
Hossein Bidgoli, Ph.D., is Professor of Management Information Systems at California State University. Dr. Bidgoli helped set up the first PC lab in the United States. He is the author of 43 textbooks, 27 manuals, and over four dozen technical articles and papers on various aspects of computer applications, e-commerce, and information systems, which have been published and presented throughout the world. Dr. Bidgoli also serves as the editor-in-chief of Encyclopedia of Information Systems. Dr. Bidgoli was selected as the California State University, Bakersfield’s 2001–2002 Professor of the Year.
Editorial Board
Ephraim R. McLean Georgia State University, Atlanta
Eric T. Bradlow The Wharton School of the University of Pennsylvania
David E. Monarchi University of Colorado, Boulder
Kai Cheng Leeds Metropolitan University, United Kingdom
Raymond R. Panko University of Hawaii at Manoa
Mary J. Cronin Boston College
Norman M. Sadeh Carnegie Mellon University
James E. Goldman Purdue University
Judith C. Simon The University of Memphis
Marilyn Greenstein Arizona State University West
Vasja Vehovar University of Ljubljana, Slovenia
Varun Grover University of South Carolina
Russell S. Winer New York University
Contents
Chapter List by Subject Area xii
Contributors xv
Preface xxiii
Guide to the Internet Encyclopedia xxvi
Volume 1 Active Server Pages J. Christopher Sandvig
1
Click-and-Brick Electronic Commerce Charles Steinfield
185
Client/Server Computing Daniel J. McFarland
194
Collaborative Commerce (C-commerce) Rodney J. Heisterberg
204
Common Gateway Interface (CGI) Scripts Stan Kurkovsky
218
Computer Literacy Hossein Bidgoli
229
ActiveX Roman Erenshteyn
11
ActiveX Data Objects (ADO) Bhushan Kapoor
25
Application Service Providers (ASPs) H.-Arno Jacobsen
36
Computer Security Incident Response Teams (CSIRTs) Raymond R. Panko
48
Computer Viruses and Worms Robert Slade
248
Authentication Patrick McDaniel
57
Conducted Communications Media Thomas L. Pigg
261
Benchmarking Internet Vasja Vehovar and Vesna Dolnicar
72
Consumer Behavior Mary Finley Wolfinbarger and Mary C. Gilly
272
Biometric Authentication James. L. Wayman
Consumer-Oriented Electronic Commerce Henry Chan
284
Convergence of Data, Sound, and Video Gary J. Krug
294
Copyright Law Gerald R. Ferrera
303
Bluetooth™—A Wireless Personal Area Network Brent A. Miller Business Plans for E-commerce Projects Amy W. Ray
242
84
96
Business-to-Business (B2B) Electronic Commerce Julian J. Ray
106
Business-to-Business (B2B) Internet Business Models Dat-Dao Nguyen
120
Business-to-Consumer (B2C) Internet Business Models Diane M. Hamilton
129
Customer Relationship Management on the Web Russell S. Winer Cybercrime and Cyberfraud Camille Chin Cyberlaw: The Major Areas, Development, and Provisions Dennis M. Powers
315
326
337
Capacity Planning for Web Services Robert Oshana
139
Cyberterrorism Charles W. Jaeger
353
Cascading Style Sheets (CSS) Fred Condo
152
Databases on the Web A. Neil Yerkey
373
C/C++ Mario Giannini
164
Data Compression Chang-Su Kim and C.-C. Jay Kuo
384
Circuit, Message, and Packet Switching Robert H. Greenfield
176
Data Mining in E-commerce Sviatoslav Braynov
400
Data Warehousing and Data Marts Chuck Kelley
412
Encryption Ari Juels
686
Denial of Service Attacks E. Eugene Schultz
424
Enhanced TV Jim Krause
695
Developing Nations Nanette S. Levinson
434
Enterprise Resource Planning (ERP) Zinovy Radovilsky
707
444
E-systems for the Support of Manufacturing Operations Robert H. Lowson
Digital Communication Robert W. Heath Jr., and Atul A. Salvekar
457
Extensible Markup Language (XML) Rich Dorfman
732
Digital Divide Jaime J. Dávila
468
Extensible Stylesheet Language (XSL) Jesse M. Heines
755
Digital Economy Nirvikar Singh
477
Extranets Stephen W. Thorpe
793
Digital Identity Drummond Reed and Jerry Kindall
493
Feasibility of Global E-business Projects Peter Raven and C. Patrick Fleenor
803
Digital Libraries Cavan McCarthy
505
File Types Jennifer Lagier
819
Digital Signatures and Electronic Signatures Raymond R. Panko
526
Firewalls James E. Goldman
831
Disaster Recovery Planning Marco Cremonini and Pierangela Samarati
535
Fuzzy Logic Yan-Qing Zhang
841
Distance Learning (Virtual Learning) Chris Dede, Tara Brown-L’Bahy, Diane Ketelhut, and Pamela Whitehouse
549
Volume 2
Downloading from the Internet Kuber Maharjan
561
E-business ROI Simulations Edwin E. Lewis
577
E-government Shannon Schelin and G. David Garson
590
DHTML (Dynamic HyperText Markup Language) Craig D. Knuckles
Electronic Commerce and Electronic Business Charles Steinfield
Game Design: Games for the World Wide Web Bruce R. Maxim
601
Gender and Internet Usage Ruby Roy Dholakia, Nikhilesh Dholakia, and Nir Kshetri Geographic Information Systems (GIS) and the Internet Haluk Cetin
718
1
12
23
Global Diffusion of the Internet Nikhilesh Dholakia, Ruby Roy Dholakia, and Nir Kshetri
38
Electronic Data Interchange (EDI) Matthew K. McGowan
613
Global Issues Babita Gupta
52
624
Groupware Pierre A. Balthazard and Richard E. Potter
65
Electronic Funds Transfer Roger Gate and Alec Nacamuli Electronic Payment Donal O’Mahony
635
Guidelines for a Comprehensive Security System Margarita Maria Lenk
Electronic Procurement Robert H. Goffman
645
Health Insurance and Managed Care Etienne E. Pracht
E-mail and Instant Messaging Jim Grubbs
660
Health Issues David Lukoff and Jayne Gackenbach
104
E-marketplaces Paul R. Prabhaker
671
History of the Internet John Sherry and Colleen Brown
114
76
89
HTML/XHTML (HyperText Markup Language/Extensible HyperText Markup Language) Mark Michael
Java Judith C. Simon and Charles J. Campbell
379
JavaBeans and Software Architecture Nenad Medvidovic and Nikunj R. Mehta
388
JavaScript Constantine Roussos
401
JavaServer Pages (JSP) Frederick Pratter
415
Knowledge Management Ronald R. Tidd
431
Law Enforcement Robert Vaughn and Judith C. Simon
443
180
Law Firms Victoria S. Dennis and Judith C. Simon
457
Intelligent Agents Daniel Dajun Zeng and Mark E. Nissen
192
Legal, Social, and Ethical Issues Kenneth Einar Himma
464
Interactive Multimedia on the Web Borko Furht and Oge Marques
204
Library Management Clara L. Sitter
477
International Cyberlaw Julia Alpert Gladstone
216
Linux Operating System Charles Abzug
486
Load Balancing on the Internet Jianbin Wei, Cheng-Zhong Xu, and Xiaobo Zhou
499
233
124
Human Factors and Ergonomics Robert W. Proctor and Kim-Phuong L. Vu
141
Human Resources Management Dianna L. Stone, Eduardo Salas, and Linda C. Isenhour
150
Information Quality in Internet and E-business Environments Larry P. English Integrated Services Digital Network (ISDN): Narrowband and Broadband Services and Applications John S. Thompson
International Supply Chain Management Gary LaPoint and Scott Webster
163
Internet Architecture Graham Knight
244
Local Area Networks Wayne C. Summers
515
Internet Censorship Julie Hersberger
264
Machine Learning and Data Mining on the Web Qiang Yang
527
Internet Etiquette (Netiquette) Joseph M. Kayany
274
Managing a Network Environment Haniph A. Latchman and Jordan Walters
537
Internet Literacy Hossein Bidgoli
286
Internet Navigation (Basics, Services, and Portals) Pratap Reddy
298
Managing the Flow of Materials Across the Supply Chain Matthias Holweg and Nick Rich
311
Marketing Communication Strategies Judy Strauss
562
Internet Relay Chat (IRC) Paul L. Witt
320
Marketing Plans for E-commerce Projects Malu Roldan
574
Internet Security Standards Raymond R. Panko
334
Medical Care Delivery Steven D. Schwaitzberg
586
Internet2 Linda S. Bruenjes, Carolyn J. Siccama, and John LeBaron
Middleware Robert Simon
603
Intranets William T. Schiano
346 Mobile Commerce Mary J. Cronin
614
Intrusion Detection Techniques Peng Ning and Sushil Jajodia
355 Mobile Devices and Protocols Julie R. Mariga and Benjamin R. Pobanz
627
Inventory Management Janice E. Carrillo, Michael A. Carrillo, and Anand Paul
368 Mobile Operating Systems and Applications Julie R. Mariga
635
551
Multimedia Joey Bargsten
642
Politics Paul Gronke
84
Multiplexing Dave Whitmore
664
Privacy Law Ray Everett-Church
96
Nonprofit Organizations Dale Nesbary
675
Online Analytical Processing (OLAP) Joseph Morabito and Edward A. Stohr
685
Propagation Characteristics of Wireless Channels P. M. Shankar
Online Auctions Gary C. Anders
699
Online Auction Site Management Peter R. Wurman
709
124
Prototyping Eric H. Nyberg
135
Public Accounting Firms C. Janie Chang and Annette Nellen
145
Public Key Infrastructure (PKI) Russ Housley
156
720
Public Networks Dale R. Thompson and Amy W. Apon
166
Online Communities Lee Sproull
733
Online Dispute Resolution Alan Gaitenby
745
Radio Frequency and Wireless Communications Okechukwu C. Ugweje
Online News Services (Online Journalism) Bruce Garrison
755
Online Public Relations Kirk Hallahan
769
Online Publishing Randy M. Brooks
784
Online Religion T. Matthew Ciolek
798
Online Stalking David J. Loundy
812
Open Source Development and Licensing Steven J. Henry
819
Organizational Impact John A. Mendonca
832
Online Banking and Beyond: Internet-Related Offerings from U.S. Banks Siaw-Peng Wan
Volume 3 Passwords Jeremy Rasmussen
1
177
Real Estate Ashok Deo Bardhan and Dwight Jaffee
192
Research on the Internet Paul S. Piper
201
Return on Investment Analysis for E-business Projects Mark Jeffery
211
Risk Management in Internet-Based Software Projects Roy C. Schmidt
229
Rule-Based and Expert Systems Robert J. Schalkoff
237
Secure Electronic Transactions (SET) Mark S. Merkow
247
Secure Sockets Layer (SSL) Robert J. Boncella
261
Securities Trading on the Internet Marcia H. Flicker
274
Software Design and Implementation in the Web Environment Jeff Offutt
286
Patent Law Gerald Bluhm
14
Peer-to-Peer Systems L. Jean Camp
25
Software Piracy Robert K. Moniot
297
Perl David Stotts
34
Speech and Audio Compression Peter Kroon
307
Personalization and Customization Technologies Sviatoslav Braynov
51
Standards and Protocols in Data Communications David E. Cook
320
Physical Security Mark Michael
64
Storage Area Networks (SANs) Vladimir V. Riabov
329
Strategic Alliances Patricia Adams
340
Virtual Teams Jamie S. Switzer
600
Structured Query Language (SQL) Erick D. Slazinski
353
Visual Basic Dennis O. Owen
608
Supply Chain Management Gerard J. Burke and Asoo J. Vakharia
365
Supply Chain Management and the Internet Thomas D. Lairson
374
Visual Basic Scripting Edition (VBScript) Timothy W. Cole
Supply Chain Management Technologies Mark Smith
387
620
Visual C++ (Microsoft) Blayne E. Mayfield
635
Voice over Internet Protocol (IP) Roy Morris
647
398
Web-Based Training Patrick J. Fahy
661
Taxation Issues Annette Nellen
413
Webcasting Louisa Ha
674
TCP/IP Suite Prabhaker Mateti
424
Web Content Management Jian Qin
687
Telecommuting and Telework Ralph D. Westfall
436
Web Hosting Doug Kaye
699
Trademark Law Ray Everett-Church
448
Web Quality of Service Tarek Abdelzaher
711
Travel and Tourism Daniel R. Fesenmaier, Ulrike Gretzel, Yeong-Hyeon Hwang, and Youcheng Wang
459
Web Search Fundamentals Raymond Wisman
724
Web Search Technology Clement Yu and Weiyi Meng
738
Web Services Akhil Sahai, Sven Graupner, and Wooyoung Kim
754
Web Site Design Robert E. Irie
768
Supply Networks: Developing and Maintaining Relationships and Strategies Robert H. Lowson
Universally Accessible Web Resources: Designing for People with Disabilities Jon Gunderson Unix Operating System Mark Shacklette Usability Testing: An Evaluation Process for Internet Communications Donald E. Zimmerman and Carol A. Akerelrea
477
494
512
Value Chain Analysis Brad Kleindl
525
Video Compression Immanuel Freedman
537
Video Streaming Herbert Tuttle
554
Virtual Enterprises J. Cecil
567
Wide Area and Metropolitan Area Networks Lynn A. DeNoia
776
Windows 2000 Security E. Eugene Schultz
792
Wireless Application Protocol (WAP) Lillian N. Cassel
805
Wireless Communications Applications Mohsen Guizani
817
Wireless Internet Magda El Zarki, Geert Heijenk and Kenneth S. Lee
831
850
Virtual Private Networks: Internet Protocol (IP) Based David E. McDysan
579
Wireless Marketing Pamela M. H. Kwok
Virtual Reality on the Internet: Collaborative Virtual Reality Andrew Johnson and Jason Leigh
591
XBRL (Extensible Business Reporting Language): Business Reporting with XML J. Efrim Boritz and Won Gyun No
863
Chapter List by Subject Area Applications Delivery of Medical Care Developing Nations Digital Libraries Distance Learning (Virtual Learning) Downloading from the Internet Electronic Funds Transfer E-mail and Instant Messaging Enhanced TV Game Design: Games for the World Wide Web GroupWare Health Insurance and Managed Care Human Resources Management Interactive Multimedia on the Web Internet Relay Chat (IRC) Law Enforcement Law Firms Library Management Nonprofit Organizations Online Banking and Beyond: Internet-Related Offerings from U.S. Banks Online Communities Online Dispute Resolution Online News Services (Online Journalism) Online Public Relations Online Publishing Online Religion Politics Public Accounting Firms Real Estate Research on the Internet Securities Trading on the Internet Telecommuting and Telework Travel and Tourism Video Streaming Virtual Enterprises Virtual Teams Web-Based Training Webcasting
Design, Implementation, and Management Application Service Providers (ASPs) Benchmarking Internet Capacity Planning for Web Services Client/Server Computing E-business ROI Simulations Enterprise Resource Planning (ERP) Human Factors and Ergonomics Information Quality in Internet and E-business Environments xii
Load Balancing on the Internet Managing a Network Environment Managing Risk in Internet-Based Software Projects Peer-to-Peer Systems Project Management Techniques Prototyping Return on Investment Analysis for E-business Projects Software Design and Implementation in the Web Environment Structured Query Language (SQL) Universally Accessible Web Resources: Designing for People with Disabilities Usability Testing: An Evaluation Process for Internet Communications Virtual Reality on the Internet: Collaborative Virtual Reality Web Hosting Web Quality of Service Electronic Commerce Business Plans for E-commerce Projects Business-to-Business (B2B) Electronic Commerce Business-to-Business (B2B) Internet Business Models Business-to-Consumer (B2C) Internet Business Models Click-and-Brick Electronic Commerce Collaborative Commerce Consumer-Oriented Electronic Commerce E-government Electronic Commerce and Electronic Business Electronic Data Interchange (EDI) Electronic Payment E-marketplaces Extranets Intranets Online Auction Site Management Online Auctions Web Services Foundation Computer Literacy Digital Economy Downloading from the Internet Electronic Commerce and Electronic Business File Types Geographic Information Systems (GIS) and the Internet History of the Internet Internet Etiquette (Netiquette) Internet Literacy Internet Navigation (Basics, Services, and Portals) Multimedia
Value Chain Analysis Web Search Fundamentals Web Search Technology Infrastructure Circuit, Message, and Packet Switching Conducted Communications Media Convergence of Data, Sound, and Video Data Compression Digital Communication Integrated Services Digital Network (ISDN): Narrowband and Broadband Services and Applications Internet Architecture Internet2 Linux Operating System Local Area Networks Middleware Multiplexing Public Networks Speech and Audio Compression Standards and Protocols in Data Communications Storage Area Networks (SANs) TCP/IP Suite Unix Operating System Video Compression Voice over Internet Protocol (IP) Virtual Private Networks: Internet-Protocol (IP) Based Wide Area and Metropolitan Area Networks Legal, Social, Organizational, International, and Taxation Issues Copyright Law Cybercrime and Cyberfraud Cyberlaw: The Major Areas, Development, and Provisions Cyberterrorism Digital Divide Digital Identity Feasibility of Global E-business Projects Gender and Internet Usage Global Issues Health Issues International Cyberlaw Internet Censorship Internet Diffusion Legal, Social, and Ethical Issues Online Stalking Open Source Development and Licensing Organizational Impact Patent Law Privacy Law Software Piracy Taxation Issues Trademark Law
Marketing and Advertising on the Web Consumer Behavior Customer Relationship Management on the Web Data Mining in E-commerce Data Warehousing and Data Marts Databases on the Web Fuzzy Logic Intelligent Agents Knowledge Management Machine Learning and Data Mining on the Web Marketing Communication Strategies Marketing Plans for E-commerce Projects Online Analytical Processing (OLAP) Personalizations and Customization Technologies Rule-Based and Expert Systems Wireless Marketing Security Issues and Measures Authentication Biometric Authentication Computer Security Incident Response Teams (CSIRTs) Computer Viruses and Worms Denial of Service Attacks Digital Signatures and Electronic Signatures Disaster Recovery Planning Encryption Firewalls Guidelines for a Comprehensive Security System Internet Security Standards Intrusion Detection System Passwords Physical Security Public Key Infrastructure (PKI) Secure Electronic Transmissions (SET) Secure Sockets Layer (SSL) Virtual Private Networks: Internet Protocol (IP) Based Windows 2000 Security Supply Chain Management Electronic Procurement E-systems for the Support of Manufacturing Operations International Supply Chain Management Inventory Management Managing the Flow of Materials Across the Supply Chain Strategic Alliances Supply Chain Management Supply Chain Management and the Internet Supply Chain Management Technologies Supply Networks: Developing and Maintaining Relationships and Stratedies Value Chain Analysis Web Design and Programming Active Server Pages (ASP) ActiveX ActiveX Data Objects (ADO)
C/C++ Cascading Style Sheets (CSS) Common Gateway Interface (CGI) Scripts DHTML (Dynamic HyperText Markup Language) Extensible Markup Language (XML) Extensible Stylesheet Language (XSL) HTML/XHTML (Hypertext Markup Language/Extensible HyperText Markup Language) Java Java Server Pages (JSP) JavaBeans and Software Architecture JavaScript Perl Visual Basic Scripting Edition (VBScript) Visual Basic Visual C++ (Microsoft)
Web Content Management Web Site Design XBRL (Extensible Business Reporting Language): Business Reporting with XML Wireless Internet and E-commerce BluetoothTM —A Wireless Personal Area Network Mobile Commerce Mobile Devices and Protocols Mobile Operating Systems and Applications Propagation Characteristics of Wireless Channels Radio Frequency and Wireless Communications Wireless Application Protocol (WAP) Wireless Communications Applications Wireless Internet Wireless Marketing
Contributors Tarek Abdelzaher University of Virginia Web Quality of Service Charles Abzug James Madison University Linux Operating System Patricia Adams Education Resources Strategic Alliances Carol A. Akerelrea Colorado State University Usability Testing: An Evaluation Process for Internet Communications Gary C. Anders Arizona State University West Online Auctions Amy W. Apon University of Arkansas Public Networks Pierre A. Balthazard Arizona State University West Groupware Ashok Deo Bardhan University of California, Berkeley Real Estate Joey Bargsten University of Oregon Multimedia Hossein Bidgoli California State University, Bakersfield Computer Literacy Internet Literacy Gerald Bluhm Tyco Fire & Security Patent Law Robert J. Boncella Washburn University Secure Sockets Layer (SSL) J. Efrim Boritz University of Waterloo, Canada XBRL (Extensible Business Reporting Language): Business Reporting with XML Sviatoslav Braynov State University of New York at Buffalo Data Mining in E-commerce Personalization and Customization Technologies Randy M. Brooks Millikin University Online Publishing Colleen Brown Purdue University History of the Internet
Tara Brown-L’Bahy Harvard University Distance Learning (Virtual Learning) Linda S. Bruenjes Lasell College Internet2 Gerard J. Burke University of Florida Supply Chain Management L. Jean Camp Harvard University Peer-to-Peer Systems Charles J. Campbell The University of Memphis Java Janice E. Carrillo University of Florida Inventory Management Michael A. Carrillo Oracle Corporation Inventory Management Lillian N. Cassel Villanova University Wireless Application Protocol (WAP) J. Cecil New Mexico State University Virtual Enterprises Haluk Cetin Murray State University Geographic Information Systems (GIS) and the Internet Henry Chan The Hong Kong Polytechnic University, China Consumer-Oriented Electronic Commerce C. Janie Chang San Jos´e State University Public Accounting Firms Camille Chin West Virginia University Cybercrime and Cyberfraud T. Matthew Ciolek The Australian National University, Australia Online Religion Timothy W. Cole University of Illinois at Urbana-Champaign Visual Basic Scripting Edition (VBScript) Fred Condo California State University, Chico Cascading Style Sheets (CSS) David E. Cook University of Derby, United Kingdom Standards and Protocols in Data Communications Marco Cremonini Universita` di Milano, Italy Disaster Recovery Planning xv
Mary J. Cronin Boston College Mobile Commerce ´ Jaime J. Davila Hampshire College Digital Divide Chris Dede Harvard University Distance Learning (Virtual Learning) Victoria S. Dennis Minnesota State Bar Association Law Firms Lynn A. DeNoia Rensselaer Polytechnic Institute Wide Area and Metropolitan Area Networks Nikhilesh Dholakia University of Rhode Island Gender and Internet Usage Global Diffusion of the Internet Ruby Roy Dholakia University of Rhode Island Gender and Internet Usage Global Diffusion of the Internet Vesna Dolnicar University of Ljubljana, Slovenia Benchmarking Internet Rich Dorfman WebFeats! and Waukesha County Technical College Extensible Markup Language (XML) Magda El Zarki University of California—Irvine Wireless Internet Larry P. English Information Impact International, Inc. Information Quality in Internet and E-business Environments Roman Erenshteyn Goldey-Beacom College ActiveX Ray Everett-Church ePrivacy Group, Inc. Privacy Law Trademark Law Patrick J. Fahy Athabasca University Web-Based Training Gerald R. Ferrera Bentley College Copyright Law Daniel R. Fesenmaier University of Illinois at Urbana–Champaign Travel and Tourism C. Patrick Fleenor Seattle University Feasibility of Global E-business Projects Marcia H. Flicker Fordham University Securities Trading on the Internet Immanuel Freedman Dr. Immanuel Freedman, Inc. Video Compression
Borko Furht Florida Atlantic University Interactive Multimedia on the Web Jayne Gackenbach Athabasca University, Canada Health Issues Alan Gaitenby University of Massachusetts, Amherst Online Dispute Resolution Bruce Garrison University of Miami Online News Services (Online Journalism) G. David Garson North Carolina State University E-government Roger Gate IBM United Kingdom Ltd., United Kingdom Electronic Funds Transfer Mario Giannini Code Fighter, Inc., and Columbia University C/C++ Julia Alpert Gladstone Bryant College International Cyberlaw Mary C. Gilly University of California, Irvine Consumer Behavior Robert H. Goffman Concordia University Electronic Procurement James E. Goldman Purdue University Firewalls Sven Graupner Hewlett-Packard Laboratories Web Services Robert H. Greenfield Computer Consulting Circuit, Message, and Packet Switching Ulrike Gretzel University of Illinois at Urbana–Champaign Travel and Tourism Paul Gronke Reed College Politics Jim Grubbs University of Illinois at Springfield E-mail and Instant Messaging Mohsen Guizani Western Michigan University Wireless Communications Applications Jon Gunderson University of Illinois at Urbana–Champaign Universally Accessible Web Resources: Designing for People with Disabilities Babita Gupta California State University, Monterey Bay Global Issues Louisa Ha Bowling Green State University Webcasting
Kirk Hallahan Colorado State University Online Public Relations Diane M. Hamilton Rowan University Business-to-Consumer (B2C) Internet Business Models Robert W. Heath Jr. The University of Texas at Austin Digital Communication Geert Heijenk University of Twente, The Netherlands Wireless Internet Jesse M. Heines University of Massachusetts Lowell Extensible Stylesheet Language (XSL) Rodney J. Heisterberg Notre Dame de Namur University and Rod Heisterberg Associates Collaborative Commerce Steven J. Henry Wolf, Greenfield & Sacks, P.C. Open Source Development and Licensing Julie Hersberger University of North Carolina at Greensboro Internet Censorship Kenneth Einar Himma University of Washington Legal, Social, and Ethical Issues Matthias Holweg Massachusetts Institute of Technology Managing the Flow of Materials Across the Supply Chain Russ Housley Vigil Security, LLC Public Key Infrastructure (PKI) Yeong-Hyeon Hwang University of Illinois at Urbana–Champaign Travel and Tourism Robert E. Irie SPAWAR Systems Center San Diego Web Site Design Linda C. Isenhour University of Central Florida Human Resources Management H.-Arno Jacobsen University of Toronto, Canada Application Service Providers (ASPs) Charles W. Jaeger Southerrn Oregon University Cyberterrorism Dwight Jaffee University of California, Berkeley Real Estate Sushil Jajodia George Mason University Intrusion Detection Techniques Mark Jeffery Northwestern University Return on Investment Analysis for E-business Projects Andrew Johnson University of Illinois at Chicago Virtual Reality on the Internet: Collaborative Virtual Reality
Ari Juels RSA Laboratories Encryption Bhushan Kapoor California State University, Fullerton ActiveX Data Objects (ADO) Joseph M. Kayany Western Michigan University Internet Etiquette (Netiquette) Doug Kaye RDS Strategies LLC Web Hosting Chuck Kelley Excellence In Data, Inc. Data Warehousing and Data Marts Diane Ketelhut Harvard University Distance Learning (Virtual Learning) Chang-Su Kim Seoul National University, Korea Data Compression Wooyoung Kim University of Illinois at Urbana-Champaign Web Services Jerry Kindall Epok Inc. Digital Identity Brad Kleindl Missouri Southern State University–Joplin Value Chain Analysis Graham Knight University College London, United Kingdom Internet Architecture Craig D. Knuckles Lake Forest College DHTML (Dynamic HyperText Markup Language) Jim Krause Indiana University Enhanced TV Peter Kroon Agere Systems Speech and Audio Compression Gary J. Krug Eastern Washington University Convergence of Data, Sound, and Video Nir Kshetri University of North Carolina Gender and Internet Usage Global Diffusion of the Internet C.-C. Jay Kuo University of Southern California Data Compression Stan Kurkovsky Columbus State University Common Gateway Interface (CGI) Scripts Pamela M. H. Kwok Hong Kong Polytechnic University, China Wireless Marketing Jennifer Lagier Hartnell College File Types
Thomas D. Lairson Rollins College Supply Chain Management and the Internet Gary LaPoint Syracuse University International Supply Chain Management Haniph A. Latchman University of Florida Managing a Network Environment John LeBaron University of Massachusetts Lowell Internet2 Kenneth S. Lee University of Pennsylvania Wireless Internet Jason Leigh University of Illinois at Chicago Virtual Reality on the Internet: Collaborative Virtual Reality Margarita Maria Lenk Colorado State University Guidelines for a Comprehensive Security System Nanette S. Levinson American University Developing Nations Edwin E. Lewis Jr. Johns Hopkins University E-business ROI Simulations David J. Loundy DePaul University Online Stalking Robert H. Lowson University of East Anglia, United Kingdom E-systems for the Support of Manufacturing Operations Supply Networks: Developing and Maintaining Relationships and Strategies David Lukoff Saybrook Graduate School and Research Center Health Issues Kuber Maharjan Purdue University Downloading from the Internet Julie R. Mariga Purdue University Mobile Devices and Protocols Mobile Operating Systems and Applications Oge Marques Florida Atlantic University Interactive Multimedia on the Web Prabhaker Mateti Wright State University TCP/IP Suite Bruce R. Maxim University of Michigan–Dearborn Game Design: Games for the World Wide Web Blayne E. Mayfield Oklahoma State University Visual C++ (Microsoft) Cavan McCarthy Louisiana State University Digital Libraries
Patrick McDaniel AT&T Labs Authentication David E. McDysan WorldCom Virtual Private Networks: Internet Protocol (IP) Based Daniel J. McFarland Rowan University Client/Server Computing Matthew K. McGowan Bradley University Electronic Data Interchange (EDI) Nenad Medvidovic University of Southern California JavaBeans and Software Architecture Nikunj R. Mehta University of Southern California JavaBeans and Software Architecture John A. Mendonca Purdue University Organizational Impact Weiyi Meng State University of New York at Binghamton Web Search Technology Mark S. Merkow E-commerce Guide Secure Electronic Transactions (SET) Mark Michael King’s College HTML/XHTML (HyperText Markup Language/ Extensible HyperText Markup Language) Physical Security Brent A. Miller IBM Corporation BluetoothT M —A Wireless Personal Area Network Robert K. Moniot Fordham University Software Piracy Joseph Morabito Stevens Institute of Technology Online Analytical Processing (OLAP) Roy Morris Capitol College Voice over Internet Protocol (IP) Alec Nacamuli IBM United Kingdom Ltd., United Kingdom Electronic Funds Transfer Annette Nellen San Jos´e State University Public Accounting Firms Taxation Issues Dale Nesbary Oakland University Nonprofit Organizations Dat-Dao Nguyen California State University, Northridge Business-to-Business (B2B) Internet Business Models Peng Ning North Carolina State University Intrusion Detection Techniques
Mark E. Nissen Naval Postgraduate School Intelligent Agents Won Gyun No University of Waterloo, Canada XBRL (Extensible Business Reporting Language): Business Reporting with XML Eric H. Nyberg Carnegie Mellon University Prototyping Jeff Offutt George Mason University Software Design and Implementation in the Web Environment Donal O’Mahony University of Dublin, Ireland Electronic Payment Robert Oshana Southern Methodist University Capacity Planning for Web Services Dennis O. Owen Purdue University Visual Basic Raymond R. Panko University of Hawaii at Manoa Computer Security Incident Response Teams (CSIRTs) Digital Signatures and Electronic Signatures Internet Security Standards Anand Paul University of Florida Inventory Management Thomas L. Pigg Jackson State Community College Conducted Communications Media Paul S. Piper Western Washington University Research on the Internet Benjamin R. Pobanz Purdue University Mobile Devices and Protocols Richard E. Potter University of Illinois at Chicago Groupware Dennis M. Powers Southern Oregon University Cyberlaw: The Major Areas, Development, and Provisions Paul R. Prabhaker Illinois Institute of Technology E-marketplaces Etienne E. Pracht University of South Florida Health Insurance and Managed Care Frederick Pratter Eastern Oregon University JavaServer Pages (JSP) Robert W. Proctor Purdue University Human Factors and Ergonomics Jian Qin Syracuse University Web Content Management
Zinovy Radovilsky California State University, Hayward Enterprise Resource Planning (ERP) Jeremy Rasmussen Sypris Electronics, LLC Passwords Peter Raven Seattle University Feasibility of Global E-business Projects Amy W. Ray Bentley College Business Plans for E-commerce Projects Julian J. Ray Western New England College Business-to-Business (B2B) Electronic Commerce Pratap Reddy Raritan Valley Community College Internet Navigation (Basics, Services, and Portals) Drummond Reed OneName Corporation Digital Identity Vladimir V. Riabov Rivier College Storage Area Networks (SANs) Nick Rich Cardiff Business School, United Kingdom Managing the Flow of Materials Across the Supply Chain Malu Roldan San Jose State University Marketing Plans for an E-commerce Project Constantine Roussos Lynchburg College JavaScript Akhil Sahai Hewlett-Packard Laboratories Web Services Eduardo Salas University of Central Florida Human Resources Management Atul A. Salvekar Intel Corp. Digital Communication Pierangela Samarati Universita` di Milano, Italy Disaster Recovery Planning J. Christopher Sandvig Western Washington University Active Server Pages Robert J. Schalkoff Clemson University Rule-Based and Expert Systems Shannon Schelin North Carolina State University E-government William T. Schiano Bentley College Intranets Roy C. Schmidt Bradley University Risk Management in Internet-Based Software Projects
E. Eugene Schultz University of California–Berkley Lab Denial of Service Attacks Windows 2000 Security Steven D. Schwaitzberg Tufts-New England Medical Center Medical Care Delivery Kathy Schwalbe Augsburg College Project Management Techniques Mark Shacklette The University of Chicago Unix Operating System P. M. Shankar Drexel University Propagation Characteristics of Wireless Channels John Sherry Purdue University History of the Internet Carolyn J. Siccama University of Massachusetts Lowell Internet2 Judith C. Simon The University of Memphis Java Law Enforcement Law Firms Robert Simon George Mason University Middleware Nirvikar Singh University of California, Santa Cruz Digital Economy Clara L. Sitter University of Denver Library Management Robert Slade Consultant Computer Viruses and Worms Erick D. Slazinski Purdue University Structured Query Language (SQL) Mark Smith Purdue University Supply Chain Management Technologies Lee Sproull New York University Online Communities Charles Steinfield Michigan State University Click-and-Brick Electronic Commerce Electronic Commerce and Electronic Business Edward A. Stohr Stevens Institute of Technology Online Analytical Processing (OLAP) Dianna L. Stone University of Central Florida Human Resources Management David Stotts University of North Carolina at Chapel Hill Perl
Judy Strauss University of Nevada, Reno Marketing Communication Strategies Wayne C. Summers Columbus State University Local Area Networks Jamie S. Switzer Colorado State University Virtual Teams Dale R. Thompson University of Arkansas Public Networks John S. Thompson University of Colorado at Boulder Integrated Services Digital Network (ISDN): Narrowband and Broadband Services and Applications Stephen W. Thorpe Neumann College Extranets Ronald R. Tidd Central Washington University Knowledge Management Herbert Tuttle The University of Kansas Video Streaming Okechukwu C. Ugweje The University of Akron Radio Frequency and Wireless Communications Asoo J. Vakharia University of Florida Supply Chain Management Robert Vaughn University of Memphis Law Enforcement Vasja Vehovar University of Ljubljana, Slovenia Benchmarking Internet Kim-Phuong L. Vu Purdue University Human Factors and Ergonomics Jordan Walters BCN Associates, Inc. Managing a Network Environment Siaw-Peng Wan Elmhurst College Online Banking and Beyond: Internet-Related Offerings from U.S. Banks Youcheng Wang University of Illinois at Urbana–Champaign Travel and Tourism James. L. Wayman San Jose State University Biometric Authentication Scott Webster Syracuse University International Supply Chain Management Jianbin Wei Wayne State University Load Balancing on the Internet Ralph D. Westfall California State Polytechnic University, Pomona Telecommuting and Telework
Pamela Whitehouse Harvard University Distance Learning (Virtual Learning) Dave Whitmore Champlain College Multiplexing Russell S. Winer New York University Customer Relationship Management on the Web Raymond Wisman Indiana University Southeast Web Search Fundamentals Paul L. Witt University of Texas at Arlington Internet Relay Chat (IRC) Mary Finley Wolfinbarger California State University Long Beach Consumer Behavior Peter R. Wurman North Carolina State University Online Auction Site Management Cheng-Zhong Xu Wayne State University Load Balancing on the Internet
Qiang Yang Hong Kong University of Science and Technology, China Machine Learning and Data Mining on the Web A. Neil Yerkey University at Buffalo Databases on the Web Clement Yu University of Illinois at Chicago Web Search Technology Daniel Dajun Zeng University of Arizona Intelligent Agents Yan-Qing Zhang Georgia State University Fuzzy Logic Xiaobo Zhou University of Colorado at Colorado Springs Load Balancing on the Internet Donald E. Zimmerman Colorado State University Usability Testing: An Evaluation Process for Internet Communications
Preface The Internet Encyclopedia is the first comprehensive examination of the core topics in the Internet field. The Internet Encyclopedia, a three-volume reference work with 205 chapters and more than 2,600 pages, provides comprehensive coverage of the Internet as a business tool, IT platform, and communications and commerce medium. The audience includes the libraries of two-year and four-year colleges and universities with MIS, IT, IS, data processing, computer science, and business departments; public and private libraries; and corporate libraries throughout the world. It is the only comprehensive source for reference material for educators and practitioners in the Internet field. Education, libraries, health, medical, biotechnology, military, law enforcement, accounting, law, justice, manufacturing, financial services, insurance, communications, transportation, aerospace, energy, and utilities are among the fields and industries expected to become increasingly dependent upon the Internet and Web technologies. Companies in these areas are actively researching the many issues surrounding the design, utilization, and implementation of these technologies. This definitive three-volume encyclopedia offers coverage of both established and cutting-edge theories and developments of the Internet as a technical tool and business/communications medium. The encyclopedia contains chapters from global experts in academia and industry. It offers the following unique features: 1) Each chapter follows a format which includes title and author, chapter outline, introduction, body, conclusion, glossary, cross references, and references. This unique format enables the readers to pick and choose among various sections of a chapter. It also creates consistency throughout the entire series. 2) The encyclopedia has been written by more than 240 experts and reviewed by more than 840 academics and practitioners chosen from around the world. This diverse collection of expertise has created the most definitive coverage of established and cutting edge theories and applications in this fast-growing field. 3) Each chapter has been rigorously peer reviewed. This review process assures the accuracy and completeness of each topic. 4) Each chapter provides extensive online and offline references for additional readings. This will enable readers to further enrich their understanding of a given topic. 5) More than 1,000 illustrations and tables throughout the series highlight complex topics and assist further understanding. 6) Each chapter provides extensive cross references. This helps the readers identify other chapters within
the encyclopedia related to a particular topic, which provides a one-stop knowledge base for a given topic. 7) More than 2,500 glossary items define new terms and buzzwords throughout the series, which assists readers in understanding concepts and applications. 8) The encyclopedia includes a complete table of contents and index sections for easy access to various parts of the series. 9) The series emphasizes both technical and managerial issues. This approach provides researchers, educators, students, and practitioners with a balanced understanding of the topics and the necessary background to deal with problems related to Internet-based systems design, implementation, utilization, and management. 10) The series has been designed based on the current core course materials in several leading universities around the world and current practices in leading computer- and Internet-related corporations. This format should appeal to a diverse group of educators, practitioners, and researchers in the Internet field.
We chose to concentrate on fields and supporting technologies that have widespread applications in the academic and business worlds. To develop this encyclopedia, we carefully reviewed current academic research in the Internet field at leading universities and research institutions around the world. Management information systems, decision support systems (DSS), supply chain management, electronic commerce, network design and management, and computer information systems (CIS) curricula recommended by the Association of Information Technology Professionals (AITP) and the Association for Computing Machinery (ACM) were carefully investigated. We also researched the current practices in the Internet field used by leading IT corporations. Our work enabled us to define the boundaries and contents of this project.
TOPIC CATEGORIES
Based on our research we identified 11 major topic areas for the encyclopedia: Foundation; Infrastructure; Legal, social, organizational, international, and taxation issues; Security issues and measures; Web design and programming; Design, implementation, and management; Electronic commerce; Marketing and advertising on the Web;
Supply chain management; Wireless Internet and e-commerce; and Applications.
Although these 11 categories of topics are interrelated, each addresses one major dimension of the Internetrelated fields. The chapters in each category are also interrelated and complementary, enabling readers to compare, contrast, and draw conclusions that might not otherwise be possible. Although the entries have been arranged alphabetically, the light they shed knows no bounds. The encyclopedia provides unmatched coverage of fundamental topics and issues for successful design, implementation, and utilization of Internet-based systems. Its chapters can serve as material for a wide spectrum of courses, such as the following:
Web technology fundamentals; E-commerce; Security issues and measures for computers, networks, and online transactions; Legal, social, organizational, and taxation issues raised by the Internet and Web technology; Wireless Internet and e-commerce; Supply chain management; Web design and programming; Marketing and advertising on the Web; and The Internet and electronic commerce applications.
Successful design, implementation, and utilization of Internet-based systems require a thorough knowledge of several technologies, theories, and supporting disciplines. Internet and Web technologies researchers and practitioners have had to consult many resources to find answers. Some of these sources concentrate on technologies and infrastructures, some on social and legal issues, and some on applications of Internet-based systems. This encyclopedia provides all of this relevant information in a comprehensive three-volume set with a lively format. Each volume incorporates core Internet topics, practical applications, and coverage of the emerging issues in the Internet and Web technologies field. Written by scholars and practitioners from around the world, the chapters fall into the 11 major subject areas mentioned previously.
Foundation Chapters in this group examine a broad range of topics. Theories and concepts that have a direct or indirect effect on the understanding, role, and the impact of the Internet in public and private organizations are presented. They also highlight some of the current issues in the Internet field. These articles explore historical issues and basic concepts as well as economic and value chain concepts. They address fundamentals of Web-based systems as well as Web search issues and technologies. As a group they provide a solid foundation for the study of the Internet and Web-based systems.
Infrastructure Chapters in this group explore the hardware, software, operating systems, standards, protocols, network systems, and technologies used for design and implementation of the Internet and Web-based systems. Thorough discussions of TCP/IP, compression technologies, and various types of networks systems including LANs, MANS, and WANs are presented.
Legal, Social, Organizational, International, and Taxation Issues These chapters look at important issues (positive and negative) in the Internet field. The coverage includes copyright, patent and trademark laws, privacy and ethical issues, and various types of cyberthreats from hackers and computer criminals. They also investigate international and taxation issues, organizational issues, and social issues of the Internet and Web-based systems.
Security Issues and Measures Chapters in this group provide a comprehensive discussion of security issues, threats, and measures for computers, network systems, and online transactions. These chapters collectively identify major vulnerabilities and then provide suggestions and solutions that could significantly enhance the security of computer networks and online transactions.
Web Design and Programming The chapters in this group review major programming languages, concepts, and techniques used for designing programs, Web sites, and virtual storefronts in the ecommerce environment. They also discuss tools and techniques for Web content management.
Design, Implementation, and Management The chapters in this group address a host of issues, concepts, theories and techniques that are used for design, implementation, and management of the Internet and Web-based systems. These chapters address conceptual issues, fundamentals, and cost benefits and returns on investment for Internet and e-business projects. They also present project management and control tools and techniques for the management of Internet and Web-based systems.
Electronic Commerce These chapters present a thorough discussion of electronic commerce fundamentals, taxonomies, and applications. They also discuss supporting technologies and applications of e-commerce including intranets, extranets, online auctions, and Web services. These chapters clearly demonstrate the successful applications of the Internet and Web technologies in private and public sectors.
Marketing and Advertising on the Web The chapters in this group explore concepts, theories, and technologies used for effective marketing and advertising
P1: GDZ/SPH WL040-FM-Vol.I
P2: GDZ/SPH
QC: GDZ/SPH
WL040/Bidgolio-Vol I
T1: GDZ
WL040-Sample-v1.cls
September 24, 2003
18:37
Char Count= 0
TOPIC CATEGORIES
on the Web. These chapters examine both qualitative and quantitative techniques. They also investigate the emerging technologies for mass personalization and customization in the Web environment.
Supply Chain Management The chapters in this group discuss the fundamental concepts and theories of value chain and supply chain management. The chapters examine the major role that the Internet and Web technologies play in an efficient and effective supply chain management program.
Wireless Internet and E-commerce These chapters look at the fundamental concepts and technologies of wireless networks and wireless computing as they relate to the Internet and e-commerce operations. They also discuss mobile commerce and wireless marketing as two of the growing fields within the e-commerce environment.
Applications The Internet and Web-based systems are everywhere. In most cases they have improved the efficiency and effectiveness of managers and decision makers. Chapters in this group highlight applications of the Internet in several fields, such as accounting, manufacturing, education, and human resources management, and their unique applications in a broad section of the service industries including law, law enforcement, medical delivery, health insurance and managed care, library management, nonprofit organizations, banking, online communities, dispute resolution, news services, public relations, publishing, religion, politics, and real estate. Although these disciplines are different in scope, they all utilize the Internet to improve productivity and in many cases to increase customer service in a dynamic business environment. Specialists have written the collection for experienced and not-so-experienced readers. It is to these contributors that I am especially grateful. This remarkable collection of scholars and practitioners has distilled their knowledge
into a fascinating and enlightening one-stop knowledge base in Internet-based systems that “talk” to readers. This has been a massive effort but one of the most rewarding experiences I have ever undertaken. So many people have played a role that it is difficult to know where to begin. I should like to thank the members of the editorial board for participating in the project and for their expert advice on the selection of topics, recommendations for authors, and review of the materials. Many thanks to the more than 840 reviewers who devoted their time to providing advice to me and the authors on improving the coverage, accuracy, and comprehensiveness of these materials. I thank my senior editor at John Wiley & Sons, Matthew Holt, who initiated the idea of the encyclopedia back in spring of 2001. Through a dozen drafts and many reviews, the project got off the ground and then was managed flawlessly by Matthew and his professional team. Matthew and his team made many recommendations for keeping the project focused and maintaining its lively coverage. Tamara Hummel, our superb editorial coordinator, exchanged several hundred e-mail messages with me and many of our authors to keep the project on schedule. I am grateful for all her support. When it came to the production phase, the superb Wiley production team took over. Particularly I want to thank Deborah DeBlasi, our senior production editor at John Wiley & Sons, and Nancy J. Hulan, our project manager at TechBooks. I am grateful for all their hard work. Last, but not least, I want to thank my wonderful wife Nooshin and my two lovely children Mohsen and Morvareed for being so patient during this venture. They provided a pleasant environment that expedited the completion of this project. Nooshin was also a great help in designing and maintaining the author and reviewer databases. Her efforts are greatly appreciated. Also, my two sisters Azam and Akram provided moral support throughout my life. To this family, any expression of thanks is insufficient.
Hossein Bidgoli
California State University, Bakersfield
Guide to the Internet Encyclopedia The Internet Encyclopedia is a comprehensive summary of the relatively new and very important field of the Internet. This reference work consists of three separate volumes and 205 chapters on various aspects of this field. Each chapter in the encyclopedia provides a comprehensive overview of the selected topic, intended to inform a broad spectrum of readers ranging from computer professionals and academicians to students to the general business community. In order that you, the reader, will derive the greatest possible benefit from The Internet Encyclopedia, we have provided this Guide. It explains how the information within the encyclopedia can be located.
ORGANIZATION The Internet Encyclopedia is organized to provide maximum ease of use for its readers. All of the chapters are arranged in alphabetical sequence by title. Chapter titles that begin with the letters A to F are in Volume 1, chapter titles from G to O are in Volume 2, and chapter titles from P to Z are in Volume 3. So that they can be easily located, chapter titles generally begin with the key word or phrase indicating the topic, with any descriptive terms following. For example, "Virtual Reality on the Internet: Collaborative Virtual Reality" is the chapter title rather than "Collaborative Virtual Reality."
Table of Contents A complete table of contents for the entire encyclopedia appears in the front of each volume. This list of titles represents topics that have been carefully selected by the editor-in-chief, Dr. Hossein Bidgoli, and his colleagues on the Editorial Board. Following this list of chapters by title is a second complete list, in which the chapters are grouped according to subject area. The encyclopedia provides coverage of 11 specific subject areas, such as E-commerce and Supply Chain Management. Please see the Preface for a more detailed description of these subject areas.
Index The Subject Index is located at the end of Volume 3. This index is the most convenient way to locate a desired topic within the encyclopedia. The subjects in the index are listed alphabetically and indicate the volume and page number where information on this topic can be found.
Chapters Each chapter in The Internet Encyclopedia begins on a new page, so that the reader may quickly locate it. The author's name and affiliation are displayed at the beginning of the article.
All chapters in the encyclopedia are organized according to a standard format, as follows:
Title and author,
Outline,
Introduction,
Body,
Conclusion,
Glossary,
Cross References, and
References.
Outline Each chapter begins with an outline indicating the content to come. This outline provides a brief overview of the chapter so that the reader can get a sense of the information contained there without having to leaf through the pages. It also serves to highlight important subtopics that will be discussed within the chapter. For example, the chapter “Computer Literacy” includes sections entitled Defining a Computer, Categories of Computers According to Their Power, and Classes of Data Processing Systems. The outline is intended as an overview and thus lists only the major headings of the chapter. In addition, lower-level headings will be found within the chapter.
Introduction The text of each chapter begins with an introductory section that defines the topic under discussion and summarizes the content. By reading this section the readers get a general idea about the content of a specific chapter.
Body The body of each chapter discusses the items that were listed in the outline section.
Conclusion The conclusion section provides a summary of the materials discussed in each chapter. This section imparts to the readers the most important issues and concepts discussed within each chapter.
Glossary The glossary contains terms that are important to an understanding of the chapter and that may be unfamiliar to the reader. Each term is defined in the context of the particular chapter in which it is used. Thus the same term may be defined in two or more chapters with the detail of the definition varying slightly from one to another. The encyclopedia includes approximately 2,500 glossary terms.
For example, the article "Computer Literacy" includes the following glossary entries:
Computer: A machine that accepts data as input, processes the data without human interference using a set of stored instructions, and outputs information. Instructions are step-by-step directions given to a computer for performing specific tasks.
Computer generations: Different classes of computer technology identified by a distinct architecture and technology; the first generation was vacuum tubes, the second transistors, the third integrated circuits, the fourth very-large-scale integration, and the fifth gallium arsenide and parallel processing.
Cross References All the chapters in the encyclopedia have cross references to other chapters. These appear at the end of the chapter, following the text and preceding the references. The cross references indicate related chapters which can be
consulted for further information on the same topic. The encyclopedia contains more than 2,000 cross references in all. For example, the chapter “Java” has the following cross references: JavaBeans and Software Architecture; Software Design and Implementation in the Web Environment.
References The reference section appears as the last element in a chapter. It lists recent secondary sources to aid the reader in locating more detailed or technical information. Review articles and research papers that are important to an understanding of the topic are also listed. The references in this encyclopedia are for the benefit of the reader, to provide direction for further research on the given topic. Thus they typically consist of one to two dozen entries. They are not intended to represent a complete listing of all materials consulted by the author in preparing the chapter. In addition, some chapters contain a Further Reading section, which includes additional sources readers may wish to consult.
A Active Server Pages J. Christopher Sandvig, Western Washington University
Introduction
Introduction of ASP and ASP.NET
Framework
Scripting Versus Object-Oriented Programming Languages
Code Examples
ASP
ASP.NET
Microsoft's .NET Strategy
Web Services
XML and SOAP
Compiled Code
Common Language Run Time
Language and Platform Independence
Separation of Code and Content
Support for Multiple Client Types
Modularity
Base Class Library
Session-State Management
View State
Scalability and Reliability
Debugging and Error Handling
Future of ASP and ASP.NET
Glossary
Cross References
References
INTRODUCTION Active server pages (ASP) and ASP.NET are server-side programming technologies developed by Microsoft Corporation. Server-side programs run on Web servers and are used for dynamically generating Web pages. Many data-intensive Web applications, such as Web-based e-mail, online banking, news, weather, and search engines, require the use of server-side programs. Server-side programming is also useful for many other applications, such as collecting data, processing online payments, and controlling access to information. Server-side programs are executed each time a Web user requests a dynamically generated Web page. ASP and ASP.NET Web pages are identified by the file extensions .asp and .aspx, respectively. The primary advantage of server-side programming technologies is their ability to utilize databases. Databases are very efficient and powerful tools used for storing and retrieving data. Most sophisticated Web applications utilize databases for storing their data. Server-side programming is also very reliable. Servers provide a more stable, secure, and controllable programming environment than the client's browser. In addition to ASP and ASP.NET, other popular server-side Web technologies include Perl, PHP, J2EE, Java server pages, Python, and ColdFusion. All of these technologies run on the Web server and generate their output in HTML (hypertext markup language). The Web server sends the HTML to the client's browser, which interprets it and displays it as formatted text. Most server-side technologies have similar capabilities; the primary functional differences between them are scalability and programming complexity. These issues are discussed later in the chapter.
Server-side technologies may work in conjunction with client-side technologies. Client-side technologies are executed by the user’s browser and include JavaScript, Java applets, and ActiveX controls. Client-side technologies are typically used for controlling browser display features, such as mouse rollovers, dynamic images, and opening new browser windows. The advantage of client-side technologies is that the processing is done on the client’s machine, thus reducing the load on the Web server. Client-side technologies can execute more quickly because they do not require the back-and-forth transmission of data between the client and the server. However, the functionality of client-side technologies is limited due to their inability to access databases. They also require sending more data to the client, which can increase the time required to load a Web page. Server-side technologies are more reliable than clientside technologies. Because client-side technologies run on the user’s browser, they are dependent on the capabilities of the browser. Because browsers vary in their capabilities, client-side code that works well on one browser may not work on another. Server-side technologies, on the other hand, are always executed on the server. Because developers know the capabilities of their server, the results are very predictable. They send only HTML to the client’s browser, which all browsers support fairly consistently.
Introduction of ASP and ASP.NET Microsoft introduced ASP in December 1996 as a feature of its Internet Information Server (IIS) 3.0. Most of the other popular server-side technologies were introduced prior to 1995, making ASP a relatively late entrant into the world of server-side technologies. One year later, in
December 1997, Microsoft released ASP 2.0 as part of the Windows NT4 option pack (a free download). Two years later, ASP 3.0 was introduced with IIS 5.0 as part of Windows 2000. Each release introduced modest improvements in both functionality and performance. Despite its late introduction, ASP quickly became a popular server-side technology. Its popularity was driven by both its ease of programming and the widespread availability of the IIS server, which is bundled free with several versions of Microsoft's Windows operating system. Major Web sites that use ASP include Barnes and Noble (http://www.bn.com), Dell Computer (http://www.dell.com), JC Penney (http://www.jcpenney.com), MSNBC (http://www.msnbc.com), Ask.com (http://www.ask.com), and Radio Shack (http://www.radioshack.com). ASP.NET 1.0 was introduced by Microsoft in February 2002. ASP.NET differs significantly from ASP in its syntax, performance, and functionality. In many ways ASP.NET is a new product rather than simply an upgrade of ASP. ASP.NET was designed to support Microsoft's .NET strategy and has extensive support for two technologies that are at the core of the strategy: XML (extensible markup language) and SOAP (simple object access protocol). It was also designed to overcome some of the weaknesses of ASP in the areas of scalability and reliability. The differences between ASP and ASP.NET are discussed in greater detail later in this chapter. At the time of this writing, ASP.NET had just recently been introduced and had not yet been adopted by any major Web sites. However, it has received many positive reviews from developers and is expected to capture 30% of the enterprise development market by 2004 (Sholler, 2002). ASP and ASP.NET are both supported by Microsoft's IIS server. ASP is supported by all versions of IIS from 3.0 onward. ASP engines are also available from third-party developers that support ASP on a number of non-Microsoft operating systems and Web servers. Vendors include ChilliSoft and Stryon. The ASP.NET framework is a free download from Microsoft and runs with IIS under Windows 2000 and later. It is designed for portability between operating systems, and it is expected that third-party vendors will provide ASP.NET compilers for non-Microsoft servers and operating systems.
Framework Both ASP and ASP.NET are programming frameworks rather than programming languages. A framework is a bundle of technologies that work together to provide the tools needed for creating dynamic Web pages. The ASP framework provides support for two scripting languages, seven server objects, and Microsoft’s ActiveX data objects (ADO). The two scripting languages supported by Microsoft’s IIS server are VBScript and Jscript, of which VBScript is the most popular because of its similarity to the widely used Visual Basic programming language. Third-party vendors offer ASP scripting engines that support other scripting languages, such as Perl and Python. Much of ASP’s functionality is derived from a collection of seven server objects. These intrinsic objects provide the
tools for sending output to the user's browser, receiving input from the client, accessing server resources, storing data, writing files, and many other useful capabilities. The seven objects are Application, ASPError, ObjectContext, Request, Response, Session, and Server. Each object has a set of properties, methods, and events that are employed by the ASP programmer to access the object's functionality. A detailed description of ASP's server objects is beyond the scope of this chapter, but an excellent reference is available from Microsoft's developer library: http://msdn.microsoft.com/library/default.asp?URL=/library/psdk/iisref/vbob74bw.htm. The ASP framework provides database access through the recordset object, which is a member of Microsoft's ActiveX data objects (ADO). The recordset allows ASP scripts to utilize most commercial database products, including Microsoft Access, Microsoft SQL Server, Informix, and Oracle (Mitchell and Atkinson, 2000). The ASP.NET framework supports a large number of programming languages and operating systems. Microsoft's .NET framework provides native support for VB.NET, C# (pronounced C sharp), and JScript.NET. Third-party vendors have announced plans to produce ".NET-compliant" compilers for over a dozen other languages, including Eiffel, Pascal, Python, C++, COBOL, Perl, and Java (Kiely, 2002; Ullman, Ollie, Libre, & Go, 2001). The ASP.NET framework replaces ASP's seven server objects with an extensive "base class library." The class library contains hundreds of classes and offers considerably more functionality than do ASP's seven server objects. ASP.NET programmers access this code by instantiating objects from the class library. All ASP.NET programs utilize the same base class library regardless of the programming language used.
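To make the flavor of these intrinsic objects concrete, a short fragment along the following lines (a sketch, not taken from the chapter; the query-string name "user" is an arbitrary example) uses the Request, Response, Session, and Server objects together:

    <%@ Language=VBScript %>
    <%
        ' Read a value supplied by the client and echo it back safely
        Dim visitor
        visitor = Request.QueryString("user")
        If visitor = "" Then visitor = "guest"
        Response.Write "Hello, " & Server.HTMLEncode(visitor)
        ' Remember the name for later requests in the same session
        Session("visitor") = visitor
    %>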
Scripting Versus Object-Oriented Programming Languages An important difference between ASP and ASP.NET is that ASP uses scripting languages whereas ASP.NET supports object-oriented programming languages. Scripting languages are generally easier to write and understand, but their simple structure does not lend itself well to complex programs. Scripting languages are usually interpreted languages, meaning that the server must interpret them anew each time a page is served. ASP.NET supports object-oriented event-driven programming languages. Object-oriented languages organize computer code into small units of functionality, called objects. The advantage of these languages is that they encapsulate functionality into small reusable objects. Writing and understanding object-oriented programs can be more difficult than understanding scripts, but the encapsulation of program functionality into discrete, reusable objects offers considerable advantages in complex programs.
CODE EXAMPLES ASP The following two code samples illustrate how ASP and ASP.NET work. The ASP code in Listing 1 displays a time-of-day message. The scripting language used is
VBScript, and the file is saved on the Web server as TimeGreeting.asp. The code enclosed by the <% and %> server tags is executed on the server and is a mixture of VBScript and ASP server objects. The code outside these tags is HTML, which describes how the output should be formatted on the user's browser. A close look at Listing 1 illustrates how ASP works. The first tag tells the server which scripting language is used. Because VBScript is ASP's default language, this tag is not required, but it is considered good programming practice to include it. The second set of server tags instructs the server to insert the current time into the output. Time() is a VBScript function that returns the current time of day from the server's internal clock. The response object's write method is used to direct the output to the user's browser. The third set of tags is similar to the previous line, but the time function is nested inside the VBScript hour() function. The hour() function strips the minutes and seconds from the time and returns only the hour portion of the time as an integer value between 0 and 23. The fourth set of tags uses an "If then . . . End If" statement to send an appropriate time-of-day message. The If then . . . End If statement evaluates expressions as either true or false. If an expression is true, then the statements that immediately follow it are executed; otherwise execution jumps to the next conditional statement (ElseIf). Each successive statement is evaluated until the first true one is found. If none of the conditions is true, then the statements following the "Else" statement are executed. If more than one expression is true, then only the first is evaluated and its associated statements are executed. Figure 1 shows the output of this script displayed on a Web browser.
Figure 1: ASP example: code Listing 1 viewed through a Web browser.
Listing 1: Source code for TimeGreeting.asp.
Time of Day Greeting
Time of Day Greeting
The current time is
The current hour is
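Only the listing's literal text survives above. A plausible reconstruction of the full TimeGreeting.asp source, consistent with the description (the surrounding HTML structure and the hour thresholds for the three greetings are assumed), is the following sketch:

    <%@ Language=VBScript %>
    <html>
    <head><title>Time of Day Greeting</title></head>
    <body>
    <h1>Time of Day Greeting</h1>
    <p>The current time is <% Response.Write(Time()) %></p>
    <p>The current hour is <% Response.Write(Hour(Time())) %></p>
    <%
        ' Choose a greeting based on the hour (0-23)
        If Hour(Time()) < 12 Then
            Response.Write("Good Morning")
        ElseIf Hour(Time()) < 18 Then
            Response.Write("Good Afternoon")
        Else
            Response.Write("Good Evening")
        End If
    %>
    </body>
    </html>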
Listing 2 shows the HTML output generated by the ASP code in Listing 1. (This output was obtained by viewing the page on a browser, clicking on the right mouse button, and selecting “View Source.”) Note that all of
the code inside the tags has been processed by the server and replaced with the appropriate output. The client can view the HTML output that results from executing the server-side code enclosed within the tags but cannot view the server-side code that generated the output.
Listing 2: HTML sent to the browser after the server has processed TimeGreeting.asp.
Time of Day Greeting
Time of Day Greeting
The current time is 8:29:22 AM
The current hour is 8
Good Morning
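The markup itself is not reproduced above; the HTML sent to the browser presumably resembled the following sketch (tag structure assumed), with every server tag replaced by its output:

    <html>
    <head><title>Time of Day Greeting</title></head>
    <body>
    <h1>Time of Day Greeting</h1>
    <p>The current time is 8:29:22 AM</p>
    <p>The current hour is 8</p>
    Good Morning
    </body>
    </html>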
ASP.NET ASP.NET code differs from ASP code in many important ways. The most visible difference is the separation of the server-side code and the HTML. This is illustrated in Listing 3. The server-side code is located at the top of the page within tags and the HTML is located below it. ASP.NET uses “Web controls” to insert the output of the server-side code into the HTML. The Web controls are themselves objects with properties, methods, and events.
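Listing 3 survives only in truncated form below; a sketch of what the complete TimeGreeting.aspx page plausibly contains, consistent with the line-by-line discussion that follows (the script block, label declarations, and greeting thresholds are assumed here, with the control names lbTime, lbHour, and lbGreeting taken from the text), is:

    <script language="VB" runat="server">
        Sub Page_Load()
            lbTime.text = DateTime.Now.ToString("T")
            lbHour.text = hour(DateTime.Now.ToString("T"))
            If hour(DateTime.Now.ToString("T")) < 12 Then
                lbGreeting.text = "Good Morning"
            ElseIf hour(DateTime.Now.ToString("T")) < 18 Then
                lbGreeting.text = "Good Afternoon"
            Else
                lbGreeting.text = "Good Evening"
            End If
        End Sub
    </script>
    <html>
    <head><title>Time of Day Greeting</title></head>
    <body>
    <h1>Time of Day Greeting</h1>
    <p>The current time is <asp:label id="lbTime" runat="server" /></p>
    <p>The current hour is <asp:label id="lbHour" runat="server" /></p>
    <asp:label id="lbGreeting" runat="server" />
    </body>
    </html>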
The first line of code in Listing 3 contains a tag. This tag instructs the compiler that the programming language is VB.NET and that the code enclosed within the tags should be executed on the server. The second line of Listing 3 defines a subroutine named Page_Load(). The Page_Load subroutine runs automatically each time the page is loaded. The first line within the Page_Load subroutine,
lbTime.text = DateTime.Now.ToString("T")
obtains the current time from the system clock and formats it as a string. The DateTime object is a member of ASP.NET's base class library. The current time obtained from the DateTime object is assigned to the text property of a label named lbTime. The label object is instantiated and named with a label Web control declaration in the HTML portion of the page. The current hour and time greeting are assigned to the label controls named lbHour and lbGreeting, respectively. This output is inserted into the HTML as before. The output produced by TimeGreeting.aspx is shown in Figure 2. Note that despite the differences in the code, the output is the same as that produced by the ASP code shown in Listing 1. An underscore ( _ ) indicates continuation of a line.
Figure 2: ASP.NET example: code Listing 3 viewed through a Web browser.
Listing 3: ASP.NET source code for TimeGreeting.aspx.
Sub Page_Load()
    lbTime.text = _
        DateTime.Now.ToString("T")
    lbHour.text = _
        hour(DateTime.Now.ToString("T"))
    If hour(DateTime.Now. _
        ToString("T"))
The following ASP program contains code to insert a record into a database. After insertion, this program will display all its records as an HTML table.
ADO EXAMPLE
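The program itself is not reproduced here; a minimal sketch of such a page might look like the following, where the DSN, table, and column names are invented purely for illustration:

    <%@ Language=VBScript %>
    <%
        Dim Conn, rs
        Set Conn = Server.CreateObject("ADODB.Connection")
        Conn.Open "DSN=Example"   ' hypothetical ODBC data source name

        ' Insert a record (table and column names are examples only)
        Conn.Execute "INSERT INTO Customers (Name, City) " & _
                     "VALUES ('Jane Doe', 'Bakersfield')"

        ' Retrieve all records and display them as an HTML table
        Set rs = Conn.Execute("SELECT Name, City FROM Customers")
        Response.Write "<table border=""1""><tr><th>Name</th><th>City</th></tr>"
        Do While Not rs.EOF
            Response.Write "<tr><td>" & rs("Name") & "</td><td>" & _
                           rs("City") & "</td></tr>"
            rs.MoveNext
        Loop
        Response.Write "</table>"
    %>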
The following are results from the processed ASP script:
The following snippet contains code to update a database record:
The following snippet contains code to delete a database record:
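Sketches of what the update and delete snippets referred to above typically look like, assuming an already open connection Conn and the same example table and column names as before:

    <%
        ' Update an existing record
        Conn.Execute "UPDATE Customers SET City = 'Fresno' " & _
                     "WHERE Name = 'Jane Doe'"
    %>

    <%
        ' Delete a record
        Conn.Execute "DELETE FROM Customers WHERE Name = 'Jane Doe'"
    %>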
Step 4: Close the Connection and Recordsets In the final step, the connection and all its recordsets should be closed. Recordsets should be closed before closing the connection. After closing, one should also release the computer memory that was used to store these objects by setting their values to nothing. This process is sometimes termed garbage collection. The following snippet contains code to close the connection and its recordsets and also to release memory. In this snippet, the variable “Conn” has been used to represent a connection and two variables, rs and rs1, to represent two recordsets.
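A sketch of such a cleanup sequence, using the variable names mentioned in the text (Conn for the connection and rs and rs1 for the two recordsets):

    <%
        ' Close recordsets before the connection, then release memory
        rs.Close
        rs1.Close
        Conn.Close
        Set rs = Nothing
        Set rs1 = Nothing
        Set Conn = Nothing
    %>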
SUMMARY Progressive organizations have two major goals for their information systems. The first is to convert their existing applications to Web-based applications. The second goal is to write additional Web-based applications to take advantage of the opportunities created by intranets and the Internet. This is an information age, and the size and complexity of information continue to grow. Organizations have their important information distributed in various forms and locations. Universal data access (UDA) is a Microsoft strategy designed to provide a comprehensive means to access data from a wide range of data stores distributed across intranets or the Internet. Microsoft has developed or patronized several technologies tied to its UDA strategy. These technologies can be grouped into three generations. The first generation of UDA technologies contains ODBC, RDO, and DAO. The second-generation UDA technology consists of OLE DB. The third generation is the latest generation of UDA technologies and contains ADO. ODBC provides a common interface for accessing data stored in almost any relational DBMS or even some flat-file systems. ODBC uses SQL as a standard language for accessing data. Microsoft has created two high-level programming models, DAO and RDO, to simplify the ODBC model. DAO is written to work primarily with file-server-based systems and RDO is designed for database-server systems. RDO is well suited for developing large-scale multi-tier applications.
OLE DB is a low-level programming interface to diverse data stores, including flat files, relational databases, and object-oriented databases. OLE DB provides applications with uniform access to diverse data sources. Another important advantage of the OLE DB model is its high-performance design and support for multi-tier client/server and Web-based applications. The ADO object model is made up of five top-level objects, Connection, Recordset, Command, Record, and Stream; four subordinate collections, Parameters, Fields, Properties, and Errors; and four associated objects, Parameter, Field, Property, and Error, within its own collections. Each top-level object can exist independently of the other top-level objects. The subordinate objects/collections cannot be created independently of their parent objects. The following steps may be used to access and manipulate a database from an ASP Web page: (a) create and open an ADO connection to a data store, (b) create and open an ADO recordset to a data store, (c) create one or more recordsets and manipulate their records, and (d) close the connection and recordsets. ADO enjoys broad industry support because it provides a consistent, easy-to-use, high-level interface between applications and diverse data stores and does it for both traditional client/server and Web-based applications.
GLOSSARY ADO (ActiveX Data Objects) ADO is the latest Microsoft data-access programming model. It provides a consistent high-level interface with many different types of data stores, including relational databases, flat files, e-mail files, spreadsheets, text documents, and graphics files. ADO collections A collection consists of one or more objects referred to collectively. ADO model contains four collections: Parameters, Fields, Properties, and Errors. These collections contain Parameter, Field, Property, and Error objects, respectively. ADO Object Model ADO object model is made up of five top-level objects (Connection, Recordset, Command, Record, and Stream), four subordinate collections (Parameters, Fields, Properties, and Errors), and four associated objects (Parameter, Field, Property, and Error) each within its own collection. Each top-level object can exist independently of the other top-level objects. The subordinate objects/collections cannot be created independently of their parent objects. Cursor type There are four types of cursors: forward only or read only, keyset, dynamic, and static cursors. A cursor type must be set before opening a recordset. DAO (Data Access Objects) DAO is a high-level programming model developed to simplify the ODBC programming model. DAO is designed to work primarily with file-server-based systems. Database server system In a database server model for a Web based system, the data logic is processed on the database server, the presentation logic is processed on the client, and the business logic processing is shared between the Web server, database server, and client.
DSN connection DSN (data source name) connection between a Web server and a database server is established in two steps: (a) create an ODBC DSN and (b) execute the “Open” method of the connection object. DSN-less connection There are two types of DSN-less connections: explicit and implicit. An explicit DSNless connection is established through the ConnectionString property of the Connection object. An implicit DSN-less connection is established based on the Recordset object or the Command object. File-server system In a file-server system, the client is responsible for executing data logic, presentation logic, and business logic. The file server’s role is simply to provide shared access of data files to the client. Lock type There are four cursor lock types: read only, pessimistic, optimistic, and batch optimistic. A cursor lock type must be set before opening a recordset. MDAC (Microsoft Data Access Components) Microsoft provides MDAC in order to make universal data access possible. MDAC consists of three important technologies: ODBC, OLE DB, and ADO. ODBC (Open Database Connectivity) ODBC technology provides a common interface to access data stored in almost any relational database management system and some flat-file systems, including ISAM/VSAM file systems. OLE DB data consumers OLE DB data consumers are software components that consume OLE DB data. Important data consumers are high-level data access models, such as ADO. OLE DB data providers OLE DB data providers are software components that allow one to access diverse data stores, including both relational and non-relational databases, with a standard level of uniformity and functionality. OLE DB services OLE DB services are software components that extend the functionality of OLE DB data providers. RDO (Remote Data Objects) RDO is a high-level programming model developed to simplify the ODBC programming model. RDO is written to work primarily with database-server-based multi-tier systems. RDO facilitates access to data stored in almost any SQL database. Recordset A recordset or a cursor is a set of records that are the result of a SQL query, a command, or a stored procedure. Stored procedure A stored procedure is made up of precompiled SQL statements that carry out a series of tasks. Stored procedures are stored and executed on the database server and are created to execute the data logic and some business logic. UDA (universal data access) The UDA approach is a Microsoft strategy designed to provide a comprehensive means of accessing data from a wide range of data stores across intranets or the Internet.
CROSS REFERENCES See Active Server Pages; Client/Server Computing; Databases on the Web; Electronic Commerce and Electronic Business; HTML/XHTML (HyperText Markup Language/
Extensible HyperText Markup Language); JavaScript; Structured Query Language (SQL); Visual Basic; Visual Basic Scripting Edition (VBScript).
Application Service Providers (ASPs) Hans-Arno Jacobsen, University of Toronto, Canada
Introduction
From Early Developments to Research Trends
Application Service Providers
Wireless Application Service Provider
Outsourcing as a General Business Concept
ASP Examples
Income Tax ASP
System Management ASP
System Backup and Testing ASP
Software Deployment Models
Software Licensing and Distribution Models
Pricing Models
Legal Issues and Liabilities
Implementation Issues, Privacy, and Security Considerations
Service Level Agreements
Privacy and Security Considerations
Outlook
Glossary
Cross References
References
INTRODUCTION The increase in network bandwidth, the growth of computing server performance, and the growing acceptance of the Internet as a communication medium have given rise to a new software distribution model: application outsourcing and software leasing. Application outsourcing refers to the emerging trend of deploying applications over the Internet, rather than installing them in the local environment. Application outsourcing shifts the burden of installing, maintaining, and upgrading an application from the application user to the remote computing center, henceforth referred to as the application service provider or ASP. Software leasing refers to the emerging trend of offering applications on a subscription basis, rather than through one of the traditional software licensing models. In the ASP model, system administration and application management is performed entirely by the provider. It thus becomes possible to charge a user on a pay-per-use basis, differentiated on a very fine-grained basis. This fine-grained differentiation can go as far as taking the specific functionality required by individual customers into account and metering the computation and resources consumed for exact billing. Thus, rather than selling a software license, which gives a user "all-or-nothing" of a product, the software may be leased to the user, offering a "pay-by-need" and "pay-on-demand" model. The customer only pays for the actual functionality used and resources consumed. A large spectrum of ASPs has become popular. Early models include hosting of database-backed Web space that offers customers solutions for hosting corporate or individual Web sites, including access to database management systems for managing dynamic content and input. Other ASP models include the leasing of machines from computational server farms that are securely managed in reinforced buildings with high-capacity network links and power generators to guarantee uptime despite power failures. Either a customer deploys and manages its own set of machines or is assigned a dedicated set of machines on which its applications are run. Further prominent examples include online (financial) computing
services, remote e-mail and document management, online accounting and billing, and Web information systems of all sorts. The ASP model also includes more complex application scenarios, however, such as online access to enterprise resource planning systems (ERP), business administration applications, human resource management systems, customer relationship management systems, health care and insurance management systems, and system security management. All these application scenarios offered as ASP solutions are attractive for enterprises and individual customers alike who do not want to afford, cannot afford, or do not have the capacity to operate full-fledged stand-alone information technology (IT) systems of the described nature. In this chapter, I provide a definition of the ASP model, describe its characteristics and facets, and discuss its implications. I begin by reviewing early developments and research trends; I then provide a detailed description of the application service-provisioning model and present a few detailed ASP examples, which lead to the identification of three ASP deployment models. I raise software-licensing issues and provide an analysis of existing ASP pricing models and strategies. I then discuss server-side implementation issues, involving a detailed description of privacy and security concerns. Finally, I draw conclusions and offer a view of how the ASP model is likely to evolve.
FROM EARLY DEVELOPMENTS TO RESEARCH TRENDS The idea of interacting with a remote computer system across a network goes back to the mainframe era and the introduction of time-shared operating systems. Back then the driving force for this computing model was the investment-intensive mainframe computer systems that had to be maximally utilized to justify their large cost. The 1960s can be considered the era of the mainframe. In the earlier batch-processing model, computing
jobs had to be submitted and were processed by operators, with the final output delivered much later to the programmer. Combined with the concept of interactive computing, implemented through the time-shared operating systems that succeeded this batch model, the idea of a remote computing utility, essentially today's ASP, was born. The mainframe became accessible through physically distributed dumb terminals connected with the computing utility over dedicated networks. The key difference between the remote computing model and the ASP model is that an ASP offers a fixed set of applications and services to its customers, whereas the remote computing model simply offers accessibility of the bare computing system to multiple users across the network.
provider, the data provider, the method provider, and the user. The infrastructure provider models the ASP. The data provider, a separate entity, publishes data sets that may serve the user community (e.g., historical stock quotes, geographic information, or consumer data.) The method provider publishes computational methods, which constitute algorithms from the target application domain of the marketplace (e.g., statistical analysis, numerical analysis, optimization schemes, or decision support algorithms.) The user, as in the commercial ASP model, executes published algorithms on published data. Method providers and data providers usually coincide with the application service provider in commercial systems. Interestingly, this infrastructure already recognized the need for letting individual users offer specific services for other market players to use. A similar vision, more targeting the corporate customer, is underlying the huge effort being put into the Web Services standard (World Wide Web Consortium, 2003). Other research systems include the DecisionNet project (Bhargava et al., 1995), the NEOS service (Czyzyk et al., 1997), the SMART project (Abel et al., 1999), and MMM (Jacobsen, 2001). DecisionNet is an organized electronic market for decision support technologies. The NEOS service provides access to optimization software for use by researchers worldwide. The SMART project serves the government by assisting in county planning tasks and simplifying related administrative tasks. MMM is a middleware platform for mathematical method management that integrates various distributed mathematical software package providers, offering the user one unique access point in using the different systems.
Application Service Providers Application service providers are third-party entities that manage, deploy, and host software-based services and applications for their customers from server farms and datacenters across wide area networks. Customers access the hosted application remotely and pay in a subscriptionbased manner. In essence, ASPs constitute a way for companies to outsource some or all aspects of their information technology operations, thus dramatically reducing their spending in this area. Services and applications offered by ASPs may be broadly categorized as follows: r
Enterprise application ASPs deliver high-end business applications, such as enterprise resource planning solutions. Customers are corporate clients who need these solutions but want to avoid investing in proper in-house installations. r Locally constrained ASPs deliver a wide variety of (mostly bundled) application services in a local area, such as a portal for all the shops in a city or tourist information services for a region, including event registration, booking, and ordering features. These serve both the individual users as well as the local entity (e.g., shop or museum). r Specialized ASPs deliver highly specialized applications addressing one specific function, such as news, sports and stock tickers, weather information, site indexing, or credit card validation. Customers are usually other
Figure 1: Application service provider architecture roles.
online service providers who bundle multiple services as one and serve one target community. r Vertical market ASPs deliver applications catering to one specific industry, such as insurances, human resource management, media asset management, or health care. The customer is the corporate client. r Bulk-service ASPs deliver applications for businesses in large quantities, such as e-mail, online catalog, document, or storage management. r ASP aggregators combine the offerings of several ASPs and provide the user with service bundles and a single way to interact with all the aggregated ASPs. The ASP value chain consists of many players, including the network infrastructure provider, the server farm provider, the independent software vendor, and the ASP and ASP aggregator. Figure 1 depicts a logical architecture of an ASP. The network infrastructure provider is responsible for the network that connects customers to ASPs. The network infrastructure can be further broken down into the physical network provider, such as broadband access, phone lines, and communication infrastructure provider, and the Internet service provider, which offers services to get customers on the network. The black arrows in Figure 1 designate communication links managed by network infrastructure providers. The server farm provider hosts the outsourced applications. The server farm consists of hundreds or more computing servers that are collectively housed. The business
models for running such server farms vary. Under certain models customers bring in their own computing servers and are responsible for administering them. Other models rent a number of servers to each customer and operate them for the customer. In this case, the customer refers to the ASP operating an application. The server farm is often also referred to as a data center, because a fair amount of data management and storage is involved in most applications. In Figure 1, the server farm provider is not explicitly shown. It is responsible for the components designated as servers and as data center in the figure. The view of the server farm provider as a data center refers to ASP models in which the server farm provider takes over data backup or manages high volumes of data for the customer or for the ASP. The independent software vendor (ISV) is responsible for the application software that is offered as outsourced solution through an ASP. Some ASPs decide to build their own software, thus avoiding the payment of license fees to the software vendor. Because the role of this player is more in the background, it is not explicitly shown in Figure 1. The application service provider is the entity that offers the outsourced application to the customer over the network. However, the ASP must neither own the software, which it may license from an independent software vendor, nor must it own or operate the hardware, which it may lease from a server farm provider. In a further breakdown, an ASP aggregator bundles several ASPs together and offers the user one unique interface. This may be as simple as offering an ASP directory, a common log-in, and authentication, or more complex in that data can be seamlessly exchanged between the different ASPs. ASP aggregators strongly depend on open standards for accessing disparate ASPs through software integration. The emerging Web Services standards may constitute a viable solution (World Wide Web Consortium, 2003). The overall architecture of a model ASP is depicted in Figure 1, which shows the interaction of the different elements of the ASP value chain. In the figure, an ASP is shown as a logical entity. It is associated with the software it operates for its customers and the necessary access, billing, and accounting software to run its operation. Figure 1 abstracts these functions into one component and maps them to one or more servers on the server farm. In reality, all these
Figure 2: Factors in deciding to outsource some or all applications.
functions may operate on physically distributed computers. This achieves redundancy and fault tolerance and thus increases availability and uptime for the customer. An ASP may be a fully virtual enterprise, without any physical presence (i.e., office space). An ASP must strategically decide to license existing applications or to build the outsourced application itself. This decision is strongly dependent on the kind of application offered. Both models can be found in practice. Existing legacy applications were never intended to be used over wide area networks by disjointed sets of users, so solutions for Web-enabling and deploying legacy applications in ASP fashion must be implemented to be used effectively in outsourcing system. Simple services, on the other hand, can easily be developed, saving the ASP expensive license and integration costs. Most ASPs are likely to use standard components, such as database management systems, in their software architectures, even for simple services. These standard components are too costly for an ASP to redevelop, so that license and integration costs are inevitable. This strategic decision by the ASP may also have an effect on customers, who may prefer to run their outsourced applications on industry-standard software. A standard ensures the customer that a switch back to an in-house operation or another ASP offering the same package can be made at any time. The ASP model offers a number of advantages to the customer. These include a significant reduction of the total cost of ownership, because no software must be purchased and fewer IT personnel must be on site. Moreover, all IT-related tasks, such as software installation and upgrades, application maintenance, data backup, and computer system administration, are shifted to the ASP. A customer can operate with sophisticated IT applications without the huge investments in software licenses and hardware, thus drastically increasing the return of investment. The ASP model also presents a number of disadvantages for the customer, however. These include less control over application software and the data processed, and therefore limited customization and probably less product functionality, and external dependencies on the ASP and on access to it (i.e., the network and Internet service provider.) For data storage– and data processing– intensive and nonstandard software, high switching cost is a further disadvantage to the customer. This applies equally to an application purchased for in-house use, however. Finally, the ASP model is an as yet unproven concept with little experience on either side of the relationship, little standard support, and few widely known successful applications. For an enterprise, outsourcing part or all of its IT operation is an important strategic decision. From the previous analysis, a number of questions to guide this decision process become evident. These questions are summarized in Figure 2. In the late 1990s, the ASP Consortium (http://www ASPstreet.com) was chartered, an industrial organization that represents the interests of ASPs and their customers. Other online resources directly related to the emerging ASP industry are the ASP Harbor (http://www Webharbour.com), ASP Island (http://www ASPIsland.com),
and ASP News (http://www.ASPnews.com). These Web sites and portals are mostly commercial Web sites that collect, distribute, and sell information about ASPs.
Wireless Application Service Provider A wireless application service provider, also known as WASP, is essentially the same as a conventional application service provider except it focuses on mobile wireless technology for service access and as a delivery mechanism. A WASP performs similar services for mobile wireless customers as the ASP does for its customers on wired lines. The wireless application service provider offers services catering to users of cellular phones, personal digital assistants, and handheld devices, and, generally, to any mobile wireless client. The service provider is more constrained in what it can offer because of the limits of the access device and great varieties in available device technology. In the business-to-consumer market, wireless application service offerings include, for example, e-mail access, unified messaging, event registration, shopping, and online banking. In the business-to-business market, wireless application services include account management and billing, backend banking, and remote sensing; in the future, this could include user location identification, system monitoring, and wireless network diagnosing. Future extensions of this model could be wireless network access providers, WASPs that offer wireless network infrastructure to customers on a pay-by-use basis in and around coffee shops, restaurants, airports, and train stations. Often, ASPs already offer wireless interaction possibilities and thus embrace both models. Because of the similarity of the WASP and ASP models, I do not discuss the WASP model further. All concepts introduced apply equally well to this kind of application service provider.
Outsourcing as a General Business Concept Outsourcing of many business functions, from simple tasks such as bulk mailing or more complex functions such as accounting and human resource management, have been commonplace since at least the middle of the 20th century. The focus on outsourcing core IT functions and software applications is merely a special case of this broader category that has become possible because of new technology. In this chapter, the focus is on the new application service provider model enabling application outsourcing; more traditional outsourcing concepts are not covered further.
ASP EXAMPLES In this section, I introduce a number of ASP examples. This discussion is based on existing ASP ventures. These examples have been chosen to exhibit different characteristics of ASPs that are presented more comprehensively in the following sections. The examples make no reference to specific ASP ventures, because there are too many operating ASPs, and the current ASP landscape is changing rapidly.
Income Tax ASP
SOFTWARE DEPLOYMENT MODELS
The (income) tax ASP is, in the simplest instantiation, a service accessible through the Internet to complete tax forms. The yearly forms are made available on the ASP site through a browser, either based on the common HTML (hypertext markup protocol) form or based on Java applet technology. The user interface may be enhanced with features to check the entered information automatically or to guide and make the user aware of available options, essentially offering the same functionality as tax consultants or desktop tax software. The final form can then be made available as a printable document for the user to download and forward to the government tax service with supplementary information or be directly forwarded electronically to the government. In the latter case, a direct integration of the ASP with enterprise resource planning software used by the tax service can be envisioned. Many countries are already bringing their government services, especially the revenue service, online, because it greatly facilitates the processing and distribution of the information and thus dramatically cuts costs. Moreover, private companies offer these services, combined with value-added consulting services, and forward the processed information directly to the revenue service. This example can be regarded as a successful undertaking that is changing the way people interact with their governments.
The software deployment model defines where the software resides and how it is managed. In the context of application service provisioning, three models have crystallized. Application hosting refers to the model of remotely managing a specific software package for a customer. The hosting company manages n applications on n hosts on behalf of n customers. Each application is given its dedicated set of resources and is physically shielded from other applications, only sharing the networking infrastructure. At one extreme of this model, the hosting company only provides the host, the network, or building infrastructure (i.e., machine rooms and physical security). Web-space hosting is a popular example of this model. Often ISP (Internet service providers), who already own appropriate data centers and server farms to start with, grow into such hosting ventures. For the hosting company it is difficult to optimally use computing resources, because a switch from one customer’s system to another involves nontrivial installation steps. For example, peak load management, which shifts resources from one application to another depending on its usage pattern, is difficult to achieve for the ASP in this model. On the other hand, different hosted ventures can significantly benefit from the closeness of other hosted services, thus increasing overall efficiency of their Webportals and decreasing perceived latency for users. For example, a big retail store whose Web site is hosted on the same server farm as an Internet advertiser will inevitably benefit from the mutual proximity. Figure 3 depicts this model logically. It shows the one-to-one correspondence between the customer and the software that the ASP operates on behalf of this customer. Each customer has a dedicated, physically separate set of computing resources. The arrows in the figure designate the network over which the customer interacts with the ASP. The figure also indicates that a single ASP manages multiple, possibly diverse, applications for different customers. Application outsourcing refers to the model of remotely deploying one particular application, which serves many customers at the same time. This model is commonly referred to as the ASP model. Other deployment models have also been referred to as ASPs by the press, which does not differentiate carefully the various models. In this chapter, a finer grained separation is advocated. Here, in the ASP model application is offered to n clients. Whether
System Management ASP
The system management ASP deploys software components in the customer's computational environment that communicate autonomously with the ASP's site. In this fashion, network traffic, available disk space, system access, and, generally, any kind of computing resource activity can be closely monitored and logged. In case of a problem, or an anticipated problem, an operator is notified to take care of it. System intrusion detection software can be deployed in this manner as well. The local monitoring software checks for unusual system access patterns, either by forwarding network traces to the ASP's mining software or by doing the analysis locally and forwarding alerts. Allen, Gabbard, and May (2003) provided a detailed discussion of outsourcing managed IT security.
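The following Python sketch illustrates the kind of lightweight monitoring component such an ASP might deploy at the customer's site. The threshold, the alert endpoint, and the message format are hypothetical assumptions for illustration and are not taken from any particular product.

# Minimal sketch of a customer-site monitoring agent (hypothetical endpoint and threshold).
import json
import shutil
import urllib.request

ASP_ALERT_URL = "https://asp.example.com/alerts"   # hypothetical ASP endpoint
DISK_USAGE_THRESHOLD = 0.90                        # alert above 90% disk usage

def check_disk(path="/"):
    """Return the fraction of disk space in use for the given mount point."""
    usage = shutil.disk_usage(path)
    return usage.used / usage.total

def send_alert(payload):
    """Forward an alert to the ASP's site over the network."""
    request = urllib.request.Request(
        ASP_ALERT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return response.status

if __name__ == "__main__":
    used = check_disk("/")
    if used > DISK_USAGE_THRESHOLD:
        send_alert({"resource": "disk", "mount": "/", "used_fraction": round(used, 3)})

In practice such an agent would monitor many more resources and would authenticate itself to the ASP; the sketch only conveys the local-check-and-forward pattern described above.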
System Backup and Testing ASP
The system backup ASP offers its clients data archival services, that is, backing up disks on a defined schedule over the network, without operator intervention. Lost, overwritten, and damaged data are thus safeguarded by the ASP and can be retrieved by the client at any time and from anywhere, over the network. Similarly, the system-testing ASP tests a client's information system from points across the network. In many cases, a Web portal is, in a manual or semiautomated fashion, subjected to loads from an outside entity; any unforeseen features, bugs, and possible errors are logged in a database and turned over to the client. Testing may include end-to-end system load measurements, monitoring of traffic, and verification of results. Because of the increase in Web-based business, this model has become a popular venture.
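As a minimal illustration of the testing side, the Python sketch below issues a small batch of requests against a Web portal and records end-to-end latencies. The target URL and request count are placeholders; a real testing ASP would drive far more elaborate load profiles and verify response contents as well.

# Minimal latency probe for a Web portal (placeholder URL and request volume).
import time
import urllib.request

TARGET_URL = "https://portal.example.com/"   # hypothetical system under test
REQUESTS = 20

def probe(url):
    """Fetch the URL once and return (status code, elapsed seconds)."""
    start = time.monotonic()
    with urllib.request.urlopen(url) as response:
        response.read()
        return response.status, time.monotonic() - start

if __name__ == "__main__":
    latencies = []
    for _ in range(REQUESTS):
        status, elapsed = probe(TARGET_URL)
        latencies.append(elapsed)
        if status != 200:
            print("unexpected status:", status)
    print(f"mean latency: {sum(latencies) / len(latencies):.3f} s, "
          f"worst: {max(latencies):.3f} s")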
Figure 3: Application hosting deployment model.
or not this application is spread out over a number of computing nodes depends on the actual application design and implementation. The outsourced application often addresses very specific IT needs, without much customization, for many customers. Web-based e-mail services, online document management, and storage have been widely offered under such a deployment model. This approach is particularly attractive for vertical markets and applications that require little customization on a per-user basis, because in this case the ASP has little overhead to pay. The advantage over the hosting model is that the ASP model allows computing resources to be allocated more dynamically and on demand. Server-side throughput guarantees (see Service Level Agreements later in the chapter) can thus be implemented with less investment in physical computing resources. A critical problem for this deployment model is the question of how to virtualize an application that was not intended for use by many noninteracting customers. Standard business software has been designed for use by one customer at a time and not by n independent customers using it over a wide area network. Most independent software application vendors have announced their interest in this model and have started to offer their applications in such a fashion. This trend has led to the Web enabling of many existing applications, as well as to the redevelopment of such applications with a Web-based model in mind. Smaller and newer companies have primarily undertaken the latter. Figure 4 depicts the application service provider deployment model. The arrows indicate network communication between customers and the ASP. The difference between this model and the hosting deployment model is that here the mapping between customers and the applications managed by the ASP on their behalf is no longer one-to-one. The ASP may not dedicate a physically separate server and application image to each customer; rather, the ASP may serve all customers with one application image. The customer interacts with the ASP network gateway and not with specifically dedicated resources, as in the previous model. The ASP may use less hardware to fulfill its customers' needs (i.e., in Figure 4, assume that i, the number of servers, is less than n, the number of servers in the hosting model). A third model has appeared, referred to as the application service model. In this model, the service provider offers a service to its clients, which involves installing and maintaining software systems at the clients' sites and
Figure 4: Application service provider deployment model.
Figure 5: Application services or hybrid software deployment model.
servicing these systems and the clients' computing infrastructure across the network. Here, the service provider offers one application service to n clients and, additionally, manages computing systems and software at n client sites. Network monitoring, system administration, and system and network security constitute application services that are deployed in this manner. For the service provider this model involves great overhead, because the provider is responsible for many individually distributed, heterogeneous hardware resources. Often, a software monitor observes, at the clients' site, the state of the managed system and alerts an application service administrator preventively or in the event of a problem. In a sense, this is a hybrid deployment model combining the characteristics of the previous two models. Figure 5 depicts this model logically. The components designated "ASP managed component" refer to software or hardware components that the ASP deploys at the customer's site. These components monitor or control the managed systems and alert the ASP in case of malfunction or emergency, or as otherwise required. Such components do not exist in the previous models, in which customers interact with the remote ASP through user interface software. The ASP can operate, manage, and control these components from its own site over the network. These ASP-managed components are often referred to as appliances, although that term specifically refers to hardware components that are plugged into the customer's network. The mapping between managed customers and the hardware required at the ASP site may correspond to any one of the previous cases. Finally, Web services constitute services in the sense of information services, but also in the sense of human-facilitated services that are made available over the Web. In the trade press, these, too, are often referred to as application services, which is a completely different model from what has been described in this chapter thus far. The term "Web service" is in line with a set of standards (referred to as "Web Services") that is commonly used to build fully automated Web services as described here. This class of services is largely the same as the deployment of applications according to the ASP model described thus far. However, the term Web services usually refers to very specific service offers, whereas an outsourced application usually refers to a much more complex application
offered over the network. Examples of Web services in the information-technological sense are the provisioning of stock ticks, news feeds, credit card authentication, horoscopes, chat rooms, or instant messaging. Examples of Web services in the human-facilitated sense are help desks or medical and legal advice. Often a Web service is bundled, both technically and economically, into a larger Web portal offering a whole range of services to its user community. No clear-cut distinction exists between the Web service model and the application service model. Similarly, mobile services have appeared that target the mobile communication sector but essentially follow the same model. Figure 6 shows the logical architecture of the Web services deployment model. Arrows, as in the previous diagrams, designate network communication. The ASP-designated component illustrates the fact that Web services are usually bundled and hardly ever used in isolation. Customers may, however, also interact directly with the desired Web service. The mapping between Web services and the hardware that executes the service on the deployment side depends entirely on what kind of service is offered. It is highly unlikely that a structure as found in the hosting deployment model would be replicated here. Web services may be deployed anywhere on the network and need not be physically collocated with the aggregating service. The half circles designate wrapper code that an aggregator must provide to integrate or bundle the functions of several services to offer a new Web service. The current standardization efforts referred to as Web Services (World Wide Web Consortium, 2003) will greatly facilitate the writing of such wrapper code, because Web services conforming to this standard expose a well-formed application programming interface.

Figure 6: Web services deployment model.
SOFTWARE LICENSING AND DISTRIBUTION MODELS
The software licensing and distribution model defines the terms of use of the software for the customer and any obligations on the part of the software provider. Software licensing is tightly coupled with the pricing model and the service level agreement (SLA) offered by a provider. (These concepts are discussed later in this chapter.) Four basic software licensing and distribution models broadly reflect the cases found in practice.
First, the classical software-licensing model refers to the case in which software is sold through a network of distribution channels to the end user, who buys and installs the software on his or her machine and uses it indefinitely. This model is not applicable in the context of application service provisioning, because the licensed application software does not reside under the control of the ASP. The license-controlled model refers to a refined model, whereby a customer buys and installs the software on his or her machines but is bound through a contractual agreement to pay periodic (e.g., yearly) license fees to keep using the software. In return the customer receives updates, patches, training, consulting, or new versions of the software on a regular basis. Software licenses are often designed according to the number of users working with the application (e.g., on a per-seat basis), according to the number of clients interacting with an application, or on the basis of the application clients and servers deployed. Although this model already has a subscription-based character, it is often enforced technically by sophisticated license management software that interacts over the network with the distributor's system. This is still not the predominant model employed by application service providers, because the customer retains most of the control over the software installed on-site. The leasing-based software-licensing model refers to the model employed by application service providers. This model defines the customer's interaction with the remote application on a subscription basis and in terms of service level agreements guaranteed by the provider. The subscription mechanism defines a pricing structure based on the computing resources consumed and the software features used. In this model a customer interacts with an application over the network, with all management aspects of the hardware and the software being shifted to the ASP. The leased software could run entirely on the customer's machines and be managed remotely by the ASP, or part of the application could run on the customer's machine and part of it on the ASP's server farm. Finally, a combination of the models described earlier gives rise to a further licensing and distribution model that is emerging in practice. In this model, a licensed application is offered to the client and complemented with services and extensions accessible only over the network. Examples include update distribution, library provisioning, application administration, security management, performance monitoring, and data management.
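To make the license-controlled model described above concrete, the fragment below sketches, in Python, the kind of periodic check that such software might perform against a distributor's license server. The server address, the response format, and the license key are purely illustrative assumptions; real license management systems use their own protocols.

# Illustrative periodic license check (hypothetical server, key, and response format).
import datetime
import json
import urllib.request

LICENSE_SERVER = "https://licenses.vendor.example.com/check"  # hypothetical
LICENSE_KEY = "ABCD-1234-EF56"                                # hypothetical

def license_is_valid(key):
    """Ask the license server whether the subscription is still current."""
    with urllib.request.urlopen(f"{LICENSE_SERVER}?key={key}") as response:
        record = json.loads(response.read().decode("utf-8"))
    expires = datetime.date.fromisoformat(record["expires"])
    return datetime.date.today() <= expires

if __name__ == "__main__":
    if not license_is_valid(LICENSE_KEY):
        raise SystemExit("License expired: please renew your subscription.")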
PRICING MODELS
The ASP model gives rise to the implementation of a fine-grained pricing structure that allows charging a user on a pay-per-use basis, rather than a coarse-grained structure that foresees only a limited number of prices charged for using the service. Pricing may account for the amount of system resources consumed (e.g., system interaction time, amount of data storage, CPU [central processing unit] cycles), the application functionality required, or the transactions executed, or it may simply be based on a periodic or flat fee, as well as any combination of the former. The following four pricing models can be distinguished. The flat-fee pricing model establishes a flat fee
for the use of the ASP. The flat fee can be derived as a function of the size of the customer's enterprise, derived from an expected use on a per-user and per-month basis, derived from an expected transaction volume, a number of users (i.e., either named users or concurrent users), or an expected usage time, or based on screen clicks. The advantage for the ASP is the simplicity of implementing this model and the predictable revenue stream. The disadvantage is the insensitivity of the model to peak use and customer growth (i.e., overuse). The advantage for the customer is the predictability of cost. The disadvantage for the customer is the inadaptability to underuse (i.e., paying the same price even in periods of less use). A slightly different model, usage-level-based pricing, charges a one-time setup fee and bills for certain usage levels (e.g., based on the number of users using the application). Any use of the application beyond this level incurs additional charges; any use below it incurs the set charges. This model implements a predictable price structure for the customer and guarantees the ASP a usage-based remuneration, but a fixed revenue if the service is underutilized by the customer. Compared with the flat-fee model, nothing changes for the customer unless the service is overused. A usage-based pricing model bills a customer according to the resources and application features used. Implementing this kind of billing is more complicated than in the first two cases because a metering and provisioning engine must be developed for the particular hardware and software (operating system and application) used. These engines are now becoming available and are often part of new Web and application deployment infrastructures. More difficult than metering the usage is metering the application features used. Unless the application has been designed with this kind of deployment scenario in mind, it is difficult to offer a solution. A wrapper around the application has to be built to mediate between metering the application features and the customer interface. The disadvantage for the customer is the unpredictable price charged. A hybrid model based on a flat fee enhanced with value-added services constitutes a further model found in practice. In addition to a flat-fee charge, the user is offered the use of value-added services that are billed according to one of the available pricing models. A further consideration in pricing models is the bundling of services. This is often advocated by ASP aggregators. Several complementary services are offered as a package that is cheaper than the sum of all the individual services together. A good example is the bundling of Internet access with Web hosting offered by many ISPs today. Furthermore, an ASP may differentiate its services by offering a variety of service-level agreements, discussed in greater detail in the next section. SLAs may account, for instance, for minimum network latency guarantees, minimum computing resource availability and throughput guarantees, and different service schedules (e.g., hotline service and data backup schedules). Rather than selling a software license, giving a user "all or nothing" of a product, the software may be leased to the user. In contrast, under the classical software distribution model and under the software licensing model, a customer
obtains the entire functionality of an application, whether or not it is actually required. Consequently, billing is much coarser grained, reflecting only the version structure of the product. A customer may, for instance, choose between a demo, a student, an advanced, and a professional version of the software, but these choices must be made up front. Well-designed pricing models can help attract new customers and differentiate the product, opening new market segments for the ASP. The right pricing model is crucial for the success of the ASP and for covering its costs. The costs of running an ASP include software license fees for outsourced applications, data center and network operation costs, ongoing customer support and maintenance costs, and software and infrastructure upgrade costs.
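A usage-based or hybrid bill ultimately reduces to a rating function over metered quantities. The Python sketch below shows one possible shape of such a calculation; the rates, the metered dimensions, and the included-usage levels are invented for illustration and do not reflect any actual ASP's tariff.

# Illustrative hybrid rating function: flat base fee plus usage-based overage (all rates invented).
RATES = {
    "cpu_hours": 0.05,         # price per CPU hour above the included level
    "storage_gb_month": 0.10,  # price per gigabyte-month of storage
    "transactions": 0.001,     # price per executed transaction
}
MONTHLY_BASE_FEE = 50.00       # flat component of the hybrid model
INCLUDED = {"cpu_hours": 100, "storage_gb_month": 20, "transactions": 10_000}

def monthly_bill(metered):
    """Combine a flat base fee with charges for usage above the included levels."""
    total = MONTHLY_BASE_FEE
    for dimension, rate in RATES.items():
        overage = max(0.0, metered.get(dimension, 0.0) - INCLUDED[dimension])
        total += overage * rate
    return round(total, 2)

# Example: a month with moderate overuse of CPU and transactions yields 69.00.
print(monthly_bill({"cpu_hours": 180, "storage_gb_month": 15, "transactions": 25_000}))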
LEGAL ISSUES AND LIABILITIES It is difficult to come up with one legal framework that determines all the responsibilities and resolves every possible conflict in application outsourcing and hosting relationships. The key problem is that the individual players may physically reside and operate in different countries bound by different legal systems but make their service available all over the Internet. Potential disputes or legal battles can severely hurt the business operation, the ASP, and the customer—as well as public opinion of the ASP model. In this section, a number of mechanisms and guidelines are discussed that can protect the customer from the ASP. A contract with an ASP should always foresee an exit strategy that determines the conditions under which engagement with the ASP can be “legally” terminated and, in such a situation, what happens to the customer’s assets (i.e., the data, business logic and process, and software that the ASP controls). This exit strategy protects the customer from a situation in which the ASP goes out of business or does not fulfill its SLA. Severe violations of SLAs may, for instance, be counteracted by a reduction in monthly payments to the ASP. The most important rule in negotiating terms with an ASP is to retain ownership and access to all business assets outsourced to the ASP. The ASP must guarantee access to all customer data at any time and in any format requested. Copies of data should be made available regularly or, at the outer extreme, be placed in a secured location that can be physically accessed by the customer at any time. Access to data by the customer should be absolutely unconditional. Access to critical data alone does not help to restore an IT operation once the ASP relationship has been terminated. The ASP’s software is critical for processing this data. A contract with the ASP can foresee software licenses of the outsourced software for the customer and rights to operate this software in-house. This, of course, is only an option if the ASP operates standard software packages, which is not the case for all ASPs, some of which build their own solutions or integrate existing packages with value-added services. Online dispute resolution procedures have been defined by several industrial organizations in the ASP sector and for Internet-related disputes in general (WIPO Mediation and Arbitration Center, n.d.), and these may be helpful in sorting out difficulties with ASPs.
Finally, an effort toward standardization of the data formats and application programming interfaces used by ASPs may help to reduce switching costs and customer lock-in, thus increasing an ASP's "loyalty" to its customers. This may reduce the risk of conflict in the first place.
IMPLEMENTATION ISSUES, PRIVACY, AND SECURITY CONSIDERATIONS
Service Level Agreements
An ASP customer relies on the availability of the outsourced application and the availability of network access to the application. Different customers require different classes of service, and service requirements vary among customers. For instance, requirements may depend on the specific content hosted, the application or service outsourced, the time of day, or the time of year. Allocating more bandwidth and providing more server resources is a costly solution but may not resolve network congestion and server contention. On the contrary, increased resource availability attracts ever more traffic and more unrestricted use. Providing differentiated services and network access models governed by SLAs and combined with a differentiated pricing structure is the solution advocated in the service provider market today. Pricing models were discussed in the previous section. Here, I describe SLAs as related to the ASP model. An SLA constitutes a contract between the service provider and the customer that defines the service levels offered by the ASP to the customer. Commonly expressed in quantifiable terms, SLAs specify the levels and qualities of service that the provider guarantees and the client expects. These agreements are not unique to ASPs; many Internet service providers (ISPs) offer their customers SLAs. Also, IT departments in major enterprises have adopted the notion of defining SLAs so that services for their customers—users in other departments within the enterprise—can be quantified, justified, and ultimately compared with those provided by external ASPs. Today most service industries offer some form of SLA. In the ASP context, each SLA describes aspects of network and application provisioning, details the levels of acceptable and expected service, limits the customer's usage patterns, states reporting obligations, and defines conditions in case of SLA violation. An SLA contains a definition of service guarantees, which should be of an acceptably high standard and should be achievable within the pricing structure set by the service provider. Neither the customer nor the provider would be well served by low performance expectations, which could be achieved with ease but would not provide a sufficiently efficient and cost-effective service. Similarly, the service provider would not benefit from service levels set so high that they could not reasonably be achieved. Because network technology is rapidly changing and because the Internet's best-effort service model gives rise to highly dynamic traffic patterns, SLAs cannot be defined once and for all or expressed in absolute terms. They are defined for a set period of time, after which they are revisited and adapted in response to changes in technology. Because of dynamic
traffic patterns, service levels are expressed as averages over time. For ASPs, service level agreements usually cover the following four areas:
1. Access to the outsourced application (i.e., network access);
2. General terms about hosting (i.e., availability of service, security, and data management);
3. Terms about the specific application or service (i.e., features supported, versions available, and upgrades administered); and
4. Customer relationship management (i.e., help desk, customization, and support).
Each of these areas comprises a set of service level elements and a set of metrics to evaluate the ASP's service level guarantees, allowing the customer to track its expectations and industrywide benchmarks, if available. SLAs can be divided into technical and nontechnical service levels. A nontechnical service level guarantees a premium client a 24-hour hotline, for example, whereas a nonpremium client would obtain only a 9-to-5 weekday hotline. Various shades of service levels are imaginable. Although nontechnical SLAs are common to all service industries, technical SLAs are more specifically a characteristic of the service-provisioning model (i.e., for ISPs and ASPs alike). Technical SLAs pertain to the availability of the computing and network resources that intervene in delivering the application functionality offered by the ASP to its clients. A technical SLA may guarantee the client an average throughput, an average number of transactions executed per unit of time, or a bound on network bandwidth and latency. Other examples of technical SLAs are backup schedules (e.g., daily, weekly, monthly), available storage space, and security levels (e.g., encryption key strength and conformance to security standards). Example metrics that track these service levels include the percentage of time that applications are available, the number of users served simultaneously, specific performance benchmarks with which actual performance is periodically compared, the schedule for notification in advance of network changes that may affect users, help desk response time for various classes of problems, dial-in access availability, and the usage statistics that will be provided. Network resource guarantees, such as bounds on network bandwidth and latency, are especially difficult to meet unless the ASP also controls the communication lines over which its service is delivered. This kind of control is unlikely in the case of Internet access. For the client, it is difficult to assess whether the promised service level has really been delivered, because technical SLAs are often expressed as averages and are occasionally skewed by bad user-perceived latency. An extract of a real-world example of performance indicators and service levels is summarized in Figure 7.
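Because technical SLAs are typically expressed as averages over a reporting period, verifying them amounts to simple arithmetic over outage records. The short Python sketch below computes monthly availability and mean time between failures from a list of outages and compares the result against an assumed 99.7% target; the figures are illustrative only and not tied to any particular contract.

# Checking availability and MTBF against an assumed SLA target (illustrative numbers).
HOURS_IN_MONTH = 30 * 24
AVAILABILITY_TARGET = 0.997          # assumed SLA level (99.7%)

outage_hours = [1.5, 0.25, 3.0]      # example outage durations in one month

downtime = sum(outage_hours)
availability = (HOURS_IN_MONTH - downtime) / HOURS_IN_MONTH
failures = len(outage_hours)
mtbf = (HOURS_IN_MONTH - downtime) / failures if failures else float("inf")

print(f"availability: {availability:.4%}, MTBF: {mtbf:.0f} h")
if availability < AVAILABILITY_TARGET:
    print("SLA violated for this reporting period")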
Privacy and Security Considerations
Privacy and security are among the primary concerns that may defeat the widespread acceptance of the ASP model. All customer application input, operational, and output
1. Over the full 24 hours of operation, excluding scheduled maintenance, the ASP shall provide network and application availability of:
   • 99.7% to more than 90% of client organizations;
   • 99% to more than 96.5% of client organizations;
   • 97% to more than 99% of client organizations;
   • 93% to more than 99.7% of client organizations.
   Each is calculated annually from monthly averages over all client organizations.
2. Availability of 99% to all client institutions, calculated annually for each client institution.
3. Mean time between failures (periods of unavailability) of at least 1,000 hours provided to client institutions.
4. Time to restoration of service (duration of a period of unavailability) of less than 10 hours for 90% of failures. This will be calculated as a twelve-month rolling average of the percentage of reported failures in each month that are deemed to have taken less than 10 hours to restore.
5. The target for maximum latency for 128-byte packets between a client institution and the nearest point on the JANET national backbone is 15 ms, for 95% of transmissions over any 30-minute period (this remains unmonitored).
Figure 7: Example of performance indicators and service levels for an Internet service provider operation.

data are available at the ASP site, potentially leaving them exposed for exploitation by the ASP, by other users, or by intruders. ASPs must implement strict security measures to
• Protect sensitive data from being stolen, corrupted, or intentionally falsified during transmission or at the remote site.
• Protect the cooperating systems from malicious use (abuse) by impersonators.
• Protect the cooperating systems from unauthorized use.
• Enforce commercial or national security concerns that may require additional steps to preserve the privacy of the transmitted data (or the encryption technology used).

To enforce these security requirements, a number of well-established techniques are available (a minimal illustration follows this list):

• Server authentication (i.e., remote-site authentication assures the client application that it is truly operating on the intended site).
• Client authentication (i.e., user authentication assures the remote site that an authorized client is interacting).
• Integrity (i.e., noncorruption of transferred data prevents both malicious and false operation).
• Confidentiality (i.e., encrypting transferred data items prevents both malicious and false operation, as well as eavesdropping).
• Secure invocation of methods from the client application to remote services, routed (i.e., delegated) through a logging facility to gather "evidence" of "who" initiated an invocation "when."
• Nonrepudiation of invoked methods to ensure liability.
• Data security to prevent sensitive or "expensive" data from being compromised at the site of computation. This may require the additional use of encryption and transformation techniques, as well as organizational means.
A detailed discussion of all mechanisms implementing these features is beyond the scope of this chapter. Except
for data security, however, all these security techniques are well understood and solutions are widely deployed. The key question for an ASP user is one of trust: Why should the user entrust its sensitive and personal user or corporate data to a remote computational service (e.g., data on income, personal assets, business logic, customer information, financial data, revenue, and earnings)? Consulting groups and the trade press often note that the strongest barrier to engaging in a business relationship with an ASP is trust. Yet it is evident that for effective use of an ASP, customers must expose service input data to the ASP, where eventually it is subject to exposure, at least at the time of execution of the service's function on the data. Note that this does not refer to the risk of data being captured over the communication link; that problem is solved through cryptographic protocols that are commonplace. The data security problem is much more difficult to solve. A theoretical solution, with proven guarantees that the data remain unknown to the ASP, is provided by Abadi and Feigenbaum (Abadi & Feigenbaum, 1990; Abadi, Feigenbaum, & Kilian, 1989) and has become known as secure circuit evaluation. Their algorithm is impractical for this scenario because it requires significant interaction between client and server to accomplish a computation. A general solution of this problem that would guarantee data security for the input, the operational, and the output data transmitted is an open research question. Techniques such as obfuscation, computing with encrypted data, computing with encrypted functions, private information retrieval, and privacy homomorphisms may be applied to attack this problem.
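To give a flavor of what a privacy homomorphism means, the toy example below uses textbook RSA, whose encryption is multiplicatively homomorphic: the server can multiply two ciphertexts without ever seeing the plaintexts, and only the client can decrypt the product. The parameters are deliberately tiny and the scheme is not semantically secure; this is intuition only, not the Abadi and Feigenbaum construction.

# Toy multiplicative homomorphism with textbook RSA (insecure, for intuition only).
n, e, d = 3233, 17, 2753          # n = 61 * 53; e * d = 1 mod phi(n)

def encrypt(m):                   # client side
    return pow(m, e, n)

def decrypt(c):                   # client side
    return pow(c, d, n)

a, b = 7, 6
ca, cb = encrypt(a), encrypt(b)   # client sends only ciphertexts to the server

# Server multiplies ciphertexts without learning a or b.
c_product = (ca * cb) % n

assert decrypt(c_product) == (a * b) % n   # client recovers 42
print(decrypt(c_product))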
OUTLOOK The widespread acceptance of the ASP model will depend on whether ASPs are able to ensure availability, accessibility, privacy, and security for outsourced applications and services. It will also depend on whether customers will learn to trust the model. System availability can be achieved through redundancy and replication of the service. Full accessibility of the service is strongly dependent
on reliable network connections from the customer to the ASP and is less easy to guarantee by the ASP alone, unless leased communication lines are offered to access the service. Guaranteeing privacy and security of the outsourced data constitutes an open research question, with no fully satisfying solution in sight. For now, the successful ASP will offer a widely useful, highly specialized, and non-mission-critical service. Moreover, privacy guarantees and trust relationships can, for the time being, only be achieved through organizational means—for example, through the operation of ASPs by known industrial players with established brands. Because of these constraints, it is likely that the hybrid deployment model, which outsources part of the software, maintains critical application components and data at the customer site, and manages these components from remote sites or, alternatively, offers additional services to complement the licensed software, will prevail. Many enterprise resource planning (ERP) systems require significant customization to adapt the software to the business processes of the customer. Because this is a complex task to accomplish over the Web for the customer and difficult and expensive to manage on an individual basis for the ASP, it is unlikely that ASPs offering full-fledged ERP packages as leased solutions will establish themselves. If these widely accepted business software solutions establish and follow standard business process models, ASPs may be able to offer these models to a wide customer base without going through a long phase of customization. The ASP industry is in an early stage and must still establish itself. This is to be expected, because the ASP model represents a paradigm shift away from the traditional application licensing and in-house management model. This emerging industry enables customers to reduce both application deployment time frames and their total cost of ownership. Although certain problems still need to be addressed, the ASP industry is successfully implementing the concept of application outsourcing and software leasing.
GLOSSARY
Appliances A prepackaged, special-purpose hardware and software solution that plugs into a customer's existing IT infrastructure without much need for customization. Often, the appliance can be managed remotely by a provider.
Application hosting A computer system in a distributed environment like the Internet is often referred to as a computing host, a computing node, or simply a host. Hosting refers to the provisioning of services or applications on such a computer system to make these services and applications available to authorized parties interacting in the distributed environment. Application hosting emphasizes that the hosted service is an application.
Application outsourcing The deployment of an application over a network.
Application server The system software that manages the computational transactions, business logic, and application activation in a multitiered server configuration. The application server interacts with the database tier at one end and the client tier at the other end of the distributed application.
Application services Distinguished from applications by their smaller scope, more specific nature, and focused functionality. An application service is seldom used in isolation; more often several application services are bundled together and offered by third-party providers that enrich their content offerings with application services.
Application service provider (ASP) The entity that manages the outsourcing of applications.
Server farm A dedicated place with many computing servers accessible through a network.
Software leasing The subscription-based offering of a software application. Often, leasing implicitly refers to a longer term relationship between the customer and the provider. Shorter term relationships are sometimes referred to as software renting, although no clear line is drawn in the context of ASPs.
Wireless application service provider (WASP) An ASP that primarily offers services to customers interacting with the ASP through wireless devices.
Web services The standard suite of protocols created by several large companies to allow applications to interoperate, discover, invoke, and integrate Web-based services across a network, primarily the Internet. The term Web Services (the standards) should not be confused with an actual Web service that is created based on these standards. A Web service built in this manner is also often referred to as an application service.
CROSS REFERENCES See Electronic Commerce and Electronic Business; Internet Literacy; Internet Navigation (Basics, Services, and Portals); Web Hosting; Web Quality of Service; Web Services.
REFERENCES
Abadi, M., & Feigenbaum, J. (1990). Secure circuit evaluation. Journal of Cryptology, 2, 1–12.
Abadi, M., Feigenbaum, J., & Kilian, J. (1989). On hiding information from an oracle. Journal of Computer and System Sciences, 39, 21–50.
Abel, D. J., Gaede, V. J., Taylor, K. L., & Zhou, X. (1999). SMART: Towards spatial Internet marketplaces. GeoInformatica, 3, 141–164.
Allen, J., Gabbard, D., & May, C. (2003). Outsourcing managed security services. Technical report from the Networked Systems Survivability Program at the Software Engineering Institute. Retrieved February 2003 from http://www.cert.org/security-improvement/modules/omss/index.html
Bhargava, H. K., King, A. S., & McQuay, D. S. (1995). DecisionNet: An architecture for modeling and decision support over the World Wide Web. In T. X. Bui (Ed.), Proceedings of the Third International Society for Decision Support Systems Conference (Vol. II, pp. 541–550). Hong Kong: International Society for DSS.
Czyzyk, J., Mesnier, M. P., & Moré, J. J. (1997). The network-enabled optimization system (NEOS) server (Preprint MCS-P615-1096). Mathematics and Computer Science Division, Argonne National Laboratory.
Jacobsen, H.-A., Guenther, O., & Riessen, G. (2001). Component leasing on the World Wide Web. NETNOMICS Journal, 2, 191–219.
Marchand, N., & Jacobsen, H.-A. (2001). An economic model to study dependencies between independent software vendors and application service providers. Electronic Commerce Research Journal, 1(3), 315–334.
World Wide Web Consortium (2003). Web services specifications. Retrieved December 2002 from http://www.w3.org/2002/ws/
WIPO Mediation and Arbitration Center (n.d.). Dispute avoidance and resolution best practices for the application service provider industry. Retrieved December 2002 from http://arbiter.wipo.int/asp/report/
Authentication Patrick McDaniel, AT&T Labs
Authentication 48
Meet Alice and Bob 48
Credentials 49
Web Authentication 49
Password-Based Web Access 50
Single Sign-On 50
Certificates 51
SSL 51
Host Authentication 52
Remote Login 52
SSH 52
One-Time Passwords 53
Kerberos 53
Pretty Good Privacy 54
IPsec 54
Conclusion 55
Glossary 55
Cross References 56
References 56
AUTHENTICATION An authentication process establishes the identity of some entity under scrutiny. For example, a traveler authenticates herself to a border guard by presenting a passport. Possession of the passport and resemblance to the attached photograph is deemed sufficient proof that the traveler is the identified person. The act of validating the passport (by checking a database of known passport serial numbers) and assessing the resemblance of the traveler is a form of authentication. On the Internet, authentication is somewhat more complex; network entities do not typically have physical access to the parties they are authenticating. Malicious users or programs may attempt to obtain sensitive information, disrupt service, or forge data by impersonating valid entities. Distinguishing these malicious parties from valid entities is the role of authentication and is essential to network security. Successful authentication does not imply that the authenticated entity is given access. An authorization process uses authentication, possibly with other information, to make decisions about whom to give access. For example, not all authenticated travelers will be permitted to enter the country. Other factors, such as the existence of visas, a past criminal record, and the political climate will determine which travelers are allowed to enter the country. Although the preceding discussion focused on entity authentication, it is important to note that other forms of authentication exist. In particular, message authentication is the process by which a particular message is associated with some sending entity. This article restricts itself to entity authentication, deferring discussion of other forms to other chapters in this encyclopedia.
Meet Alice and Bob
Authentication is often illustrated through the introduction of two protagonists, Alice and Bob. In these descriptions, Alice attempts to authenticate herself to Bob. Note that Alice and Bob are often not users, but computers. For example, a computer must authenticate itself to a file
server prior to being given access to its contents. Independent of whether Alice is a computer or a person, she must present evidence of her identity. Bob evaluates this evidence, commonly referred to as a credential. Alice is deemed authentic (authenticated) by Bob if the evidence is consistent with information associated with her claimed identity. The form of Alice’s credential determines the strength and semantics of authentication. The most widely used authentication credential is a password. To illustrate, UNIX passwords configured by system administrators reside in the/etc/passwd file. During the login process, Alice (a UNIX user) types her password into the host console. Bob (the authenticating UNIX operating system) compares the input to the known password. Bob assumes that Alice is the only entity in possession of the password. Hence, Bob deems Alice authentic because she is the only one who could present the password (credential). Note that Bob’s assertion that Alice is the only entity who could have supplied the password is not strictly accurate. Passwords are subject to guessing attacks. Such an attack continually retries different passwords until the authentication is successful (the correct password is guessed). Many systems combat this problem by disabling authentication (of that identity) after a threshold of failed authentication attempts. The more serious dictionary attack makes use of the UNIX password file itself. A salted noninvertible hash of the password is recorded in the password file. Hence, malicious parties cannot obtain the password directly from/etc/passwd. However, a malicious party who obtains the password file can mount a dictionary attack by comparing hashed, salted password guesses against the password file’s contents. Such an attack bypasses the authentication service and, hence, is difficult to combat. Recent systems have sought to mitigate attacks on the password file by placing the password hash values in a highly restricted shadow password file. Passwords are subject to more fundamental attacks. In one such attack, the adversary simply obtains the password from Alice directly. This can occur where Alice “shares” her password with others, or where she records
it in some obvious place (on her PDA). Such attacks illustrate an axiom of security: A system is only as secure as the protection afforded to its secrets. In the case of authentication, failure to adequately protect credentials from misuse can result in the compromise of the system. The definition of identity has been historically controversial. This is largely because authentication does not truly identify physical entities. It associates some secret or (presumably) unforgeable information with a virtual identity. Hence, for the purposes of authentication, any entity in possession of Alice's password is Alice. The strength of authentication is determined by the difficulty with which a malicious party can circumvent the authentication process and incorrectly assume an identity. In our above example, the strength of the authentication process is largely determined by the difficulty of guessing Alice's password. Note that other factors, such as whether Alice chooses a poor password or writes it on the front of the monitor, will determine the effectiveness of authentication. The lesson here is that authentication is as strong as the weakest link; failure to protect the password either by Alice or the host limits the effectiveness of the solution. Authentication on the Internet is often more complex than is suggested by the previous example. Often, Alice and Bob are not physically near each other. Hence, both parties will wish to authenticate each other. In our above example, Alice will wish to ensure that she is communicating with Bob. However, no formal process is needed; because Alice is sitting at the terminal, she assumes that Bob (the host) is authentic. On the Internet, it is not always reasonable to assume that Alice and Bob have established, or are able to establish, a relationship prior to communication. For example, consider the case where Alice is purchasing goods on the Internet. Alice goes to Bob's Web server, identifies the goods she wishes to purchase, provides her credit card information, and submits the transaction. Alice, being a cautious customer, wants to ensure that this information is only being given to Bob's Web server (i.e., authenticate the Web server). In general, however, requiring Alice to establish a direct relationship with each vendor from whom she may purchase goods is not feasible (e.g., it is not feasible to establish passwords for each Web site out of band). Enter Trent, the trusted third party. Logically, Alice appeals to Trent for authentication information relating to Bob. Trent is trusted by Alice to assert Bob's authenticity. Therefore, Bob need only establish a relationship with Trent to begin communicating with Alice. Because the number of widely used trusted third parties is small (on the order of tens), and every Web site establishes a relationship with at least one of them, Alice can authenticate virtually every vendor on the Internet.
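The following Python sketch illustrates the salted, noninvertible hashing described earlier for UNIX passwords, together with why a stolen hash still enables a dictionary attack. The construction shown (a single salted SHA-256) is a simplification of the schemes real systems use, such as iterated or memory-hard hash functions, and is chosen only to keep the example short.

# Salted password hashing and a toy dictionary attack (simplified scheme).
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, digest) using a salted SHA-256; real systems iterate the hash."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.sha256(salt + password.encode("utf-8")).hexdigest()
    return salt, digest

def verify(password, salt, stored_digest):
    return hash_password(password, salt)[1] == stored_digest

# Alice's entry as it might be stored by the authentication service.
salt, stored = hash_password("correct horse")

print(verify("correct horse", salt, stored))   # True: Alice authenticates
print(verify("wrong guess", salt, stored))     # False

# Dictionary attack: an adversary who obtains the stored salt and digest
# simply hashes candidate passwords until one matches.
for guess in ["123456", "password", "correct horse"]:
    if verify(guess, salt, stored):
        print("password recovered:", guess)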
CREDENTIALS Authentication is performed by the evaluation of credentials supplied by the user (i.e., Alice). Such credentials can take the form of something you know (e.g., password), something you have (e.g., smartcard), or something you are (e.g., fingerprint). The credential type is specific to the authentication service and reflects some direct or
indirect relationship between the user and the authentication service. Credentials often take the form of shared secret knowledge. Users authenticate themselves by proving knowledge of the secret. In the UNIX example above, knowledge of the password is deemed sufficient evidence to prove user identity. In general, such secrets need not be statically defined passwords. For example, users in one-time password authentication systems do not present knowledge of secret text, they identify a numeric value valid only for a single authentication. Users need not present the secret directly. They need only demonstrate knowledge of it (e.g., by presenting evidence that could only be derived from it). Secrets are often long random numbers and, thus, cannot be easily remembered by users. For example, a typical RSA private key is a 1024-digit binary number. Requiring a user to remember this number is, at the very least, unreasonable. Such information is frequently stored in a file on a user’s host computer, on a PDA, or on another nonvolatile storage. The private key is used during authentication by accessing the appropriate file. However, private keys can be considered “secret knowledge” because the user presents evidence external to the authentication system (e.g., from the file system). Credentials may also be physical objects. For example, a smartcard may be required to gain access to a host. Authenticity in these systems is inferred from possession rather than knowledge. Note that there is often a subtle difference between the knowledge- and the possessionbased credentials. For example, it is often the case that a user-specific private key is stored on an authenticating smartcard. In this case, however, the user has no ability to view or modify the private key. The user can only be authenticated via the smartcard issued to the user. Hence, for the purposes of authentication, the smartcard is identity; no amount of effort can modify the identity encoded in the smartcard. Contemporary smartcards can be modified or probed. However, because such manipulation often takes considerable effort and sophistication (e.g., use of an electron microscope), such attacks are beyond the vast majority of attackers. Biometric devices measure physical characteristics of the human body. An individual is deemed authentic if the measured aspect matches previously recorded data. The accuracy of matching determines the quality of authentication. Contemporary biometric devices include fingerprint, retina, or iris scanners and face recognition software. However, biometric devices are primarily useful only where the scanning device is trusted (i.e., under control of the authentication service). Although biometric authentication has seen limited use in the Internet, it is increasingly used to support authentication associated with physical security (i.e., governing clean-room access).
WEB AUTHENTICATION One of the most prevalent uses of the Internet is Web browsing. Users access the Web via specialized protocols that communicate HTML and XML requests and content. The requesting user’s Web browser renders received content. However, it is often necessary to restrict access to
Web content. Moreover, the interactions between the user and a Web server are often required to be private. One aspect of securing content is the use of authentication to establish the true or virtual identity of clients and Web servers.
Password-Based Web Access
Web servers initially adopted well-known technologies for user authentication. Foremost among these was the use of passwords. To illustrate the use of passwords on the Web, the following describes the configuration and use of basic authentication in the Apache Web server (Apache, 2002). Note that the use of basic authentication in other Web servers is largely similar. Access to content protected by basic authentication in the Apache Web server is indirectly governed by the password file. Web-site administrators create the password file (whose location is defined by the Web-site administrator) by entering user and password information using the htpasswd utility. It is assumed that the passwords are given to the users over an out-of-band channel (e.g., via e-mail or phone). In addition to specifying passwords, the Web server must identify the subset of Web content to be password protected (e.g., a set of protected URLs). This is commonly performed by creating a .htaccess file in the directory to be protected. The .htaccess file defines the authentication type and specifies the location of the relevant password file. For example, located in the content root directory, the following .htaccess file restricts access to those users who are authenticated via password:

AuthName "Restricted Area"
AuthType Basic
AuthUserFile /var/www/webaccess
require valid-user

Users accessing protected content (via a browser) are presented with a password dialog (e.g., similar to the dialog depicted in Figure 1). The user enters the appropriate username and password and, if correct, is given access to the Web content. Because basic authentication sends passwords over the Internet in clear text, it is relatively simple to recover them
Figure 1: Password authentication on the Web.
by eavesdropping on the HTTP communication. Hence, basic authentication is sufficient to protect content from casual misuse but should not be used to protect valuable or sensitive data. However, as is commonly found on commercial Web sites, performing basic authentication over more secure protocols (e.g., SSL; see below) can mitigate or eliminate many of the negative properties of basic authentication. Many password-protected Web sites store user passwords (in encrypted form) in cookies the first time a user is authenticated. In these cases, the browser automatically submits the cookie to the Web site with each request. This approach eliminates the need for the user to be authenticated every time she visits the Web site. However, this convenience has a price. In most single-user operating systems, any entity using the same host will be logged in as the user. Moreover, the cookies can be easily captured and replayed back to the Web site (Fu, Sit, Smith, & Feamster, 2001). Digest authentication uses challenges to mitigate the limitations of password-based authentication (Franks et al., 1999). Challenges allow authenticating parties to prove knowledge of secrets without exposing (transmitting) them. In digest authentication, Bob sends a random number (nonce) to Alice. Alice responds with a hash of the random number and her password. Bob uses Alice's password (which only he and Alice know) to compute the correct response and compares it to the one received from Alice. Alice is deemed authentic if the computed and received responses match (because only Alice could have generated the response). Because the hash, rather than the secret, is sent, no adversary can obtain Alice's password from the response. A number of other general-purpose services have been developed to support password maintenance. For example, RADIUS, DIAMETER, and LDAP password services have been widely deployed on the Internet. Web servers or hosts subscribing to these services defer all password maintenance and validation to a centralized service. Although each system may use different services and protocols, users see interfaces similar to those presented by basic authentication (e.g., the user login above). However, passwords are maintained and validated by a centralized service, rather than by the Web server.
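The essence of the challenge-response exchange described above can be shown in a few lines of Python: the server issues a fresh nonce, the client returns a hash computed over the nonce and the shared password, and the server recomputes the same value. The exact fields hashed by HTTP digest authentication (Franks et al., 1999) are more elaborate, so this is a simplified sketch of the idea rather than the wire format.

# Simplified challenge-response exchange (the HTTP digest wire format is richer).
import hashlib
import os

password = "s3cret"               # shared secret known to client and server

# Server side: issue a fresh, unpredictable challenge.
nonce = os.urandom(16).hex()

# Client side: prove knowledge of the password without transmitting it.
def respond(nonce, password):
    return hashlib.sha256(f"{nonce}:{password}".encode("utf-8")).hexdigest()

client_response = respond(nonce, password)

# Server side: recompute the expected response and compare.
expected = respond(nonce, password)
print("authenticated" if client_response == expected else "rejected")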
Single Sign-On
Basic authentication has become the predominant method of performing authentication on the Web. Users often register a username and password with each retailer or service provider with which they do business. Hence, users are often faced with the difficult and error-prone task of maintaining a long list of usernames and passwords. In practice, users avoid this maintenance headache by using the same passwords on all Web sites. However, this allows adversaries who gain access to the user information on one site to impersonate the user on many others. A single sign-on (SSO) system defers user authentication to a single, universal authentication service. Users authenticate themselves to the SSO once per session. Subsequently, each service requiring user authentication is redirected to an SSO server that vouches for the user. Hence, the user is required to maintain only a single
authentication credential (e.g., SSO password). Note that the services themselves do not possess user credentials (e.g., passwords). They simply trust the SSO to state which users are authentic. Although single sign-on services have been used for many years (e.g., see Kerberos, below), the lack of universal adoption and cost of integration has made their use in Web applications highly undesirable. These difficulties have led to the creation of SSO services targeted specifically to authentication on the web. One of the most popular of these systems is the Microsoft passport service (Microsoft, 2002). Passport provides a single authentication service and repository of user information. Web sites and users initially negotiate secrets during the passport registration process (i.e., user passwords and Web site secret keys). In all cases, these secrets are known only to the passport servers and the registering entity. Passport authentication proceeds as follows. Users requesting a protected Web page (i.e., a page that requires authentication) are redirected to a passport server. The user is authenticated via a passport-supplied login screen. If successful, the user is redirected back to the original Web site with an authentication cookie specific to that site. The cookie contains user information and site specific information encrypted with a secret key known only to the site and the passport server. The Web site decrypts and validates the received cookie contents. If successful, the user is deemed authentic and the session proceeds. Subsequent user authentication (with other sites) proceeds similarly, save that the login step is avoided. Successful completion of the initial login is noted in a session cookie stored at the user browser and presented to the passport server with later authentication requests. Although SSO systems solve many of the problems of authentication on the Web, they are not a panacea. By definition, SSO systems introduce a single point of trust for all users in the system. Hence, ensuring that the SSO is not poorly implemented, poorly administered, or malicious is essential to its safe use. For example, passport has been shown to have several crucial flaws (Kormann & Rubin, 2000). Note that although existing Web-oriented SSO systems may be extended to support mutual authentication, the vast majority have yet to do so.
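The cookie handed back to the Web site can be thought of as a token that only the SSO service can mint and that the site can check with a key it shares with the SSO service. The Python sketch below uses an HMAC-protected token to convey the idea; Passport's actual cookie format and cryptography differ, so this should be read as a generic illustration rather than a description of that system.

# Generic SSO-style token: minted by the SSO service, checked by the Web site.
import hashlib
import hmac
import time

SITE_KEY = b"site-secret-shared-with-sso"   # hypothetical key negotiated at registration

def mint_token(user, key, ttl=3600):
    """SSO service: bind a user name and an expiry time to a keyed MAC."""
    expiry = str(int(time.time()) + ttl)
    body = f"{user}|{expiry}"
    mac = hmac.new(key, body.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"{body}|{mac}"

def check_token(token, key):
    """Web site: verify the MAC and the expiry before trusting the user name."""
    body, _, mac = token.rpartition("|")
    expected = hmac.new(key, body.encode("utf-8"), hashlib.sha256).hexdigest()
    user, _, expiry = body.partition("|")
    if not hmac.compare_digest(mac, expected):
        return None
    return user if time.time() < int(expiry) else None

token = mint_token("alice", SITE_KEY)
print(check_token(token, SITE_KEY))          # "alice"
print(check_token(token + "x", SITE_KEY))    # None: tampered token is rejected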
Certificates Although passwords are appropriate for restricting access to Web content, they are not appropriate for more general Internet authentication needs. Consider the Web site for an online bookstore, examplebooks.com. Users wishing to purchase books from this site must be able to determine that the Web site is authentic. If not authenticated, a malicious party may impersonate examplebooks.com and fool the user into exposing his credit card information. Note that most Web-enabled commercial transactions do not authenticate the user directly. The use of credit card information is deemed sufficient evidence of the user’s identity. However, such evidence is typically evaluated through the credit card issuer service (e.g., checking that the credit card is valid and has not exceeded its spending limit) before the purchased goods are provided to the buyer.
The dominant technology used for Internet Web site authentication is public key certificates. Certificates provide a convenient and scalable mechanism for authentication in large, distributed environments (such as the Internet). Note that certificates are used to enable authentication of a vast array of other non-Web services. For example, certificates are often used to authenticate electronic mail messages (see Pretty Good Privacy, below).

Certificates are used to document an association between an identity and a cryptographic key. Keys in public key cryptography are generated in pairs: a public and a private key (Diffie & Hellman, 1976). As the name would suggest, the public key is distributed freely, and the private key is kept secret. To simplify, any data signed (using a digital signature algorithm) by the private key can be validated using the public key. A valid digital signature can be mapped to exactly one private key. Therefore, any valid signature can only be generated by some entity in possession of the private key.

Certificates are issued by certification authorities (CA). The CA issues a certificate by digitally signing a statement that binds together an identity (e.g., the domain name of the Web site), validity dates, and the Web site's public key. The certificate is then freely distributed. A user validates a received certificate by checking the CA's digital signature. Note that most browsers are installed with a collection of CA certificates that are invariably trusted (i.e., they do not need to be validated). For example, many Web sites publish certificates issued by the Verisign CA (Verisign, 2002), whose certificate is installed with most browsers. In its most general form, a system used to distribute and validate certificates is called a public key infrastructure.
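The sign-and-verify relationship that certificates rely on can be demonstrated in a few lines of code. The sketch below, which assumes the third-party Python cryptography package, signs a simplified "certificate body" with a private key and validates it with the corresponding public key; a real X.509 certificate adds standardized encoding, extensions, and CA-managed issuance.

# pip install cryptography  (assumed third-party dependency)
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# The CA's key pair: the private key signs, the freely distributed public key verifies.
ca_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
ca_public_key = ca_private_key.public_key()

# A simplified statement binding an identity, validity dates, and a subject key.
statement = b"subject=www.example.com; valid=2003-01-01..2004-01-01; subject-key=..."

signature = ca_private_key.sign(statement, padding.PKCS1v15(), hashes.SHA256())

try:
    ca_public_key.verify(signature, statement, padding.PKCS1v15(), hashes.SHA256())
    print("signature valid: the statement is accepted")
except InvalidSignature:
    print("signature invalid: the statement is rejected")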
SSL Introduced by Netscape in 1994, the SSL protocol uses certificates to authenticate Web content. In addition to authenticating users and Web sites, the SSL protocol negotiates an ephemeral secret key. This key is subsequently used to protect the integrity and confidentiality of all messages (e.g., by encrypting the messages sent between the Web server and the client). SSL continues to evolve. For example, the standardized and widely deployed TLS (Transport Layer Security) protocol is directly derived from SSL version 3.0. The use of SSL is signaled to the browser and Web site through the https URL protocol identifier. For example, Alice enters the following URL to access a Web site of interest: https://www.example.com/. In response to this request, Alice’s browser will initiate an SSL handshake protocol. If the Web site is correctly authenticated via SSL, the browser will retrieve and render Web site content in a manner similar to HTTP. Authentication is achieved in SSL by validating statements signed by private keys associated with the authenticated party’s public key certificate. Figure 2 depicts the operation of the SSL authentication and key agreement process. The SSL handshake protocol authenticates one or both parties, negotiates the cipher-suite policy for subsequent communication (e.g., selecting cryptographic algorithms and parameters), and establishes a master secret. All messages occurring after
the initial handshake are protected using cryptographic keys derived from the master secret.

Figure 2: The SSL protocol. Alice (the client) and Bob (the server) exchange an initial handshake, (1) Handshake Request and (2) Handshake Response, identifying the kind of authentication and the configuration of the subsequent session security. As dictated by the authentication requirements identified in the handshake, Alice and Bob may exchange and authenticate certificates following the (3) Authentication Request. The protocol completes with the (4) Server Key Exchange and (5) Client Key Exchange messages, establishing a session-specific key used to secure (e.g., encrypt) later communication.

The handshake protocol begins with both Alice (the end-user browser) and Bob (the Web server) identifying a cipher-suite policy and session-identifying information. In the second phase, Alice and Bob exchange certificates. Note that policy will determine which entities require authentication: as dictated by policy, Alice and/or Bob will request an authenticating certificate. The certificate is validated on reception (e.g., the issuing CA's signature is checked against the CA certificate installed with the browser). Note that in almost all cases, Bob will be authenticated but Alice will not. In these cases, Bob typically authenticates Alice using some external means (only when it becomes necessary). For example, online shopping Web sites will not authenticate Alice until she expresses a desire to purchase goods, and her credit card number is used to validate her identity at the point of purchase. Interleaved with the certificate requests and responses is the server and client key exchange. This process authenticates each side by signing information used to negotiate a session key. The signature is generated using the private key associated with the certificate of the party to be authenticated. A valid signature is deemed sufficient evidence because only an entity in possession of the private key could have generated it. Hence, signed data can be accepted as proof of authenticity. The session key is derived from the signed data, and the protocol completes with Alice and Bob sending finished messages.
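As an illustration of the client side of this handshake, the short program below uses Python's standard ssl module (a modern convenience assumed here; it speaks TLS, the standardized descendant of SSL) to connect to a server, validate its certificate against a browser-style set of trusted CAs, and report the negotiated parameters.

import socket
import ssl

hostname = "www.example.com"            # any HTTPS-enabled host
context = ssl.create_default_context()  # loads the trusted CA certificates

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket performs the handshake: cipher-suite negotiation,
    # certificate validation, and session key agreement.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("negotiated protocol:", tls.version())
        print("cipher suite:", tls.cipher())
        print("server certificate subject:", tls.getpeercert()["subject"])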
HOST AUTHENTICATION Most computers on the Internet provide some form of remote access. Remote access allows users or programs to access resources on a given computer from anywhere on the Internet. This access enables a promise of the Internet: independence from physical location. However, remote access has often been the source of many security vulnerabilities. Hence, protecting these computers from unauthorized use is essential. The means by which host authentication is performed in large part determines the degree to which an enterprise or user is protected from malicious parties lurking in the dark corners of the Internet. This section reviews the design and use of the predominant methods providing host authentication.
Remote Login

Embodying the small, isolated UNIX networks of old, remote login utilities allow administrators to identify the set of hosts and users who are deemed "trusted." Trusted hosts are authenticated by source IP address, host name, and/or user name only. Hence, trusted users and hosts need not provide a user name or password. The rlogin and rsh programs are used to access hosts. Configured by local administrators, the /etc/hosts.equiv file enumerates hosts/users who are trusted. Similarly, the .rhosts file contained in each user's home directory identifies the set of hosts trusted by an individual user. When a user connects to a remote host with a remote log-in utility, the remote log-in server (running on the accessed host) scans the hosts.equiv configuration file for the address and user name of the connecting host. If found, the user is deemed authentic and allowed access. If not, the .rhosts file of the accessing user (identified in the connection request) is scanned, and access is granted where the source address and user name are matched.

The remote access utilities do not provide strong authentication. Malicious parties may trivially forge IP addresses, DNS records, and user names (called spoofing). Although recent attempts have been made to address the security limitations of the IP protocol stack (e.g., IPsec, DNSsec), this information is widely accepted as untrustworthy. Remote access tools trade security for ease of access. In practice, these tools often weaken the security of network environments by providing a vulnerable authentication mechanism. Hence, the use of such tools in any environment connected to the Internet is considered extremely dangerous.
SSH

The early standards for remote access, telnet and ftp, authenticated users by UNIX password. Although the means of authentication are similar to those of terminal log in, their use on an open network introduces new vulnerabilities. Primarily, these utilities are vulnerable to password sniffing. Such attacks passively listen in on the network for communication between the host and the remote user. Note that the physical medium over which much local network communication occurs is Ethernet. Because Ethernet is a broadcast technology, all hosts on the local network (subnet) receive every bit of transmitted data. Obviously,
this approach simplifies communication eavesdropping. Although eavesdropping may be more difficult over other network media (e.g., switched networks), it is by no means impossible. Because passwords are sent in the clear (unencrypted), user-specific authentication information could be recovered. For this reason, the use of these utilities as a primary means of user access has largely been abandoned. Ftp is frequently used on the Web to transfer files. When used in this context, ftp generally operates in anonymous mode. Ftp performs no authentication in this mode, and the users are often restricted to file retrieval only.

The secure shell (SSH) (Ylonen, 1996) combats the limitations of standard tools by performing cryptographically supported host and/or user authentication. Similar to SSL, a by-product of the authentication is a cryptographic key used to obscure and protect communication between the user and remote host. SSH is not vulnerable to sniffing attacks and has been widely adopted as a replacement for the standard remote access tools.

SSH uses public key cryptography for authentication. On installation, each server host (a host allowing remote access via SSH) generates a public key pair. The public key is manually stored at each initiating host (a host from which a user will remotely connect). Note that unlike SSL, SSH uses public keys directly, rather than issued certificates. Hence, SSH authentication relies on host administrators maintaining the correct set of host keys.

SSH initiates a session in two phases. In the first phase, the server host is authenticated. The initiating host initiates the SSH session by requesting remote access. To simplify, the requesting host generates a random session key, encrypts it with a received host public key of the server, and forwards it back to the server. The server recovers the session key using its host private key. Subsequent communication between the hosts is protected using the session key. Because only someone in possession of the host private key could have recovered the session key, the server is deemed authentic. The server transmits a short-term public key in addition to the host key. The requesting host encrypts the random value response with both keys. The use of the short-term keys prevents adversaries from recovering the content of past sessions should the host key become compromised.

The second phase of SSH session initialization authenticates the user. Dictated by the configured policy, the server will use one of the following methods to authenticate the user:

.rhosts file: As described for the remote access utilities above, this file simply tests whether the accessing user identifier is present in the .rhosts file located in the home directory of the user.

.rhosts with RSA: Similar to the above file, this requires that the accessing host be authenticated via a known and trusted RSA public key.

Password authentication: This prompts the user for a local system password. The strength of this approach is determined by the extent to which the user keeps the password private.

RSA user authentication: This works via a user-specific RSA public key. Of course, this requires that the server
be configured with the public key generated for each user. Note that it is not always feasible to obtain the public key of each host that a user will access. Host keys may change frequently (as based on an administrative policy), be compromised, or be accidentally deleted. Hence, where the remote host key is not known (and the configured policy allows it), SSH will simply transmit it during session initialization. The user is asked if the received key should be accepted. If accepted, the key is stored in the local environment and is subsequently used to authenticate the host. Although the automated key distribution mode does provide additional protection over conventional remote access utilities (e.g., sniffing prevention), the authentication mechanism provides few guarantees. A user accepting the public key knows little about its origin (e.g., is subject to forgery, man-in-the-middle attacks, etc.). Hence, this mode may be undesirable for some environments.
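The first-phase idea, that only the holder of the host private key can recover a session key encrypted under the host public key, can be sketched as follows. This is not the SSH wire protocol; it is a conceptual illustration assuming the third-party Python cryptography package.

# pip install cryptography  (assumed third-party dependency)
import secrets
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Server host key pair, generated once when the server is installed.
host_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
host_public = host_private.public_key()   # copied to initiating hosts in advance

# Initiating host: choose a random session key and encrypt it for the server.
session_key = secrets.token_bytes(32)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped = host_public.encrypt(session_key, oaep)

# Server: only the holder of the host private key can unwrap the session key,
# which is what makes the server authentic from the client's point of view.
assert host_private.decrypt(wrapped, oaep) == session_key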
One-Time Passwords In a very different approach to combating password sniffing, the S/Key system (Haller, 1994) limits the usefulness of recovered passwords. Passwords in the S/Key system are valid only for a single authentication. Hence, a malicious party gains nothing by recovery of a previous password (e.g., via eavesdropping of a telnet log in). Although, on the surface, a one-time password approach may seem to require that the password be changed following each log in, the way in which passwords are generated alleviates the need for repeated coordination between the user and remote host. The S/Key system establishes an ordered list of passwords. Each password is used in order and only once, then discarded. While the maintenance of the password list may seem like an unreasonable burden to place on a user, the way in which the passwords are generated makes it conceptually simple. Essentially, passwords are created such that the knowledge of a past password provides no information about future passwords. However, if one knows a secret value (called a seed value), then all passwords are easily computable. Hence, while an authentic user can supply passwords as they are needed, a malicious adversary can only supply those passwords that have been previously used (and are no longer valid). In essence, the S/Key system allows the user to prove knowledge of the password without explicitly stating it. Over time, this relatively simple approach has been found to be extremely powerful, and it is used as the basis of many authentication services. For example, RSA’s widely used SecurID combines a physical token with one-time password protocols to authenticate users (RSA, 2002).
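A hash-chain construction of this kind can be sketched in a few lines. The example below is a conceptual illustration only; the real S/Key system uses a different hash function and a word-based encoding of one-time passwords.

import hashlib

def h(x):
    return hashlib.sha256(x).digest()

def build_chain(seed, n):
    # Password i is h applied i times to the seed; passwords are used in reverse order.
    chain, current = [], seed
    for _ in range(n):
        current = h(current)
        chain.append(current)
    return chain

n = 5
passwords = build_chain(b"hypothetical secret seed", n)
server_state = passwords[-1]          # the server stores only the last value, h^n(seed)

def login(one_time_password):
    # Accept a password whose hash equals the stored value, then roll the state forward.
    global server_state
    if h(one_time_password) == server_state:
        server_state = one_time_password
        return True
    return False

for otp in reversed(passwords[:-1]):  # the user supplies h^(n-1)(seed), h^(n-2)(seed), ...
    assert login(otp)
assert not login(passwords[-1])       # a replayed (already used) value is rejected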
Kerberos The Kerberos system (Neuman & Ts’o, 1994) performs trusted third party authentication. In Kerberos, users, hosts, and services defer authentication to a mutually trusted key distribution center (KDC). All users implicitly trust the KDC to act in their best interest. Hence, this approach is appropriate for localized environments (e.g.,
campus, enterprise) but does not scale well to large, loosely coupled communities. Note that this is not an artifact of the Kerberos system but is true of any trusted third party approach; loosely coupled communities are unlikely to universally trust a single authority.

Depicted in Figure 3, the Kerberos system performs mediated authentication between Alice and Bob through a two-phase exchange with the KDC. When logging onto the system, Alice enters her user name and password. Alice's host sends the KDC her identity. In response, the KDC sends Alice information that can only be understood by someone in possession of the password (which is encrypted with a key derived from the password). Included in this information is a ticket-granting ticket (TGT) used later by Alice to initiate a session with Bob. Alice is deemed authentic because she is able to recover the TGT. At some later point, Alice wishes to perform mutual authentication with another entity, Bob. Alice informs the KDC of this desire by identifying Bob and presenting the previously obtained TGT. Alice receives a message from the KDC containing the session key and a ticket for Bob. Encrypting the message with a key known only to Alice ensures that its contents remain confidential. Alice then presents the ticket included in the message to Bob. Note that the ticket returned to Alice is opaque; its contents are encrypted using a key derived from Bob's password. Therefore, Bob is the only entity who can retrieve the contents of the ticket. Because later communication between Alice and Bob uses the session key (given to Alice and contained in the ticket presented to Bob), Alice is assured that Bob is authentic. Bob is assured that Alice is authentic because Bob's ticket explicitly contains Alice's identity.

One might ask why Kerberos uses a two-phase process. Over the course of a session, Alice may frequently need to authenticate a number of entities. In Kerberos, because Alice obtains a TGT at log in, later authentication can be performed automatically. Thus, the repeated authentication of users and services occurring over time does not require human intervention; Alice types in her password exactly once. Because of its elegant design and technical maturity, the Kerberos system has been widely accepted in local environments. Historically common in UNIX environments, it has recently been introduced into other operating systems (e.g., Windows 2000, XP).

Figure 3: Kerberos authentication. Alice and the KDC exchange the messages (1) login (user ID) and (2) response (TGT); later, Alice sends (3) authenticate (bob) and receives (4) authenticate response (ticket); finally, Alice presents (5) authentication (ticket) to Bob. Alice receives a ticket-granting ticket (TGT) after successfully logging into the Kerberos key distribution center (KDC). Alice performs mutual authentication with Bob by presenting a ticket obtained from the KDC to Bob. Note that Bob need not communicate with the KDC directly; the contents of the ticket serve as proof of Alice's identity.
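The two encrypted objects at the heart of this exchange, a message only Alice can read and a ticket only Bob can open, can be modeled in toy form. The sketch below assumes the third-party Python cryptography package and stands in for Kerberos' real ticket formats and key-derivation steps.

# pip install cryptography  (assumed third-party dependency)
import json
from cryptography.fernet import Fernet

# Long-term keys derived from passwords; each is known only to the KDC and its owner.
alice_key = Fernet.generate_key()
bob_key = Fernet.generate_key()

def kdc_issue(client, server):
    # KDC: create a session key, one copy readable by Alice, one sealed inside a
    # ticket that only Bob can decrypt.
    session_key = Fernet.generate_key().decode()
    message_for_alice = Fernet(alice_key).encrypt(
        json.dumps({"server": server, "session_key": session_key}).encode())
    ticket_for_bob = Fernet(bob_key).encrypt(
        json.dumps({"client": client, "session_key": session_key}).encode())
    return message_for_alice, ticket_for_bob

def alice_open(message):
    return json.loads(Fernet(alice_key).decrypt(message))

def bob_open(ticket):
    # Only Bob can open the ticket; the client name inside identifies Alice.
    return json.loads(Fernet(bob_key).decrypt(ticket))

message, ticket = kdc_issue("alice", "bob")
assert alice_open(message)["session_key"] == bob_open(ticket)["session_key"]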
Pretty Good Privacy

As indicated by the previous discussion of trusted third parties, it is often true that two parties on the Internet will not have a direct means of performing authentication. For example, a programmer in Great Britain may not have any formal relationship with a student in California. Hence, no trusted third party exists to which both can defer authentication. A number of attempts have been made to address this problem by establishing a single public key infrastructure spanning the Internet. However, these structures require that users directly or indirectly trust CAs whose operation they know nothing about. Such assumptions are inherently dangerous and have been largely rejected by user communities.

The pretty-good-privacy (PGP) system (Zimmermann, 1994) takes advantage of informal social and organizational relationships between users on the Internet. In PGP, each user creates a self-signed PGP certificate identifying a public key and identity information (e.g., e-mail address, phone number, name). Users sign, with their own keys, the keys of those users they trust. Additionally, they obtain signatures from those users who trust them. The signing process itself is not prescribed by PGP; users commonly exchange signatures with friends and colleagues. The keys and signatures defined for a set of users define a Web of trust. On receipt of a key from a previously unknown source, an entity will make a judgment as to whether to accept the certificate based on the presence of signatures by known entities. A certificate will likely be accepted if a signature generated by a trusted party (with known and acceptable signing practices) is present. Such assessment can span multiple certificates, where signatures create trusted linkage between acceptable certificates. However, because trust is frequently not transitive, less trust is associated with long chains.

PGP certificates are primarily used for electronic mail but have been extended to support a wide range of data exchange systems (e.g., Internet newsgroups). The PGP approach and other technologies are used as the basis of S/MIME standards (Dusse, Hoffman, Ramsdell, Lundblade, & Repka, 1998). S/MIME defines protocols, data structures, and certificate management infrastructure for authentication and confidentiality of MIME (multipurpose Internet mail extensions) data. These standards are being widely adopted as a means of securing personal and enterprise e-mail.
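A simple way to model the acceptance decision is as a search for a short chain of signatures leading from an already-trusted key to the newly received one. The sketch below is schematic; real PGP trust policies also weigh the number of signatures and the signers' known practices, and the example relationships are hypothetical.

from collections import deque

# signatures[k] is the set of keys that key k has signed (a toy web of trust).
signatures = {
    "alice": {"bob"},
    "bob": {"carol"},
    "carol": {"dave"},
}

def accept(trusted_keys, candidate, max_chain=2):
    # Accept a key only if it is reachable from a trusted key via a short signature chain.
    frontier = deque((key, 0) for key in trusted_keys)
    seen = set(trusted_keys)
    while frontier:
        key, depth = frontier.popleft()
        if key == candidate:
            return True
        if depth < max_chain:
            for signed in signatures.get(key, ()):
                if signed not in seen:
                    seen.add(signed)
                    frontier.append((signed, depth + 1))
    return False

print(accept({"alice"}, "carol"))  # True: alice -> bob -> carol
print(accept({"alice"}, "dave"))   # False: the chain is too long to be trusted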
IPsec IPsec (Kent & Atkinson, 1998) is emerging as an important service for providing security on the Internet. IPsec is not
just an authentication service but also provides a complete set of protocols and tools for securing IP-based communication. The IPsec suite of protocols provides host-to-host security within the operating system implementation of the IP protocol stack. This has the advantage of being transparent to applications running on the hosts. A disadvantage of IPsec is that it does not differentiate between users on the host. Hence, although communication passing between the hosts is secure (as determined by policy), little can be ascertained as to the true identity of users on those hosts.

The central goal of IPsec was the construction of a general-purpose security infrastructure supporting many network environments. Hence, IPsec supports the use of an array of authentication mechanisms. IPsec authentication can be performed manually or automatically. Manually authenticated hosts share secrets distributed via administrators (i.e., configured manually at each host). Identity is inferred from knowledge of the secret. Session keys are directly or indirectly derived from the configured secret.

The Internet Security Association and Key Management Protocol (ISAKMP) defines an architecture for automatic authentication and key management used to support the IPsec suite of protocols. Built on ISAKMP, the Internet Key Exchange (IKE) protocol implements several protocols for authentication and session key negotiation. In these protocols, IKE negotiates a shared secret and policy between authenticated endpoints of an IPsec connection. The resulting IPsec Security Association (SA) records the result of IKE negotiation and is used to drive later communication between the endpoints.

The specifics of how authentication information is conveyed to a host are a matter of policy and implementation. However, in all implementations, each host must identify the keys or certificates to be used by IKE authentication. For example, Windows XP provides dialogs used to enter preshared keys. These keys are stored in the Windows registry and are later used by IKE for authentication and session key negotiation. Note that how the host stores secrets is of paramount importance. As with any security solution, users should carefully read all documentation and related security bulletins when using such interfaces.
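The manual-keying case can be illustrated with a small sketch: both endpoints hold the same preshared secret and combine it with freshly exchanged nonces to derive a per-session key. This is not the IKE exchange itself, only a schematic of how knowledge of a configured secret yields session keys; the key value shown is a hypothetical placeholder.

import hashlib
import hmac
import secrets

preshared_key = b"hypothetical secret configured on both hosts"

def derive_session_key(psk, initiator_nonce, responder_nonce):
    # Both endpoints compute the same value; an eavesdropper who sees only the
    # nonces cannot, because the preshared key never crosses the network.
    return hmac.new(psk, initiator_nonce + responder_nonce, hashlib.sha256).digest()

ni = secrets.token_bytes(16)   # nonce chosen by the initiator, sent in the clear
nr = secrets.token_bytes(16)   # nonce chosen by the responder, sent in the clear
assert derive_session_key(preshared_key, ni, nr) == \
       derive_session_key(preshared_key, ni, nr)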
CONCLUSION

The preceding sections described only a small fraction of a vast array of available authentication services. Given the huge number of alternatives, one might ask the question: Which one of these systems is right for my environment? The following are guidelines for the integration of an authentication service with applications and environments.

Don't try to build a custom authentication service. Designing and coding an authentication service is inherently difficult. This fact has been repeatedly demonstrated on the Internet; bugs and design flaws are occasionally found in widely deployed systems, and several custom authentication services have been broken in a matter of hours. It is highly likely that there exists an authentication service that is appropriate for a given environment. For all of these reasons, one should use services that have been time-tested.
Understand who is trusted by whom. Any authentication system should accurately reflect the trust held by all parties. For example, a system that authenticates students in a campus environment may take advantage of local authorities. In practice, such authorities are unlikely to be trusted by arbitrary endpoints in the Internet. Failure to match the trust existing in the physical world has ultimately led to the failure of many services.

Evaluate the value of the resources being protected and the strength of the surrounding security infrastructure. Authentication is only useful when used to protect access to a resource of some value. Hence, the authentication service should accurately reflect the value of the resources being protected. Moreover, the strength of the surrounding security infrastructure should be matched by the authentication service. One wants to avoid "putting a steel door in a straw house." Conversely, a weak or flawed authentication service can be used to circumvent the protection afforded by the surrounding security infrastructure.

Understand who or what is being identified. Identity can mean many things to many people. Any authentication service should model identity that is appropriate for the target domain. For many applications, it is often not necessary to map a user to a physical person or computer, but only to treat them as distinct but largely anonymous entities. Such approaches are likely to simplify authentication, and to provide opportunities for privacy protection.

Establish credentials securely. Credential establishment is often the weakest point of a security infrastructure. For example, many Web registration services establish passwords through unprotected forms (i.e., via HTTP). Malicious parties can (and do) trivially sniff such passwords and impersonate valid users. Hence, these sites are vulnerable even if every other aspect of security is correctly designed and implemented. Moreover, the limitations of many credential establishment mechanisms are often subtle. One should be careful to understand the strengths, weaknesses, and applicability of any solution to the target environment.

In the final analysis, an authentication service is one aspect of a larger framework for network security. Hence, it is necessary to consider the many factors that contribute to the design of the security infrastructure. It is only from this larger view that the requirements, models, and design of an authentication system emerge.
GLOSSARY

Authentication The process of establishing the identity of an online entity.

Authorization The process of establishing the set of rights associated with an entity.

Certificate A digitally signed statement associating a set of attributes with a public key. Most frequently used to associate a public key with a virtual or real identity (i.e., identity certificate).

Credential Evidence used to prove identity or access rights.

Malicious party Entity on the Internet attempting to gain unauthorized access, disrupt service, or eavesdrop on sensitive communication (syn: adversary, hacker).
Secret Information only known to and accessible by a specified (and presumably small) set of entities (e.g., passwords, cryptographic keys).

Trusted third party An entity mutually trusted (typically by two end-points) to assert authenticity or authorization, or perform conflict resolution. Trusted third parties are also often used to aid in secret negotiation (e.g., cryptographic keys).

Web of trust Self-regulated certification system constructed through the creation of ad hoc relationships between members of a user community. Webs are typically defined through the exchange of user certificates and signatures within the Pretty-Good-Privacy (PGP) system.
CROSS REFERENCES See Biometric Authentication; Digital Identity; Digital Signatures and Electronic Signatures; Internet Security Standards; Passwords; Privacy Law; Secure Sockets Layer (SSL).
REFERENCES

Apache (2002). Retrieved May 22, 2002, from http://httpd.apache.org/

Diffie, W., & Hellman, M. E. (1976). New directions in cryptography. IEEE Transactions on Information Theory, 22(6), 644–654.

Dusse, S., Hoffman, P., Ramsdell, B., Lundblade, L., & Repka, L. (1998). S/MIME version 2 message specification. Internet Engineering Task Force, RFC 2311.

Franks, J., Hallam-Baker, P., Hostetler, J., Lawrence, S., Leach, P., Luotonen, A., & Stewart, L. (1999). HTTP authentication: Basic and digest access authentication. Internet Engineering Task Force, RFC 2617.

Fu, K., Sit, E., Smith, K., & Feamster, N. (2001). Dos and don'ts of client authentication on the Web. In Proceedings of the 10th USENIX Security Symposium (pp. 251–268). Berkeley, CA: USENIX Association.

Haller, N. M. (1994). The S/Key one-time password system. In Proceedings of the 1994 Internet Society Symposium on Network and Distributed System Security (pp. 151–157). Reston, VA: Internet Society.

Kent, S., & Atkinson, R. (1998). Security architecture for the Internet protocol. Internet Engineering Task Force, RFC 2401.

Kormann, D. P., & Rubin, A. D. (2000). Risks of the Passport single signon protocol. In Computer Networks (pp. 51–58). Amsterdam, The Netherlands: Elsevier Science Press.

Microsoft (2002). Retrieved May 22, 2002, from http://www.passport.com/

Neuman, B. C., & Ts'o, T. (1994). Kerberos: An authentication service for computer networks. IEEE Communications, 32(9), 33–38.

RSA (2002). Retrieved August 21, 2002, from http://www.rsasecurity.com/products/securid/

Verisign (2002). Retrieved May 22, 2002, from http://www.verisign.com/

Ylonen, T. (1996). SSH: Secure login connections over the Internet. In Proceedings of the 6th USENIX Security Symposium (pp. 37–42). Berkeley, CA: USENIX Association.

Zimmermann, P. (1994). PGP user's guide. Cambridge, MA: Distributed by the Massachusetts Institute of Technology.
B

Benchmarking Internet

Vasja Vehovar, University of Ljubljana, Slovenia
Vesna Dolnicar, University of Ljubljana, Slovenia
Introduction
Business Benchmarking
The Concept of Benchmarking
Company Benchmarking
Extensions of Benchmarking
(Public) Sector Benchmarking
Framework Conditions Benchmarking
Benchmarking on the (Inter)national Level
Benchmarking Internet
Internet and Company Benchmarking
Internet and Sector Benchmarking
Internet and Framework Conditions Benchmarking
International Comparisons
Standardized Internet Benchmarks
Technical Measurements
The Methodological Problems
Number of Internet Users
Number of Internet Hosts
The Dimension of Time
Conclusion
Glossary
Cross References
References
Further Reading
INTRODUCTION

Benchmarking is often defined as a total quality management (TQM) tool (Gohlke, 1998). It is also one of the most recent words introduced into the lexicon of modern management (Keegan, 1998). Only since the mid-1980s have explicit benchmarking activities emerged. With expanded benchmarking practices, a variety of professional associations have been established: the Benchmarking Exchange, Corporate Benchmarking Services, Information Systems Management Benchmarking Consortium, Telecommunications International Benchmarking Group, and International Government Benchmarking Association, to name a few. Similarly, there exists an increasing number of professional Web sites, among them the Benchmarking Exchange, Benchmarking in Europe, Public Sector Benchmarking Service, Best Practices, the Benchmarking Network, and Benchmarking. (Uniform resource locators for these organizations and Web sites are found in the Further Reading section.) A study conducted among the members of the Benchmarking Exchange showed that the main search engine used among practitioners is Google.com, which includes almost 1 million benchmarking-related documents (Global Benchmarking Newsbrief, 2002). The expansion also can be observed in numerous textbooks dealing either with the general notion of benchmarking or with specific benchmarking areas. Certain textbooks have already been recognized as classics (e.g., Camp, 1989). As far as periodicals are concerned, numerous professional and community newsletters arose from practical business activities, such as eBenchmarking Newsletters by the Benchmarking Network (n.d.),
Benchmarking News by the European Association of Development Agencies (n.d.), and ICOBC & Free Newsletter by the International Council of Benchmarking Coordinators (n.d.). Benchmarking has also become a scientific issue within the field of quality management and specialized scholarly journals appeared (e.g., Benchmarking—an International Journal and Process Management in Benchmarking). The online academic databases searches such as EBSCOHost—Academic Search Premier, Emerald and ABI/INFORM Global, Social Science Plus show that in 2002 each of these databases already contained a minimum of 338 and a maximum of 939 papers related to benchmarking. The number of papers that relate simultaneously to benchmarking and to the Internet is considerably lower: from 3 in Emerald to 33 in the EBSCOHost database. Papers on benchmarking can be found mostly in Internet Research, Quality Progress, Computerworld and PC Magazine. When Longbottom (2000) reviewed approximately 500 benchmarking-related papers published between 1995 and 2000 and referenced on online academic indices (ANBAR and Emerald) as well as on various Internet sites, he found that the majority (80%) could be described as practical papers discussing specific aspects of benchmarking. The remaining academic papers are a mix of theory and development. In the Web of Science, the leading science citation database, almost 2,000 benchmarking-related papers were found in 2002. Papers in this database most often refer to the issues of improving competitive advantage or to specific areas such as health care and education. Robert C. Camp, often recognized as the founder of the benchmarking concept, was the most frequently cited author in this database, with almost 700 citations. In 2002, there 57
were almost 200 books about benchmarking available at the Amazon.com Web site. Nevertheless, benchmarking predominantly relates to the business environment, although over the past few years we can observe also an increased usage in more general context. Sometimes the notion of benchmarking even appears as a synonym for any comparison based on quantitative indicators. As an example, in some studies even the simplest comparisons based on standardized statistical indicators have been labeled “benchmarking” (i.e., Courcelle & De Vil, 2001; Petrin, Sicherl, Kukar, Mesl, & Vitez, 2000). The notion of benchmarking thus has a wide range of meanings, from a specific and well-defined business practice to almost any comparison based on empirical measures. In this chapter, we use the notion of benchmarking in a broad context, although we concentrate on a specific application: the Internet. We must recognize that the Internet is a newer phenomenon than benchmarking, at least in the sense of general usage. We are thus discussing a relatively new tool (benchmarking) in a very new area (Internet). From the aspect of the scholarly investigation, the problem cannot be precisely defined. In particular, Internet-related topics can be extremely broad, as the Internet has complex consequences on a variety of subjects, from business units to specific social segments, activities, and networks, including the everyday life of the average citizen. Additional complexity arises because the Internet is automatically associated with an array of closely related issues (i.e., the new economy, new society, new business processes, new technologies) that are not clearly separated from the Internet itself. In certain areas, the Internet may have a rich and specific meaning that radically extends beyond its mere technical essence as a network of computers based on a common communication protocol. In this chapter, we therefore understand benchmarking in its broadest sense but limit the scope of study, as much as possible, to its relation to the Internet and not to related technologies and the corresponding social and business ramifications. In particular, we limit this discussion to business entities and national comparisons. In the following sections, we describe the notion of benchmarking in a business environment and also in a more general context, such as sectoral benchmarking and benchmarking of framework conditions. Next, we concentrate on benchmarking of Internet-related issues, particularly with regard to the performance of various countries. Key methodological problems are also discussed with specific attention to the dimension of time.
BUSINESS BENCHMARKING A relatively sharp distinction exists between benchmarking at the company level and other types of benchmarking, which we consider extensions of the technique and discuss later in the chapter. First, we examine additional details related to the business benchmarking process.
The Concept of Benchmarking The common denominator of various definitions of benchmarking is the concept of a “proactive, continuous process, which uses external comparisons to promote incremental improvements in products, processes, and services, which ultimately lead to competitive advantage
through improved customer satisfaction, and achieving superior performance” (Camp, 1989, p. 3). The majority of authors also distinguish between benchmarking and benchmarks. The latter are measurements that gauge the performance of a function, operation, or business relative to others (i.e., Bogan & English, 1994, pp. 4–5). Similarly, Camp (1989) defined benchmark as a level of service provided, a process or a product attribute that sets the standard of excellence, which is often described as a “best-in-class” achievement. Benchmarking, in contrast to benchmarks, is the ongoing search for best practices that produce superior performance when adapted and implemented in an organization (Bogan & English, 1994). The benchmarking process is thus a systematic and continuous approach that involves identifying a benchmark, comparing against it, and identifying practices and procedures that will enable an organization to become the new best in class (Camp, 1989; Spendolini, 1992). In general, two types of benchmarking definitions can be found. Some definitions are limited only to the measuring and comparing while the others focus also on implementation of change and the monitoring of results. Within this context Camp (1989, pp. 10–13) distinguished between formal and working definitions, with the latter emphasizing the decision-making component and the former relating to the measurement process alone. As already noted, benchmarking is basically a TQM tool (Codling, 1996; Czarnecki, 1999; Gohlke, 1998). If quality management is the medicine for strengthening organizations, benchmarking is the diagnosis (Keegan, 1998, pp. 1–3). Although benchmarking readily integrates with strategic initiatives such as continuous improvement and TQM, it is also a discrete process that delivers value to the organization itself (American Productivity and Quality Center [APQC], 2002). At the extreme side, Codling (1996, pp. 24–27) did not classify benchmarking within the TQM framework at all but indicated that they are two separate processes that do not exist within a simple hierarchical relationship but are equal concepts with considerable overlap. We add that apart from TQM, benchmarking also integrates with reengineering (Bogan & English, 1994) and the Six Sigma approach (Adams Associates, 2002). In the late 1970s, Xerox developed a well-known benchmarking project, considered a pioneer in the process (Rao et al., 1996). Xerox defined benchmarking as “a continuous process of measuring products, services, and practices against the toughest competitors or those companies recognized as industry leaders” (Camp, 1989, p. 10). Codling (1996) noted, however, that in the 1950s, well before the Xerox project, to U.K. organizations, Profit Impact of Marketing Strategy (PIMS) and the Center for Interfirm Comparison (CIFC) conducted activities that could be defined as benchmarking. PIMS and CIFC systematically gathered information on companies’ performance and compared these data with those from similar businesses. Early seeds of benchmarking can be also found in the Japanese automotive industry when Toyota systematically studied U.S. manufacturing processes at General Motors, Chrysler, and Ford in the 1950s. Toyota then adopted, adapted, and improved upon their findings. All these examples confirm that companies actually used benchmarking well before 1970s, most often using the methods of site visits, reverse engineering, and competitive analysis (Rao et al., 1996).
EXTENSIONS OF BENCHMARKING
The emphasis on formal benchmarking processes changed markedly only in 1990s, not only in the business sector, but also in regional and public sectors, particularly in Australia, New Zealand, and the United Kingdom. The initial understanding of benchmarking was rapidly extended in numerous directions. Modern benchmarking thus refers to complex procedures of evaluation, comprehension, estimation, measurement and comparison. It covers designing, processing, and interpreting of the information needed for a improved decision making. This relates not only to businesses but also to the performance of other entities, including countries. As a typical example, in the second benchmarking report comparing the performance of Belgium with other countries, Courcelle and De Vil (2001, p. 1) defined benchmarking as a continuous, systematic process for comparing performances against the best performers in the world.
Company Benchmarking As noted earlier, benchmarking is usually a part of the quality management concept directed toward making products or services “quicker, better and cheaper” (Keegan, 1998, p. 12). The APQC (2002) suggested using benchmarking to improve profits and effectiveness, accelerate and manage change, set stretch goals, achieve breakthroughs and innovations, create a sense of urgency, overcome complacency or arrogance, see “outside the box,” understand world-class performance and make better informed decisions. Within the business environment, benchmarking is most often performed in the fields of customer satisfaction, information systems, employee training, process improvement, employee recruiting, and human resources. The literature describes many types of benchmarking processes. Camp (1995, p. 16) distinguished between four types of benchmarking: internal, competitive, functional, and generic. Similarly, Codling (1996, pp. 8–13) differentiated three types or perspectives on benchmarking: internal, external, and best practice. Bogan and English (1994, pp. 7–9) also presented three distinct types of benchmarking: process, performance, and strategic benchmarking (see also Keegan, 1998, pp. 13–16). Benchmarking procedures are usually formalized in 4 to 12 stages (APQC, 2002; Bogan & English, 1994; Camp, 1995; Codling, 1996; Longbottom, 2000; Keegan, 1998; Spendolini, 1992). As Bogan and English (1994, p. 81) stated, the differences among benchmarking processes are often cosmetic. Most companies employ a common approach that helps them plan the project, collect and analyze data, develop insights, and implement improvement actions. Each company breaks this process into a different number of steps, however, depending on how much detail it wishes to describe at each step of the template. This does not mean that some companies exclude some steps, but in practice certain steps may naturally combine into one (Codling, 1996, p. xii). The four major stages that appear to be common to all classifications are as follows: 1. Planning. This step involves selection of the broad subject area to be benchmarked, defining the process, and other aspects of preparation. During the planning
stage, organizations perform an internal investigation, identify potential competitors against which benchmarking may be performed, identify key performance variables, and select the most likely sources of data and the most appropriate method of data collection. 2. Analysis. This step involves collection of data (e.g., from public databases, professional associations, surveys and questionnaires, telephone interviews, benchmarking groups), determination of the gap between the organization’s performance and that of the benchmarks, exchange of information, site visits to the benchmarked company, and observations and comparisons of process. A structured questionnaire asking for specific benchmarks, addressed to the similar or competitive business entities, is often a crucial step in collecting the data. 3. Action. This step involves communication throughout the organization of benchmarking results, adjustment of goals, adaptation of processes, and implementation of plans for improvement. 4. Review. This step involves review and repetition of the process with the goal of continuous improvement.
Another classification of the benchmarking process relates to the maturity of the company. In the early phase of the process, a company applies diagnostic benchmarking. The second phase is holistic benchmarking, in which the business as a whole is examined, identifying key areas for improvement. In the third, mature phase, the company graduates to process benchmarking, focusing on specific processes and chasing world-class performance (Keegan, 1998; O’Reagain & Keegan, 2000). From these descriptions, it is clear that benchmarking activities are performed in a dialogue with competitors. As Czarnecki (1999, pp. 158, 254) pointed out, however, such a relationship does not happen overnight. Traditional barriers among competing companies must come down, and cooperation must be clearly demonstrated. Today’s companies realize that to get information, they also have to give information. Of course, for a successful implementation of change, it is important to build on the managerial foundation and culture rather than blindly adopting another organization’s specific process. Edwards Deming, sometimes referred to as the father of the Japanese postwar industrial revival, illustrated this in his well-known saying that “to copy is too risky, because you don’t understand why you are doing it. To adapt and not adopt is the way” (Keegan, 1998). Bogan and English (1994) pointed out that one company’s effective benchmarking process design may fail at another organization with different operating concerns.
EXTENSIONS OF BENCHMARKING Many authors (Keegan, 1998; O’Reagain & Keegan, 2000) strictly distinguish between benchmarking at the organizational (company, enterprise) level, benchmarking at the sector level, and, more generally, benchmarking of framework conditions. These extensions of benchmarking are the main focus in this section.
60
Char Count= 0
BENCHMARKING INTERNET
(Public) Sector Benchmarking Public sector benchmarking is a natural extension of company benchmarking. Similar principles can be applied to the set of enterprises that make up an industry. Sector benchmarking thus focuses on the factors of competitiveness, which are specific to a particular industry (O’Reagain & Keegan, 2000). The usual aim here is to monitor the key factors that determine the ability of the sector to respond to continually changing international competitiveness. During the past few decades, the notion of benchmarking extended also to a variety of nonindustrial fields, particularly to the public sector and especially to the social and welfare agencies in the health and education sector (Codling, 1996, p. 6). Of course, the goals of a public sector organization differ from those of a commercial company (O’Reagain & Keegan, 2000). For public sector organizations, benchmarking can serve as the surrogate for the competitive pressures of the market place by driving continuous improvement in value for money for taxpayers. Benchmarking can help public sector bodies to share best practices systematically with the private sector and with public bodies (e.g., government), as well as with other countries (Cabinet Office, 1999; Keegan, 1998, pp. 126– 128). A typical example of this type of benchmarking is the intra-European Union (EU) and EU—U.S. study on the performance of the national statistical offices. The comparisons of explicit benchmarks related to consideration of the time lag between data collection and the release of the economic statistics, which showed considerable lag within the EU statistical system (Statistics Sweden and Eurostat, 2001, p. 12). The study also showed, however, that in the EU international harmonization of economic statistics has been an important priority over the last decade. Further harmonization on a global level (guided by the United Nations, International Monetary Fund, and Organization for Economic Co-operation and Development [OECD]) is regarded as a much more important part of the statistical work in Europe compared with the United States, where complying with international standards has been of less importance. Recognition that award models derived for commercial organizations can be equally applied to public sector organizations has also increased in recent years. To provide a consistent approach to assessment, some authors suggest the use of the European Foundation for Quality Management (EFQM) model for business excellence (e.g., Cabinet Office, 1999; Keegan, 1998, pp. 45–47). Keegan (1998, pp. 126–130) also mentioned “Hybrid Benchmarking,” a technique that compares performance against others in both private and public sectors. Here the sources of information are similar work areas within the organization of the public sector (government departments and other public bodies) and the private sector.
Framework Conditions Benchmarking The benchmarking method traditionally has been applied at the organizational and sector levels to evaluate the performance of the management processes, but it has been extended to the identification and the evaluation of key
factors and structural conditions affecting the entire business environment. This extension is usually called the framework conditions benchmarking (Courcelle & De Vil, 2001, p. 2). Benchmarking of framework conditions typically applies to those key elements that affect the attractiveness of a region as a place to do business. These elements can be benchmarked on a national or regional level: macroeconomic environment, taxation, labor market, education, transportation, energy, environment, research and development, foreign trade, and direct investment, as well as information and communication technology (ICT) (Courcelle & De Vil, 2001; Keegan, 1998, pp. 20–21). Benchmarking of framework conditions therefore usually involves regions or states comparing the regulations, processes and policies that affect the business environment. Benchmarking of framework conditions usually provides an instrument for evaluating the efficiency of public policies and for identifying steps to improve them by reference to worldwide best practice (European Conference of Ministers of Transport, 2000, p. 12). The philosophy and practice of benchmarking are roughly similar in different domains of application. However, there is an important difference in the feasibility of using results in the case of the framework conditions benchmarking, because the political power to implement changes is often lacking. Therefore one of the most important elements of the benchmarking best practice may be missing.
Benchmarking on the (Inter)national Level In recent years, the notion of benchmarking has become extremely popular in the evaluation and comparison of countries. Theoretically, this type of benchmarking arises from benchmarking framework conditions; however, two specifics are worth noting. Standardized comparative indicators have existed for centuries, yet the explicit label of benchmarking strongly emerged for these comparisons only with the rise of the Internet and with recent comparisons of ICT developments. Often, such notion of benchmarking for country comparisons is relatively isolated from the rich theory and practice of benchmarking. Today, we can observe national reports based on simple comparisons of indicators that are referred to as benchmarking studies; these include Benchmarking the Framework Conditions: A Systematic Test for Belgium (Courcelle & De Vil, 2001) and Benchmarking Slovenia: Evaluation of Slovenia’s Competitiveness, Strengths and Weaknesses (Petrin et al., 2000). The essence of the benchmarking concept are evident in these studies because the indicators are compared with leading, comparable, or competitive countries. Similarly, within the European Union, the notion of benchmarking has become a standard term for comparisons of the member states. Typical examples of such research are the periodic benchmark studies on the gross domestic products per capita and per employed person. In a more advanced setting, benchmarking refers to a complex process of establishing and monitoring the standardized set of indicators of the information society (e.g., Conseil de l’Union europ´eenne, 2000).
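In its simplest form, such country benchmarking reduces to comparing each country's value of a standardized indicator with the best performer. The sketch below illustrates the arithmetic with purely hypothetical country names and figures (not real statistics).

# Hypothetical values of one standardized indicator, e.g., Internet users per
# 100 inhabitants; the country names and figures are placeholders, not data.
indicator = {"Country A": 52.0, "Country B": 34.5, "Country C": 61.2}

best = max(indicator.values())
for country, value in sorted(indicator.items(), key=lambda item: -item[1]):
    print(f"{country}: {value:.1f} (gap to best performer: {best - value:.1f})")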
BENCHMARKING INTERNET
BENCHMARKING INTERNET Internet and Company Benchmarking The Internet is rapidly being integrated into every facet of organizations’ overall strategy and operation. Many organizations have expanded their direct-to-consumer business model, employing multiple Internet strategies to customize customer information, track customer development trends and patterns, and increase customer savings as a means to build strong relationships with their customers (Best Practices, 2000, 2001; Martin, 1999). These changes have had a major impact on the benchmarking process as well. ICT systems are not only being benchmarked, they are also the key enablers of successful benchmarking. Current ICT systems permit users to generate, disseminate, analyze, and store vast amounts of information quickly and inexpensively. When poorly managed, however, ICT can annoy customers, slow cycle times, saddle the corporation with excessive costs, and damage productivity—all to the disadvantage of the organization (Best Practices, 2001; Bogan & English, 1994, pp. 171, 188). The benchmarks that provide the comparative insight into the role of the ICT are thus extremely important, particularly because the implementation of the ICT requires long-term strategic planning and significant investment. The process of the Internet benchmarking on the organizational level still lacks a set of universally recognized benchmarks. Nevertheless, on the basis of several sources (e.g., Benchmark Storage Innovations, 2002; Bogan & English, 1994; Haddad, 2002; Tardugno, DiPasquale, & Matthews, 2000) we can broadly classify these benchmarks into three categories. First, when benchmarking features and functionality, indicators usually measure the following: r
Characteristics of software and hardware (e.g., server, database, multimedia, networks, operating systems and utilities, security infrastructures, videoconference systems, corporate intranet and extranet, type of Internet connection and connection, download and upload speed) Purchase of new technology (e.g., share of new computers according to the number of all computers) Costs of technology and the organization’s budget for ICT Software and network security administration Computer system performance (processing speeds, central processing unit efficiency, CD-ROM drive access speeds, performance analysis of networking and communications systems, reliability and performance modeling of software-based systems, error rates)
Second, while exploring the measures related to the use of ICT, an organization can measure the processes related to the outside environment, including its clients (customer-oriented benchmarks), or it can evaluate the use of ICTs within the organization (employee-oriented benchmarks). Customer-oriented benchmarks reflect the following:
- The extent to which ICTs have been incorporated into economic activity, such as use of the Internet in a company's transactions (i.e., electronic commerce)
- Types of e-business processes (such as Web sites with no transactions, Web-based e-commerce, electronic marketplaces, etc.)
- The characteristics of strategic information technology projects (e.g., mobile or wireless commerce offerings; electronic supply chains; participation in electronic marketplaces; the organization's Web site capacity, performance, and usability; customer service and support infrastructure; creation of "localized" Web sites for customers in other countries, etc.)

Employee-oriented benchmarks typically examine the following:

- Key applications running in the organization
- Number of employees that use ICTs and the technical skills involved
- Level of training provided for employees to use ICTs effectively
- Information system indicators reflecting organizational learning and continuous improvement
- Diffusion of telework
- New product development times
- Employee suggestion and process improvement rates
- Use of software (e.g., databases and telecommunication networks) by employees
- ICT usability
Third, when an organization explores the benefits attributed to the employment of the Internet and other ICTs, it typically observes the following benchmarks:

- Increased efficiency, productivity, and performance of the organization
- Improved workstation comfort and job satisfaction
- Fewer problems in the production stage
- Broader customer base in existing and international markets
- More effective communication with customers, employees, and suppliers
- Fewer customer complaints
- Increased customer loyalty
- Better financial management
- Better integration of business processes
Measurement instruments and indicators used in the benchmarking process depend on the type of organization, its communication and business processes, the social context within which it operates, the characteristics of the employees and clients, and so on. Consequently, when evaluating features, functionality, use, and benefits of ICT, different practitioners focus on different benchmarks. In addition, most of the benchmarks described here can be measured from different perspectives; for example, a practitioner may concentrate on extent, intensity, quality, efficiency, mode of use, familiarity, or readiness of a certain component.
Table 1 Internet Benchmarks (each indicator is published by one or more of the sources A–H listed at the end of the table)

ICT Infrastructure
- Number of Internet hosts
- Percentage of computers connected to the Internet
- Households with access to the Internet
- Wireless Internet access
- Number of Web sites
- Price and quality of Internet connection
- Cable modem lines per 100 inhabitants
- Digital subscriber lines (DSL) per 100 inhabitants

Network Use
- Percentage of population that (regularly) uses the Internet
- Internet subscribers per 100 inhabitants
- Cable modem, DSL, and Internet service provider (ISP) dial-up subscribers
- Hours spent online per week
- Mobile and fixed Internet users
- Primary uses of the Internet
- Primary place of access
- Perception of broadband Internet access

Secure Networks and Smartcards
- Number of (secure) Web servers per million inhabitants
- Percentage of Internet users with security problems

Faster Internet for Researchers and Students
- Speed of interconnections within national education networks

E-commerce
- Internet access costs
- Percentage of companies that buy and sell over the Internet
- Percentage of users ordering over the Internet
- Business-to-consumer e-commerce transactions (% of gross domestic product)
- Average annual e-commerce/Web spending per buyer
- Internet sales in the retail sector (%)
- Consumer Internet purchases by product
- Payment methods
- Future e-commerce plans
- Business intranet sophistication
- Online ad placement by type of site
- Internet advertising revenues—source comparison
- Domestic venture capital investment in e-commerce
- Competition in dot-com market
- Prevalence of Internet startups
- Use of Internet-based payment systems
- Sophistication of online marketing
- Price as barrier of e-commerce

Networked Learning
- Computers connected to Internet per 100 pupils
- Computers with high-speed connections per 100 pupils
- Internet access in schools
- Teachers using the Internet for noncomputing teaching

Working in the Knowledge-Based Economy
- Percentage of workforce using telework
- Computer workers as a percentage of total employment

Participation for All
- Number of public Internet points (PIAP) per 1,000 inhabitants
- Availability of public access to the Internet
- Central government Web sites that conform to the Web Accessibility Initiative (WAI)

Government Online
- Percentage of basic public services available online
- Public use of government online services
- Percentage of online public procurement
- Government effectiveness in promoting the use of information and communication technology
- Business Internet-based interactions with government

Health Online
- Percentage of health professionals with Internet access
- Use of different Web content by health professionals

Sources: A = European Information Technology Observatory, 10th ed. (2002); B = Benchmarking eEurope (2002); C = The Global IT Report (Kirkman, Cornelius, Sachs, & Schwab, 2002); D = Organization for Economic Co-operation and Development (2001, 2001a, 2002) and Pattinson, Montagnier, & Moussiegt (2000); E = NUA (2002); F = International Data Corporation (2002c, 2002d); G = International Telecommunications Union (2001, 2001a); H = Benchmarking Belgium (Courcelle & De Vil, 2001).
Internet and Sector Benchmarking
Sector benchmarking focuses on the factors of competitiveness specific to a particular industry. Because the Internet's impact is powerful, and sometimes unclear or even contradictory, it is particularly important that the Internet is benchmarked across whole sectors. The need for this practice is especially crucial in ICT-related sectors. Typically, telecommunication companies have used benchmarking to evaluate digital versus analog technology. Benchmarks included one-time costs, maintenance costs per line, minutes of downtime per line per month, and various performance measures for processing time and failures (Bogan & English, 1994, p. 171). The Internet is being benchmarked beyond the ICT sector, however. New technologies, especially Internet-based information and service delivery, offer immense possibilities to meet a range of sector objectives. If appropriately deployed, ICT can help facilitate crucial economic and social development objectives in all sectors (World Bank Group, 2001, p. 67).
Internet and Framework Conditions Benchmarking
Framework conditions benchmarking focuses on improving the external environment in which organizations operate. One of the key elements affecting the national or regional business environment is the presence and nature of ICT. Lanvin (2002, p. xi) thus raised an important question: whether societies with different levels of development can turn the ICT revolution into an instrument that reduces the risk of marginalization and alleviates poverty. The realities in this broad and complex area require a clear assessment of how well equipped a region or country
is to face the challenges of the information-driven economy (Lanvin, 2002, p. xi). So, before action is taken, the so-called digital divide between less developed countries and the most developed countries or regions must be estimated. In other words, only when standardized indicators are available can the challenge of bridging the global digital divide be addressed.
INTERNATIONAL COMPARISONS
In this section, the key Internet indicators related to country comparisons are presented, together with their methodological specifics. The international organizations and projects that collect or present these data are briefly introduced.
Standardized Internet Benchmarks
From a technological perspective, the Internet is a global network of computers with a common communication protocol. The corresponding social consequences of this phenomenon are extremely complex, however, so we cannot avoid benchmarks that relate not only to the Internet but also to other ICTs and to society. Of course, the line between the Internet and more general ICT benchmarks may be relatively vague. We limit the discussion here only to those benchmarks that are closely linked to the Internet. In recent years, there has been a great deal of conceptual discussion about measuring the Internet and the information society. The rapidly changing phenomena in this area have challenged the process of scientific production, particularly in the social sciences, as well as the production of official statistical indicators. In the last few years, however, the key Internet-related benchmarks have converged to form relatively simple and commonsense standardized indicators (Table 1). This simplification
corresponds to a relative loss of enthusiasm for the so-called new economy, information society, and new business models that has recently occurred. The quest for standardized indicators for Internet benchmarking has perhaps been strongest in the EU. In part, this is because the EU's official and ambitious goal is to surpass the United States within the next decade as the technologically most advanced society. In addition, the EU urgently needs valid comparisons among its 15 members as well as the 10 countries that will join in 2004. In addition to official EU documents regulating the standards for information society benchmarks (Benchmarking eEurope, 2002), a variety of research projects have emerged, one of the most comprehensive of which is the EU research program Statistical Indicators Benchmarking the Information Society (SIBIS, 2002). The conceptual framework for statistical measurements used by SIBIS was extensively developed for all key areas of ICT-related phenomena—from e-security and e-commerce to e-learning, e-health, and e-government. Currently, of course, only a small portion of the proposed indicators is being collected. Table 1 roughly summarizes only the key and most often applied benchmarks in the field of Internet-related country comparisons. In compiling the list, we sought a balance between Internet and related ICT benchmarks and tried to avoid more general ICT indicators, such as those from the broad field of telecommunications. The sources (A–H) in Table 1 relate to the selected organizations that have published these data. Of course, the work of many other organizations was omitted because of space limitations and the scope of this chapter. Only the key international bodies and projects that systematically collect and present Internet benchmarks are listed. In addition, we also included two examples of private companies: NUA (http://www.nua.com), which was one of the first to collect secondary data on worldwide Internet users (source E), and the International Data Corporation (IDC; http://www.idc.com), the leading global consulting agency specializing in international ICT studies (source F). As source H, an example of the benchmarks included in a typical national report on ICT (in this case, Belgium) is presented (Courcelle & De Vil, 2001). We now briefly describe the sources of the data in Table 1.
European Information Technology Observatory (EITO)
This broad European initiative has as its objective the provision of an extensive overview of the European market for ICT within a global perspective. EITO publishes a yearbook that presents the most comprehensive and up-to-date data about the ICT market in Europe, together with global benchmarks, particularly those related to the United States and Japan (EITO, 2002). The majority of benchmarks that measure financial aspects (e.g., ICT investments) rely on data gathered by IDC. From the beginning, the EITO has been strongly supported by the European Commission, Directorate General Enterprise and Information Society, and since 1995 also by the Directorate for Science, Technology and Industry of the OECD in Paris (EITO, 2002). The annual EITO reports include the key benchmarks as well as in-depth discussion of contemporary ICT issues.
Benchmarking eEurope
This is the official European Union benchmarking project in the field of ICT, begun in November 2000, when the European Council identified 23 indicators to benchmark the progress of the eEurope Action Plan. The indicators measure many aspects of ICTs, including e-commerce, e-government, e-security, and e-education. The facts and figures from this benchmarking program will be used to evaluate the net impact of eEurope and the information society, to show the current levels of activity in key areas, and to shape future policies by informing policy makers (Benchmarking eEurope, 2002).
The Global Information Technology Report (GITR) 2001–2002
Readiness for the Networked World is a project supported by the Information for Development Program (infoDev, http://www.infodev.org), a multidonor program administered by the World Bank Group (Lanvin, 2002, p. xi; World Bank Group, 2001, p. iii). At the core of the GITR is the Networked Readiness Index, a major comparative assessment of countries' capacity to exploit the opportunities offered by ICTs. The Networked Readiness Index provides a summary measure that ranks 75 countries on their relative ability to leverage their ICT networks.
Organization for Economic Co-operation and Development
The OECD groups 30 countries sharing a commitment to democratic government and the market economy. With active relationships with some 70 other countries, nongovernmental organizations, and civil societies, the OECD has a global reach. Best known for its country surveys and reviews, its work covers economic and social issues from macroeconomics to trade, education, development, and science and innovation. The OECD produces internationally agreed upon instruments to promote rules of the game in areas in which multilateral agreement is necessary for individual countries to make comparisons and progress in a global economy. Within the OECD, statistical analysis of science, technology, and industry is also conducted, together with the development of international statistical standards for this field. Among other responsibilities, the OECD's work in this area seeks ways to examine and measure advances in science and technology and reviews recent developments in information and communication technologies (OECD, n.d.). Several internationally comparable indicators are formed within the field of the information economy, such as resources and infrastructure for the information economy, the diffusion of Internet technologies and electronic commerce, and ICTs (software and hardware). The OECD also established the Committee for Information, Computer and Communications Policy (ICCP), which addresses issues arising from the digital economy, the developing global information infrastructure, and the evolution toward a global information society. In 2002, the OECD published the OECD Information Technology Outlook, which provides a comprehensive analysis of ICTs in the economy, ICT globalization, the software sector, e-commerce, ICT skills, the digital divide, technology trends, and information technology policies.
NUA Internet Surveys
As a global resource on Internet trends, demographics, and statistics, NUA offers news and analysis updated weekly. It compiles and publishes Internet-related survey information from throughout the world. NUA is particularly known for its unique "How Many Online?" feature, which offers an estimate of the global Internet user population based on extensive examination of surveys and reports from around the world (NUA, n.d.). The value and importance of this work has been rapidly diminishing, however, as more reliable and standardized indicators have begun to appear.
International Data Corporation (IDC)
IDC is a commercial company and the world's leading provider of technology intelligence, industry analysis, market data, and strategic and tactical guidance to builders, providers, and users of IT (IDC, n.d.). Thus, IDC is perhaps the most reliable global source for the number of personal computers sold in a certain country or region. In addition to individual research projects and more than 300 continuous information services, IDC also provides a specific Information Society Index (ISI), which is based on four infrastructure categories: computer, information, Internet, and social infrastructures. The ISI is designed for use by governments to develop national programs that will stimulate economic and social development. It is also a tool for IT, dot-com, asset management, and telecommunications companies with global ambitions to assess the market potential of the various regions and countries of the world (IDC, 2002a, 2002b).
International Telecommunication Union (ITU)
Headquartered in Geneva, Switzerland, ITU is an international organization within the United Nations System in which governments and the private sector coordinate global telecom networks and services. Established in 1865, ITU is one of the world's oldest international organizations. ITU's membership includes almost all countries and more than 500 private members from the telecommunication, broadcasting, and IT sectors. ITU regularly publishes key telecommunication indicators, including the Internet-related benchmarks (ITU, n.d.).
Benchmarking Belgium
Benchmarking Belgium (Courcelle & De Vil, 2001) is a typical national ICT benchmarking study with the goal of comparing ICT developments in Belgium with comparable countries.
Other organizations also provide international Internet-related benchmark indicators. The indicators are usually similar to those already covered in Table 1, however. A brief listing of the most important of these follows.
The Human Development Report, commissioned by the United Nations Development Programme (UNDP), covers more than 100 countries annually. In 2001, the report was titled Making New Technologies Work for Human Development. It presents statistical cross-country comparisons that have been built up through the cooperation of many organizations (e.g., several UN agencies, the OECD, the ITU, the World Bank). The report contains many composite indexes, such as the technology achievement index, designed to capture the performance of countries in creating and diffusing technology and in building a human skills base (UNDP, 2001).
The United Nations Industrial Development Organization (UNIDO) benchmarked a set of industrial performance and capability indicators and ranked 87 countries. The Industrial Development Report 2002/2003 is intended to help policy makers, business communities, and support institutions assess and benchmark the performance of their national industries and analyze their key drivers (UNIDO, 2002).
Benchmarking is also relevant to the United Nations Educational, Scientific and Cultural Organization (UNESCO), particularly within the field of higher education. UNESCO has established the Observatory of the Information Society with the objective of raising awareness of the constant evolution of ethical, legal, and societal challenges brought about by new technologies. It aims to become a public service that provides updated information on the evolution of the information society at the national and international levels (WebWorld, 2002).
In 2001, the World Bank Group gathered data that allow comparisons for almost all countries, available in the World Development Indicators database. Also included are indicators that measure infrastructure and access, expenditures, and the business and government environment in relation to ICT.
In the United Kingdom, the Department of Trade and Industry (DTI) has sponsored research on levels of ownership, usage, and understanding of ICTs by companies of all sizes and within all sectors in benchmarked countries. The report Business in the Information Age benchmarks businesses in the United Kingdom against those in several European countries, the United States, Canada, Japan, and Australia (DTI, 2002). Also in the United Kingdom, the Office of Telecommunications (2002) issued the International Benchmarking Study of Internet Access, covering both basic dial-up access and broadband services (i.e., DSL and cable modem).
The number of institutions that publish Internet-related measurements at the international level grows each year. It is hoped that this will also lead to the accelerated establishment of standardized instruments for statistical comparisons.
Technical Measurements
The benchmarks presented in Table 1 include almost none of the performance metrics of ICT infrastructure, although these are extremely important Internet benchmarks. The technical benchmarks related to ICT infrastructure predominantly include specific information on computers. Also relevant are the characteristics of modems and the type of Internet connection. Here, some of the most interesting benchmarks also overlap with those already outlined in the Internet and Company Benchmarking section (i.e., the type of software and hardware). One of the central devices for Internet technical measurement is the Internet host, where the measurements relate to the corresponding speed, access, stability, and trace-route. The speed is usually expressed as the amount of information transmitted per second. Besides the technical characteristics of modems and computers, the processing
speed is determined by the network speed between the hosts, which depends on the Internet service providers' and national communication infrastructures. In particular, the capacity of the total national communication links is often used as an important benchmark for country comparisons. Access and stability are related concepts; stability is checked at the local level and is defined in terms of the host's interruptions. Access stands for stability at the global level; it tells us how accessible the host is from one or more points in the Internet network. Also important are trace-route reports, in which we can observe the path that data packets travel as they leave the user's computer system. More direct routes to the key international communication nodes may indicate better national infrastructure.
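As a rough illustration of how such measurements can be automated, the sketch below (our own illustrative example, not a standardized instrument from this chapter; the host name, port, and sample size are arbitrary assumptions) times repeated TCP connections to a host. Failed attempts point to accessibility or stability problems, and the connection times give a crude approximation of network latency.

```python
# Illustrative sketch: probe a host's reachability and round-trip latency
# by timing TCP connections, as a rough stand-in for the ping and
# trace-route measurements described above.

import socket
import time

def probe(host, port=80, attempts=5, timeout=2.0):
    """Return (number of successful connections, mean latency in ms)."""
    latencies = []
    for _ in range(attempts):
        start = time.perf_counter()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                latencies.append((time.perf_counter() - start) * 1000.0)
        except OSError:
            pass  # unreachable or timed out: counts against stability
    mean = sum(latencies) / len(latencies) if latencies else float("nan")
    return len(latencies), mean

successes, mean_ms = probe("www.example.com", attempts=5)
print(f"{successes} of 5 connections succeeded, mean latency {mean_ms:.1f} ms")
```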
THE METHODOLOGICAL PROBLEMS
Of course, because of their newness, all Internet benchmarks are relatively unstable and typically face severe methodological problems. This is understandable, because these phenomena emerged relatively recently, so little time has been available for the discussion of methodological issues. Often, they also exhibit extremely high annual growth rates, measured in tens of percent. In addition, new technological improvements continuously change the nature of these phenomena and generate a permanent quest for new indicators. As a consequence, in the mid-1990s this rapid development almost entirely eliminated official statistics from this area. Instead, private consulting agencies took the lead in ICT measurements. Thus, for example, IDC produces many of the key internationally comparable data on the extent and structure of the ICT sectors. In the last few years, official statistics and other noncommercial entities have taken some important steps toward compatibility. The activities within the OECD, particularly in the Scandinavian countries, Australia, Canada, and the United States, have been intensive. The United States took the lead in many respects, which was due not so much to early Internet adoption as to the early critical mass achieved in that country. In the United States, there were already millions of Internet users by the mid-1990s, a fact that many commercial organizations considered worthy of research. The U.S. government also reacted promptly, so, for example, in addition to numerous commercial measurements, official U.S. Census Bureau figures are available for business-to-consumer and business-to-business sales from the end of the 1990s. The EU, in comparison, is only in the process of establishing these measurements for 2003. With respect to more sociological benchmarks, the National Telecommunications and Information Administration (2002) conducted pioneering research on the digital divide. The Pew Research Center (n.d.), a U.S. nonprofit organization, conducts important research that sets standards for sociological Internet benchmarks. In the remainder of this section, we discuss some typical methodological problems related to Internet benchmarks. The discussion is limited to the two most popular benchmarks in the field of Internet-related national performance: the number of Internet users and the number of Internet hosts. We believe that these methodological problems are typical of the other indicators listed in Table 1 as well.
Number of Internet Users
The number of Internet users heavily depends on the definition applied, an issue for which three methodological problems can be cited.
1. The Specification of Time
When defining the Internet user, usage during the last three months is often applied (NUA, 2002). Even more often, the Internet user is defined by simple self-classification, in which a question such as "Do you currently use the Internet?" is asked on a survey. Experience shows that a positive answer to this question results in about 3–5% overestimation compared with questions asking about monthly use (e.g., people who claim to use the Internet on a monthly basis). Typically, usage during the last three months reveals up to a third more users compared with the category of monthly users. In the case of weekly users, which is another important benchmark, the figure shrinks by about one fifth compared with monthly Internet users. A huge variation thus exists in the number of Internet users only because of the specified frequency of usage. In addition, when asking about Internet usage for each location separately (e.g., home, school, job), the figure increases considerably compared with asking a general question that disregards location. The timing of the survey also has a considerable impact: February figures can differ dramatically from the November figures of the same year. Unfortunately, the explicit definitions (e.g., wording, timing) applied are typically not clearly stated when numbers of Internet users are published.
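To see how much these definitional choices matter, the short calculation below applies the approximate ratios just described to a hypothetical country with one million monthly users; the specific figures are our own illustration, not data from the chapter.

```python
# Rough illustration of the ranges mentioned above, applied to a country
# with 1,000,000 monthly Internet users (hypothetical figure).

monthly = 1_000_000
current_self_classified = monthly * 1.04   # roughly 3-5% above monthly users
last_three_months = monthly * (4 / 3)      # up to a third more than monthly
weekly = monthly * (1 - 1 / 5)             # about one fifth fewer than monthly

print(round(current_self_classified), round(last_three_months), round(weekly))
# -> 1040000 1333333 800000
# Depending on the definition, the "number of Internet users" for the same
# country ranges from roughly 800,000 to more than 1,300,000.
```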
2. The Base and Denominator for Calculating Percentages
The number of Internet users is often observed as a share within the total population. This may be a rather unfair comparison because of populations' varying age structures and may produce artificially low figures for certain countries. Instead, often only the category 18+ is included in research, particularly in the United States. In Europe, users older than 15 years (15+) have become the standard population. The population aged 15 to 65 is also used as a basis for calculations, whereas media studies usually target the population aged 12 to 65 or 10 to 75. For a country with Internet penetration reaching about a quarter of the population, discrepancies arising from varying target populations (e.g., the basis in the denominator) vary dramatically, from the lowest Internet penetration of 20% in the population 15+ to the highest penetration of 30% in the population aged 15 to 65.
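A short calculation makes the denominator effect concrete. The figures below are hypothetical, chosen by us for illustration; they simply show how the same user count yields penetration rates ranging from roughly 20% to 30% depending on the population base.

```python
# Hypothetical example: the same number of Internet users produces very
# different "penetration" figures depending on the population base used
# as the denominator.

users = 420_000  # estimated Internet users (assumed to be aged 15-65)

population_bases = {  # hypothetical country, in persons
    "total population": 2_000_000,
    "population 15+": 1_700_000,
    "population 15-65": 1_400_000,
    "population 12-65": 1_500_000,
}

for base, size in population_bases.items():
    print(f"{base}: {users / size:.0%}")
# -> 21%, 25%, 30%, and 28%, respectively
```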
3. Internet Services Used
When asking about Internet usage, typically only the Internet is mentioned in the survey question. Increasingly often, the definition also explicitly includes the use of e-mail. However, here we instantly face the problem of non-Internet-based e-mail systems. Some other definitions also include Wireless Application Protocol and other
mobile Internet access methods as well as WebTV. No common international standards have been accepted. An attempt to establish such guidelines may be jeopardized by the emerging and unpredictable devices that will enable access to the Internet. In the future, the definitions will have to become much more complex, so the potential danger of improper comparisons will also increase. The development of a standardized survey question is thus extremely important. In addition to these three problems, we should add that the number of Internet users is typically obtained from a representative face-to-face or telephone survey, which creates an additional and complex set of methodological problems related to the quality of survey data (sample design issues, nonresponse problems, etc.). Another approach to estimating the number of Internet users is through models in which the number of Internet hosts and other socioeconomic parameters (i.e., educational statistics, gross domestic product) come as an input. This may be a problematic practice. A much more promising approach is so-called PC-meter measurement, in which a representative sample of Internet users is monitored by installing tracking software that records a person's Internet-related activities (e.g., NielsenNetratings, MediaMatrix). Despite serious methodological problems—particularly due to non-household-PC access (i.e., business, school) and non-computer access (i.e., mobile)—this approach seems to be one of the most promising. The key advantage here is that it is not based on a survey question but on real-time observations. Another advantage is the convenience arising from the fact that the leading PC-meter companies already perform these measurements on a global level.
Number of Internet Hosts
The number of Internet hosts is perhaps the most commonly used Internet benchmark. The reason for this is the relative ease of its calculation and the regular frequency of these measurements. Network Wizards (http://www.nw.com/) and Réseaux IP Européens (RIPE) (http://www.ripe.net/) are typical examples of the organizations that gather these kinds of statistics. There are severe methodological problems related to these measurements, however.
Device
The term "host" usually relates to a device that is linked to the Internet and potentially offers some content to the network. It also relates to a device with which users access the Internet. During an Internet session, each device has its own Internet protocol (IP) number. The device is typically a computer; however, it can also be a modem used for dial-up access. In the future, other devices—mobile phones, televisions, and perhaps even home appliances such as refrigerators—will also have IP numbers. National differences in the structure of those devices may pose severe problems for international comparisons. Some other national specifics may also have an impact, such as a relatively large number of IP numbers partitioned on one server.
Dial-Up Modems
The most critical type of host device is a dial-up modem, which usually serves about 100 users (e.g., households or companies) monthly. As a consequence, in each session the dial-up user connects to the Internet through a different and randomly selected modem (IP number). In countries with larger numbers of dial-up users, the host count may therefore underestimate the reach of the Internet.
Proxy Servers
In businesses and organizations, one computer or server may be used as the proxy host for Internet access for all computers within the local network. All the users (e.g., employees) may appear to use the same host number. Countries with a large number of such local networks may therefore appear to have lower Internet penetration than they actually do.
Domain Problems
In host count statistics, all the hosts under a country's national domain are attributed to that country. Countries with restrictive domain-registration policies force their subjects to register their domains abroad, however. Consequently, a considerable number of hosts may be excluded from the national domain count. The Slovenian example is typical. Until 2003, only a company's name and trademark could receive the national domain name ".si," so up to one third of all hosts are registered under ".com," ".net," and other domains. It is true that with some additional procedures the hosts can be reallocated to the proper country, as is typically done for the OECD. This requires additional resources, however, and is not available in the original host count data.
Technical Problems
The host count measurements are basically performed with a method called "pinging," in which a computer signal is sent to a certain host number. Because of increased security protection for local networks, the methodologies must be continually adapted. Thus, for example, a few years ago Network Wizards (NW) had to break the original time series of its measurements with a completely new measurement strategy. The differences between RIPE and NW are also considerable for certain countries. Local measurements can be somewhat helpful here; however, the regional or national partner may not report regularly, so a large dropout rate may result, as was often the case with RIPE data for Italy. There is also the problem of global commercial hosting, in which businesses from one country run their Web activities in the most convenient commercial space found in another country. In the future, the host count measurement will have to upgrade measurement techniques continuously, and there will always remain certain limitations when inferring national Internet development from host count statistics. These methodological problems related to Internet users and hosts also affect other benchmarks listed in Table 1. Thus, a general warning should be raised when using this kind of data. In particular, the methodological description must be closely observed. Despite severe methodological problems, the national benchmarks in Table 1 offer reasonable and consistent
results. Of course, with certain countries additional factors must be considered in the interpretation of the data. In the future, because of the increased need for standardized, stable, and longitudinal benchmarks, we can expect that at least some of them will become standard. Another reason for this is that many phenomena have already profiled themselves and settled into a stable and standardized form. For others, and particularly for new methods, users may have to struggle through a certain period of ambiguity during which no standardized or official indicators are available.

Figure 1: Internet penetration in Time 1 and in Time 2.
THE DIMENSION OF TIME
Benchmark comparisons are usually performed within a time framework, so this benchmarking dimension is of great significance. Observing benchmarks through time can be extremely problematic because straightforward comparisons of fixed benchmarks may not suffice in a rapidly changing environment. As an example, an increase in Internet penetration from 5% at time T1 to 10% at time T2 for Country A demonstrates the same absolute increase in penetration as that experienced by Country B, whose penetration rises from 15% (T1) to 20% (T2). In an absolute sense, one could say there had been an identical increase in Internet penetration (5 percentage points). Similarly, the gap between the countries remains the same (15 − 5 = 10 points at time T1 and 20 − 10 = 10 points at time T2). In a relative sense, however, the increase in Country A from T1 to T2 was considerably higher: (10 − 5)/5 = 100%, compared with (20 − 15)/15 = 33% in Country B. Similarly, the relative difference between the countries shrank dramatically, from (15 − 5)/15 = 67% at T1 to (20 − 10)/20 = 50% at T2. Correspondingly, at T1 Country A had reached 5/15 = 33% of the Internet penetration of Country B, whereas at T2 it had already reached 10/20 = 50% of the penetration in Country B. It is only a matter of subjective interpretation whether the difference in Internet penetration between the two countries remained the same (a constant gap of 10 percentage points) or decreased (Country A reaching 50% of the penetration of Country B at T2 instead of only 33% at T1). Paradoxically, as will be shown later, the gap between these two countries from T1 to T2 most likely increased. Of course, these differences may seem trivial because they refer to the usual statistical paradoxes, which can be dealt with through a clear conceptual approach to what to benchmark, together with some commonsense judgment. It is much more difficult to comprehend and express the entire time dimension of the comparison in this example. The fact is that all the information regarding the time lag between the countries cannot be deduced directly from these data (Figure 1). To evaluate the entire time dimension, one would need the diffusion pattern of Internet penetration or at least some assumptions about it. Typically, we assume that at T2 Country A will follow the pattern of Country B (Sicherl, 2001). For Figure 1 we could thus deduce, using a simple linear extrapolation, that Country A would need 2 × (T2 − T1) time units (i.e., years) to reach the penetration of Country B at T2,
which is usually labeled the time distance between the two countries. It is also possible, however, that at T1 Country A will need, for example, 3 years to reach the penetration of Country B at T1, whereas at T2, Country A may need 5 years to reach Country B's penetration at T2. Such an increase in lag time is expected for Internet penetration because its growth is much higher during the introductory period. Typically, much less time is needed for an increase in penetration from 5 to 10% compared with an increase from 55 to 60%. The opposite may also be true, however, as the differences in time may shrink from 3 years at T1 to 2 years at T2; it depends on the overall pattern of the Internet diffusion process. Figure 2 demonstrates these relationships for the case of the two-dimensional presentation of host density (the number of Internet hosts per 10,000 inhabitants) for Slovenia and the EU average (1995–2001). We expressed the Slovenian relative host density as a percentage of the density reached in the EU as the first dimension. The other dimension expresses the differences in terms of time distance, that is, the number of years Slovenia would need to catch up to the EU average. The method of time distance, which extrapolates the existing growth into the future, was applied here (Sicherl, 2001). In July 1995, Slovenia reached almost 40% of the EU average and in January 1997, it reached almost 90%, whereas in January 2001, it returned to 40% of the EU average. On the other hand, the corresponding time lag increased from about 1 year in 1995 to more than 3 years in 2001. The same figure for the relative benchmark (e.g., 40% in both 1995 and 2001) thus has a dramatically different interpretation in terms of the time distance (e.g., 1 year vs. 3 years).
Figure 2: Host density in Slovenia and EU (1995–2002). Source: Sicherl (2001)
The discrepancy can be explained by the fact that it was much easier to achieve growth in 1995, when yearly growth rates in host density were over 100% and the EU average was around 20 hosts per 10,000 inhabitants, than in 2001, when the yearly growth rate was only around 10% or even stagnating and the average host density was 40 hosts per 10,000 inhabitants. Obviously, Internet benchmarks should be observed within the framework of changing penetration patterns. Any benchmark that relies only on comparisons of absolute or relative achievements may not be exhaustive in explaining the phenomena. It can even be directly misleading. This example illustrates that benchmark researchers must take the time dimension into careful consideration.
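The absolute, relative, and time-distance views of the Country A versus Country B example can be reproduced with a few lines of code. The sketch below is illustrative only; the function name and the five-year interval between T1 and T2 are our assumptions, and the linear extrapolation is the simple case discussed above rather than the full method of Sicherl (2001).

```python
# Compare two countries' Internet penetration (in %) at times T1 and T2
# in absolute terms, relative terms, and as a linearly extrapolated
# time distance (years Country A needs to reach Country B's T2 level).

def compare(a_t1, a_t2, b_t1, b_t2, years_between):
    absolute_gap = (b_t1 - a_t1, b_t2 - a_t2)           # 10 and 10 points
    relative_gap = ((b_t1 - a_t1) / b_t1,                # 10/15, about 67%
                    (b_t2 - a_t2) / b_t2)                # 10/20 = 50%
    a_growth_per_year = (a_t2 - a_t1) / years_between    # linear assumption
    time_distance = (b_t2 - a_t2) / a_growth_per_year    # years for A to catch up
    return absolute_gap, relative_gap, time_distance

gaps, rel, td = compare(a_t1=5, a_t2=10, b_t1=15, b_t2=20, years_between=5)
print(gaps, tuple(round(r, 2) for r in rel), td)
# -> (10, 10) (0.67, 0.5) 10.0
# The absolute gap is unchanged and the relative gap shrinks, yet Country A
# still needs 2 x (T2 - T1) years to reach Country B's penetration at T2.
```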
CONCLUSION
The basic concept of benchmarking relates to comparisons of performance indicators with a common reference point. Historically, such comparisons have been performed since the time of the ancient Egyptians. The systematic collection of benchmarks has also existed since the early days of the competitive economy, when companies compared their business practices with those of competitors. The explicit notion of benchmarking arose only in the late 1970s with the pioneering work of Xerox, however, and interest in the field exploded in the 1990s. Today benchmarking is an established discipline with professional associations, awards, codes of conduct, conferences, journals, and textbooks, and companies around the world are involved in the practice. There is no doubt that modern benchmarking arose from a business environment where all the basic methodology and standard procedures were developed. However, during the past years the notion of benchmarking has expanded to sector benchmarking as well as to the governmental and nonprofit sectors. In the last few years it has also become popular for national comparisons in the field of ICT. A number of international studies have been labeled as benchmarking, although little benchmarking theory was actually applied (Courcelle & De Vil, 2001; Petrin et al., 2000). The EU adopted benchmarking for ICT comparisons of member and candidate nations in a formal manner. In this case, statistical data are used for systematic year-by-year comparisons according to 23 Internet benchmarks. The speed of change in the field of ICT creates severe methodological problems for Internet benchmarks. With the dramatic rise of the Internet in the mid-1990s, only private companies had sufficient flexibility to provide up-to-date ICT indicators. As a consequence, even today, data from private agencies are often used for international ICT comparisons. In particular, this holds true for the scope and structure of ICT spending. Only in recent years have official statistics and other international bodies recovered from this lag and presented their own methodological outlines. Here, the work within the EU and particularly within the OECD should be emphasized.
The contemporary Internet indicators used for international comparisons of countries' performance have stabilized only in recent years. After many theoretical discussions about the complexity of the information society, relatively simple indicators became the standards for national ICT benchmarking. Among the key indicators in this field are Internet penetration, host density, and the share of Internet transactions among all commercial transactions of consumers and companies, as well as within government–citizen relations.
GLOSSARY
Benchmark A reference point, or a unit of measurement, for making comparisons. A benchmark is a criterion for success, an indicator of the extent to which an organization achieves the targets and goals defined for it.
Benchmarking A process whereby a group of organizations, usually in the same or similar domains, compare their performance on a number of indicators. The aim of the exercise is for participants to learn from each other and to identify good practice with a view toward improving performance in the long run.
CROSS REFERENCES
See Developing Nations; Feasibility of Global E-business Projects; Global Issues; Information Quality in Internet and E-business Environments; Internet Literacy; Internet Navigation (Basics, Services, and Portals); Web Quality of Service.
REFERENCES Adams Associates (2002). Six Sigma plus: Black belt training. Retrieved June 22, 2002, from http://www. adamssixsigma.com American Productivity and Quality Center (n.d.). Retrieved May 28, 2002, from http://www.apqc.org Benchmark Storage Innovations (2002). Global IT Strategies 2001. Retrieved June 22, 2002, from http://www. informationweek.com/benchmark/globalIT.htm Benchmarking eEurope (2002). Retrieved June 22, 2002, from http://europa.eu.int/information society/eeurope/ benchmarking/index en.htm Benchmarking Exchange—Benchnet (2002). Benchmarking management report. Retrieved June 22, 2002, from http://66.124.245.170/surveys/bmsurvey/results. cfm?CFID = 199829&CFTOKEN = 38067072 Benchmarking Network (n.d.). eBenchmarking newsletter (TBE newsletter archives). Retrieved June 22, 2002, from http: // 66.124.245.170 / TBE Members2 / news letters/index.cfm Best Practices (2000). BestPracticeDatabase.com: Internet & e-business. Retrieved May 28, 2002, from http://www.bestpracticedatabase.com/subjects/internet ebusiness.htm Best Practices (2001). Online report summary. Driving business through the internet: Web-based sales,
marketing and service. Retrieved May 28, 2002, from http://www.benchmarkingreports.com/ salesandmarketing/sm140 ebusiness.asp Bogan, C. E., & English, M. J. (1994). Benchmarking for best practices: Winning through innovative adaptation. New York: McGraw-Hill. Cabinet Office (U.K.) (1999). Public sector excellence programme. Retrieved June 11, 2002, from http://www. cabinet-office.gov.uk/eeg/1999/benchmarking.htm Camp, R. C. (1995). Business process benchmarking: Finding and implementing best practices. Milwaukee, WI: ASQ Quality Press. Camp, R. C. (1989). Benchmarking: The search for industry best practices that lead to superior performance. Milwaukee, WI: ASQC Quality Press. Codling, S. (1996). Best practice benchmarking: An international perspective. Houston, TX: Gulf. Conseil de l’Union europ´eenne (2000). List of eEurope Benchmarking indicators. Retrieved February 12, 2002, from http://europa.eu.int/information society/ eeurope/benchmarking/indicator list.pdf Courcelle, C., & De Vil, G. (2001). Benchmarking the framework conditions: A systematic test for Belgium. Federal Planning Bureau: Economic analysis and forecasts. Retrieved February 12, 2002, from http://www. plan.be/en/bench/index.htm Czarnecki, M. T. (1999). Managing by measuring: How to improve your organization’s performance through effective benchmarking. Houston, TX: Benchmarking Network. Department of Trade and Industry (2002). Business in the information age. Retrieved October 22, 2002, from http: // www.ukonlineforbusiness.gov.uk/main/ resources/publication-htm/bench2001.htm European Association of Development Agencies (n.d.). Benchmarking News. Retrieved June 22, 2002, from http: // www.eurada.org/News / Benchmarking/English / ebenchtable.htm European Conference of Ministers of Transport (2000). Transport benchmarking: Methodologies, applications & data needs. Paris: OECD Publications Service. European Information Technology Observatory, 10th ed. (2002). Frankfurt am Main: Author. Retrieved June 25, 2002, from http://www.eito.com/start.html Global Benchmarking Newsbrief (2002). Benchmarking update. Retrieved from June 16, 2002, from http:// 66.124.245.170/TBE Members2/newsletters/bupdate 0601.cfm Gohlke, A. (1998). Benchmarking basics for librarians. Retrieved June 23, 2002, from http://www.sla.org/ division/dmil/mlw97/gohlke/sld001.htm International Data Corporation (n.d.). About IDC. Retrieved June 20, 2002, from http://www.idc.com/en US/ st/aboutIDC.jhtml International Data Corporation (2002a). IDC/World Times information society index: The future of the information society. Retrieved June 20, 2002, from http://www. idc.com/getdoc.jhtml?containerId = 24821 International Data Corporation (2002b). Sweden remains the world’s dominant information economy while the United States slips, according to the 2001 IDC/World Times Information Society Index. Retrieved
June 20, 2002, from http://www.idc.com/getdoc.jhtml? containerId = pr50236 International Council of Benchmarking Coordinators (n.d.). ICOBC & free newsletter. Retrieved June 22, 2002, from http://www.icobc.com International Telecommunications Union (n.d.). ITU overview—contents. Retrieved June 25, 2002, from http://www.itu.int/aboutitu/overview/index.html International Telecommunications Union (2001a). Internet indicators. Retrieved June 25, 2002, from http:// www.itu.int / ITU-D/ict /statistics /at glance / Internet01. pdf International Telecommunications Union (2001b). ITU telecommunication indicators update. Retrieved June 25, 2002, from http: // www.itu.int / ITU-D / ict / update/pdf/Update 1 01.pdf Haddad, C. J. (2002). Managing technological change. A strategic partnership approach. Thousand Oaks, CA: Sage. Keegan, R. (1998). Benchmarking facts: A European perspective. Dublin: European Company Benchmarking Forum. Kirkman, G. S., Cornelius, P. K., Sachs, J. D., & Schwab, K. (Eds.). (2002). The global information technology report 2001–2002: Readiness for the networked world. New York: Oxford University Press. Lanvin, B. (2002). Foreword. In G. S. Kirkman, P. K. Cornelius, J. D. Sachs, & K. Schwab (Ed.), The global information technology report 2001–2002: Readiness for the networked world (pp. xi–xii). New York: Oxford University Press. Longbottom, D. (2000). Benchmarking in the UK: An empirical study of practitioners and academics. Benchmarking: An International Journal, 7, 98–117. Martin, T. (1999). Extending the direct-toconsumer model through the internet. GBC conference presentations. Retrieved May 28, 2002, from http://www.globalbenchmarking.com/meetings/ presentationdetails.asp?uniqueid = 54 National Telecommunications and Information Administration (2002). Americans in the information age: Falling through the net. Retrieved October 20, 2002, from http://www.ntia.doc.gov/ntiahome/digitaldivide NUA (n.d.). About Nua.com. Retrieved June 20, 2002, from http://www.nua.ie/surveys/about/index.html NUA (2002). NUA Analysis. Retrieved June 20, 2002, from http://www.nua.com/surveys/analysis/graphs charts/ index.html Office of Telecommunications (2002). International benchmarking study of Internet access. Retrieved October 20, 2002, from http://www.oftel.gov.uk/ publications/research/2002/benchint0602.pdf O’Reagain, S., & Keegan, R. (2000). Benchmarking explained. Benchmarking in Europe—working together to build competitiveness. Retrieved February 12, 2002, from http://www.benchmarking-in-europe.com/ library/archive material/articles publications/archive psi articles/explained.htm Organization for Economic Co-operation and Development (n.d.). Home page. Retrieved June 20, 2002, from http://www.oecd.org Organization for Economic Co-operation and Devel-
opment (2001a). Business to consumer electronic commerce: An update on the statistics. Retrieved June 20, 2002, from http: // www.oecd.org / pdf / M00018000/ M00018264.pdf Organization for Economic Co-operation and Development (2001b). The latest official statistics on electronic commerce: A focus on consumers’ Internet transactions. Retrieved June 20, 2002, from http://www.oecd.org/ pdf/M00027000/M00027669.pdf Organization for Economic Co-operation and Development (2002). OECD information technology outlook. Retrieved June 20, 2002, from http://www.oecd.org/ oecd/pages/home/displaygeneral/0,3380,EN-home-401-no-no-no-40,00.html Pattinson, B., Montagnier, P., & Moussiegt, L. (2000). Measuring the ICT sector. Retrieved June 20, 2002, from http://www.oecd.org/pdf/M00002000 / M00002651.pdf Petrin, T., Sicherl, P., Kukar, S., Mesl, M., & Vitez, R. (2000). Benchmarking Slovenia: An evaluation of Slovenia’s competitiveness, strengths and weaknesses. Ljubljana, Slovenia: Ministry of Economic Affairs. Pew Research Center (n.d.). For the people and the press. Retrieved October 22, 2002, from http://peoplepress.org Rao, A., Carr, L. P., Dambolena, I., Kopp, R. J., Martin, J., Rafii, R., & Schlesinger, P. F. (1996). Total quality management: A cross functional perspective. New York: Wiley. Statistical Indicators for Benchmarking the Information Society (2002). Home page. Retrieved October 22, 2002, from http://www.sibis-eu.org/sibis/ Sicherl, P. (2001). Metodologija. Ljubljana: Ministrstvo za informacijsko druzbo. Retrieved June 25, 2002, from http://www.gov.si:80/mid/Dokumenti/CasovneDistance/ CD metodologija.pdf Spendolini, M. J. (1992). The benchmarking book. New York: AMACOM. Statistics Sweden and Eurostat (2001). Report of the Task Force on Benchmarking in Infra-Annual Economic Statistics to the SPC. Luxembourg: Eurostat. Tardugno, A. F., DiPasquale, T. R., & Matthews, R. E. (2000). IT services: Costs, metrics, benchmarking, and marketing. Upper Saddle River: Prentice Hall. United Nations Development Programme (2001). Human development indicators. New York: Oxford University Press. Retrieved October 22, 2002, from http://www. undp.org/hdr2001/back.pdf United Nations Industrial Development Organization (2002). Industrial development report 2002/2003: Competing through innovation and learning. Retrieved October 22, 2002, from http://www.unido.org/userfiles/ hartmany/12IDR full report.pdf WebWorld (2002). UNESCO observatory on the information society. Retrieved October 22, 2002, from http: // www.unesco.org / webworld / observatory /about/ index.shtml
World Bank Group (2001). Information and communication technologies (ICT): Sector Strategy Paper.
FURTHER READING Benchmarking (n.d.). Retrieved June 22, 2002, from http://www.benchmarking.de The Benchmarking Exchange (n.d.). Retrieved June 22, 2002, from http://www.benchnet.com Benchmarking in Europe (n.d.). Retrieved June 22, 2002, from http://www.benchmarking-in-europe.com/ index.asp Benchmarking in Europe Archive (2002). Retrieved June 23, 2002, from http://www.benchmarking-ineurope.com/library/archive whats new/index.htm The Benchmarking Network (n.d.). The benchmarking resource guide. Retrieved June 22, 2002, from http:// benchmarkingnetwork.com Best Practices. Global Benchmarking Council (n.d.). Retrieved June 22, 2002, from http://www.global benchmarking.com CAM Benchmarking (2002). Sectoral benchmarking. Retrieved June 11, 2002, from http://www.cambenchmarking.com/nonmem OUT Sectoral.asp Corporate Benchmarking Services (n.d.) Web site. Retrieved June 11, from www.Corporate-Benchmarking. org Information Systems Management Benchmarking Consortium (n.d.). Web site. Retrieved June 22, 2002, from http://www.ismbc.org International Data Corporation (2002). Latin America Internet commerce market model. Retrieved June 20, 2002, from http://www.idc.com/getdoc.jhtml? sectionId = tables&containerId = LA1142G&pageType = SECTION International Data Corporation (2002). Web users in Western Europe, 2001–2006. Retrieved June 20, 2002, from http://www.idc.com/getdoc.jhtml?containerId = dg20020704 International Government Benchmarking Association (n.d.). Web site. Retrieved June 22, 2002, from http:// www.igba.org Public Sector Benchmarking Service (n.d.). Web site. Retrieved June 22, 2002, from http://www.benchmarking. gov.uk/default1.asp Jackson, N., & Lund, H., ed. (2000). Benchmarking for higher education. Buckingham, UK: Open University Press. Kingdom, B. E. A. (1996). Performance benchmarking for water utilities. Denver, CO: American Water Woks Association. Telecommunications International Benchmarking Group (n.d.). Web site. Retrieved June 22, 2002, from http://www.tbig.org
Biometric Authentication
James L. Wayman, San Jose State University
Introduction
Applications
History
System Description
Data Collection
Transmission
Signal Processing
Decision
Storage
Performance Testing
Types of Technical Tests
The National Physical Lab Tests
Biometric Forgeries
Example Applications
Disney World
Inspass
Biometrics and Privacy
Intrinsic (or Physical) Privacy
Informational Privacy
Standards
Fingerprint Standards
Facial Image Standards
BioAPI
CBEFF
ANSI X9.84
Potential Internet Applications
Commonsense Rules for Use of Biometrics
Conclusion
Cross References
Glossary
References
INTRODUCTION
“Biometric authentication” is the automatic identification or identity verification of living humans based on behavioral and physiological characteristics (Miller, 1989). The field is a subset of the broader field of human identification science. Example technologies include, among others, fingerprinting, hand geometry, speaker verification, and iris recognition. At the current level of technology, DNA analysis is a laboratory technique requiring human processing, so it is not considered “biometric authentication” under this definition. Some techniques (such as iris recognition) are more physiologically based, some (such as signature recognition) more behaviorally based, but all techniques are influenced by both behavioral and physiological elements. Biometric authentication is frequently referred to as simply “biometrics,” although this term has historically been associated with the statistical analysis of general biological data (Webster’s New World Dictionary, 1966). The word biometrics is usually treated as singular. In the context of this chapter, biometrics deals with computer recognition of patterns created by human behavior and physiology and is usually associated more with the field of computer engineering than with biology.
APPLICATIONS
The perfect biometric measure would be
- Distinctive: different across users,
- Repeatable: similar across time for each user,
- Accessible: easily displayed to a sensor,
- Acceptable: not objectionable to display by users, and
- Universal: possessed by and observable on all people.
Unfortunately, no biometric measure has all of these attributes: There are great similarities among different individuals, measures change over time, some physical limitations prevent display, “acceptability” is in the mind of the user, and not all people have all characteristics. Practical biometric technologies must compromise on every point. Consequently, the challenge of biometric deployments is to develop robust systems to deal with the vagaries and variations of human beings. There are two basic applications of biometric systems:
1. To establish that a person is enrolled in a database.
2. To establish that a person is not enrolled in a database.
Immigration systems, amusement parks, and health clubs use biometrics in the first application: to link users to their enrolled identity. Social service, drivers’ licensing, and national identification systems use biometrics primarily in the second application: to establish that prospective participants are not already enrolled. Although hybrid systems—using “negative identification” to establish that a user is not already enrolled, then using “positive identification” to recognize enrolled individuals in later encounters—are also possible, they are not common. The largest biometric systems in place, worldwide, are for purely negative identification. The key to all of these systems is the “enrollment” process, in which a user presents for the first time one or more biometric measures to be processed and stored by the system. Systems of the first type, called “positive identification systems,” do not necessarily require centralized databases but can use distributed storage, such as on individual computers or machine-readable cards. Systems of the second type, called “negative identification systems,” require a centralized database or its equivalent.
For positive identification systems using distributed storage, the submitted sample can be compared only to a single template on the storage media. This is called “one-to-one” verification. Positive identification systems using a centralized database may require users to enter an identification number or name (perhaps encoded on a magstripe card) to limit the size of the required search to only a portion of the entire database. If the identifying number or name is unique to each enrolled individual, this form of positive verification can also be “one-to-one,” even though the centralized database contains many enrolled individuals. Large-scale negative identification systems generally partition the database using factors such as gender or age so that not all centrally stored templates need be examined to establish that a user is not in the database. Such systems are sometimes loosely called “one-to-N,” where N represents only a small portion of the enrolled users. In the general case, however, both positive and negative identification systems search one or more submitted samples against many stored templates or models.

All biometric systems can only link a user to an enrolled identity at some incomplete level of certainty. A biometric system can neither verify the truth of the enrolled identity nor establish the link automatically with complete certainty. If required, determining a user’s “true” identity is done at the time of enrollment through trusted external documentation, such as a birth certificate or driver’s license. When the user is later linked to that enrolled identity through a biometric measure, the veracity of that identity is only as reliable as the original documentation used for enrollment. All biometric measures may change over time, due to aging of the body, injury, or disease. Therefore, reenrollment may be required. If “true” identity or continuity of identity is required by the system, reenrollment must require presentation of trusted documentation. Not all systems, however, have a requirement to know a user’s “true” identity. Biometric measures can be used as identifiers in anonymous and pseudo-anonymous systems.

Although biometric technologies are not commonly used with Internet transactions today, future uses would most likely be in the first application: to establish that a person is enrolled in a database and, therefore, has certain attributes and privileges within it, including access authorization. The argument can be made that biometric measures more closely link the authentication to the human user than passwords, personal identification numbers (PINs), PKI codes, or tokens, which authenticate machines. Consequently, the focus and terminology of this chapter are on applications of the first type, “positive identification.”
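A small sketch may help fix the distinction between “one-to-one” verification and a partitioned negative-identification search. Everything in it — the toy feature vectors, the Euclidean comparison, the threshold, and the partition labels — is an assumption made for illustration, not a description of any deployed system.

```python
import math

# Toy enrollment database: identifier -> (partition label, feature vector).
enrolled = {
    "A-17": ("F", [0.12, 0.40, 0.33]),
    "B-02": ("M", [0.90, 0.11, 0.58]),
    "B-09": ("M", [0.88, 0.14, 0.60]),
}

def distance(a, b):
    """Euclidean distance between feature vectors; smaller means more similar."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(claimed_id, sample, threshold=0.20):
    """Positive, 'one-to-one' verification: the claimed identifier (e.g., read
    from a card) selects a single stored template for comparison."""
    record = enrolled.get(claimed_id)
    return record is not None and distance(sample, record[1]) <= threshold

def already_enrolled(sample, partition, threshold=0.20):
    """Negative identification: search every template in the relevant
    partition ('one-to-N', where N is only part of the database)."""
    return any(distance(sample, vec) <= threshold
               for part, vec in enrolled.values() if part == partition)

print(verify("A-17", [0.11, 0.41, 0.30]))          # True: close to A-17's template
print(already_enrolled([0.89, 0.12, 0.59], "M"))   # True: matches an "M" record
print(already_enrolled([0.50, 0.50, 0.50], "F"))   # False: no "F" record is close
```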
HISTORY
The science of recognizing people based on physical measurements owes much to the French police clerk Alphonse Bertillon, who began his work in the late 1870s (Beavan, 2001; Cole, 2002). The Bertillon system involved at least 11 measurements, such as height, the length and breadth of the head, and the length of the ring and middle fingers.
Categorization of iris color and pattern was also included in the system. By the 1880s, the Bertillon system was in use in France to identify repeat criminal offenders. Use of the system in the United States for the identification of prisoners began shortly thereafter and continued into the 1920s. Extreme claims of accuracy were made for the system, based on the unsupportable hypothesis that the various measures were statistically independent (Galton, 1890; Galton, 1908).

Although research on fingerprinting by a British colonial magistrate in India, William Herschel, began in the late 1850s, the technique did not become known in the Western world until the 1880s (Faulds, 1880; Herschel, 1880), when it was popularized scientifically by Sir Francis Galton (1888) and in literature by Mark Twain (1990/1894). Galton’s work also included the identification of persons from profile facial measurements. By the mid-1920s, fingerprinting had completely replaced the Bertillon system within the U.S. Bureau of Investigation (later to become the Federal Bureau of Investigation). Research on new methods of human identification continued, however, in the scientific world. Handwriting analysis was recognized by 1929 (Osborn, 1929), and retinal scanning was suggested in 1935 (Simon & Goldstein, 1935). None of these techniques was “automatic,” however, so none meets the definition of “biometric authentication” used in this chapter.

Automatic techniques require automatic computation. Work in automatic speaker identification can be traced directly to experiments with analog computers done in the 1940s and early 1950s (Chang, Pihl, & Essignmann, 1951). With the digital revolution beginning in the 1950s, a strong tool for human identification through pattern matching became available: the digital computer. Speaker (Atal, 1976; Rosenberg, 1976) and fingerprint (Trauring, 1963a) pattern recognition were among the first applications in digital signal processing. By 1961, a “wide, diverse market” for computer-based fingerprint recognition was identified, with potential applications in “credit systems,” “industrial and military security systems,” and “personal locks” (Trauring, 1963b). Computerized facial recognition followed (Kanade, 1977). By the mid-1970s, the first operational fingerprint and hand geometry systems (Raphael & Young, 1974) were fielded, and formal biometric system testing had begun (National Bureau of Standards, 1977). Iris recognition systems became available in the mid-1990s (Daugman, 1993). Today there are close to a dozen approaches used in commercially available systems (see Table 1).
SYSTEM DESCRIPTION
Given the variety of applications and technologies, it might seem difficult to draw any generalizations about biometric systems. All such systems, however, have many elements in common. Figure 1 shows a general biometric system consisting of data collection, transmission, signal processing, storage, and decision subsystems (Wayman, 1999). This diagram accounts for both enrollment and operation of positive and negative identification systems.
[Figure 1: Example biometric system. The data collection subsystem (presentation, sensor) feeds the transmission subsystem (compression, transmission channel, expansion), which feeds the signal processing subsystem (segmentation, feature extraction, quality control, pattern matching); the data storage subsystem holds templates and images, and the decision subsystem turns matching and quality scores into accept/reject outcomes.]
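Read as a dataflow, Figure 1 can be summarized in a few stub functions. The sketch below only mirrors the structure of the diagram; the function names, types, and trivial bodies are placeholders invented for illustration and do not correspond to any vendor's software.

```python
def collect(presentation):
    """Data collection: a sensor turns the presented characteristic into a raw sample."""
    return {"sample": presentation}

def transmit(raw):
    """Transmission: compress, move, and expand the sample (a no-op here)."""
    return raw

def process(raw, template=None):
    """Signal processing: segment the sample, extract features, check quality,
    and (in operation) compute a matching score against a stored template."""
    features = raw["sample"]                      # stand-in for feature extraction
    if template is None:
        return {"features": features}             # enrollment path
    score = 1.0 if features == template else 0.0  # stand-in for pattern matching
    return {"features": features, "score": score}

def store(database, user_id, features):
    """Storage: keep the enrolled template (centrally or on a user-held card)."""
    database[user_id] = features

def decide(score, threshold=0.5):
    """Decision: accept or reject the claim from the matching score."""
    return score >= threshold

db = {}
store(db, "user-1", process(transmit(collect("sample-A")))["features"])   # enrollment
result = process(transmit(collect("sample-A")), template=db["user-1"])    # later visit
print("accept" if decide(result["score"]) else "reject")
```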
Table 1 Commercially Available Biometric Technologies
- Hand geometry
- Finger geometry
- Speaker recognition
- Iris recognition
- Facial imaging
- Fingerprinting
- Palm printing
- Keystroke
- Hand vein
- Dynamic signature verification

Data Collection
Biometric systems begin with the measurement of a behavioral or physiological characteristic. Because biometric data can be one-dimensional (speech), two-dimensional (fingerprint), or multidimensional (handwriting dynamics), it generally does not involve “images.” To simplify the vocabulary used in this chapter, I refer to raw signals simply as “samples.” Key to all systems is the underlying assumption that the measured biometric characteristic is both distinctive among individuals and repeatable over time for the same individual. The problems in measuring and controlling these variations begin in the data collection subsystem. The user’s characteristic must be presented to a sensor. The act of presenting a biometric measure to a sensor introduces a behavioral component to every biometric method because the user must interact with the sensor in the collection environment. The output of the sensor, which is the input sample on which the system is built, is the combination of (a) the biometric measure, (b) the way the measure is presented, and (c) the technical characteristics of the sensor. Both the repeatability and the distinctiveness of the measurement are negatively affected by changes in any of these factors. If a system is to communicate with other systems, the presentation and sensor characteristics must be standardized to ensure that biometric characteristics collected with one system will match those collected on the same individual by another system.

Transmission
Some, but not all, biometric systems collect data at one location but store or process it (or both) at another. Such systems require data transmission over a medium such as the Internet. If a great amount of data is involved, compression may be required before transmission or storage to conserve bandwidth and storage space. Figure 1 shows compression and transmission occurring before the signal processing and image storage. In such cases, the transmitted or stored compressed data must be expanded before further use. The process of compression and expansion generally causes quality loss in the restored signal, with loss increasing with increasing compression ratio. Interestingly, limited compression may actually improve the performance of the pattern recognition software as information loss in the original signal is generally in the less repeatable high-frequency components. The compression
technique used will depend on the biometric signal. An interesting area of research is in finding, for a given biometric technique, compression methods that have a minimal impact on the subsequent signal processing activities. If a system is to allow sharing of data at the signal level with other systems, compression and transmission protocols must be standardized. Standards currently exist for the compression of fingerprint (Wavelet Scalar Quantization, 1993), facial (Information Technology, 1993), and voice (Cox, 1997) data.
Signal Processing
The biometric signal processing subsystem comprises four modules: segmentation, feature extraction, quality control, and pattern matching. The segmentation module must determine if biometric signals exist in the received data stream (signal detection) and, if so, extract the signals from the surrounding noise. If the segmentation module fails to detect or extract a biometric signal, a “failure-to-acquire” has occurred.

The feature extraction module must process the signal in some way to preserve or enhance the between-individual variation (distinctiveness) while minimizing the within-individual variation (nonrepeatability). The output of this module is numbers, vectors, or distribution parameters that, although called biometric “features,” may not have direct physiological or behavioral interpretation. For example, mathematical output from facial recognition systems does not indicate directly the width of the lips or the distances between the eyes and the mouth.

The quality control module must do a statistical “sanity check” on the extracted features to make sure they are not outside population statistical norms. If the sanity check is not successfully passed, the system may be able to alert the user to resubmit the biometric pattern. If the biometric system is ultimately unable to produce an acceptable feature set from a user, a “failure-to-enroll” or a “failure-to-acquire” will be said to have occurred. Failure-to-enroll or acquire may be due to failure of the segmentation algorithm, in which case no feature set will be produced. The quality control module might even affect the decision process, directing the decision subsystem to adopt higher requirements for matching a poor-quality input sample.

The pattern matching module compares sample feature sets with enrolled templates or models from the database and produces a numerical “matching score.” When both template and features are vectors, the comparison may be as simple as a Euclidean distance. Neural networks might be used instead. Regardless of which pattern matching technique is used, templates or models and features from samples will never match exactly because of the repeatability issues. Consequently, the matching scores determined by the pattern matching module will have to be interpreted by the decision subsystem. In more advanced systems, such as speaker verification, the enrollment “templates” might be “models” of the signal generation process—very different data structures than the observed features. The pattern matching module determines the consistency of the observed features with the stored generating model. Some pattern matching
modules may even direct the adaptive recomputation of features from the input data.
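The division of labor among the four signal-processing modules can be sketched as follows. The population norms, the feature computation, and the score below are invented placeholders; real systems use proprietary, modality-specific algorithms.

```python
from statistics import mean

def segment(data_stream):
    """Segmentation: detect and extract the biometric signal from the stream.
    Returning None models a failure-to-acquire."""
    signal = [x for x in data_stream if x > 0]     # crude 'signal present' test
    return signal or None

def extract_features(signal):
    """Feature extraction: reduce the signal to numbers intended to preserve
    between-individual variation and suppress within-individual variation."""
    return [mean(signal), max(signal) - min(signal)]

def quality_ok(features, norms=((0.0, 300.0), (0.0, 200.0))):
    """Quality control: a statistical 'sanity check' against population norms."""
    return all(lo <= f <= hi for f, (lo, hi) in zip(features, norms))

def match(features, template):
    """Pattern matching: a matching score the decision subsystem will interpret."""
    return -sum(abs(a - b) for a, b in zip(features, template))

stream = [0, 0, 120, 130, 125, 0]
signal = segment(stream)
if signal is None:
    print("failure-to-acquire")
else:
    feats = extract_features(signal)
    if not quality_ok(feats):
        print("ask the user to resubmit the sample")
    else:
        print("matching score:", match(feats, [124.0, 12.0]))
```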
Decision
The decision subsystem is considered independently from the pattern matching module. The decision subsystem might make a simple “match” or “no match” decision based on the output score from the pattern matcher. The decision module might ultimately “accept” or “reject” a user’s claim to identity (or nonidentity) based on multiple attempts, multiple measures, or measure-dependent decision criteria. For instance, a “three-try” decision policy will accept a user’s identity claim if a match occurs in any of three attempts. The decision module might also direct operations to the stored database, allowing enrollment templates to be stored, calling up additional templates for comparison in the pattern matching module, or directing a database search.

Because input samples and stored templates will never match exactly, the decision modules will make mistakes—wrongly rejecting a correctly claimed identity of an enrolled user or wrongly accepting the identity claim of an impostor. Thus, there are two types of errors: false rejection and false acceptance. These errors can be traded off against one another to a limited extent: decreasing false rejections at the cost of increased false acceptances and vice versa. In practice, however, inherent within-individual variation (nonrepeatability) limits the extent to which false rejections can be reduced, short of accepting all comparisons. The decision policies regarding “match–no match” and “accept–reject” criteria are specific to the operational and security requirements of the system and reflect the ultimate cost of errors of both types.

Because of the inevitability of false rejections, all biometric systems must have “exception handling” mechanisms in place. If exception handling mechanisms are not as strong as the basic biometric security system, vulnerability results. An excessively high rate of false rejections may cause the security level of the exception handling system to be reduced through overload. The false acceptance rate, on the other hand, measures the percentage of impostors who are able to access the system. The complement of the false acceptance rate is the percentage of impostors who are intercepted. So a 20% false acceptance rate means that 80% of impostors are intercepted. Depending on the application, this may be high enough to serve as a sufficient deterrent to prevent impostors from attempting access through the biometric system. The exception handling mechanism might become a more appealing target for those seeking fraudulent access. Consequently, the security level of a biometric system in a positive identification application may be more dependent on the false rejection rate than the false acceptance rate.
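A decision policy such as the "three-try" rule described above can be written down directly. The threshold value and the score convention (higher means more similar) are assumptions made for this sketch; a real policy would be set from the system's security requirements and the measured trade-off between false rejections and false acceptances.

```python
def is_match(score, threshold):
    """Match/non-match decision for a single comparison."""
    return score >= threshold

def accept_claim(attempt_scores, threshold=0.7, max_attempts=3):
    """'Three-try' accept/reject policy: accept the identity claim if any of
    up to three attempts produces a match; otherwise reject and hand the
    user over to the exception-handling procedure."""
    for score in attempt_scores[:max_attempts]:
        if is_match(score, threshold):
            return True
    return False

# Raising the threshold lowers false acceptances but raises false rejections.
print(accept_claim([0.55, 0.72]))             # True: second attempt matches
print(accept_claim([0.55, 0.60, 0.65]))       # False: route to exception handling
print(accept_claim([0.72], threshold=0.9))    # False at a stricter threshold
```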
Storage
The remaining subsystem to be considered is that of storage. The processed features or the feature generation model of each user will be stored or “enrolled” in a database for future comparison by the pattern matcher to
incoming feature samples. This enrollment data is called a “template” if it is of the same mathematical type as the processed features and a “model” if it gives a mathematical explanation of the feature generating process. For systems only performing positive identification, the database may be distributed on magnetic stripe, optically read, or smart cards that each enrolled user carries. Depending on system policy, no central database for positive identification systems need exist, although a centralized database can be used to detect counterfeit cards or to reissue lost cards without recollecting the biometric measures. The original biometric measurement, such as a fingerprint pattern, is generally not reconstructable from the stored templates. Furthermore, the templates themselves are created using the proprietary feature extraction algorithms of the system vendor. If it may become necessary to reconstruct the biometric patterns from stored data (to support interoperability with systems from other vendors, for example), raw (although possibly compressed) data storage will be required. The storage of raw data allows changes in the system or system vendor to be made without the need to recollect data from all enrolled users. Table 2 shows some example template sizes for various biometric devices.

Table 2 Biometric Template Sizes
DEVICE             SIZE IN BYTES
Fingerprint        200–1,000
Speaker            100–6,000
Finger geometry    14
Hand geometry      9
Face               100–3,500
Iris               512
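One practical consequence of the paragraph above is that systems keeping (possibly compressed) raw samples can regenerate templates when the vendor or feature-extraction algorithm changes, without recalling every enrolled user. The sketch below illustrates that idea only; the "extractors" are toy functions, and zlib stands in for whatever modality-specific compression a real system would use.

```python
import zlib

def extractor_v1(raw):
    """Toy 'vendor 1' feature extractor."""
    return [raw.count(b) for b in (0x00, 0xFF)]

def extractor_v2(raw):
    """Toy 'vendor 2' extractor with an incompatible template format."""
    return [len(raw), sum(raw) % 256]

# Enrollment: store the compressed raw sample alongside the current template.
raw_sample = bytes([0x00, 0xFF, 0x10, 0xFF, 0x42])
record = {
    "user_id": "user-7",
    "raw": zlib.compress(raw_sample),      # kept to support future migration
    "template": extractor_v1(raw_sample),  # proprietary format of vendor 1
}

# Later: the system (or vendor) changes, so templates are recomputed from the
# stored raw data instead of recollecting samples from all enrolled users.
record["template"] = extractor_v2(zlib.decompress(record["raw"]))
print(record["template"])   # [5, 80]
```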
PERFORMANCE TESTING
Biometric devices and systems might be tested in many different ways. Types of testing include the following:
- Technical performance;
- Reliability, availability, and maintainability (RAM);
- Vulnerability;
- Security;
- User acceptance;
- Human factors;
- Cost–benefit; and
- Privacy regulation compliance.
Technical performance has been the most common form of testing in the last three decades, and “best practices” have been developed (Mansfield & Wayman, 2002). These tests generally measure failure-to-enroll, failure-to-acquire, false acceptance, false rejection, and throughput rates. Failure-to-enroll rate is determined as the percentage of all persons presenting themselves to the system in “good faith” for enrollment who are unable to do so because of system or human failure. Failure-to-acquire rate is determined as the percentage of “good faith” presentations by all enrolled users that are not acknowledged by the system. The false rejection rate is the percentage of all users whose claim to identity is not accepted by the system. This will include failed enrollments and failed acquisitions, as well as false nonmatches against the user’s stored template. The false acceptance rate is the rate at which “zero-effort” impostors making no attempt at emulation are incorrectly matched to a single, randomly chosen false identity. Because false acceptance–rejection and false match–nonmatch rates are generally competing measures, they can be displayed as “decision error trade-off” (DET) curves. The throughput rate is the number of persons processed by the system per minute and includes both the human–machine interaction time and the computational processing time of the system.
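For concreteness, the error rates defined above can be computed from the raw outcomes of a technical test. The sketch below assumes a toy data set of genuine and impostor comparison scores (higher meaning more similar) and an invented acquisition log; it is meant only to show how the definitions fit together, including how points on a DET curve come from sweeping the decision threshold.

```python
def rate(events, total):
    return 0.0 if total == 0 else events / total

# Invented test log: enrollment attempts, acquisition attempts, and scores.
enroll_attempts, enroll_failures = 50, 2
acquire_attempts, acquire_failures = 400, 8
genuine_scores = [0.91, 0.85, 0.40, 0.77, 0.95, 0.62]   # same-person comparisons
impostor_scores = [0.10, 0.32, 0.55, 0.05, 0.20, 0.48]  # zero-effort impostors

failure_to_enroll = rate(enroll_failures, enroll_attempts)
failure_to_acquire = rate(acquire_failures, acquire_attempts)

def error_rates(threshold):
    """False match and false non-match rates at one decision threshold."""
    fmr = rate(sum(s >= threshold for s in impostor_scores), len(impostor_scores))
    fnmr = rate(sum(s < threshold for s in genuine_scores), len(genuine_scores))
    return fmr, fnmr

# Sweeping the threshold traces out the decision error trade-off (DET) curve.
for t in (0.3, 0.5, 0.7):
    fmr, fnmr = error_rates(t)
    print(f"threshold={t:.1f}  FMR={fmr:.2f}  FNMR={fnmr:.2f}")

print(f"failure-to-enroll={failure_to_enroll:.2%}, "
      f"failure-to-acquire={failure_to_acquire:.2%}")
```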
Types of Technical Tests
There are three types of technical tests: technology, scenario, and operational (Phillips, Martin, Wilson, & Przybocki, 2000).
Technology Test
The goal of a technology test is to compare competing algorithms from a single technology, such as fingerprinting, against a standardized database collected with a “universal” sensor. There are competitive, government-sponsored technology tests in speaker verification (National Institute of Standards and Technology, 2003), facial recognition (Phillips, Grother, Micheals, Blackburn, Tabassi, & Bone, 2003), and fingerprinting (Maio, Maltoni, Wayman, & Jain, 2000).
Scenario Test
Although the goal of technology testing is to assess the algorithm, the goal of scenario testing is to assess the performance of the users as they interact with the complete system in an environment that models a “real-world” application. Each system tested will have its own acquisition sensor and so will receive slightly different data. Scenario testing has been performed by a number of groups, but few results have been published openly (Bouchier, Ahrens, & Wells, 1996; Mansfield, Kelly, Chandler, & Kane, 2001; Rodriguez, Bouchier, & Ruehie, 1993).
Operational Test
The goal of operational testing is to determine the performance of a target population in a specific application environment with a complete biometric system. In general, operational test results will not be repeatable because of unknown and undocumented differences between operational environments. Furthermore, “ground truth” (i.e., who was actually presenting a “good faith” biometric measure) will be difficult to ascertain. Because of the sensitivity of information regarding error rates of operational systems, few results have been reported in the open literature (Wayman, 2000b). Regardless of the type of test, all biometric authentication techniques require human interaction with a data collection device, either standing alone (as in technology
testing) or as part of an automatic system (as in scenario and operational testing). Consequently, humans are a key component in all system assessments. Error, failure-to-enroll or acquire, and throughput rates are determined by the human interaction, which in turn depends on the specifics of the collection environment. Therefore, little in general can be said about the performance of biometric systems or, more accurately, about the performance of the humans as they interact with biometric systems.

The National Physical Lab Tests
A study by the U.K. National Physical Laboratory in 2000 (Mansfield et al., 2001) looked at eight biometric technologies in a scenario test designed to emulate access control to computers or physical spaces by scientific professionals in a quiet office environment. The false accept–false reject DET under a “three-tries” decision policy for this test is shown in Figure 2. The false rejection rate includes “failure-to-enroll/acquire” rates in its calculation.
[Figure 2: Detection error trade-off curves, best of three attempts, plotting false reject rate against false accept rate for face, fingerprint (chip and optical), hand, iris, vein, and voice systems.]

Throughput Rates
The National Physical Laboratory study (Mansfield et al., 2001) also established transaction times for various biometric devices in an office, physical access control setting, shown as Table 3. In Table 3, the term “PIN” indicates whether the transaction time included the manual entry of a four-digit identification number by the user. These times referred only to the use of the biometric device and did not include actually accessing a restricted area.

Table 3 Transaction Times in Office Environment
DEVICE                 MEAN    MEDIAN    MINIMUM    PIN?
Face                   15      14        10         No
Fingerprint—optical    9       8         2          No
Fingerprint—chip       19      15        9          No
Hand                   10      8         4          Yes
Iris                   12      10        4          Yes
Vein                   18      16        11         Yes
Speaker                12      11        10         No

Biometric Forgeries
It has been well known since the 1970s that all biometric devices can be fooled by forgeries (Beardsley, 1972). In a positive identification system, “spoofing” is the use of a forgery of another person’s biometric measures. (In a negative identification system, “spoofing” is an attempt to disguise one’s own biometric measure.) Forging biometric measures of another person is more difficult than disguising one’s own measures, but is possible nonetheless. Several studies (Blackburn et al., 2001; Matsumoto et al., 2002; Thalheim, Krissler, & Ziegler, 2002; van der Putte & Keuning, 2002) discuss ways by which facial, fingerprint, and iris biometrics can be forged. Because retinal recognition systems have not been commercially available for several years, there is no research on retina forgery, but it is thought to be a difficult problem. Speaker recognition systems can make forgery difficult by requesting the user to say randomly chosen numbers. The current state of technology does not provide reliable “liveness testing” to ensure that the biometric measure is both from a living person and not a forgery.

EXAMPLE APPLICATIONS
Most successful biometric programs for positive identification have been for physical access control. In this section, I consider two such programs, one at Walt Disney World and the other the U.S. INSPASS program.
Disney World
The Walt Disney Corporation needed to link guests to season passes without inconveniencing passholders on visits to their four Orlando parks (Disney World, Epcot, Animal Kingdom, and MGM Studio Tour) and two water parks (Blizzard Beach and Typhoon Lagoon) (Levin, 2002). The challenge was to create a verification component with minimum impact on existing systems and procedures. Alternatives to biometrics existed for Disney. They considered putting photos on the passes, then checking photos against passholders. This approach required human inspectors at all the entrances. Disney could have put names on passes at the point of sale, then checked passes against photo identification presented by the passholders at the entrance to the parks, but this would have required disclosure of the passholders’ identities. Automatic, anonymous authentication of the passholders might have been accomplished by requiring passholders to enter an identifying number of some sort upon entrance to the parks. That number would be chosen at the time of sale or first use of the pass and could be chosen by the passholder. The concerns with this approach are twofold: forgotten numbers and the increased guest processing time. Disney seeks to make parks as accessible as possible, and use of key pads could create accessibility problems. Disney ultimately decided to use biometric authentication, taking the shapes of the right index and middle fingers of all passholders. Finger shape only is recorded.
Fingerprints are not imaged. The template is established when the holder places the fingers in the imaging device upon first use of the pass at the park entrance. This first user becomes the “authorized holder” of the pass. The template is stored centrally in the Disney access control system and is linked to the pass number encoded on a magnetic stripe on the pass. Upon subsequent uses, the card number allows the recall of the stored finger geometry, which is then compared with that presented by the guest to determine if the guest matches the original holder of the pass. A failure of the system to authenticate the guest is assumed to be a system error and a guest relations officer is on hand to resolve the issue quickly. Disney considers the system to be quite successful.
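The Disney arrangement, a pass number recalling a centrally stored finger-geometry template with a staffed fallback when matching fails, follows the exception-handling pattern discussed earlier. The sketch below is a loose illustration of that flow; the data, the comparison, and the threshold are invented and are not Disney's actual system.

```python
central_templates = {
    "pass-3141": [7.2, 4.8, 3.9],   # finger-geometry template of the first user
}

def finger_geometry_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def admit_guest(pass_number, presented_geometry, threshold=0.5):
    """Entrance flow: the magnetic-stripe pass number recalls the stored
    template; a failed comparison is treated as a system error and the
    guest is sent to guest relations (the exception-handling path)."""
    template = central_templates.get(pass_number)
    if template is None:
        return "refer to guest relations (no enrollment on record)"
    if finger_geometry_distance(presented_geometry, template) <= threshold:
        return "admit"
    return "refer to guest relations"

print(admit_guest("pass-3141", [7.1, 4.9, 3.8]))   # admit
print(admit_guest("pass-3141", [9.9, 2.0, 1.0]))   # refer to guest relations
```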
INSPASS
Frequent travelers to the United States can bypass immigration lines if they hold a card issued by the Immigration and Naturalization Service’s (INS) Passenger Accelerated Service System (INSPASS). The system is currently in place at Kennedy, Newark, Los Angeles, Miami, San Francisco, Detroit, and Washington Dulles airports, as well as at Vancouver and Toronto airports in Canada. Users give measurements of their hand geometry at the INSPASS kiosk to verify that they are the correct holder of the card. The passport number on the card can then be trusted by the INS to be that of the holder, and the border crossing event is automatically recorded. At the nine airports with INSPASS, there are more than 21 kiosks, and more than 20,000 automated immigration inspections were conducted on average each month in 2000. About 72% of these inspections were for U.S. citizens, but any citizen of any country on the U.S. “visa waiver program” can apply for and receive an INSPASS card. In 2000, there were more than 45,000 active enrollees, and each traveler used INSPASS about four times annually, on average. As of this writing, INSPASS is temporarily on hiatus at some airports, pending the creation of a thorough business plan for the system by the INS Office of Inspections.

Use of INSPASS is entirely voluntary. Applicants attest that they have no drug or smuggling convictions and supply fingerprints, along with their hand geometry, at the time of enrollment. A passport is also required as proof of “true” identity. The INS considers the INSPASS holders to be low-risk travelers already well-known to the system and therefore affords them the special privilege of entering the United States without a face-to-face meeting with an immigration officer. Those not using INSPASS have passports inspected by an immigration officer, whose duty it is to ferret out those entering illegally or on forged or stolen passports. Immigration officers are also charged with prescreening all arrivals for customs, agriculture, and State Department–related issues. Returning U.S. citizens not holding INSPASS cards must answer questions such as length of trip, travel destinations, and purpose of travel. Non-U.S. citizens arriving without an INSPASS must answer an even more extensive series of questions about length of stay, intent of visit, and destinations within the United States. In effect, INSPASS substitutes the possession of the card for the passport and substitutes presentation of the hand geometry for the usual round of border crossing questions. Analyses of the error rates for this system are given in Wayman (2000b).
BIOMETRICS AND PRIVACY
The concept “privacy” is highly culturally dependent. Legal definitions vary from country to country and, in the United States, even from state to state (Alderman & Kennedy, 1995). A classic definition is the intrinsic “right to be let alone” (Warren & Brandeis, 1890), but more modern definitions include informational privacy: the right of individuals “to determine for themselves when, how and to what extent information about them is communicated to others” (Westin, 1967). The U.S. Supreme Court has recognized both intrinsic (Griswald v. Connecticut, 1965) and informational (Whalen v. Roe, 1977) privacy. Both types of privacy can be affected positively or negatively by biometric technology.
Intrinsic (or Physical) Privacy
Some people see the use of biometric devices as an intrusion on intrinsic privacy. Touching a publicly used biometric device, such as a fingerprint or hand geometry reader, may seem physically intrusive, even though there is no evidence that disease can spread any more easily by these devices than by door handles. People may also object to being asked to look into a retinal scanner or to stand still while giving an iris or facial image. Not all biometric methods require physical contact. A biometric application that replaced the use of a keypad with the imaging of an iris, for instance, might be seen as enhancing physical privacy. If biometrics are used to limit access to private spaces, then biometrics can be more enhancing to intrinsic privacy than other forms of access control, such as keys, which are not as closely linked to the holder. There are people who object to use of biometrics on religious grounds: Some Muslim women object to displaying their face to a camera, and some Christians object to hand biometrics as “the sign of the beast” (Revelation 13:16–18). It has been noted (Seildarz, 1998), however, that biometrics are the marks given an individual by God.

It can be argued (Baker, 2000; Locke, 1690) that a physical body is not identical to the person that inhabits it. Whereas PINs and passwords identify persons, biometrics identifies the body. Some people are uncomfortable with biometrics because of this connection to the physical level of human identity and the possibility of nonconsensual collection of some biometric measures. Biometric measures could allow linking of the various “persons” or psychological identities that each of us choose to manifest in our separate dealings within our social structures. Biometrics, if universally collected without adequate controls, could aid in linking my employment records to my health history and church membership. This leads us to the concept of “informational privacy.”
Informational Privacy
With notably minor qualifications, biometric features contain no personal information whatsoever about the user. This includes no information about health status,
age, nationality, ethnicity, or gender. Consequently, this also limits the power of biometrics to prevent underage access to pornography on the Internet (Woodward, 2000) or to detect voting registration by noncitizens (Wayman, 2000a). No single biometric measure has been demonstrated to be distinctive or repeatable enough to allow the selection of a single person out of a very large database. (All automatic large-scale civilian fingerprint systems, for instance, require images from multiple fingers of each user for unique identification. Law enforcement automatic fingerprint identification systems (AFIS) require human intervention to identify an individual from “latent” fingerprint images left at crime scenes.) When aggregated with other data, however, such as name, telephone area code, or other even weakly identifying attributes, biometric measures can lead to unique identification within a large population. For this reason, databases of biometric information must be treated as personally identifiable information and protected accordingly.

Biometrics can be directly used to enhance informational privacy. Their use to control access to databases containing personal and personally identifiable data can enhance informational privacy. The use of biometric measures, in place of name or social security number, to ensure that personal data is anonymous, is privacy enhancing. Biometric measures are private but not secret. The U.S. Supreme Court has ruled that “Like a man’s facial characteristics, or handwriting, his voice is repeatedly produced for others to hear” (U.S. v. Dionisio, 1973). Therefore, the court concluded, no one can reasonably expect that either his or her voice or face “will be a mystery to the world.” The theft of biometric measures through reproduction, theft of identity at biometric enrollment, or theft of biometrically protected identity by data substitution through reenrollment could all have grave repercussions for both intrinsic and informational privacy.
STANDARDS
Biometric methods, such as fingerprinting, face, and speaker recognition, were developed independently, by different academic traditions and for different applications. It has only been within the last decade that their commonality as automatic human identification methods has even been recognized, so it should come as no surprise that common standards have been slow to develop. Furthermore, there has been little need for interoperability among these systems. In fact, the lack of interoperability within and across technologies has been touted as a privacy-preserving asset of biometric systems (Woodward, 1999). Consequently, there has been no motivation for the tedious standards development process required to promote interoperability. A notable exception is in automatic fingerprint identification systems (AFIS), for which law enforcement has long needed the capability of interjurisdictional fingerprint exchanges. In this case, some American National Standards Institute (ANSI), National Institute of Standards and Technology (NIST), and Criminal Justice Information Services (CJIS) recognized standards do exist. These have been accepted as de facto international standards.

In 2002, the International Organization for Standardization/International Electrotechnical Commission Joint Technical Committee 1 formed Subcommittee 37 (SC 37) to create international standards for biometrics. Early work by SC 37 has focused on “harmonized vocabulary and definitions,” “technical interfaces,” “data interchange formats,” “application profiles,” “testing and reporting,” and “cross jurisdictional and societal aspects.”

Fingerprint Standards
The CJIS (1999) report Interim IAFIS Fingerprint Image Quality Specifications for Scanners specifies technical requirements (signal-to-noise ratio, gray scale resolution, etc.) for the scanning of fingerprint images for transmission to the FBI. This standard, commonly known as “Appendix F/G,” was developed for “flat bed scanners” used for the digital imaging of inked fingerprint cards. It has been applied, with difficulty, as a standard for fingerprint sensors used in large-scale biometric fingerprint systems. Fingerprint devices not used for large-scale searches, such as those for computer or facilities access control, commonly do not meet “Appendix F/G” technical standards. Fingerprint images, even when compressed, are around 15 kBytes in size. As shown in Table 2, templates extracted using proprietary algorithms are much smaller. The requirement to store fingerprint information on interoperable ID cards has led to some efforts at template standardization. The American Association of Motor Vehicle Administrators (AAMVA, 2000), motivated by the need for cross-jurisdictional exchange of fingerprint data for driver identification, has published a standard for driver’s licenses and identification cards, which includes a section on fingerprint minutiae extraction and storage. Although this standard has never been tested or implemented, it is hoped that it will provide a possible solution to fingerprint system interoperability.

Facial Image Standards
Compression standards for facial imaging have already been mentioned (Information Technology, 1993). NIST has a document, “Best Practice Recommendations for the Capture of Mugshots” (NIST, 1993), that has application to facial recognition when “mug shot” type photos are used.
BioAPI
Beyond the issue of interoperability standards is that of software protocol conventions. In the past, each vendor has used its own platform-specific software to support its own data collection, feature extraction, and storage and matching operations. This has caused major headaches for system integrators who try to use biometric devices in larger or more general access control and information retrieval systems. The integrators have been forced to learn and handle the idiosyncratic software of each biometric vendor. During the last 5 years, under the sponsorship of both the U.S. Department of Defense (Human Authentication-Application Programming Interface Steering Group, 1998) and NIST, common software standards have been emerging. This effort is currently
known as the “Biometric Applications Programming Interface” (BioAPI Consortium, 2001). This standard specifies exactly how information will be passed back and forth between the larger system and the biometric subsystems. It will allow system integrators to establish one set of software function calls to handle any biometric device within the system.
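The value of a common application programming interface is that an integrator writes one set of calls and swaps biometric devices underneath. The fragment below illustrates that idea with an invented interface; it is not the actual BioAPI function set or its signatures, only a schematic of the abstraction such a standard provides.

```python
from abc import ABC, abstractmethod

class BiometricSubsystem(ABC):
    """Invented stand-in for a standardized device interface: the integrator
    codes against these three calls regardless of vendor or modality."""

    @abstractmethod
    def capture(self) -> bytes: ...
    @abstractmethod
    def enroll(self, sample: bytes) -> bytes: ...        # returns a template
    @abstractmethod
    def verify(self, sample: bytes, template: bytes) -> bool: ...

class ToyFingerprintDevice(BiometricSubsystem):
    def capture(self) -> bytes:
        return b"ridge-pattern"
    def enroll(self, sample: bytes) -> bytes:
        return sample[::-1]                               # toy 'template'
    def verify(self, sample: bytes, template: bytes) -> bool:
        return sample[::-1] == template

def log_on(device: BiometricSubsystem, stored_template: bytes) -> bool:
    """Application code that never needs to know which vendor's device it uses."""
    return device.verify(device.capture(), stored_template)

device = ToyFingerprintDevice()
template = device.enroll(device.capture())
print(log_on(device, template))   # True
```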
CBEFF
Closely related to the BioAPI effort is the Common Biometric Exchange File Format (CBEFF) working group sponsored by NIST and the National Security Agency (NIST, 2001). This group has developed a standard “packaging” protocol for transferring biometric templates and samples between applications. Recently, this work has been extended by the Organization for the Advancement of Structured Information Standards (http://www.oasis-open.org) to an extensible markup language (XML) format for biometric transmission over the Internet.
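The packaging idea behind such exchange formats, a template or sample wrapped with enough metadata for the receiving application to interpret it, can be illustrated with a few lines of XML generation. The element names and fields below are invented for the sketch and do not reproduce the CBEFF record layout or any OASIS schema.

```python
import base64
import xml.etree.ElementTree as ET

def package_template(owner_id: str, modality: str, template: bytes) -> str:
    """Wrap an opaque biometric template with descriptive metadata so another
    application can tell what it has received (hypothetical element names)."""
    root = ET.Element("BiometricRecord")
    ET.SubElement(root, "Owner").text = owner_id
    ET.SubElement(root, "Modality").text = modality
    ET.SubElement(root, "Format").text = "vendor-proprietary"
    # The template itself stays opaque; it is simply base64-encoded for transport.
    ET.SubElement(root, "Data").text = base64.b64encode(template).decode("ascii")
    return ET.tostring(root, encoding="unicode")

print(package_template("user-42", "hand-geometry", b"\x09\x07\x12\x04"))
```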
ANSI X9.84
The ANSI X9.84–2001 standard for Biometric Information Management and Security (ANSI, 2001) is a comprehensive document describing proper architectures and procedures for the financial industry when using biometrics for secure remote electronic access or local physical access control. The standard discusses security requirements for the collection, distribution, and processing of biometric data; uses of biometric technology; techniques for secure transmission of biometric data; and requirements for data privacy.
POTENTIAL INTERNET APPLICATIONS
There have been few direct applications of biometrics to the Internet. Biometrics can be and are being used to control local network and PC log-on, however. Many commercial devices are available for this application, including fingerprint, keystroke, speaker, and iris recognition. Some of these products have in the past been marketed at office-supply chain stores. Users enroll biometric data on their local network or own PC. At log-on, submitted biometric data is compared locally with that locally stored. This model can be extended to include biometric authorization for access to computer files and services. No biometric data ever leaves the PC or local network.

How might biometrics be used directly over the Internet? One model requires users to supply biometric data over the Internet when registering at a Web site. Of course, such a system has no way of determining the true identity of the user or even whether the biometric data supplied really belongs to the registrant. But once enrolled, the Web site could determine whether the user’s browser had access to the same biometric measures given at enrollment. This might indicate continuity of the person at the browser or across browsers.

Recognizing the weakness of such a system, one speaker verification company established a business model of “out-of-band” registration over the telephone. Those registering would speak to a human operator, who would collect identification information, then switch the registrant to a computer for collecting speech samples. When registered users wished to access a controlled Web site, they would telephone the speaker verification company, verify their voice, and be issued an alphanumeric “volatile pass code” that would allow access to the requested site if promptly submitted over the Internet.

A third model would be a centralized “biometric certification authority” (BCA), operated either by government or a commercial entity. Users would enroll a biometric measure in person, or possibly through an “out-of-band” mechanism such as the mail. Depending on the purpose of the system, users might be required to present proof of identity, credit card information, or the like. After registration, users wishing to access biometrically controlled Web sites would submit their biometric measures over the Internet to the BCA, which would verify identity to the Web site being accessed. Fear of centralized biometric databases in the hands of either business or government has inhibited implementation of this model.

Given the nonrevocability of biometric information, its “private but not secret” status, and the need for in-person registration when applied to user identification, it is not clear that biometrics has a secure place in Internet communication or commerce.

Commonsense Rules for Use of Biometrics
From what has been discussed thus far, we can develop some commonsense rules for the use of biometrics for access control and similar positive identification applications.
r
r
r
r r
Never participate in a biometric system that allows either remote enrollment or reenrollment. Such systems have no way of connecting a user with the enrolled biometric record, so the purpose of using biometrics is lost. Biometric measures can reveal people’s identity only if they are linked at enrollment to their name, social security number, or other closely identifying information. Without that linkage, biometric measures are anonymous. Remember that biometric measures cannot be reissued if stolen or sold. Do not enroll in a nonanonymous system unless you have complete trust in the system administration. All biometric access control systems must have “exception handling” mechanisms for those that either cannot enroll or cannot reliably use the system. If you are uncomfortable with enrolling in a biometric system for positive identification, insist on routinely using the “exception handling” mechanism instead. The safest biometric systems are those in which each user controls his or her own template. Because biometric measures are not perfectly repeatable, are not completely distinctive, and require specialized data collection hardware, biometric systems are not useful for tracking people. Anyone who really wants to physically track you will use your credit card, phone records, or cell phone emanations instead. Anyone wanting to track your Internet transactions will do so with cookies or Web logs.
CONCLUSION
Automated methods for human identification have a history predating the digital computer age. For decades, mass adoption of biometric technologies has appeared to be just a few years away (Raphael & Young, 1974; The Right Biometric, 1989), yet even today, difficulties remain in establishing a strong business case, in motivating consumer demand, and in creating a single system usable by all sizes and shapes of persons. Nonetheless, the biometric industry has grown at a steady pace as consumers, industry, and government have found appropriate applications for these technologies. Testing and standards continue to develop, and the privacy implications continue to be debated. Only time will tell if biometric technologies will extend to the Internet on a mass scale.
CROSS REFERENCES
See Digital Identity; Internet Security Standards; Passwords; Privacy Law.
GLOSSARY
Biometrics: The automatic identification or identity verification of living humans based on behavioral and physiological characteristics.
Biometric measures: Numerical values derived from a submitted physiological or behavioral sample.
Decision: A determination of probable validity of a user’s claim to identity or nonidentity in the system.
Enrollment: Presenting oneself to a biometric system for the first time, creating an identity within the system, and submitting biometric samples for the creation of biometric measures to be stored with that identity.
Failure-to-acquire rate: The percentage of transactions for which the system cannot obtain a usable biometric sample.
Failure-to-enroll rate: The percentage of a population that is unable to give repeatable biometric measures on any particular device.
False accept rate (FAR): The expected proportion of transactions with wrongful claims of identity (in a positive ID system) or nonidentity (in a negative ID system) that are incorrectly confirmed. In negative identification systems, the FAR may include the failure-to-acquire rate.
False match rate (FMR): The expected probability that an acquired sample will be falsely declared to match a single, randomly selected, nonself template or model.
False non-match rate (FNMR): The expected probability that an acquired sample will be falsely declared not to match a template or model of that measure from the same user.
False reject rate (FRR): The expected proportion of transactions with truthful claims of identity (in a positive ID system) or nonidentity (in a negative ID system) that are incorrectly denied. In positive identification systems, the FRR will include the failure-to-enroll and the failure-to-acquire rates.
Features: A mathematical representation of the information extracted from the presented sample by the signal processing subsystem that will be used to construct or compare against enrollment templates (e.g., minutiae coordinates, principal component coefficients, and iris codes are features).
Genuine claim of identity: A truthful positive claim by a user about identity in the system. The user truthfully claims to be him- or herself, leading to a comparison of a sample with a truly matching template.
Impostor claim of identity: A false positive claim by a user about identity in the system. The user falsely claims to be someone else enrolled in the system, leading to the comparison of a sample with a nonmatching template.
Identifier: An identity pointer, such as a biometric measure, PIN (personal identification number), or name.
Identify: To connect a person to an identifier in the database.
Identity: An information record about a person, perhaps including attributes or authorizations or other pointers, such as names or identifying numbers.
Identification: The process of identifying, perhaps requiring a search of the entire database of identifiers; consequently, the process of connecting a person to a pointer to an information record.
Matching score: A measure of similarity or dissimilarity between a presented sample and a stored template.
Models: Mathematical representations of the generating process for biometric measures.
Negative claim of identity: The claim (either implicitly or explicitly) of a user not to be known to or enrolled in the system. Enrollment in social service systems open only to those not already enrolled is an example.
Positive claim of identity: The claim (either explicitly or implicitly) of a user to be enrolled in or known to the system. An explicit claim might be accompanied by a claimed identifier in the form of a name or identification number. Common access control systems are an example.
Sample: A biometric signal presented by the user and captured by the data collection subsystem (e.g., fingerprint, face samples, and iris images are samples).
Template: A user’s stored reference measure based on features extracted from samples.
Transaction: An attempt by a user to validate a claim of identity or nonidentity by consecutively submitting one or more samples, as allowed by the system policy.
Verification: Proving as truthful a user’s positive claim to an identity in the system by comparing a submitted biometric sample to a template or model stored at enrollment.
REFERENCES Alderman, E., & Kennedy, C. (1995). The Right to Privacy. New York: Vintage. American Association of Motor Vehicle Administrators (2000). National standard for driver’s license/identification card (AAMVA 2000-06-30). Retrieved March 15, 2003, from http://www.aamva.org/Documents/stdAAM VADLIDStandrd000630.pdf American National Standards Institute (2001). Biometric information management and security, X9.84– 2001.
Atal, B. S. (1976). Automatic recognition of speakers from their voices. Proceedings of the IEEE, 64, 460– 487. Baker, L. (2000). Persons and bodies: A constitution view. Cambridge, England: Cambridge University Press. Beavan, C. Fingerprints. New York: Hyperion. Beardsley, C. (1972, January). Is your computer insecure? IEEE Spectrum, 67–78. BioAPI Consortium (2001, March 16). BioAPI specifications, Version 1.1. Retrieved March 15, 2003, from www.bioapi.org Blackburn, D., Bone, M., Grother, P., & Phillips, J. (2001, February). Facial recognition vendor test 2000: Evaluation report. U.S. Department of Defense. Retrieved March 15, 2003, from http://www.frvt.org/DLs/ FRVT 2000.pdf Bouchier, F., Ahrens, J., & Wells, G. (1996, April). Laboratory evaluation of the IriScan prototype biometric identifier. Sandia National Laboratories, Report SAND96–1033. Retrieved March 15, 2003, from http://infoserve.library.sandia.gov/sand doc/1996/ 961033.pdf Chang, S. H., Pihl, G. E., & Essignmann, M. W. (1951, February). Representations of speech sounds and some of their statistical properties. Proceedings of the IRE, 147–153. Cole, S. Suspect identities. Cambridge, MA: Harvard University Press. Cox, R. (1997). Three new speech coders for the ITU cover a range of applications. Communications, 35(9), 40– 47. Criminal Justice Information Services (1999, January 29). Appendix G: Interim IAFIS image quality specifications for scanners. In Electronic Fingerprint Transmission Specification, CJIS-RS-0010(v7). Retrieved March 15, 2003, from http://www.fbi.gov/hq/cjisd/ iafis/efts70/cover.htm Daugman, J. (1993). High confidence visual recognition of persons by a test of statistical independence. Transactions on Pattern Analysis and Machine Intelligence, 15, 1148–1161. Faulds, H. (1880). On the skin furrows of the hand. Nature, 22, 605. Galton, F. (1888). On personal identification and description. Nature. 38, 173–177, 201–202. Galton, F. (1890). Kinship and correlation. North American Review, 150, 419– 431. Galton, F. (1908). Memories of my life. London: Methuen. Griswald v. Connecticut, 381 U.S. 479 (1965). Human Authentication–Application Programming Interface Steering Group (1998, June 30). Meeting notes. Retrieved June 1, 2002, from www.biometrics.org/ REPORTS/HAAPI20/sg2c95.zip Herschel, W. J. (1880). Skin furrows of the hand. Nature, 23, 76. Information technology—Digital compression and coding of continuous-tone still images: Requirements and guidelines (1993). CCITT recommendation T.81, ISO/IEC—10918. Retrieved March 15, 2003, from http://www.w3.org/Graphics/JPEG/itu-t81.pdf Kanade, T. (1977). Computer recognition of human faces. Stuttgart: Birkhauser.
King, S., Harrelson, H., & Tran, G. (2002, February 15). Testing iris and face recognition in a personnel identification application. Biometric Consortium. Retrieved March 15, 2003, from www.itl.nist.gov/div895/isis/ bc2001/FINAL BCFEB02/FINAL 1 Final%20Steve%20 King.pdf Levin, G. (2002, February). Real world, most demanding biometric system usage. Proceedings of the Biometrics Consortium, 2001/02. Retrieved March 15, 2003, from http://www.itl.nist.gov/div895/isis/bc2001/FINAL BCFEB02 / FINAL 4 Final % 20Gordon % 20Levin % 20 Brief.pdf Locke, J. (1690). An essay concerning human understanding (book 2, chapter 27). Retrieved March 15, 2003, from http://www.ilt.columbia.edu/publications/locke understanding.html Maio, D., Maltoni, D., Wayman, J., & Jain, A. (2000, September). FVC2000: Fingerprint verification competition 2000. Proceedings of the 15th International Conference on Pattern Recognition, Barcelona. Retrieved March 15, 2003, from http://www.csr.unibo.it/ fvc2000/download.asp Mansfield, A., Kelly, G., Chandler, D., & Kane, J. (2001, March). Biometric product testing final report. National Physical Laboratory report for Communications Electronic Security Group and the Biometrics Working Group. Retrieved March 15, 2003, from http://www.cesg.gov.uk/technology/biometrics/media/ Biometric Test Report pt1.pdf Mansfield, M. J., & Wayman, J. L. (2002, February). Best practices for testing and reporting biometric device performance. Issue 2.0, U.K. Biometrics Working Group. Retrieved March 15, 2003, from http:// www.cesg.gov.uk/technology/biometrics / media / Best Practice.pdf Matsumoto, T., Matsumoto, H., Yamada, K., & Hoshino, S. (2002, January). Impact of artificial “gummy” fingers on fingerprint systems. Proceedings of SPIE– The International Society of Optical Engineering, 4677. Retrieved March 15, 2003, from http://cryptome.org/ gummy.htm Miller, B. L. (1989). Everything you need to know about automated biometric identification. Biometric Industry Directory. Washington, DC: Warfel and Miller. National Bureau of Standards (1977). Guidelines on evaluation of techniques for automated personal identification. Federal Information Processing Standard Publication 48. Washington, DC: National Bureau of Standards. National Institute of Standards and Technology (2003). The NIST year 2003 speaker recognition evaluation plan. Retrieved March 15, 2003, from http://www.nist. gov/speech/tests/spk/2003/doc/2003-spkrecevalplan-v2. 2.pdf National Institute of Standards and Technology Image Group (1993). Best practice recommendations for capturing mugshots and facial images, version 2. Retrieved March 15, 2003, from http://www.itl.nist.gov/iad/ vip/face/bpr mug3.html National Institute of Standards and Technology (2001, January). The common biometric exchange file format, NISTIR 6529. Retrieved March 15, 2003, from
P1: IML/FFX
P2: IML/FFX
BiometricAuthrev
QC: IML/FFX
WL040/Bidgolio-Vol I
T1: IML
WL040-Sample.cls
June 16, 2003
13:28
Char Count= 0
REFERENCES
http://www.itl.nist.gov/div895/isis/bc/cbeff/CBEFF0103 01web.PDF Osborn, S. (1929). Questioned documents. Chicago: Nelson-Hall. Phillips, P. J., Martin, A., Wilson, C. L., & Przybocki, M. (2000). An introduction to evaluating biometric systems. Computer, 33, 56–63. Retrieved March 15, 2003, from http://www.frvt.org/DLs/FERET7.pdf Phillips, P. J., Grother, P., Micheals, R. J., Blackburn, D. M., Tabassi, E., & Bone, M. (2003). Facial recognition vendor test 2002, National Institute of Standards and Technology, NISTIR 6965, March 2003. Retrieved March 15, 2003, from http:/frvt.org Raphael, D. E. & Young, J. R. (1974). Automated personal identification. Menlo Park, CA: Stanford Research Institute. Rodriguez, J. R., Bouchier, F., & Ruehie, M. (1993). Performance evaluation of biometric identification devices. Albuquerque, NM: Sandia National Laboratory Report SAND93–1930. Rosenberg, A. (1976). Automatic speaker verification: A review. Proceedings of the IEEE, 64, 475–487. Seildarz, J. (1998, April 6). Letter to the editor. Philadelphia Inquirer. Simon, C., & Goldstein, I. (1935). A new scientific method of identification. New York State Journal of Medicine, 35, 901–906. Thalheim, L., Krissler, J., & Ziegler, P. (2002, May). Biometric access protection devices and their programs put to the test. C’T Magazine 11, 114. Retrieved from March 15, 2003, http://www.heise.de/ct/english/02/11/ 114 The right biometric for the right application. But when? Personal Identification News, 5, 6. Trauring, M. (1963a). On the automatic comparison of finger ridge patterns. Nature, 197, 938–940. Trauring, M. (1963b, April). Automatic comparison of finger ridge patterns (Report No. 190). Malibu, CA: Hughes Research Laboratories. (Original work published 1961.) Twain, M. (1990). Puddin’head Wilson and other tales: Those extraordinary twins and the man that corrupted Hadleyburg. Oxford: Oxford University Press. (Original work published 1894.) U.S. v. Dionisio, 410 U.S. 1 (1973).
83
van der Putte, T., & Keuning, J. (2000, September). Biometrical fingerprint recognition: Don’t get your fingers burned. Smart Card Research and Advanced Applications, IFIP TC8/WG.8., Fourth Working Group Conference on Smart Card Research and Advanced Applications, pp. 289–303. London: Kluwer Academic. Warren, S., & Brandeis, L. The right of privacy. (1890). Harvard Law Review 4. Retrieved March 15, 2003, from http://www.lawrence.edu/fac/boardmaw/Privacy brand warr2.html Wavelet scalar quantization (WSQ) gray-scale fingerprint image compression specification (1993, February 16). Criminal Justice Information Services, Federal Bureau of Investigation, IAFIS-IC-0110v2. Wayman, J. M. (1999). Technical testing and evaluation of biometric identification devices. In A. Jain, R. Bolle, & S. Pankanti, Biometrics: Personal security in networked society. Boston: Kluwer Acadenuc. Wayman, J. L. (2000a). Biometric identification technologies in election processes. In J. L. Wayman (Ed.), U.S. National Biometric Test Center collected works: 1997–2000. San Jose, CA: San Jose State University. Retrieved March 15, 2003, from http://www.engr. sjsu.edu/biometrrics/nbtccw.pdf Wayman, J. L. (2000b). Evaluation of the INSPASS hand geometry data. In J. L. Wayman (Ed.), U.S. National Biometric Test Center collected works: 1997–2000. San Jose, CA: San Jose State University. Retrieved March 15, 2003, from http://www.engr.sjsu.edu/biometrrics/ nbtccw.pdf Webster’s new world dictionary of the American language (1966). New York: World. Westin, A. (1967). Privacy and freedom. Boston: Atheneum. Whalen v. Roe, 429 U.S. 589 (1977). Woodward, J. (1999). Biometrics: Privacy’s foe or privacy’s friend? In A. Jain, R. Bolle, & S. Pankanti (Eds.), Biometrics: Personal security in networked society. Boston: Kluwer Academic. Woodward, J. D., Jr. (2000, June 9). Age verification technologies (testimony). Hearing before the Commission on Online Child Protection (COPA). Retrieved March 15, 2003, from http://www.copacommission. org/meetings/hearing1/woodward.test.pdf
Bluetooth™—A Wireless Personal Area Network
Brent A. Miller, IBM Corporation
Introduction
Bluetooth Wireless Technology
Origins
The Bluetooth Special Interest Group
Wireless Personal Area Networks
Bluetooth Applications
The Bluetooth Protocol Stack
Service Discovery Protocol (SDP)
Bluetooth Profiles
Bluetooth Operation
Related Technologies
Conclusion
Glossary
Cross References
References

INTRODUCTION
Launched in May 1998, Bluetooth™ wireless technology rapidly has become one of the most well-known means of communication in the information technology industry. The unusual name Bluetooth itself has garnered much attention (I discuss the origins of this name later), but the main reason for the focus that the technology receives from so many companies and individuals is the new capabilities that it brings to mobile computing and communication. This chapter discusses many facets of Bluetooth wireless technology—its origins, the associated Bluetooth Special Interest Group, its applications, especially in personal-area networks, how it works, and how it relates to other wireless technologies. I also present numerous references where more information can be found about this exciting new way to form wireless personal area networks (WPANs) that allow mobile devices to communicate with each other.

BLUETOOTH WIRELESS TECHNOLOGY
Bluetooth wireless technology uses radio frequency (RF) to accomplish wireless communication. It operates in the 2.4-GHz frequency spectrum; the use of this frequency range allows Bluetooth devices to be used virtually worldwide without requiring a license for operation. Bluetooth communication is intended to operate over short distances (up to approximately 100 m, although the nominal range used by most Bluetooth devices is about 10 m). Restricting communication to short ranges allows for low-power operation, so Bluetooth technology is particularly well suited for use with battery-powered personal devices that can be used to form a WPAN. Both voice and data can be carried over Bluetooth communication links, making the technology suitable for connecting both computing and communication devices, such as mobile phones, personal digital assistants (PDAs), pagers, and notebook computers. Table 1 summarizes these key attributes of Bluetooth wireless technology. Bluetooth wireless technology originally was designed for cable replacement applications, intended to remove the need for a cable between any two devices to allow them to communicate. For example, a cable might be used to
connect two computers to transfer files, to connect a PDA cradle to a computer to synchronize data, or to connect a headset to a telephone for hands-free voice calls. This sort of wired operation can often be cumbersome, because the cables used are frequently special-purpose wires intended to connect two specific devices; hence, they are likely to have special connectors that make them unsuitable for general-purpose use. This can lead to “cable clutter”—the need for many cables to interconnect various devices. Mobile users may find this especially burdensome because they need to carry their device cables with them to connect the devices when they are away from home, and even with a large collection of cables, it is unlikely that all of the devices can be plugged together. Nonmobile environments, too, can suffer from cable clutter. In a home or office, wires used to connect, say, computer peripherals or stereo speakers limit the placement of these items, and the cables themselves become obstacles. Bluetooth technology attempts to solve the problem of cable clutter by defining a standard communication mechanism that can allow many devices to communicate with each other without wires. The next section explores the genesis and evolution of Bluetooth wireless communication.
Table 1 Key Bluetooth Technology Characteristics
Medium: Radio frequency (RF) in the 2.4-GHz globally unlicensed spectrum
Range: Nominally 10 m; optionally up to 100 m
Power: Low-power operation, suitable for battery-powered portable devices
Packet types: Voice and data
Types of applications: Cable replacement, wireless personal-area networks
Example applications: Network access, wireless headsets, wireless data transfer, cordless telephony, retail and m-commerce, travel and mobility, and many other applications

Origins
The genesis of Bluetooth wireless technology generally is credited to Ericsson, where engineers were searching for a method to enable wireless headsets for mobile telephones. Realizing that such a short-range RF technology could have wider applications, and further realizing that its likelihood for success would be greater as an industry standard rather than a proprietary technology, Ericsson approached other major telephone, mobile computer, and electronics companies about forming an industry group to specify and standardize a general-purpose, short-range, low-power form of wireless communication. This small group became the Bluetooth Special Interest Group (SIG), discussed later. Of special interest to many is the name "Bluetooth." Such a name for an industry initiative is unusual. A two-part newsletter article (Kardach, 2001) offers a full explanation of the name's origin; the salient points follow here. Some of the first technologists involved in early
discussions about a short-range wireless technology were history buffs, and the discussion at some point turned to Scandinavian history. A key figure in Scandinavian history is 10th-century Danish king Harald Blåtand, who is credited with uniting parts of Scandinavia. It is said that a loose translation of his surname to English produces "blue tooth." Those involved in early discussions of this technology recognized that it could unite the telecommunications and information technology (IT) industries, and hence they referred to it as "Bluetooth," after King Harald. At that time, Bluetooth was considered a temporary "code name" for the project. When the time came to develop an official name for the technology and its associated special interest group, the name Bluetooth was chosen after considering several alternatives. Today this is the trademarked name of the technology and the incorporated entity (the Bluetooth SIG, discussed next) that manages it. In fact, the SIG publishes rules and guidelines (Bluetooth SIG, 2002a) for using the term.
The Bluetooth Special Interest Group Formed in early 1998 and announced in May of that year, the Bluetooth SIG originally was a rather loosely knit group of five companies: Ericsson, Intel, IBM, Nokia and Toshiba. These companies established themselves as Bluetooth SIG promoter members and formed the core of the SIG. Other companies were invited to join as adopter members, and the SIG’s membership grew rapidly. The promoter companies, along with a few invited experts, developed the original versions of the Bluetooth specification (detailed later). In December 1999, four additional companies—3Com, Lucent, Microsoft, and Motorola—were invited to join the group of promoter companies (later Lucent’s promoter membership was transferred to its spin-off company, Agere Systems). By this time, the SIG’s membership had grown to more than 2,000 companies. In addition to promoters and adopters, a third membership tier, called associate member, also was defined. Companies may apply to become associate members, who must pay membership fees. Adopter membership is free and open to anyone. In general, promoter and associate members develop and maintain the Bluetooth specification; adopter members may review specification updates before their public availability.
The SIG’s original purpose was to develop the Bluetooth specification, but it has taken on additional responsibilities over time. In 2001, the SIG incorporated and instituted a more formal structure for the organization, including a board of directors that oversees all operations, a technical organization led by the Bluetooth Architecture Review Board (BARB), a marketing arm and a legal group. The SIG continues to develop and maintain the specification and promote the technology, including sponsoring developers conferences and other events. One important function of the SIG is to manage the Bluetooth qualification program, in which products are tested for conformance to the specification. All Bluetooth products must undergo qualification testing. The SIG’s official Web site (Bluetooth SIG, 2002b) offers more details about the organization and the qualification program.
Wireless Personal Area Networks A personal area network (PAN) generally is considered to be a set of communicating devices that someone carries with her. A wireless PAN (WPAN), of course, is such a set of devices that communicate without cables, such as through the use of Bluetooth technology. One can imagine a “sphere” of connectivity that surrounds a person and moves with her or him, so that all of the devices in the WPAN remain in communication with one another. WPANs need only short-range communication capability to cover the personal area, in contrast with local area (LAN) or wide area networks (WAN), which need to communicate across greater distances using established infrastructure. One source (Miller, 2001) contrasts PANs, LANs, and WANs, particularly Bluetooth technology as a WPAN solution versus Institute of Electrical and Electronics Engineers (IEEE, 1999) 802.11 WLAN technology. The usefulness of a WPAN derives primarily from the ability of individual devices to communicate with each other in an ad hoc manner. Each device still can specialize in certain capabilities but can “borrow” the capabilities of other devices to accomplish certain tasks. For example, a PDA is useful for quickly accessing personal information, such as appointments and contacts. A mobile telephone can be used to contact people whose information is stored in the PDA. Hence, a user might look up the telephone number of an associate and then dial that number on the mobile phone. With a WPAN, however,
this process can be automated: Once the telephone number is accessed, the PDA software could include an option to dial the specified phone number automatically on the mobile telephone within the WPAN, using wireless communication links to transmit the dialing instructions to the phone. When combined with a wireless headset in the same WPAN, this could enable a more convenient device usage model for the user, who might never need to handle the mobile telephone at all (it could remain stored in a briefcase). Moreover, the user interface of the PDA is likely to be easier to use than a telephone keypad for retrieving contact information. This allows each of the devices (PDA, mobile phone, and wireless headset in this example) to be optimized to perform the specific tasks that they do best. The capabilities of each device are accessed from other devices via the WPAN. This often is preferred over an alternative usage model, the “all-in-one” device (imagine a PDA that also functions as a mobile telephone). Such multifunction devices might tend to be cumbersome and are more difficult to optimize for specific functions. Because it was developed primarily to replace cables that connect mobile devices, Bluetooth wireless communication is an ideal WPAN technology. Indeed, most of the popular usage scenarios for Bluetooth technology originate in a WPAN of some sort, connecting personal devices to each other or to other networks in proximity. The use of Bluetooth communication links in WPANs is illustrated next, in an examination of various Bluetooth applications.
Bluetooth Applications Because Bluetooth technology primarily is about replacing cables, many of its applications involve well-known usage scenarios. The value that Bluetooth communication adds to these types of applications derives from the ability to accomplish them without wires, enhancing mobility and convenience. For example, dial-up networking is a task commonly performed by many individuals, especially mobile professionals. One of the original usage scenarios used to illustrate the value of Bluetooth technology involves performing dial-up networking wirelessly—with the use of a mobile computer and a mobile phone, both equipped with Bluetooth communication, dial-up networking no longer is constrained by cables. This application and others are detailed next.
Basic Cable Replacement Applications These applications comprise the original set of usage models envisioned for Bluetooth wireless technology. When the SIG was formed and the technology began to be popularized, these applications were touted as the most common ways in which Bluetooth wireless communication would be used. Although many other, perhaps more sophisticated, applications for Bluetooth technology have been discovered and envisioned, these original usage models remain as the primary application set. Nearly all of these early applications involve a mobile computer or a mobile telephone, and for the most part, they involve performing typical existing tasks wirelessly. One such application, already mentioned, is dial-up networking. In this application, a Bluetooth communication link replaces the wire (typically a serial cable)
between a computer and a telephone. When the telephone also is a mobile device, the network connection can be entirely wireless; a Bluetooth wireless link exists between the computer and the telephone, and a wide-area communications link (using typical cellular technology, such as the global system for mobile communication [GSM], time division multiple access [TDMA], or others) carries the network traffic, using the mobile telephone as a wireless modem. Figure 1 depicts this usage model.
Figure 1: Dial-up networking illustration (a Bluetooth link connects the computer to the mobile phone; a cellular link connects the phone to the Internet).
A variant of this application uses direct (rather than dial-up) connection to a network such as the Internet. In this case, the Bluetooth link allows a computer to connect to a network such as a LAN without using a cable. Together, these two applications (wireless dial-up networking and wireless LAN access) form a usage model that the SIG calls the Internet Bridge. Both applications involve access to a network, using existing protocols, with the main benefit being the ability to access the network without the cables that typically are required to connect the network client computer.
Another type of cable replacement application involves data transfer from one device to another. One of the most common such usage models is transferring files from one computer to another. This can be accomplished with removable media (diskettes, CDs), with cables (over a network or via a direct connection) or wirelessly (using infrared or Bluetooth communication, to name two ways). Infrared file transfer is not uncommon, but it requires the two devices to have a line of sight between them. Bluetooth file transfer operates similarly to that of IrDA infrared file transfer (in fact, the Bluetooth protocol stack, discussed later, is designed such that the same application can be used over either transport medium). Bluetooth communication, being RF-based, does not require a line of sight between the two devices, however. Moreover, through the use of standard data formats, such as vCard (Internet Mail Consortium, 1996a), vCal (Internet Mail Consortium, 1996b), and others, objects other than files can be exchanged between devices using Bluetooth links in a manner similar to that used with IrDA. So, for example, electronic business cards, calendar appointments, and contact information can be shared wirelessly among devices. Building on this capability to exchange data objects is the application that allows these same objects to be synchronized. This means that data sets on two devices reflect the same information at the point in time when they are synchronized. Hence, in addition to simply sending a copy of contact information or a calendar appointment from
one device to another, the full address book or calendar can be synchronized between the two devices so that they have the same set of contacts or appointments. This allows a user to enter information on any convenient device and then have that information reflected on other devices by synchronizing with those devices. In addition to the benefit of performing these tasks wirelessly, by using standard protocols and data formats information can be exchanged easily among many kinds of devices. Specialized cables to connect two computers, or custom cradles to connect a PDA to a computer, are not needed once Bluetooth technology enters the picture. Instead, the same data can be exchanged and synchronized to and from notebook computers, PDAs, mobile phones, pagers, and other devices. This illustrates a hallmark of the value of Bluetooth technology: a single standard wireless link can replace many cables of various types, allowing devices that otherwise might not be able to be connected to communicate easily. Another data transfer application is related to those just described, but it has a distinguished usage model because of the kind of data it transfers, namely, image data. The SIG calls this usage model the instant postcard, and it involves transferring pictures from one device to another. One reason that this application is separately described is because it involves the use of a digital camera. Today, when a camera captures new images, they typically are loaded onto a computer of some sort (or perhaps a television or similar video device) to be displayed. Through the use of Bluetooth wireless technology, this image transfer can be accomplished more easily, but once again, this standard form of wireless link enables the same data to be transferred to other types of devices. For example, rather than uploading photos to a computer, the photos might be transferred to a mobile phone. Even if the phone’s display is not suitable for viewing the photo, it still could be e-mailed to someone who could view it on his computer or other e-mail device. Until now this chapter has focused on applications involving data, but Bluetooth wireless technology also is designed to transport voice traffic (audio packets), and some of the cable replacement applications take advantage of this fact. The most notable of these is the wireless headset. Cabled headsets that connect to a mobile phone are widely used today to allow hands-free conversations. Bluetooth technology removes the cable from the headset to the telephone handset, enabling wireless operation that can allow the phone to be stowed away in a briefcase, pocket, or purse. In fact, as noted earlier, this particular application was the basis for the invention of Bluetooth technology. As with the previously discussed applications, however, once a standard wireless link is established, additional ways to connect other kinds of devices present themselves. For example, the same Bluetooth headset used with a mobile phone might also be used with a stationary telephone (again to allow hands-free operation and increased mobility), as well as with a computer (to carry audio traffic to and from the computer). Furthermore, although Bluetooth wireless communication was not originally designed to carry more complex audio traffic (such as digital music), advances are being made that will allow it to do so. With this capability, the same wireless headset also could be used with home entertainment systems, car audio
systems, and personal music players. Hence, the Bluetooth SIG dubs this usage model the ultimate headset. A variation on the wireless headset usage model is what the SIG calls the speaking laptop. In this application, Bluetooth links carry audio data in the same manner as for the headset application, but in this case the audio data is routed between a telephone and a notebook computer’s speaker and microphone, rather than to a headset. One usage scenario enabled with this application is that of using the notebook computer as a speakerphone: A call made to or from a mobile telephone can be transformed into a conference call (“put on the speaker”) by using the speaker and microphone built into nearly all portable (and desktop) computers. Cordless telephony is another application for Bluetooth technology. With a Bluetooth voice access point, or cordless telephone base station, a standard cellular mobile telephone also can be used as a cordless phone in a home or office. The Bluetooth link carries the voice traffic from the handset to the base station, with the call then being carried over the normal wired telephone network. This allows mobile calls to be made without incurring cellular usage charges. In addition, two handsets can function as walkie-talkies, or an intercom system, using direct Bluetooth links between them, allowing two parties to carry on voice conversations in a home, office, or public space without any telephone network at all. Because a single mobile telephone can be used as a standard cellular phone, a cordless phone, and an intercom, the SIG calls this cordless telephony application the three-in-one phone usage model.
Additional Applications Although Bluetooth wireless technology was developed especially for cable-replacement applications such as those just cited, many observers quickly realized that Bluetooth communication could be used in other ways, too. Here I describe a few of the many potential applications of Bluetooth technology, beginning with those that the SIG already is in the process of specifying. The Bluetooth SIG focused primarily on the cable replacement applications already discussed in the version 1.x specifications that it released. The SIG also is developing additional profiles (detailed later) for other types of Bluetooth applications. Among these are more robust personal area proximity networking, human interface devices, printing, local positioning, multimedia, and automotive applications. Bluetooth personal area networking takes advantage of the ability of Bluetooth devices to establish communication with each other based on their proximity to each other, so that ad hoc networks can be formed. The Bluetooth Network Encapsulation Protocol (BNEP) allows Ethernet packets to be transported over Bluetooth links, thus enabling many classic networking applications to operate in Bluetooth piconets (piconets are discussed at length in the section Bluetooth Operation). This capability extends the Bluetooth WPAN to encompass other devices. An example is the formation of an ad hoc Bluetooth network in a conference room with multiple meeting participants. Such a network could facilitate collaborative applications such as white-boarding, instant messaging, and group scheduling. Such applications could allow group editing
of documents and scheduling of follow-up meetings, all in real time. Nonetheless, it should be noted that although this scenario resembles classic intranet- or Internet-style networking in some respects, Bluetooth personal area networking is not as robust a solution for true networking solutions as is a WLAN technology, such as IEEE 802.11 (described in the section Related Technologies). Replacing desktop computer cables with Bluetooth communication links fundamentally is a cable replacement application (dubbed the cordless computer by the SIG), and this was one of the originally envisioned Bluetooth usage scenarios, but the original specifications did not fully address it. The new Human Interface Device (HID) specification describes how Bluetooth technology can be used in wireless computer peripherals such as keyboards, mice, joysticks, and so on. The Bluetooth printing profiles specify methods for wireless printing using Bluetooth communication, including "walk up and print" scenarios that allow immediate printing from any device, including mobile telephones and PDAs, as well as notebook computers, to any usable printer in the vicinity. This application of Bluetooth technology can obviate the need for specialized network print servers and their associated configuration and administration tasks. Another application that can be realized with Bluetooth wireless technology is local positioning. Bluetooth technology can be used to augment other technologies, such as global positioning systems (GPS), especially inside buildings, where other technologies might not work well. Using two or more Bluetooth radios, local position information can be obtained in several ways. If one Bluetooth device is stationary (say, a kiosk), it could supply its position information to other devices within range. Any device that knows its own position can provide this information to other Bluetooth devices so that they can learn their current position. Sophisticated applications might even use signal strength information to derive more granular position information. Once position information is known, it could be used with other applications, such as maps of the area, directions to target locations, or perhaps even locating lost devices. The Bluetooth Local Positioning Profile specifies standard data formats and interchange methods for local positioning information. Multimedia applications have become standard on most desktop and notebook computers, and the Bluetooth SIG is pursuing ways by which streamed multimedia data, such as sound and motion video, could be used in Bluetooth environments. The 1 Mbps raw data rate of version 1 Bluetooth radios is not sufficient for many sorts of multimedia traffic, but the SIG is investigating methods for faster Bluetooth radios that could handle multimedia applications. The SIG also is developing profiles for multimedia data over Bluetooth links. Another emerging application area is that of automotive Bluetooth applications. Using Bluetooth communication, wireless networks could be formed in cars. Devices from the WPAN could join the automobile's built-in Bluetooth network to accomplish scenarios such as the following:
- Obtaining e-mail and other messages, using a mobile phone as a WAN access device, and transferring those messages to the car's Bluetooth network, where they might be read over the car's audio system using text-to-speech technology (and perhaps even composing responses using voice recognition technology)
- Obtaining vehicle information remotely, perhaps for informational purposes (for example, querying the car's current mileage from the office or home) or for diagnostic purposes (for example, a wireless engine diagnostic system for automobile mechanics that does not require probes and cables to be connected to the engine)
- Sending alerts and reminders from the car to a WPAN when service or maintenance is required (for example, e-mail reminders that an oil change is due or in-vehicle or remote alerts when tire pressure is low or other problems are diagnosed by the vehicle's diagnostic systems)
The SIG, in conjunction with automotive industry representatives, is developing profiles for Bluetooth automotive applications. This area is likely to prove to be an exciting and rapidly growing domain for the use of Bluetooth wireless technology. These new applications comprise only some of the potential uses for Bluetooth wireless technology. Many other domains are being explored or will be invented in the future. Other noteworthy applications for Bluetooth wireless communications include mobile e-commerce, medical, and travel technologies. Bluetooth devices such as mobile phones or PDAs might be used to purchase items in stores or from vending machines; wireless biometrics and even Bluetooth drug dispensers might appear in the future (the 2.4-GHz band in which Bluetooth operates is called the industrial, scientific, and medical band); and travelers could experience enhanced convenience by using Bluetooth devices for anytime, anywhere personal data access and airline and hotel automated check-in. In fact, this latter scenario, including the use of a Bluetooth PDA to unlock a hotel room door, has already been demonstrated (InnTechnology, 2000).
The Bluetooth Protocol Stack
A complete discussion of the Bluetooth protocol stack is outside the scope of this chapter. Numerous books, including Miller and Bisdikian (2000) and Bray and Sturman (2000), offer more in-depth discussions of Bluetooth protocols. Here I present an overview of Bluetooth operation and how the various protocols may be used to accomplish the applications already discussed. A typical Bluetooth stack is illustrated in Figure 2. Each layer of the stack is detailed next.
Figure 2: The Bluetooth protocol stack (host side: applications over OBEX, SDP, TCP/IP, RFCOMM, and TCS BIN, all above L2CAP; Bluetooth module: link manager, baseband controller, and radio; the optional HCI sits between host and module, with control and audio paths alongside the data path).

Radio, Baseband, and Link Manager
These three protocol layers comprise the Bluetooth module. Typically, this module is an electronics package containing hardware and firmware. Today many manufacturers supply Bluetooth modules. The radio consists of the signal processing electronics for a transmitter and receiver (transceiver) to allow RF communication over an air-interface between two Bluetooth devices. As noted earlier, the radio operates in the 2.4-GHz spectrum, specifically in the frequency range 2.400–2.4835 GHz. This frequency range is divided into 79 channels (along with upper and lower guard bands), with each channel having a 1-MHz separation from its neighbors. Frequency hopping is employed in Bluetooth wireless communication; each packet is transmitted on a different channel, with the channels being selected pseudo-randomly, based on the clock of the master device (master and slave devices are described in more detail later). The receiving device knows the frequency hopping pattern and follows the pattern of the transmitting device, hopping to the next channel in the pattern to receive the transmitted packets.
The Bluetooth specification defines three classes of radios, based on their maximum power output:
- 1 mW (0 dBm)
- 2.5 mW (4 dBm)
- 100 mW (20 dBm)
Increased transmission power offers a corresponding increase in radio range; the nominal range for the 0-dBm radio is 10 m, whereas the nominal range for the 20-dBm radio is 100 m. Of course, increased transmission power also requires a corresponding increase in the energy necessary to power the system, so higher power radios will draw more battery power. The basic cable replacement applications (indeed, most Bluetooth usage scenarios described here) envision the 0-dBm radio, which is considered the standard Bluetooth radio and is the most prevalent in devices. The 0-dBm radio is sufficient for most applications, and its low power consumption makes it suitable for use on small, portable devices. Transmitter and receiver characteristics such as interference, tolerance, sensitivity, modulation, and spurious emissions are outside the scope of this chapter but are detailed in the Bluetooth specification (Bluetooth SIG, 2001a).
The baseband controller controls the radio and typically is implemented as firmware in the Bluetooth module. The controller is responsible for all of the various timing and raw data handling aspects associated with RF communication, including the frequency hopping just mentioned, management of the time slots used for transmitting and receiving packets, generating air-interface packets (and causing the radio to transmit them), and parsing air-interface packets (when they are received by the radio). Packet generation and reception involve many considerations, including the following:
- Generating and receiving packet payload
- Generating and receiving packet headers and trailers
- Dealing with the several packet formats defined for Bluetooth communication
- Error detection and correction
- Address generation and detection
- Data whitening (a process by which the actual data bits are rearranged so that the occurrence of zero and one bits in a data stream is randomized, helping to overcome DC bias)
- Data encryption and decryption
Not all of these operations are necessarily performed on every packet; there are various options available for whether or not a particular transformation is applied to the data, and in some cases (such as error detection and correction), there are several alternatives that may be employed by the baseband firmware. The link manager, as its name implies, manages the link layer between two Bluetooth devices. Link managers in two devices communicate using the link manager protocol (LMP). LMP consists of a set of commands and responses to set up and manage a baseband link between two devices. A link manager on one device communicates with a link manager on another device (indeed, this is generally the case for all the Bluetooth protocols described here; a particular protocol layer communicates with its corresponding layer in the other device, using its own defined protocol. Each protocol is passed to the next successively lower layer, where it is transformed to that layer’s protocol, until it reaches the baseband, where the baseband packets that encapsulate the higher layer packets are transmitted and received over the air interface). LMP setup commands include those for authenticating the link with the other device; setting up encryption, if desired, between the two devices; retrieving information about the device at the other end of the link, such as its name and timing parameters; and swapping the master and slave roles (detailed later) of the two devices. LMP management commands include those for controlling the transmission power; setting special power-saving modes, called hold, park, and sniff; and managing quality-of-service (QoS) parameters for the link. Because LMP messages deal with fundamental characteristics of the communication link between devices, they are handled in an expedited manner, at a higher priority than the normal data that is transmitted and received.
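To make the channelization described above concrete, the short Python sketch below lists the 79 RF channel center frequencies (2402 + k MHz for channel k = 0 through 78) and derives a channel index from a clock value. The hop function is an arbitrary illustration, not the hop selection algorithm defined in the baseband specification; it shows only that two devices sharing a clock can land on the same channel in each slot.

```python
# Illustrative sketch of Bluetooth RF channelization; the hop function below is a
# stand-in, not the hop selection algorithm defined in the baseband specification.

def channel_frequency_mhz(k: int) -> int:
    """Center frequency of RF channel k: f = 2402 + k MHz, for k = 0..78."""
    if not 0 <= k <= 78:
        raise ValueError("Bluetooth defines 79 RF channels, numbered 0 to 78")
    return 2402 + k

def toy_hop(clock: int) -> int:
    """Derive a channel index from the master's clock (arbitrary mixing constants)."""
    return (17 * clock + 5) % 79

if __name__ == "__main__":
    # Master and slave share the clock, so both compute the same channel each slot.
    for slot in range(5):
        k = toy_hop(slot)
        print(f"slot {slot}: channel {k}, {channel_frequency_mhz(k)} MHz")
```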
Control and Audio
The control and audio blocks in Figure 2 are not actual protocols. Instead, they represent means by which the
upper layers of the stack can access lower layers. The control functions can be characterized as methods for inter-protocol-layer communication. These could include requests and notifications from applications, end users, or protocol layers that require action by another protocol layer, such as setting desired QoS parameters, requests to enter or terminate power-saving modes, or requests to search for other Bluetooth devices or change the discoverability of the local device. Often, these take the form of a user-initiated action, via an application, that requires the link manager (and perhaps other layers) to take some action. The audio block in Figure 2 represents the typical path for audio (voice) traffic. Recall that Bluetooth wireless technology supports both data and voice. Data packets traverse through the L2CAP layer (described later), but voice packets typically are routed directly to the baseband, because audio traffic is isochronous and hence time critical. Audio traffic usually is associated with telephony applications, for which data traffic is used to set up and control the call and voice traffic serves as the content of the call. Audio data can be carried over Bluetooth links in two formats:
- Pulse code modulation (PCM), with either a-law or µ-law logarithmic compression
- Continuous variable slope delta (CVSD) modulation, which works well for audio data with relatively smooth continuity, usually the case for typical voice conversations
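The logarithmic compression mentioned for PCM can be illustrated with the continuous µ-law companding curve (with µ = 255). The sketch below shows only the shape of that curve; it is not the bit-exact 8-bit codec used by actual telephony hardware.

```python
import math

MU = 255.0  # mu-law companding parameter

def mu_law_compress(x: float) -> float:
    """Map a linear sample x in [-1.0, 1.0] onto the mu-law curve.
    Quiet samples are boosted and loud samples compressed, which is why
    logarithmic PCM preserves speech quality at only 8 bits per sample."""
    sign = 1.0 if x >= 0 else -1.0
    return sign * math.log1p(MU * abs(x)) / math.log1p(MU)

if __name__ == "__main__":
    for x in (0.01, 0.1, 0.5, 1.0):
        print(f"linear {x:+.2f} -> companded {mu_law_compress(x):+.3f}")
```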
Host-Controller Interface (HCI) The HCI is an optional interface between the two major components of the Bluetooth stack: the host and the controller. As shown in Figure 2 and described earlier, the radio, baseband controller, and link manager comprise the module, which is often, but not necessarily, implemented in a single electronics package. Such a module can be integrated easily into many devices, with the remaining layers of the stack residing on the main processor of the device (such as a notebook computer, mobile phone, or PDA). These remaining layers (described next) are referred to as the host portion of the stack. Figure 2 illustrates a typical “two-chip” solution in which the first “chip” is the Bluetooth module and the second “chip” is the processor in the device on which the host software executes. (The module itself might have multiple chips or electronic subsystems for the radio, the firmware processor, and other external logic.) In such a system, the HCI allows different Bluetooth modules to be interchanged in a device, because it defines a standard method for the host software to communicate with the controller firmware that resides on the module. So, at least in theory, one vendor’s Bluetooth module could be substituted for another, so long as both faithfully implement the HCI. Although this is not the only type of partitioning that can be used when implementing a Bluetooth system, the SIG felt it was common enough that a standard interface should be defined between the two major components of the system. A Bluetooth system could be implemented in an “all-in-one” single module, where the host and
controller reside together in the same physical package (often called a "single-chip" solution), although in this case, the HCI might still be used as an internal interface. When the two-chip solution is used, the physical layer for the HCI (that is, the physical connection between the host and the controller) could be one of several types. The Bluetooth specification defines three particular physical layers for the HCI:
- Universal serial bus (USB)
- Universal asynchronous receiver/transmitter (UART)
- RS-232 serial port
Other HCI transports could be implemented; the Bluetooth specification currently contains details and considerations for these three.
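The host-controller split can be made concrete with one well-known command. The sketch below frames HCI_Reset for the UART transport: a packet-type byte of 0x01, a little-endian 16-bit opcode formed from a 6-bit opcode group field (OGF) and a 10-bit opcode command field (OCF), and a one-byte parameter length. It is a minimal illustration of the framing, not a working host stack.

```python
import struct

def hci_command(ogf: int, ocf: int, params: bytes = b"") -> bytes:
    """Frame an HCI command for the UART (H4) transport.
    Layout: packet type 0x01, little-endian 16-bit opcode = (OGF << 10) | OCF,
    a one-byte parameter length, then the parameters themselves."""
    opcode = (ogf << 10) | ocf
    return bytes([0x01]) + struct.pack("<HB", opcode, len(params)) + params

# HCI_Reset: OGF 0x03 (controller and baseband commands), OCF 0x0003, no parameters.
print(hci_command(0x03, 0x0003).hex())  # prints 01030c00
```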
Logical Link Control and Adaptation Protocol (L2CAP) The Logical Link Control and Adaptation Protocol (L2CAP) layer serves as a “funnel” through which all data traffic flows. As discussed earlier, voice packets typically are routed directly to the baseband, whereas data packets flow to and from higher layers, such as applications, to the baseband via the L2CAP layer. The L2CAP layer offers an abstraction of lower layers to higher layer protocols. This allows the higher layers to operate using more natural data packet formats and protocols, without being concerned about how their data is transferred over the air-interface. For example, the Service Discovery Protocol layer (discussed next) defines its own data formats and protocol data units. At the SDP layer, only the service discovery protocol needs to be handled; the fact that SDP data must be separated into baseband packets for transmission and aggregated from baseband packets for reception is not a concern at the SDP layer, nor are any of the other operations that occur at the baseband (such as encryption, whitening, and so on). This is accomplished because the L2CAP layer performs operations on data packets. Among these operations are segmentation and reassembly, whereby the L2CAP layer breaks higher layer protocol data units into L2CAP packets, which in turn can be transformed into baseband packets; the L2CAP layer conversely can reassemble baseband packets into L2CAP packets that in turn can be transformed into the natural format of higher layers of the stack. An L2CAP layer in one Bluetooth stack communicates with another, corresponding L2CAP layer in another Bluetooth stack. Each L2CAP layer can have many channels. L2CAP channels identify data streams between the L2CAP layers in two Bluetooth devices. (L2CAP channels should not be confused with baseband channels used for frequency hopping. L2CAP channels are logical identifiers between two L2CAP layers.) An L2CAP channel often is associated with a particular upper layer of the stack, handling data traffic for that layer, although there need not be a one-to-one correspondence between channels and upper-layer protocols. An L2CAP layer might use the same protocol on multiple L2CAP channels. This illustrates another data operation of the L2CAP layer: protocol multiplexing. Through the use of multiple channels and a protocol identifier (called a protocol-specific
multiplexer, or PSM), L2CAP allows various protocols to be multiplexed (flow simultaneously) over the air-interface. The L2CAP layer sorts out which packets are destined for which upper layers of the stack.
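The framing and segmentation performed by L2CAP can be sketched as follows: an upper-layer payload is prefixed with the basic L2CAP header (a little-endian 16-bit payload length and a 16-bit channel identifier) and then split into pieces small enough to ride in baseband packets. The channel number and fragment size used below are chosen only for illustration.

```python
import struct
from typing import List

def l2cap_packet(cid: int, payload: bytes) -> bytes:
    """Prepend the basic L2CAP header: 16-bit payload length, then 16-bit channel ID."""
    return struct.pack("<HH", len(payload), cid) + payload

def segment(pdu: bytes, max_fragment: int = 27) -> List[bytes]:
    """Split an L2CAP PDU into baseband-sized fragments (27 bytes is illustrative)."""
    return [pdu[i:i + max_fragment] for i in range(0, len(pdu), max_fragment)]

pdu = l2cap_packet(cid=0x0040, payload=b"upper-layer protocol data unit " * 4)
fragments = segment(pdu)
print(f"{len(pdu)} bytes sent as {len(fragments)} fragments")

# Reassembly on the receiving side concatenates the fragments and uses the length
# field in the header to recognize when the complete PDU has arrived.
assert b"".join(fragments) == pdu
```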
Service Discovery Protocol (SDP) The SDP layer provides a means by which Bluetooth devices can learn, in an ad hoc manner, which services are offered by each device. Once a connection has been established, devices use the SDP to exchange information about services. An SDP client queries an SDP server to inquire about services that are available; the SDP server responds with information about services that it offers. Any Bluetooth device can be either an SDP client or an SDP server, acting in one role or the other at different times. SDP allows a device to inquire about specific services in which it is interested (called service searching) or to perform a general inquiry about any services that happen to be available (called service browsing). A device can perform an SDP service search to look for, say, printing services in the vicinity. Any devices that offer a printing service that matches the query can respond with a “handle” for the service; the client then uses that handle to perform additional queries to obtain more details about the service. Once a service is discovered using SDP, other protocols are used to access and invoke the service; one of the items that can be discovered using SDP is the set of protocols that are necessary to access and invoke the service. SDP is designed to be a lightweight discovery protocol that is optimized for the dynamic nature of Bluetooth piconets. SDP can coexist with other discovery and control protocols; for example, the Bluetooth SIG has published a specification for using the UPnP discovery and control technology over Bluetooth links.
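At the programming-interface level, the service search described above often reduces to a single call. The sketch below assumes the third-party PyBluez package (it is not part of the Bluetooth specification, and its function names and result keys may vary by version); it searches nearby devices for serial-port services and reports how each one can be reached.

```python
# Assumes the third-party PyBluez package (import name "bluetooth"); API may vary by version.
import bluetooth

# Ask nearby devices, via SDP, for services in the standard Serial Port service class.
services = bluetooth.find_service(uuid=bluetooth.SERIAL_PORT_CLASS)

for svc in services:
    # Each record tells a client how to reach the service: which device ("host"),
    # which protocol, and which RFCOMM server channel or L2CAP PSM ("port").
    print(svc.get("name"), svc.get("host"), svc.get("protocol"), svc.get("port"))
```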
RFCOMM As its name suggests, the RFCOMM layer defines a standard communications protocol, specifically one that emulates serial port communication (the “RF” designates radio frequency wireless communication; the “COMM” portion suggests a serial port, commonly called a COM port in the personal computer realm). RFCOMM emulates a serial cable connection and provides the abstraction of a serial port to higher layers in the stack. This is particularly valuable for Bluetooth cable replacement applications, because so many cable connections—modems, infrared ports, camera and mobile phone ports, printers, and others—use some form of a serial port to communicate. RFCOMM is based on the European Telecommunications Standards Institute (ETSI) TS07.10 protocol (ETSI, 1999), which defines a multiplexed serial communications channel. The Bluetooth specification adopts much of the TS07.10 protocol and adds some Bluetooth adaptation features. The presence of RFCOMM in the Bluetooth protocol stack is intended to facilitate the migration of existing wired serial communication applications to wireless Bluetooth links. By presenting higher layers of the stack with a virtual serial port, many existing applications that already use a serial port can be used in Bluetooth environments without any changes. Indeed, many of the cable replacement applications cited earlier, including
dial-up networking, LAN access, headset, and file and object exchange, use RFCOMM to communicate. Because RFCOMM is a multiplexed serial channel, many serial data streams can flow over it simultaneously; each separate serial data stream is identified with a server channel, in a manner somewhat analogous to the channels used with L2CAP.
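Because RFCOMM presents a virtual serial port, using a discovered service looks much like ordinary socket programming. The sketch below again assumes the third-party PyBluez package; the device address and server channel are placeholders that would normally come from an inquiry and an SDP query.

```python
# Assumes the third-party PyBluez package; the address and channel are placeholders.
import bluetooth

address = "00:11:22:33:44:55"  # peer device address, normally found via inquiry
channel = 1                    # RFCOMM server channel, normally found via SDP

sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
sock.connect((address, channel))
sock.send(b"data flows here exactly as it would over a serial cable\r\n")
sock.close()
```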
Telephony Control Specification-Binary (TCS-BIN) The TCS-BIN is a protocol used for advanced telephony operations. Many of the Bluetooth usage scenarios involve a mobile telephone, and some of these use the TCS-BIN protocol. TCS-BIN is adopted from the ITU-T Q.931 standard (International Telecommunication Union, 1998), and it includes functions for call control and managing wireless user groups. Typically, TCS-BIN is used to set up and manage voice calls; the voice traffic that is the content of the call is carried over audio packets as described earlier. Applications such as the three-in-one phone usage model use TCS-BIN to enable functions such as using a mobile phone as a cordless phone or an intercom. In these cases, TCS-BIN is used to recognize the mobile phone so that it can be added to a wireless user group that consists of all the cordless telephone handsets used with a cordless telephone base station. TCS-BIN also is used to set up and control calls between the handset and the base station (cordless telephony) or between two handsets (intercom). TCS-BIN offers several advanced telephony functions; devices that support TCS-BIN can obtain knowledge of and directly communicate with any other devices in the TCS-BIN wireless user group, essentially overcoming the master-slave relationship of the underlying Bluetooth piconet (detailed later). It should be noted that not all Bluetooth telephony applications require TCS-BIN; an alternative method for call control is the use of AT commands over the RFCOMM serial interface. This latter method is used for the headset, dial-up networking, and fax profiles.
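For the profiles that use AT commands rather than TCS-BIN, call control is plain text sent over the RFCOMM channel. A dial request, for example, is simply a Hayes-style command string such as the one below (the number is a placeholder); the trailing semicolon marks the request as a voice rather than data call.

```python
# Hayes/GSM-style AT dial command, sent as ordinary bytes over an RFCOMM channel.
dial_command = b"ATD+15550100;\r"  # placeholder number; the ';' requests a voice call
```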
Adopted Protocols Although several layers of the Bluetooth protocol stack were developed specifically to support Bluetooth wireless communication, other layers are adopted from existing industry standards. I already have noted that RFCOMM and TCS-BIN are based on existing specifications. In addition to these, protocols for file and object exchange and synchronization are adopted from the Infrared Data Association (IrDA), and Internet networking protocols are used in some applications. The IrDA’s object exchange (OBEX) protocol is used for the file and object transfer, object push, and synchronization usage models. OBEX originally was developed for infrared wireless communication, and it maps well to Bluetooth wireless communication. OBEX is a relatively lightweight protocol for data exchange, and several well-defined data types—including electronic business cards, e-mail, short messages, and calendar items— can be carried within the protocol. Hence, the Bluetooth SIG adopted OBEX for use in its data exchange scenarios; by doing so, existing infrared applications can be used over Bluetooth links, often with no application changes.
Figure 3: The Bluetooth profiles (the Generic Access Profile is the root; beneath it are the Service Discovery Profile, the Telephony (TCS-BIN) Profile with the Cordless Telephony and Intercom Profiles, and the Serial Port (RFCOMM) Profile with the Dial-Up Networking, LAN Access, FAX, and Headset Profiles and the Generic Object Exchange Profile, which in turn covers the Object Push, File/Object Transfer, and Synchronization Profiles).
Bluetooth Profiles I have presented an overview of the Bluetooth protocols, which are specified in Bluetooth SIG (2001a), the first volume of the Bluetooth specification. The Bluetooth SIG also publishes a second volume of the specification (Bluetooth SIG, 2001b), which defines the Bluetooth profiles. Profiles offer additional guidance to developers beyond the specification of the protocols. Essentially, a profile is a formalized usage case that describes how to use the protocols (including which protocols to use, which options are available, and so on) for a given application. Profiles were developed to foster interoperability; they provide a standard basis for all implementations, to increase the likelihood that implementations from different vendors will work together, so that end users can have confidence that Bluetooth devices will interoperate with each other. In addition to the profile specifications, the SIG offers other mechanisms intended to promote interoperability; among these are the Bluetooth Qualification Program (a definition of testing that a Bluetooth device must undergo) and unplugfests (informal sessions where many vendors can test their products with each other); detailed discussions of
these programs are outside the scope of this chapter, but more information is available on the Bluetooth Web site (Bluetooth SIG, 2002b). Our earlier discussion of Bluetooth applications presented several usage models for Bluetooth wireless communication. Many of these applications have associated profiles. For example, the dial-up networking profile defines implementation considerations for the dial-up networking application. Most of the applications cited here have associated profiles, and many new profiles are being developed and published by the SIG; the official Bluetooth Web site (Bluetooth SIG, 2002b) has a current list of available specifications. In addition, there are some fundamental profiles that describe basic Bluetooth operations that are necessary for most any application. The version 1.1 profiles are illustrated in Figure 3. This figure shows the relationship among the various profiles, illustrating how certain profiles are derived from (and build upon) others. The leaf nodes of the diagram consist of profiles that describe particular applications—file and object transfer, object push, synchronization, dial-up networking, fax, headset, LAN access, cordless telephony, and intercom. The telephony (TCS-BIN) profile includes elements that are common to cordless telephony and intercom applications; similarly, the generic object exchange profile describes the common elements for its children, and the serial port profile defines operations used by all applications that use the RFCOMM serial cable replacement protocol. Note that the generic object exchange profile derives from the serial port profile; this is because OBEX operates over RFCOMM in the Bluetooth protocol stack. The two remaining profiles describe fundamental operations for Bluetooth communication. The service discovery application profile describes how a service discovery application uses the service discovery protocol (described earlier). The generic access profile is common to all applications; it defines the basic operations that Bluetooth devices use to establish connections, including how devices become discoverable and connectable, security considerations for connections, and so on. The generic
The generic access profile also includes a common set of terminology used in other profiles; this is intended to reduce ambiguity in the specification. The generic access and service discovery application profiles are mandatory for all Bluetooth devices to implement, because they form the basis for interoperable devices. Other works, including Miller and Bisdikian (2000), delve more deeply into the Bluetooth profiles.
Bluetooth Operation

Having discussed WPANs, Bluetooth applications, protocols, and profiles, I now turn to some of the fundamental concepts of Bluetooth operation, illustrating an example flow for a Bluetooth connection. At the baseband layer, Bluetooth operates on a master-slave model. The master device is the one that initiates communication with one or more other devices; slaves are the devices that respond to the master's queries. In general, any Bluetooth device can operate as either a master or a slave at any given time. The master and slave roles are meaningful only at the baseband layer; upper layers are not concerned with these roles. The master establishes the frequency hopping pattern for communication with its slaves, using its internal clock values to generate the pattern, and slaves follow the frequency hopping pattern of the master(s) with which they communicate.

When a master establishes a connection with one or more slaves, a piconet is formed. To establish the connection, a master uses processes called inquiry and paging. A master can perform an inquiry operation, which transmits a well-defined data sequence across the full spectrum of frequency hopping channels; an inquiry effectively asks, "Are there any devices listening?" Devices that are in inquiry scan mode (a mode in which the device periodically listens to all of the channels for inquiries) can respond to the inquiry with enough information for the master to address the responding device directly. The inquiring (master) device may then choose to page the responding (slave) device. The page also is transmitted across the full spectrum of frequency hopping channels; the device that originally responded to the inquiry can enter a page scan state (a state in which it periodically listens to all of the channels for pages) and can respond to the page with additional information that is used to establish a baseband connection between the master and the slave. The master can repeat this process and establish connections with as many as seven slaves at a time. Hence, a piconet consists of one master and up to seven active slaves; additional slaves can be part of the piconet, but only seven can be active at one time. Slaves can be "parked" (made inactive) so that other slaves can be activated. Figure 4 illustrates a typical Bluetooth piconet. Note that a device could be a slave in more than one piconet at a time, or a master of one piconet and a slave in a second piconet. In these cases, the device participating in multiple piconets must use the appropriate frequency hopping pattern in each piconet, so it effectively must split its time among all the piconets in which it participates. The Bluetooth specification calls such interconnected piconets scatternets.
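As an illustration of the clock-driven hopping idea only, the following Python sketch shows how a master and a slave that both know the master's device address and clock value can compute the same channel index independently. This is a deliberately simplified toy model, not the hop-selection algorithm defined in the Bluetooth baseband specification, and the address shown is invented.

# Toy illustration of clock-driven frequency hopping (NOT the actual Bluetooth
# hop-selection algorithm). Master and slave both know the master's address and
# clock, so each side can compute the same next channel without exchanging it.

import hashlib

NUM_CHANNELS = 79  # classic Bluetooth hops over 79 channels in the 2.4-GHz band


def hop_channel(master_address: str, clock_tick: int) -> int:
    """Derive a channel index (0-78) from the master's address and clock tick."""
    seed = f"{master_address}:{clock_tick}".encode()
    return hashlib.sha256(seed).digest()[0] % NUM_CHANNELS


if __name__ == "__main__":
    master = "00:11:22:33:44:55"  # hypothetical master device address
    # Both ends of the link evaluate the same function for successive clock ticks.
    print([hop_channel(master, tick) for tick in range(10)])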
Figure 4: Example of a Bluetooth piconet.

Once a piconet is formed (a baseband connection exists between a master and one or more slave devices), higher layer connections can be formed, and link manager commands and responses may be used to manage the link. At some point, it is likely that an L2CAP connection will be formed for data packets (even if the main content for a link is voice traffic, an L2CAP data connection will be needed to set up and manage the voice links). A Bluetooth device can have one data (L2CAP) connection and up to three voice connections with any other Bluetooth device at a given time (recall, however, that L2CAP connections can be multiplexed, so many different data streams can flow over the single L2CAP connection). Once an L2CAP connection is established, additional higher layer connections can be established. If the devices are not already familiar with each other, it is likely that an SDP connection will be used to perform service discovery. RFCOMM and TCS-BIN connections also might be made, depending on the application. From here, applications can manage voice and data packets to accomplish their usage scenarios, which might be those defined by the Bluetooth profiles or other ways of using Bluetooth wireless communication to accomplish a given task. Additional details about fundamental Bluetooth operations and connection establishment, including the various types of packets, master-slave communication protocols, and timing considerations, are outside the scope of this chapter but are detailed in works such as Miller and Bisdikian (2000) and Bray and Sturman (2000).

An additional noteworthy aspect of Bluetooth operation is security. In the wireless world, security justifiably is a key concern for device manufacturers, device deployers, and end users. The Bluetooth specification includes security measures such as authentication and encryption. At the time that a link is established, Bluetooth devices may be required to authenticate themselves to each other; once a link has been established, the data traffic over that link may be required to be encrypted. The methods used for authentication and encryption are detailed in the specification, and the Bluetooth profiles discuss what security measures should be employed in various circumstances (for example, the file transfer profile requires that both authentication and encryption be supported, and it recommends that they be used). Applications are free to impose additional security restrictions beyond those provided in the specification; for example, applications might choose to expose data or services only to authorized users or to implement more robust user authentication schemes. Details of the operation of Bluetooth security features are outside the scope of this chapter but are detailed in works such as Miller and Bisdikian (2000) and Bray and Sturman (2000).
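To make the connection flow described above more concrete, here is a minimal sketch of inquiry, service discovery, and RFCOMM connection establishment as seen from an application. It assumes the third-party PyBluez library (imported as bluetooth) and a working Bluetooth adapter; the message sent and the choice of service are illustrative, and the baseband details (paging, piconet management, authentication, and encryption) are handled below this API.

# Sketch of the application-level connection flow, assuming the third-party
# PyBluez library is installed and a Bluetooth adapter is present.

import bluetooth

# Inquiry: discover nearby devices that are in inquiry scan (discoverable) mode.
nearby = bluetooth.discover_devices(duration=8, lookup_names=True)
for address, name in nearby:
    print(f"Found {name} at {address}")

if nearby:
    target_address = nearby[0][0]

    # Service discovery (SDP): ask the remote device what services it offers.
    services = bluetooth.find_service(address=target_address)
    rfcomm_services = [s for s in services if s.get("protocol") == "RFCOMM"]

    if rfcomm_services:
        # RFCOMM: open a serial-cable-replacement connection to the first
        # advertised RFCOMM channel and send a few bytes over it.
        channel = rfcomm_services[0]["port"]
        sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
        sock.connect((target_address, channel))
        sock.send(b"hello over Bluetooth\n")
        sock.close()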
Related Technologies

Other sources, including other chapters in this encyclopedia (see the "Cross References" section), discuss related wireless communication technologies in some depth. Here I briefly describe two particularly interesting related technologies, IrDA and IEEE 802.11 WLAN, and comment on the IEEE 802.15.1 WPAN standard.

IrDA technology (IrDA, 2002) uses infrared optical links, rather than RF links, to accomplish wireless communication between two devices. IrDA ports are common on many portable computing and communication devices and often are found on the same sorts of devices that are good candidates for using Bluetooth technology. IrDA and Bluetooth technologies share several aspects:

- Both are for short-range communication.
- Both use low-power transmission.
- Both are useful for cable-replacement applications.

Some differences between the two technologies are the following:

- IrDA typically uses less power than Bluetooth radios.
- IrDA data rates typically are greater than those of Bluetooth wireless communication.
- IrDA technology, because it uses optical communication, requires a "line of sight" between the two devices, whereas Bluetooth technology, because it uses RF communication, can penetrate many obstacles that might lie between two devices.
As already noted, many IrDA applications are well suited for use with Bluetooth links, and the Bluetooth SIG adopted the OBEX and IrMC protocols from the IrDA specification for use in the Bluetooth specification to foster the use of common applications for either technology. For certain applications in certain situations, one technology or the other might be the more suitable to use. A more detailed comparison of the two technologies can be found in Suvak (1999).

Many articles (e.g., Miller, 2001) have dealt with the relationship between IEEE 802.11 WLAN (IEEE, 1999) and Bluetooth WPAN technologies. Often, these two technologies are portrayed as being in competition with one another, with the proposition that one will "win" at the expense of the other. In fact, I believe, as many do, that these two technologies are complementary; they are optimized for different purposes and applications. IEEE 802.11 is for WLAN applications, whereas Bluetooth technology is aimed at WPAN applications. Similarities between the two technologies include the following:

- Both operate in the 2.4-GHz RF spectrum. (The IEEE 802.11 standard actually includes two variants, 802.11a and 802.11b; here I focus primarily on 802.11b. IEEE 802.11a operates in the 5-GHz spectrum.)
- Both use spread spectrum to communicate.
- Both can be used to access networks.

There are, however, distinct differences between the two technologies, including the following:

- IEEE 802.11 has a longer range (nominally 100 m) than the typical nominal range of 10 m for Bluetooth wireless communication.
- IEEE 802.11 permits a much faster data rate (about 11 Mbps for 802.11b, even faster for 802.11a) than does Bluetooth technology (about 1 Mbps).
- The topology, higher data rate, and longer range of IEEE 802.11 WLAN technology inevitably lead to higher power consumption on average, making it less suitable for small portable personal devices with limited battery power.
- IEEE 802.11 is intended for Ethernet-style networking applications in a LAN environment (essentially a typical LAN without the wires), whereas Bluetooth is optimized for WPAN applications, notably cable replacement.

Indeed, the IEEE distinguishes between WLAN and WPAN technologies and has a separate standard, IEEE 802.15.1 (IEEE, 2001), for WPAN applications. This IEEE 802.15.1 standard is based on the Bluetooth specification and essentially adopts a subset of the Bluetooth technology as the IEEE standard. Using Bluetooth wireless technology as the basis for the IEEE WPAN standard offers further evidence that Bluetooth and IEEE 802.11 technologies can complement each other, as does the fact that the Bluetooth SIG and the IEEE work together on certain issues, including pursuing methods by which the RF interference between the two technologies can be minimized.
CONCLUSION

This introduction to Bluetooth wireless technology has touched on what can be done with the technology (Bluetooth applications), how it began and how it now is managed (the Bluetooth SIG), how it works (Bluetooth protocols, profiles, and operation), and how it relates to some other wireless communication technologies. The chapter focused on the application of Bluetooth wireless communication as a WPAN technology for connecting personal portable devices, and I have presented several references where this topic is explored in greater detail. With tremendous industry backing, a design that works with both voice and data, the ability to replace cumbersome cables, and many new products being deployed on a regular basis, Bluetooth wireless technology is poised to become an important way for people to communicate for the foreseeable future. From its genesis as a method to provide a wireless headset for mobile phones, this technology named for a Danish king continues to spread across the planet.
GLOSSARY

Bluetooth wireless technology: Name given to a wireless communications technology used for short-range voice and data communication, especially for cable-replacement applications.

Bluetooth SIG: The Bluetooth Special Interest Group, an industry consortium that develops, promotes, and manages Bluetooth wireless technology, including the Bluetooth qualification program and the Bluetooth brand.

Frequency-hopping spread spectrum: A method of dividing packetized information across multiple channels of a frequency spectrum; it is used in Bluetooth wireless communication.

IEEE 802.11: A wireless local area network standard developed by the Institute of Electrical and Electronics Engineers that is considered complementary to Bluetooth wireless technology.

IEEE 802.15.1: A wireless personal area network standard developed by the Institute of Electrical and Electronics Engineers that is based on Bluetooth wireless technology.

Infrared Data Association (IrDA): An industry consortium that specifies the IrDA infrared communication protocols, some of which are used in the Bluetooth protocol stack.

Piconet: A Bluetooth wireless technology term for a set of interconnected devices with one master and up to seven active slave devices.

Profile: In Bluetooth wireless technology, a specification of standard methods to use when implementing a particular application, with a goal of fostering interoperability among applications and devices.

Radio frequency (RF): Used in the Bluetooth specification to describe the use of radio waves for physical layer communication.

Wireless personal area network (WPAN): A small set of interconnected devices used by one person.
CROSS REFERENCES

See Mobile Commerce; Mobile Devices and Protocols; Mobile Operating Systems and Applications; Propagation Characteristics of Wireless Channels; Radio Frequency and Wireless Communications; Wireless Application Protocol (WAP); Wireless Communications Applications; Wireless Internet.
REFERENCES

Bluetooth Special Interest Group (2001a). Specification of the Bluetooth system (Vol. 1). Retrieved December 10, 2002, from http://www.bluetooth.com/pdf/Bluetooth 11 Specifications Book.pdf

Bluetooth Special Interest Group (2001b). Specification of the Bluetooth system (Vol. 2). Retrieved December 10, 2002, from http://www.bluetooth.com/pdf/Bluetooth 11 Specifications Book.pdf

Bluetooth Special Interest Group (2002a). Trademark info. Retrieved December 10, 2002, from http://www.bluetooth.com/sig/trademark.use.asp

Bluetooth Special Interest Group (2002b). The official Bluetooth Web site. Retrieved December 10, 2002, from http://www.bluetooth.com

Bray, J., & Sturman, C. (2000). Bluetooth: Connect without cables. New York: Prentice Hall PTR. (Second edition published 2001.)

European Telecommunications Standards Institute (1999). Technical specification: Digital cellular telecommunications system (Phase 2+); Terminal equipment to mobile station (TE-MS) multiplexer protocol (GSM 07.10). Retrieved March 28, 2003, from http://www.etsi.org

Infrared Data Association (2002). IrDA SIR data specification (and related documents). Retrieved March 28, 2003, from http://www.irda.org/standards/specifications.asp

InnTechnology (2000). The Venetian Resort-Hotel-Casino & InnTechnology showcase Bluetooth hospitality services. Retrieved March 28, 2003, from http://www.inntechnology.com/bluetooth/bluetooth press release.html

Institute of Electrical and Electronics Engineers (1999). Wireless standards package (802.11). Retrieved March 28, 2003, from http://standards.ieee.org/getieee802

Institute of Electrical and Electronics Engineers (2001). IEEE 802.15 Working Group for WPANs. Retrieved March 28, 2003, from http://standards.ieee.org/getieee802

International Telecommunication Union (1998). Recommendation Q.931: ISDN user-network interface layer 3 specification for basic call control. Retrieved March 28, 2003, from http://www.itu.org

Internet Mail Consortium (1996a). vCard: The electronic business card exchange format. Retrieved March 28, 2003, from http://www.imc.org/pdi

Internet Mail Consortium (1996b). vCalendar: The electronic calendaring and scheduling exchange format. Retrieved March 28, 2003, from http://www.imc.org/pdi

Kardach, J. (2001). The naming of a technology. Incisor, 34 (10–12) and 37 (13–15).

Miller, B. (2001). The phony conflict: IEEE 802.11 and Bluetooth wireless technology. IBM DeveloperWorks. Retrieved March 28, 2003, from http://www-106.ibm.com/developerworks/library/wi-phone/?dwzone=wireless

Miller, B., & Bisdikian, C. (2000). Bluetooth revealed: The insider's guide to an open specification for global wireless communication. New York: Prentice Hall PTR. (Second edition published 2001.)

Suvak, D. (1999). IrDA and Bluetooth: A complementary comparison. Walnut Creek, CA: Infrared Data Association. Retrieved March 28, 2003, from http://www.irda.org/design/ESIIrDA Bluetoothpaper.doc
Business Plans for E-commerce Projects
Amy W. Ray, Bentley College
Introduction and Background
  Beginning of the Dot-com Gold Rush
  Changing Interests of Investing Firms
  Confounding Organizational Issues
  Organizing the Keys to Success
Consideration for any New Business Venture
  Identification of a Competitive Mission
  Key Employees
  Budgeting
  Security of Transactions
New Start-up Considerations
  Building Brand and Customer Loyalty
  Financing Options
Special Considerations for E-ventures of Brick-and-Mortar Firms
  Potential for Channel Conflict/Channel Complement
  Operational Management Issues
Special Considerations for New E-businesses
  Bandwidth
  Additional Security Issues for New E-businesses
Preparing the Business Plan
Appendix
Glossary
Cross References
References
INTRODUCTION AND BACKGROUND
Beginning of the Dot-com Gold Rush
Numerous excellent resources exist to help would-be entrepreneurs with the technical aspects of writing a business plan, many of which are referenced here. Many articles have also been written on financial management of e-businesses, but most of these are targeted at helping venture capitalists (VCs) rather than entrepreneurs. Would-be entrepreneurs should begin by taking one step back from the actual preparation of a written business plan to consider the types of funding available for start-up e-businesses. Deciding how best to fund a start-up company is the first important issue faced by entrepreneurs, yet the consequences of specific choices are often overlooked: the people who fund the company will invariably have a major impact on how the company is ultimately managed. Although there are numerous benefits to actually writing a business plan, most formal business plans are executed as a requirement for obtaining external funding, and the primary source of funding for e-businesses is venture capital. In fact, Venture Economics, a New York-based consulting firm, notes that about $240 billion has been poured into venture funds since the beginning of 1998 (Healy, 2002), a figure that does not include the significant contributions from VCs beyond the large, centrally managed venture funds. Accordingly, the benefits and challenges of using venture capital as a primary source of start-up funding are discussed here. In consulting engagements with numerous start-up companies, the number one problem this author sees is that entrepreneurs fail to recognize the differences between their motivations for starting businesses and the investors' motivations for funding them. Publicly available examples are used throughout this chapter to help explain the additional volatility that entrepreneurs face when using venture capital to fund their e-business start-up companies.
The mid-1990s marked the beginning of the dot-com gold rush, a period of time when investors eagerly supplied capital to any entrepreneur willing to brave the mysterious world of electronic commerce and build a business-to-consumer (B2C) company presence on the Internet. All would-be entrepreneurs needed were the words "online," "dot-com," or "electronic commerce" in their business plans to spark significant interest. Traditional concerns regarding the potential to generate revenues, typically measured by the existence of current sales, were temporarily suspended, as investors believed that the Internet held limitless potential and that existing business methods for sales held little predictive value for the success of an e-business. Essentially, as recently as 1993 the boundaries of this new frontier were completely unknown and every investment was a tremendous leap of faith. Yet many investors readily made this leap, believing that the payoff would be worth it.

Since 1993, we have learned a great deal about the Internet's role in business, including the fact that many of the keys to success for an e-business venture are the same as the keys to success for most traditional business ventures. "Just because you're part of the Net doesn't mean that the laws of economics have been repealed. You've got to have a real business," says Bill Reichart, the current president of Garage Technology Ventures (Aragon, 2000). The challenge is in finding the right balance between traditional and new business measures and methods. Many e-business ventures that were started in the mid-1990s continue to flourish, but many more ventures have come and gone. What we hear about most in the popular press are the spectacular failures of the dot-com companies, often dubbed "dot-bombs." In fact, whole Web sites have been dedicated to documenting these failures. Table 1 lists some of the failed-company documentation sites still up and running in September 2002.

Table 1: Listing of Dot-com Failure Web Sites
HOOVERS: http://www.hoovers.com/news/detail/0,2417,11 3583,00.html
ITWATCHDOG: http://www.itwatchdog.com/NewsFeeds/DotcomDoom.html
PLANETPINKSLIP: http://www.planetpinkslip.com/
DOTCOMSCOOP: http://www.dotcomscoop.com
WEBMERGERS: http://www.webmergers.com/

It is interesting to note that at one time many more documentation sites existed, but many of them ceased to exist as the economy slowed considerably during the third quarter of 2001 and continued to decline for several months afterward. Studying past failure is an important part of increasing the future likelihood of success, lest history repeat itself. Many articles have documented the explosion of e-business failures, with particular emphasis given to the myriad new but bad ideas that have been funded. However, that is not the entire story, or even the most interesting part of the story. In this chapter, a brief discussion of other major reasons for dot-com failures precedes the discussion of keys to business plan success. Specifically, this chapter focuses on two additional key factors in numerous dot-com failures. First, the rapidly changing interests of investing firms are discussed. That is followed by a discussion of failed frontrunners with new and good business ideas that simply lost their balance on the steep learning curve of new customer issues and/or business infrastructure requirements for e-business success.
Changing Interests of Investing Firms

Beginning in the mid-1990s, numerous investors eagerly sought electronic commerce ventures and were especially interested in funding B2C start-ups. The prospect of potentially reaching any consumer around the world without the cost of setting up physical shops seemed to hold unlimited profit potential. In fact, for a short period of time it seemed that the ultimate goal was to find and fund purely virtual companies, or companies completely independent of physical constraints. One of the first successful virtual companies is Tucows (http://www.tucows.com). Scott Swedorski, a young man who was very good at developing Winsock patches for Microsoft products, started Tucows in 1993. Mr. Swedorski developed a Web site and posted software patches as freeware. The Web site became so successful among software developers that advertisers rushed to place banner advertisements on it. It is interesting to note that not only is Tucows one of the first successful virtual companies, it is also one of very few companies that successfully used banner advertising as a sustainable business model. Indeed, Tucows is the exception, not the norm.

Mr. Swedorski recognized a unique market need and fulfilled it, using the Internet. In contrast, many new e-business start-ups have been driven more by a desire to identify new Internet markets than by an existing need. That is one valuable lesson from the Tucows example. Another lesson is that Tucows did not need venture capital funding, because the company grew as a result of meeting a need in a new way. There is much less risk all around if an entrepreneur with a good idea can start small and grow the business slowly. Entrepreneurs who want to start big usually require venture capital. Invariably, this means that entrepreneurs will relinquish control of company management. It also means that the overall financial stakes are higher. Two examples of start-up failures are briefly documented here to illustrate the related risks of starting big and using venture capital. Please note, however, that these examples are not intended as an indictment of VCs; rather, they call attention to challenges of using venture capital that are generally overlooked.

Pets.com started off as a good idea: one-stop online shopping for pet supplies, from food to beds to routine medications. Pets.com was started late in the 1990s, a time when VCs were pushing managers to establish brand recognition at all costs. During this period, it was common for VCs to give large sums of money to start-up managers and tell them to spend aggressively on advertising campaigns designed to establish significant brand recognition as quickly as possible. During 1998 and 1999, the spectacular successes of a few firms with this strategy further encouraged other firms to push the limits on advertising campaigns. For example, exorbitant spending by the start-ups monster.com and hotjobs.com during the Super Bowl in 1999 proved to be very successful. There was little reason to believe that it wouldn't work for other companies, such as Pets.com. Yet in the fiscal year ending September 30, 1999, Pets.com made $619,000 in revenues and spent $11,815,000 on marketing and sales. The company was arguably successful in establishing brand recognition, as most people today would still recognize the Pets.com sock puppet, but the cost was too high. In 2000, Pets.com went public, but despite its brand recognition, consumers did not come in as quickly as needed to build revenues. Consequently, the company closed its doors the same year it went public.

The role of VCs in the failure of Pets.com and similar companies during the late 1990s is often overlooked in the retelling of the stories but was, in fact, key to those failures. Instead of fostering the vision of a sustainable business model, many VCs pushed artificial company growth purely through venture spending, with the goal of taking the company public as quickly as possible. Once such companies went through their initial public offering and public investors started to buy, the original VCs often sold their shares. Thus, it was traditional investors, not the VCs, who were left holding the losses. Traditional investors have learned the hard way that revenues matter, even for young companies. At the same time, for would-be entrepreneurs interested in seeking venture capital, such stories provide important lessons: logistics and transportation issues probably would not have spun so far out of control if companies had started smaller, and brand recognition is not the only thing that matters.

It is important to keep in mind the focus of high-risk investors such as VCs: they are willing to make higher risk investments with the hope of fast and significant payoffs. Unfortunately, this translates into an environment where some of those investors are willing to sacrifice the long-term future of a start-up company for their own profitability. Accordingly, it is very important that entrepreneurs check the track records of different investors before deciding whom to approach for funding.

VCs also played a major role in the failure of companies that never went public, simply by shifting their interest to other types of start-ups before their previous start-up investments had stabilized. Streamline.com was an online grocery company with a business model superior to most of its competitors' in terms of provisions for customer value and service. One of its primary problems was that it underestimated logistics and transportation costs, and it also found that competing with large grocery stores for top-quality products from food production companies was difficult. By the time the company began to learn from these issues, the VCs had decided against further financing of the business, as their interests had shifted to other, newer types of emerging e-commerce companies.

The initial enthusiasm for B2C start-ups in the mid-1990s was replaced in the late 1990s with interest in the business-to-business (B2B) space. Investors began to realize that the profit potential in the B2B space was much greater because of larger transaction size and volume and greater customer reliability (businesses instead of individuals). Soon after B2B business plans became the favored choice for VCs, investors began to realize that a lot of work on Internet infrastructure was needed in order to truly exploit the Internet's potential. Thus, companies building software applications and other support mechanisms for strengthening the Internet's infrastructure replaced B2B investments as the favored business plans to invest in. Within the span of 5 years or so, the interests of investors moved from B2C, to B2B, to Internet infrastructure business plans. This was taking place at the same time entrepreneurs were receiving large sums of money to build artificial growth. The result was that a number of companies with sound business plans grew too quickly to be sustainable on their own revenues, yet venture backing stopped as VCs moved on to what they deemed more exciting categories of business plans. Streamline.com is but one example of a B2C company that started to show signs of progress just as interest in financing shifted from B2C to what were considered potentially more lucrative ventures.

The moral of the story is that entrepreneurs looking for venture backing need to understand trends in business as well as what investments are favored by venture firms. Entrepreneurs willing to start smaller and grow more slowly can maintain control of the company's management and can move along the learning curve at a more reasonable pace. On the other hand, entrepreneurs with clear ideas for fulfilling significant and known market needs with electronic commerce businesses should genuinely consider venture capital as a funding option.
Confounding Organizational Issues

In the previous section, Streamline.com was mentioned as a firm that could not attract additional rounds of funding while it was still learning how to manage its operations. Many online stores, such as Streamline.com and Amazon.com, started off believing they would compete directly with physical stores. Indeed, during the mid-1990s there was much discussion regarding anticipated changes in profits of physical stores and anticipated changes in consumer behavior. In the end, it became clear that most consumers were primarily interested in online shopping as an additional convenience rather than as an exclusive alternative to shopping in physical stores. This meant that online stores did not grow their market shares as quickly as they had hoped or anticipated. Online stores also underestimated their own dependence on companies with a physical presence, such as warehouses and distribution centers. Amazon.com is a perfect example of a company that ultimately was reorganized when managers recognized they needed the physical world more than they originally imagined (Vigoroso, 2002).

Brick-and-mortar companies struggled less with their e-business investments, as they already had the infrastructure support they needed and a clearer understanding of their customer base. In fact, many brick-and-mortar companies learned very quickly how to adopt best practices from the efforts of virtual companies and had the capital to implement their ideas with much less trouble than the virtual firms. Yet it is also interesting to note that many of these brick-and-mortar ventures into e-business lacked the forethought that could have resulted in truly innovative business practices. For example, Barnes and Noble developed its Web presence completely separately from its physical presence: Barnesandnoble.com had a completely different staff and completely different databases. Once the managers of the brick-and-mortar Barnes and Noble felt that the market had stabilized and that the threat of market share loss to such companies as Amazon.com had dissipated, they decided to keep some shares in barnesandnoble.com but sell off the rest of the virtual Barnes and Noble. Imagine, however, what innovations might have developed if, for example, the online databases had been fully integrated with the databases of sales from physical inventories: a much more complete understanding of customer spending habits on books could have been obtained.
Organizing the Keys to Success

Many of the keys to success for e-business ventures are simply the keys to success for any new business. The business world has learned a great deal by doing postmortem analyses of failed e-business start-ups. We have learned that customers want Web options as opposed to Web exclusivity in their lives. We have relearned that revenues count, that relationships with suppliers are important, and that a host of other traditional business values are still relevant. Many core considerations for building a successful company did not change as a result of the high-tech investment boom and have not changed since the investment bust. Yet there are also some unique issues to tackle for e-business start-ups. The next four sections of this chapter provide a layered view of methods for building a successful e-business plan, starting with the most general success considerations and building to the most specific issues. In particular, keys to success are analyzed as follows: considerations for any new business venture, considerations for any new company, special concerns when planning a new e-business venture for an existing brick-and-mortar firm, and special considerations when starting a new business that is primarily an e-business.
CONSIDERATION FOR ANY NEW BUSINESS VENTURE

A business venture may be anything from an investment in a major new project by an existing company to the start-up of a new company. Although new and established companies have some unique challenges and opportunities, there are also numerous issues common to both types of companies. Regardless of the size and sophistication of any given company, it is imperative to begin with three basic questions: what, who, and how much. More specifically, what is the venture goal or strategy, and how is the proposed business plan different from competitors' plans? Who will be designated as the key leaders for the venture? And approximately how much capital is needed to start the project or company? This section takes a closer look at each of these issues.
Identification of a Competitive Mission

The first thing angel investors or VCs will look at in a business plan is the executive summary. Specifically, they will be looking to see how the mission of a company is different from others they have seen. Investors will be looking for evidence that the company is proposing products or services that are in demand and that the organizational leaders in some way demonstrate innovation in their thinking about running the business. They will also be looking for evidence that the key managers have given adequate thought and consideration to the competitive environment, and they want assurance that a thorough analysis of the customers, competitors, and suppliers has been completed. Since the failure of so many e-business start-ups, investors have scrutinized mission statements very carefully and asked organizational leaders numerous difficult questions prior to making an investment. In a sense, this is an advantage to the proposed organizational leaders, as they are forced to consider many possible business scenarios prior to actually starting their business.

Established brick-and-mortar companies investing in e-business operations, either B2B or B2C, should ask themselves the same difficult questions that VCs and bankers will ask entrepreneurs, to ensure that the proposed business plan fits well with the organization's existing missions and goals. However, brick-and-mortar companies often find themselves investing in e-business out of competitive necessity rather than out of competitive strategy. As a result, the e-business investments of brick-and-mortar companies are often not leveraged to the extent they could be if they were more strategically linked to organizational goals and missions. Barnes and Noble is an example of the opportunity lost when a major investment in e-business is never strategically linked to the other core services offered by the company.
Key Employees

It is important to identify project leaders and key personnel as early as possible. These individuals need to provide strong leadership to the organization and to shape the company's future. Yet aspiring entrepreneurs often learn that finding a good business partner is about as difficult as finding a good marriage partner. First and foremost, entrepreneurs should find someone whose skills complement their own; for example, a software engineer needs someone to be responsible for designing a marketing and sales plan. Entrepreneurs also need partners who share their vision and, perhaps most important, are capable of helping fulfill that vision. Many would-be entrepreneurs are enthusiastic at the beginning of a start-up venture but do not know how, or do not have the wherewithal, to see the vision through to fruition. It is generally not a good idea to jump into a business venture with a relative stranger. A possible exception is if the partner candidate has a proven track record and enough time is spent with him or her to know that a good rapport exists. A proven track record for follow-through is of equal importance to rapport, however, and should not be overlooked simply because the person is likable. Anyone who can't execute your business plan effectively will quickly become less likable! Ronald Grzywinski and Mary Houghton, cofounders of Shorebank Corp (http://www.sbk.com), note that effective management also requires a commitment to developing and creating opportunities for others to move through the ranks of the organization. They find it most difficult to find personnel with strong general management skills, so they invest a great deal in building these skills in their existing personnel.

Identifying key personnel for the e-business ventures of brick-and-mortar companies has its own challenges. Deciding whom to designate as the new managers for such a venture involves choosing whether to move proven managers from the existing organization over to the e-business or to hire new personnel from outside the company. Existing employees can ensure that an understanding of the current customer base is considered as decisions are made to develop the new venture. On the other hand, it may be possible to attract e-business veterans who can help the traditional management team with the e-business learning curve. Hiring a combination of insiders and outsiders to run a new e-business venture seems attractive, but care must be taken to ensure that a culture clash does not get in the way of effective management. In any event, it is most important that the strategic goals and missions of the e-business be clearly thought out and articulated at the very beginning, and that every subsequent decision be measured against these goals.
This will ensure that the scope of projects considered by the company stays reasonable and that the company does not take off in directions that cannot subsequently be supported by the core capabilities of the organization. Although this sounds like common sense, managers often get so caught up in making a sale (any sale, under any circumstances) that they forget to focus on fulfilling the strategic missions of the company. A clearly articulated and carefully considered mission that is continuously referenced by the management team can significantly reduce the likelihood of miscommunication within the organization and will help keep all personnel on the same strategic page.
Budgeting

Budgeting for any new business venture is not easy. E-business ventures, however, are particularly difficult to budget for, as the majority of capital used in start-up is for Web and other software development, network infrastructure, personnel, marketing, and other intangible items. Budgeting for intangibles is difficult because costs are difficult to estimate and the resulting financial benefits are difficult to calculate. Yet without adequate effort to capture estimated costs and benefits, financial control of a company is quickly lost. Oracle financial managers note that traditional business planning relies heavily on return on investment (ROI) as the key metric for measuring project effectiveness; however, to accurately measure e-business effectiveness, organizations are looking at multiple additional metrics, including changes in productivity, changes in cost of sales, changes in the costs of customer acquisition, and gains in customer retention (Oracle, 2001). In any event, it is perhaps most important to remember that numbers associated with intangible assets are invariably based on numerous assumptions; wherever financial calculations and projections are made pertaining to e-business, the assumptions should be readily available as well. Typical financial information that should be incorporated into the business plan includes the following: sales projections, product demand, cost of sales/services, a breakdown of fixed and variable costs, projected months to break even, projected months to positive cash flow, estimated capital burn rate, and a full set of pro forma financial statements (a small illustrative calculation of some of these figures appears after the list of common mistakes below).

While developing financial projections, it is also worthwhile to consider outsourcing some e-business activities. Even if managers ultimately decide to develop and manage everything in-house, consideration of outsourcing can help clarify cost and management issues. Common e-business activities that many companies have decided to outsource today include hosting of Web services, customer support applications, information security management, and advertising. It is worth mentioning that numerous budgeting tools are available to e-business entrepreneurs on the Web. For example, at Entrepreneur.com, many forms are available for downloading, including forms to help managers estimate business start-up cash needs, business insurance planning worksheets, personal cash flow management statements, and development budget worksheets. There are also many stories in the online business periodicals documenting financial mistakes made in e-business start-ups. An article by Benoliel (2002) cites five of the most common financial mistakes made by new e-businesses:

Overstating projections. Benoliel notes that, like Enron, some companies may overstate expected revenues to deceive, but many companies overestimate earnings simply by being overly enthusiastic. In either case, realistic budgets and projections are the best choice for the long-term success of relationships between managers and investors.

Ignoring immediate budgetary needs. Although some managers err by being overly optimistic, others will not ask for enough upfront capital to get started because they don't want to scare off potential investors. Since the failure of so many e-businesses, managers have created more conservative budgets, believing that capital will be much more difficult to come by. Ultimately, however, there is still a lot of investment capital out there, and investors are looking for good ideas to fund adequately for success. Again, best-estimate budgets are better than overly ambitious or overly conservative budgets.

Equating revenues with positive cash flow. This is a simple accounting point that many entrepreneurs without a business background can get into trouble over: revenues are not the same as cash in hand. Where possible, a conservative policy of delaying new purchases until after revenues are collected and existing bills are paid is a good business practice for young companies with minimal cash inflow.

Forgetting taxes. Sales tax and employee withholdings are paid periodically instead of perpetually and accordingly may be temporarily forgotten as genuine expenses until it is time to pay them.

Mismanaging the advertising timeline. Advertising costs are often recorded as a percentage of sales in the same period, yet advertising costs are actually incurred in the hope that they will lead to future sales. Failure to budget the appropriate items in a strategic time frame will underutilize finances needed to achieve sales goals and can lead to overspending in later months.
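The short Python sketch below illustrates, with entirely hypothetical numbers, how the burn rate, cash position, and projected break-even month mentioned above can be derived from a simple monthly sales projection; it is an illustration of the arithmetic only, not a budgeting template.

# Illustrative calculation of burn rate, cash position, and break-even month.
# All figures are hypothetical placeholders, not benchmarks.

starting_cash = 250_000          # cash on hand at launch
fixed_costs_per_month = 30_000   # rent, salaries, hosting, and so on
variable_cost_per_unit = 12.0    # cost to deliver one unit of product/service
price_per_unit = 20.0            # revenue per unit sold
monthly_sales_projection = [500, 800, 1200, 1800, 2600, 3500, 4500, 5600]

cash = starting_cash
for month, units in enumerate(monthly_sales_projection, start=1):
    revenue = units * price_per_unit
    costs = fixed_costs_per_month + units * variable_cost_per_unit
    cash += revenue - costs
    burn = costs - revenue  # positive means the company is burning cash that month
    status = "break-even or better" if burn <= 0 else f"burning {burn:,.0f}/month"
    print(f"Month {month}: cash {cash:,.0f} ({status})")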
Security of Transactions

Eighty percent of the brick-and-mortar companies still not engaged in electronic commerce claim the primary reason for their trepidation is the inherent insecurity of the Internet medium. Thus, full consideration must be given to all security issues involved in the use of the Internet for e-business. Specifically, considerable thought should be given to the security of information as it resides in company databases and as it is transmitted from point A to point B. Also, the confidentiality and privacy of customer information need to be carefully considered. General security plans should be explained in business plans to the extent possible. Many entrepreneurs consider the use of third-party Web site security providers as a means of ensuring transaction security and thus attracting new customers.
Specifically, some of the organizations most frequently used for assuring information security, information privacy, or both include TRUSTe (http://www.truste.org), the Better Business Bureau (http://www.bbbonline.com), the AICPA (http://www.aicpa.org), and Verisign (http://www.verisign.com). Major differences in the offerings of these four organizations are briefly summarized here; for an in-depth analysis of the differences, the reader should consult the corresponding Web sites. Essentially, the Better Business Bureau is, and always has been, a consumer-oriented organization. As such, it focuses primarily on assuring consumers that online vendors will respond appropriately to consumer complaints. Although this is the lowest level of assurance, the Better Business Bureau is the oldest of these organizations, so an entrepreneur may actually attract more average consumers with this seal than with lesser-known seals that indicate stronger security. The TRUSTe seal assures consumers that an online company provides certain levels of protection of personal information in addition to promising to respond to customer complaints. Verisign, on the other hand, focuses on authentication of online merchants, to assure that they are who they say they are, as well as on assurance that actual transactions are transmitted safely in encrypted form to protect against data theft. Finally, the AICPA focuses on providing comprehensive transaction integrity assurance as well as on informing customers about specific business policies. AICPA services are far more comprehensive, and far more expensive, than the other organizations' offerings.
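Whatever third-party assurance a start-up chooses, encrypting data in transit is the baseline technical safeguard implied above. The sketch below, using only the Python standard library, opens a TLS connection to a placeholder host and prints what was negotiated before anything sensitive would be sent; the host name is illustrative, and a real deployment would apply the same check to its own storefront and payment endpoints.

# Minimal check that a connection to a storefront is encrypted and that the
# server's certificate verifies against the system's trusted authorities.

import socket
import ssl

host = "www.example.com"  # placeholder host name

context = ssl.create_default_context()  # verifies certificates by default
with socket.create_connection((host, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())
        print("Cipher suite:", tls_sock.cipher())
        print("Server certificate subject:", tls_sock.getpeercert().get("subject"))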
NEW START-UP CONSIDERATIONS

Anyone thinking about starting a new company needs to consider at least two issues in addition to those discussed in the previous section. Specifically, would-be entrepreneurs of new companies need to consider how they will go about building brand recognition and customer loyalty, and how they will go about financing the start-up of their company. These are concerns for any new company, high tech or otherwise.
Building Brand and Customer Loyalty

It costs a company five times as much to acquire a new customer as it does to keep an existing one. This is good news for existing companies building e-businesses but rather daunting news for a start-up company. The fact that acquiring new customers is difficult has lured many e-business entrepreneurs into spectacular spending on advertising campaigns. As mentioned in the introductory section of this chapter, there was a brief period during the late 1990s when e-business managers focused on establishing brand recognition at the expense of all other business considerations. The fact that it worked for companies like monster.com fueled overspending by many e-business managers who followed their lead. Another example is Drkoop.com. Prior to its demise, the company spent very heavily on advertising. Specifically, it had deals with Creative Artists Agency, ABCNews.com, and three of Disney's GO Network sites to be their exclusive health content provider. In July 1999, Drkoop.com signed a four-year, $89 million contract to feature its logo on AOL's portals, an amount that represented nearly 181% of the company's total available cash at the time. Although traffic increased dramatically, the price paid by Drkoop.com proved too high, just as it did for Pets.com. In December 2001, Drkoop.com filed for bankruptcy.

Although we now have proof that focusing only on brand recognition is more often than not a fatal business concept, establishing brand recognition is still a critical success factor for start-ups. Balancing spending to establish brand against all other start-up costs is particularly challenging now that budgets for new companies are becoming quite lean. A good starting point for spending on brand is to identify a target market and build advertising campaigns specifically matched to the interests of individuals in that target market. Media choices should be carefully selected based on target market habits. Even if a generous budget is set for advertising and marketing, a well-thought-out plan should drive spending on brand; otherwise it is very easy for this expense to spin out of control. Sales and marketing personnel should be given liberal freedom to spend, but expenses should also be reconciled against plans, and personnel should be expected to justify expenditures on a periodic basis. Perhaps the most infamous example of why justification and reconciliation are so important is the story of the six Barclays bankers who ran up a $62,700 dinner tab (mostly for drinks) at a fine restaurant in London and then tried to pass it off as a client expense; reconciliation and justification efforts resulted in five of the six bankers getting fired. Thankfully, most entrepreneurs do not need to worry about their sales personnel spending $62,000 on one dinner, but if management makes it clear early on that all employees are accountable for the money they spend, they will think twice before treating themselves to airline upgrades, expensive hotel rooms, valet parking, and other unnecessary luxuries commonly expensed when no one is watching.

Building customer loyalty is related to building brand recognition in that having loyal customers is one of the most effective means of establishing brand. Yet customers are usually loyal to companies that provide unique products and services, or that provide products and services in a consistently reliable fashion or at consistently lower prices. Accordingly, building customer loyalty is particularly challenging for B2C start-up companies that aim to sell commodity-type goods over the Internet. A broad definition of commodity is used here: it means that products manufactured to the same specifications are not different from each other. In other words, two items that are configured in exactly the same way are, for all intents and purposes, interchangeable. Books, CDs, computers, and automobiles are examples.
Financing Options

As with many other aspects of business, favored financing options for start-up companies have in a sense come full circle. Before the e-business boom, many new entrepreneurs built their businesses slowly, by starting with one or two clients and then applying revenues from those sales either directly to investments in a new company or as collateral for a traditional bank loan.
The dream of early e-business entrepreneurs, however, was to grow their companies quickly to a substantial size. What we have learned from the growing pains experienced by many of those firms is that slow and steady has definite merits over fast and fatal! For example, the entrepreneur Donna LaVoie launched her corporate communications company in the fall of 2001, took a one-third prepayment for services to be rendered to her first client, and used that to hire her first employee (Henricks, 2002). This return to old-fashioned values may not be entirely by choice for many entrepreneurs, however. Results from a recent survey by the National Association of Manufacturers show that more than a third of respondents find it harder to get loans from their banks now than they did in early 2001, whereas results from another survey, by the National Venture Capital Association, reveal that the number of companies receiving venture money fell from 6,245 in 2000 to 3,736 in 2001 (Henricks, 2002). Yet there is also evidence that capital is still readily available for well-presented business plans. Enter the term venture capital into any good search engine, such as Google, and a very large number of venture capital firm sites will be returned. A few especially good sites to begin with include http://www.Start-upjournal.com, http://www.businessplans.org, and http://www.garage.com.

Before approaching VCs, however, entrepreneurs interested in external funding should understand a little about the basic categories of financing options. New start-ups with no initial capital and no customers will usually start by looking for "angel" investors. Angel investors are individuals or companies willing to take bigger risks with their capital than VCs are, but they will also expect a larger share of the start-up company in return. Angels generally expect to contribute anywhere from $100,000 to $1,000,000 to help get a company started. VCs today come in a bit later, and large venture capital companies will often want a seat on the company's board of directors to help protect their investments. Another option is to approach companies that specialize in providing management services in addition to financial backing; these companies are called incubators. This chapter has described the numerous advantages and disadvantages of both self-financing and external financing of operations, with particular emphasis on the pros and cons of using venture capital. These issues need to be carefully considered before financing options are pursued in earnest.
SPECIAL CONSIDERATIONS FOR E-VENTURES OF BRICK-AND-MORTAR FIRMS
Brick-and-mortar firms need to consider the issues discussed in sections one and two of this chapter, as well as the issues unique to their environment, which are described in this section. Specifically, brick-and-mortar companies have unique customer and operational management issues to sort out.
Potential for Channel Conflict/Channel Complement
Channel conflict occurs when a company cannibalizes its own sales with other operations. For example, eSchwab cannibalized some of Schwab's traditional services and fees, but other online brokerage services were capturing part of Schwab's market share, so the company felt it did not have a choice. At the same time, Schwab did a good job of building synergies between its online and offline service offerings, ultimately resulting in channel complements that helped build customer loyalty. In retail, many companies have developed an online presence, believing it is a competitive necessity. This often leads to a situation where little is truly gained by the online presence, as little strategic intellectual capital is invested. One company that has done a great job of building channel complements between its online and offline business is Talbots. Talbots was one of the first retail companies to embrace the idea that customers want the convenience of being treated as one customer by the entire company. Accordingly, you can buy a product at Talbots.com and return it to a brick-and-mortar Talbots store with no hassle. This has done a great deal to boost the company's sales overall. Most businesses lose about 25% of their customers annually. If brick-and-mortar companies viewed e-business investments as a means of keeping their existing customer base happy, they would likely be more profitable overall, by reducing the number of customers lost.
Operational Management Issues
The first major management decision that brick-and-mortar companies need to make is whether to manage the online operations completely separately from the offline business. The temptation is great to manage them separately because it is far simpler and more familiar. Yet separate management increases the likelihood that the two different channels will end up competing for the same customers. Separation also increases the risk that the online operations will not stay aligned with the corporate business missions and goals. There are tremendous opportunities for creative thinking about channel management and building customer loyalty if a company decides to operate the online and offline operations synergistically. Yet commitment to building synergies is likely to be very expensive, as team management issues and data architecture issues may require significant upfront capital outlays. In any event, strong leadership and a commitment from top management are a major key to the success of e-ventures made by brick-and-mortar companies. One advantage that brick-and-mortar companies have over virtual companies is that they are more likely to have good business policies and practices in place for measuring the success of investments and for evaluating management performance. Constant feedback on performance is key for a new venture. Another advantage brick-and-mortar companies have is that they start with an existing customer base. Identifying creative means for leveraging this base can be the difference between success and failure.
SPECIAL CONSIDERATIONS FOR NEW E-BUSINESSES
Finally, there are a couple of special considerations for new e-business entrepreneurs. Specifically, new e-businesses must give special consideration to bandwidth and security, in addition to the issues discussed in the first three sections of this chapter.
Bandwidth
A company that is completely dependent on the Internet for sales must ensure that customers have minimal wait time for pages and images to appear on screen. This is a life and death issue for start-up companies. It is important to select top-quality Internet service providers and to have highly scalable software solutions. The user interface should be attractive and helpful but should not be unnecessarily bandwidth intensive.
Additional Security Issues for New E-businesses
It is fairly well known by now that companies with an online presence attract computer criminals. The latest annual security survey from the Computer Security Institute revealed a number of shocking statistics, including the following findings (http://www.gocsi.com): Ninety percent of respondents detected computer security breaches within the past 12 months. Eighty percent acknowledged financial losses due to computer breaches. For the fifth year in a row, more respondents (74%) cited their Internet connection rather than their internal systems as a frequent point of attack. Forty percent detected system penetration from the outside. Forty percent detected denial of service attacks. Although all companies with an online presence need to ensure the best security possible for online transactions, for a company that is completely dependent on online customers, one security breach is potentially fatal to the entire company. Thus, new e-business entrepreneurs need to ensure that they budget adequately for security of financial and personal customer data, security of financial and personal employee data, appropriate internal and remote access controls, control over portable and handheld computing devices, and transaction security mechanisms and policies.
PREPARING THE BUSINESS PLAN
In this final section, all the elements described in previous sections are pulled together. The key elements of a modern business plan are described and an outline of essential e-business plan elements is provided. Writing a business plan for an e-business is not much different from writing one for any other business now. If anything, what has changed since the investment in, and subsequent failure of, so many Internet companies is that investors have become far more savvy about their investments and scrutinize business plans more closely than ever. Investors in new companies like to take calculated risks and are usually looking for new and exciting ideas to invest in. In the business plan, the entrepreneur must communicate effectively yet simply the innovation in his or her business ideas. It is critical that the business plan include a reasonable explanation of the value of the company's ideas. Business plans are generally shorter than they used to be, because investors expect either to be involved in company decision making or to keep close contact with the start-up company's management. First and foremost, a business plan should begin with a good executive summary. If the executive summary doesn't hook potential investors, they will not read the rest of the business plan. Following the executive summary, a business plan should have at least four major sections: a detailed description of the business, a marketing plan, a financial plan, and a general management plan. There are a multitude of Web-based resources for writing a business plan (Table 2), and there are a number of possible additions to a business plan beyond the basic four elements described above. The model in the Appendix provides a reasonably comprehensive description of the business plan sections most commonly found in an e-business plan. Ultimately, whether a business plan is read depends on whether it sparks interest with the reader. Accordingly, entrepreneurs should get to the point of uniqueness quickly, emphasize why their team can fulfill the concept better than any other company, and put forth reasonable financial goals for getting the job done. If these key elements are in place, the end result will likely be successful funding of the business proposal.
Table 2 Sources for Writing a Business Plan
BUSINESS 2.0: http://www.business2.com/webguide/0,1660,19799,FF.html
INC: http://www.inc.com/guides/start biz/directory.html
DELOITTE: http://www.deloitte.com/vc/0,1639,sid%253D2304%2526cid%253D9021,00.html
SBA: http://www.sba.gov/starting/indexbusplans.html
BUSINESS PLANS SOFTWARE: http://www.businessplansoftware.org
GOOGLE DIRECTORY: http://directory.google.com/Top/Business/Small Business/Resources/
WSJ's START-UP JOURNAL: http://www.Start-upjournal.com
BUSINESS PLANS: http://www.businessplans.org
APPENDIX
The following is an excerpt from Professor Dennis Galletta's Web site on Business Plans (http://www.pitt.edu/∼galletta/iplan.html). It lists the sections of an e-commerce business plan.
Executive Summary: This section must concisely communicate the basics of an entire business plan. Keep in mind that your reader may be unfamiliar with the Internet and its tremendous potential.
Business Description: In this section, discuss your firm's product or service, along with information about the industry. Describe how your product and the Internet fit together or complement each other.
Marketing Plan: Discuss your target market, identify competitors, describe product advertising, explain product pricing, and discuss delivery and payment mechanisms.
Customers: Define who your customers are and how many of them exist on the Internet. An analysis of the customer base should not be a casual guess.
Competitors: Use Internet search engines to look for known competitors or products similar to yours. Be sure to use several search engines, because each uses different search techniques. All major direct competitors should be found and analyzed in your plan. Remember, readers of your business plan will be very interested in knowing how you are going to beat the competition.
Advertising: Describe how you are going to tell the Internet community about your product or service. Designing beautiful Web pages is only a first step. You must also get the word out about your Web site. Some tips: Detail a plan to add your Web address to the databases of search engines, add it to the bottom of all of your e-mail messages, and perhaps create physical novelties for local customers.
Pricing: How are you setting prices for your products or services? If your product is intangible information delivered over the Internet, you should try to create some sort of pricing model to justify your prices. You could start by researching what others are charging for similar products.
Delivery and Payment: How are you going to deliver your product and get paid? E-mail alone is not secure. Consider encryption techniques and online payment services.
Research and Development: This section addresses the technical aspects of your company: where the company is now, the R&D efforts that will be required to bring it to completion, and a forecast of how much this will cost. Since the Internet is continually developing, you should also address continuing plans for R&D.
Operations and Manufacturing: Discuss the major aspects of the business, including daily operations and physical location. Also, what equipment will your business require? Will you be using your own Web server, or will you be contracting with another company? Who will be your employees—will you hire staff with knowledge of the Internet or will you train them in-house? Be sure to include cost information.
Management: Address who will be running the business and their expertise. Because the business centers around the Internet, be sure to discuss the management team's level of Internet expertise and where they gained it. Also, describe your role in the business.
Risks: Define the major risks facing the proposed business. In addition to such regular business risks as downward industry trends, cost overruns, and the unexpected entry of new competitors, also include risks specific to the Internet. For example, be sure to address the issues of computer viruses, hacker intrusions, and unfavorable new policies or legislation.
Financial: Include all pertinent financial statements. Potential investors will pay close attention to this area, because it is a forecast of profitability. Remember to highlight the low expenses associated with operating on the Internet compared to those of other businesses.
Timeline: Lay out the steps it will take to make your proposal a reality. When developing this schedule, it might be helpful to talk to other Internet businesses to get an idea of how long it took to establish their Internet presence.
GLOSSARY
Angel investor An early investor in a new start-up company, usually willing to accept even higher risks than venture capitalists in exchange for anticipated larger returns, generally in the form of a larger share of the start-up company than later venture capitalists would expect to receive.
Brick-and-click companies Companies with both traditional operations and e-business operations. Originally, brick-and-click was used to describe brick-and-mortar companies that built an e-business presence. Now many e-businesses are also building traditional operations.
Brick-and-mortar companies Companies without any e-business presence, or traditional companies whose operations are completely dependent on physical buildings, other physical assets, and physical business infrastructures.
Business-to-business (B2B) Internet-age term referring to the online exchange of products, services, or information between two or more business entities.
Business-to-consumer (B2C) Internet-age term referring to the online sale of products, services, or information to consumers by businesses.
Channel conflict When a company cannibalizes its own sales with other operations.
Dot bomb A failed dot-com company.
Dot-com Any Web site intended for business use and, in some usages, a term for any kind of Web site. The term is popular in news stories about how the business world is transforming itself to meet
the opportunities and competitive challenges posed by the Internet and the World Wide Web (definition taken from www.whatis.com).
E-business venture Any major investment in an e-business initiative, ranging from investment in a new Web-enabled transaction processing system for a brick-and-mortar company to a new virtual company built around a set of e-business missions and goals.
Incubators Companies that specialize in providing management services in addition to financial backing.
Venture capitalist (VC) An investor, usually in smaller private companies, who is looking for larger-than-average returns on investment. VCs may be in private independent firms, in subsidiaries or affiliates of corporations, or in government-supported agencies.
Venture funding Investment funds received from a venture capitalist.
Virtual company A company whose primary operations are independent of brick-and-mortar operations.
CROSS REFERENCES
See Business-to-Business (B2B) Electronic Commerce; Business-to-Business (B2B) Internet Business Models; Business-to-Consumer (B2C) Internet Business Models; Click-and-Brick Electronic Commerce; Collaborative Commerce (C-commerce); Consumer-Oriented Electronic Commerce; Customer Relationship Management on the Web; E-marketplaces; Marketing Plans for E-commerce Projects; Electronic Commerce and Electronic Business; Mobile Commerce.
REFERENCES
Aragon, L. (2000). VC P.S.: Ten myths and realities of VC. Retrieved February 13, 2003, from http://www.circlenk.com/10-Myths-vc-vcps061400.htm
Balu, R. (2000). Starting your start-up. Fast Company. Retrieved February 13, 2003, from http://www.fastcompany.com/online/31/one.html
Benoliel, I. (2002). Avoid these errors to avoid financial nightmares. Retrieved February 13, 2003, from http://www.Entrepreneur.com/article/0,4621,297437,00.html
Healy, B. (2002). Tracking the incredible shrinking venture funds. Retrieved February 13, 2003, from http://digitalmass.boston.com/news/globe tech/venture capital/2002/0729.html
Henricks, M. (2002). Consider the benefits of funding alternatives. Retrieved February 13, 2003, from http://www.Start-upjournal.com/financing/trends/20020501henricks.html
International Council of Shopping Centers White Paper. The marketing of a net company. Retrieved February 13, 2003, from http://www.icsc.org/srch/rsrch/wp/ecommerce/marketingofanetcomp.html
Johnson, A. (2000). Special report: Console makers face a brand-new game. Retrieved October 29, 2002, from http://www.upside.com/texis/mvm/story?id=39aea60e0
Oracle White Paper (2001). Essentials for a winning e-business plan. Retrieved February 13, 2003, from http://www.oracle.com/consulting/offerings/strategy/epswp.pdf
Vigoroso, M. (2002). And the e-commerce gold medal goes to . . . . Retrieved February 13, 2003, from http://www.ecommercetimes.com/perl/story/16387.html
Business-to-Business (B2B) Electronic Commerce
Julian J. Ray, Western New England College
Introduction 106
Foundations of B2B E-commerce 106
Innovations in Technology 106
Innovations in Business Processes 107
Motivation for B2B E-commerce 107
Benefits for Established Companies 107
New Products, New Services, and New Companies 108
Classification of B2B Strategies 108
E-selling 108
E-procurement 109
E-collaboration 109
E-markets 110
Methods for Implementing B2B 110
Point-to-Point Transfer Methods 111
Database Integration Methods 111
API Integration 112
Process-Oriented Integration 113
Evaluating and Selecting Integration Approaches 113
B2B E-commerce Challenges 114
Management Challenges 114
Monitoring and Regulation Challenges 115
Technological Challenges 116
B2B E-commerce in Perspective 117
Glossary 117
Cross References 118
References 118
INTRODUCTION
The focus of business-to-business e-commerce (e-B2B) is on the coordination and automation of interorganizational processes via electronic means. E-B2B is conducted between business organizations and is a subset of all e-commerce activity, which includes, among others, business-to-consumer (B2C), business-to-government (B2G), and consumer-to-consumer (C2C) activities. Before an in-depth analysis of e-B2B, it is worthwhile to note that many authors today identify e-commerce as one component of a more general form of electronic business termed e-business. E-business is generally used to identify the wider context of process automation that can occur between businesses and includes automating the provision and exchange of services, support, knowledge transfer, and other aspects of business interaction that do not necessarily result in a buy or sell transaction being executed. Within this document the terms e-commerce and e-business are used interchangeably and refer in either case to the wider context defined above. E-B2B is a major driving force for the Internet economy. According to research from the Gartner Group (http://www.gartner.com), B2B is growing at a rate of 100–200% per year. With an estimated value of $430 billion in 2000, e-B2B activity was initially expected to exceed $7 trillion by 2004, with the largest shares in North America ($2.8 trillion), Europe ($2.3 trillion), and Asia ($900 billion), and 24% of all B2B transactions were expected to be performed electronically by 2003 (SBA, 2000). However, the recent slowdown in the global economy has caused a revision of these initial predictions to reflect changing economic conditions. Current estimates for 2004 global B2B revenues have been reduced to $5.9 trillion with a growth rate of 50–100% per year (Gartner Group, 2001). Even with the slowdown, the rates of adoption are significant, and revenues are on track to exceed $8.5 trillion in 2005.
FOUNDATIONS OF B2B E-COMMERCE
Sustaining a high rate of adoption and growth in e-B2B activity requires several key components: the innovative application of technology, a ubiquitous communication medium well suited for the transmission of e-B2B-related documents and information, a business and regulatory environment suitable for sustaining the process, and, last but not least, a set of motivating forces prompting organizations to adopt interbusiness automation as part of their day-to-day operations.
Innovations in Technology
The technological foundations of e-B2B have been developing over the past 30 years and reflect innovations in telecommunications, computer communications protocols and methods, and intraorganizational business process management. During the 1970s, electronic funds transfer (EFT) and electronic data interchange (EDI) were nascent communication standards developed for business exchanges over expensive private computer networks. The costs of participating in these initial systems were high. EDI systems, for example, require specialized data processing equipment and dedicated telecommunications facilities, preventing all but the largest businesses from participating. However, the potential benefits realized from implementing these early electronic B2B systems included greatly reduced labor costs and processing time as well as vast increases in accuracy. Private communications networks requiring dedicated telecommunications facilities, initially the only option for e-business, are still in use today and provide a secure medium for exchanging electronic data. A value-added network (VAN) is a type of semiprivate communications network usually managed by a third-party provider that allows groups of companies to share business information using commonly agreed-upon standards for transmitting and formatting the data. VANs have traditionally been
associated with EDI and provide an alternative to more costly private communications facilities. More recently, e-business has adopted the public Internet as its basic communication medium. The Internet provides the ubiquitous connectivity and high data rates necessary to support modern e-business. The significantly reduced costs of operation and of entry that the Internet offers with respect to other communication systems, such as traditional VANs or private lines, make the adoption of an e-business model more attractive and sustainable for smaller organizations. One of the key technologies allowing the Internet to support e-business is the virtual private network (VPN). VPNs are a form of protected communication using a combination of encryption methods and specialized data transmission that allows companies to use the public Internet as if it were a private system. VPNs also enable development of extended private networks (extranets), which are shared with business partners. Today VPNs largely fill the role played by early value-added networks, providing a cost-effective alternative that allows companies to freely share private business information over the public Internet. Processing transactions electronically can significantly reduce the cost and time taken to manage business information. Computer servers today can handle thousands of transactions per second and significantly help reduce the costs and risks associated with sending and recording a business transaction. One of the greatest current interoperability challenges is the development of common methods and protocols to enable semantic translation. To date there are hundreds of protocols and standards that govern almost every aspect of using the Internet to transmit and translate business data. Extensible markup language (XML) is an example of a data-format technology that can be understood and processed by a wide variety of computer languages and systems. These new and emerging standards build on earlier protocols such as EFT and EDI. When coupled with system-independent data communication protocols such as the Internet's TCP/IP, these data translation protocols enable business data to literally transcend organizational and technological boundaries. For example, while EDI documents can still be sent over private lines, today it is also possible to complete the same transactions using the Internet and XML at a fraction of the cost of traditional EDI. Companies that invested heavily in the early technologies can maintain their original investment and still participate in modern e-business initiatives, while companies that could not afford technologies such as EDI are now able to engage in interorganizational data exchange.
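To make the XML discussion above more concrete, the following minimal sketch uses the standard Java DOM and transformation APIs to build and serialize a simple purchase-order document. The element and attribute names are invented for illustration and do not correspond to any particular B2B vocabulary.

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import java.io.StringWriter;

public class PurchaseOrderXml {
    public static void main(String[] args) throws Exception {
        // Build a minimal purchase-order document in memory.
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder().newDocument();
        Element order = doc.createElement("PurchaseOrder");
        order.setAttribute("number", "PO-1001");   // invented order number
        doc.appendChild(order);

        Element item = doc.createElement("Item");
        item.setAttribute("sku", "WIDGET-42");     // invented part identifier
        item.setAttribute("quantity", "250");
        order.appendChild(item);

        // Serialize the document to text so it can be transmitted to a trading
        // partner, for example as the payload of an HTTP POST.
        StringWriter out = new StringWriter();
        TransformerFactory.newInstance().newTransformer()
                .transform(new DOMSource(doc), new StreamResult(out));
        System.out.println(out);
    }
}

Because the resulting document is plain text, it can travel over ordinary Internet connections or a VPN, which is precisely what makes XML-based exchange so much cheaper than traditional EDI over private lines.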
Innovations in Business Processes
Greenstein, O'Leary, Ray, and Vasarhelyi (in press), in a discussion on the electronization of business, identify the surge of e-B2B activity as a progression of an evolving process that has been evident throughout the industrial revolution. The authors attribute the rise of e-business activity to dramatic improvements in both information processing and telecommunications systems,
which have facilitated revolutionary change in organizational business processes. This most recent of technological revolutions centered around the Internet provides businesses with the ability to automatically place orders, send quotes, share product designs, communicate sales projections, and collect payment using electronic means, providing the potential to reduce operating costs, shorten time-to-market for new products, and receive payment faster than ever before. Moreover, businesses willing to commit to transitioning to the new digital economy have the potential to partner with other similarly minded businesses creating tightly integrated supply chains, remove intermediaries, and develop networks of loyal customers and trading partners who, research shows, collectively outperform their competitors. Recent changes in the banking industry provide an example for understanding the potential improvements in business processes that can be achieved using electronic automation. The U.S. Federal Reserve along with 6 other central banks and 66 major financial firms recently implemented a system called the Continuous Link Settlement System. This system is designed to automate the trading and settlement of foreign currency exchanges. Traditionally, when performing a trade, two banks agree on an exchange rate and wire the money to each other. This is a process that can take two to three days during which time events can change the value of a country’s currency, ultimately affecting the exchange rate. In extreme cases banks have gone bankrupt in this period of time (Colkin, 2002). Banks trade on the order of $300 trillion each day, so the ability to perform trades in real time can reduce the amount of lost interest. Accordingly, the risk associated with transaction latency justifies the $300 million investment in technology that the Continuous Link Settlement costs.
MOTIVATION FOR B2B E-COMMERCE
There are a number of reasons why a company might decide to invest in developing and implementing an e-B2B strategy. Established companies, faced with rising costs and increased competition, often have a set of motivating factors and goals different from those of newer companies. Newer companies are often more technologically agile and can readily take advantage of newer e-business strategies and products.
Benefits for Established Companies
E-B2B strategies have been proven to cut operating costs for companies and to reduce cycle time during product development. We provide examples from Federal Express and General Motors later in this text to illustrate how different e-B2B strategies can lead to cost reduction and an increased ability to compete by reducing the time required to design and develop new products. The ability of a company to automate its supply chain is another significant advantage of e-B2B. Reducing costs and production times are only a few of the types of benefits that can be realized by e-B2B companies. Other benefits are associated with increased visibility over the supply chain allowing for better planning and inventory management, strengthened
relationships with business partners, and the ability to integrate with new business partners in a variety of interesting and innovative ways using mechanisms such as online exchanges and auctions. Norris, Hurley, Hartley, Dunleavy, and Balls (2000) identify three stages in the adoption of an e-B2B business model by established companies. The early focus of a company's e-B2B activity is on increasing the efficiency of the sales and/or purchasing processes while minimizing the disruption to organizational culture and business processes. The second stage of adoption usually focuses on improving business processes by using electronic information technologies to integrate the supply chain and streamline the process of conducting business. This stage is often aimed at reducing costs and increasing the effectiveness of operations beyond sales and purchasing. The last stage in the conversion of a company to a mature e-business model is the development of strategic, tightly coupled relationships between companies to realize the mutual benefits and joint rewards of an optimized supply chain. General Motors (http://www.generalmotors.com) is an example of a traditional company that is rapidly evolving into an e-B2B company. General Motors has adopted a multipronged approach in its transition to an e-business, using small pilot projects to test different e-B2B strategies in a drive to increase its competitive edge and reduce costs. Among GM's e-B2B initiatives are joint product design, dealer portals, and online auctions (Slater, 2002). E-B2B can be used to reduce a company's risk, such as the financial risk associated with completing monetary transactions in a timely manner, as demonstrated by the Continuous Link Settlement System. E-business can also be an effective tool in reducing the risk associated with performing business under changing economic conditions. Such risk can be mitigated by developing economic as well as digital ties to existing business partners and providing a framework wherein new business partners can be introduced and incorporated into the supply chain.
New Products, New Services, and New Companies
E-business strategies may involve creating and introducing new products and services within existing companies as well as providing the basis for the creation of entirely new e-businesses. Online exchanges, for example, have emerged as a multibillion dollar industry virtually overnight as a result of the desire for increased speed and expediency of interorganizational electronic business transactions. Proprietary supply-chain integration is another area with significant e-B2B activity. However, even with these proprietary efforts, companies acting as intermediaries may provide a large part of the technology and services involved in supply-chain operations. Examples of such companies include Manugistics (http://www.manugistics.com), which has successfully adapted its more traditional technologies and services to operate within the emerging e-business environment across a broad array of industries, as well as brand new technology-savvy companies such as Elogex (http://www.elogex.com) with niche
strategies of developing technology and services specifically for a narrow industry focus: consumer goods, food and grocery in this case. For small to mid-sized organizations, the barriers to participating in the e-business arena have been removed. The financial overhead associated with implementing early e-business systems such as EDI was significant and out of reach of all but the largest corporations. The rapid adoption of the Internet as a ubiquitous communication medium, along with the reduced cost of computer systems and software over the last decade, has allowed all types of companies to develop e-business strategies, providing increasing levels of competition, innovation, and access to a global pool of potential business partners. In fact, the inverse relationship between size and agility gives small to mid-size firms some competitive advantages. The dot-com boom of the late 1990s, which was so visible in the B2C sector with companies such as Pets.Com and WebVan, carried over into the B2B sector. During this time large numbers of highly innovative, technology-savvy Internet-focused companies became immersed in all areas of online business-to-business activity. By 2000 there were an estimated 2,500 online B2B exchanges serving industries such as electronics, health care, chemicals, machinery, food, agriculture, construction, metals, printing, and medical lab equipment (Wichmann, 2002). Like their B2C counterparts, however, many of the B2B dot-com companies have failed as a result of weak business models and misguided, overzealous venture capital. Now fewer than 150 of these exchanges are left.
CLASSIFICATION OF B2B STRATEGIES
E-business strategies are usually classified based on the nature of the interaction taking place between the business partners. Four strategies that may be classified using this approach are e-selling, e-procurement, e-markets, and e-collaboration. Within each of these strategies we can further identify the vertical or horizontal industry focus of the participation and whether the participation involves the use of intermediaries. Companies with a vertical focus usually operate within a single industry such as the automotive, chemical, energy, or food-retailing industries. Alternatively, companies with a horizontal focus provide products and/or services across a wide range of industries. Office supplies and computers are examples of horizontally focused products. Intermediaries in e-B2B are most often associated with e-markets and electronic auctions, where one company, the intermediary, acts as a broker allowing other companies to buy and sell products and services by collaborating using the infrastructure provided by the intermediary.
E-selling
E-selling is concerned with the direct sale of products and services to other businesses using automated means. The business model is similar to the B2C direct-sale model but differs in that B2B interaction requires prenegotiation of prices, catalog items, and authorized users. Turban, King, Lee, Warkentin, and Chung (2002) identify two major methods for e-selling: electronic catalogs
and electronic auctions. Electronic catalogs can be customized in both content and price for specific businesses and can be coupled with customer relationship management software such as SAP's Internet sales solution (http://www.sap.com) to provide personalized content delivery. There are many examples where companies provide a direct sales channel to other businesses using the Internet: Dell Computers (http://www.dell.com) and Staples (http://www.staples.com) are examples of companies that allow businesses to set up accounts through the Web and customize the content available for online purchase. Cisco is another example of a company that has a very successful direct-sale approach using the Internet (http://www.cisco.com). Using Internet-based pricing and configuration tools, Cisco receives 98% of its orders online, reducing operating costs by 17.5% and lead times by at least 50%, down to 2–3 days (Turban et al., 2002). Electronic auctions are sites on the World Wide Web where surplus inventory is auctioned by a business for quick disposal. Auctions allow companies to save time and reduce the cost of disposing of surplus inventory, while often allowing the company to obtain a better price. Ingram Micro, for example, receives on average 60% of an item's retail cost by selling surplus inventory through an online auction. Prior to using the auction Ingram Micro recovered 10–25% of the price using traditional liquidation brokers (Schneider, 2002). Some companies, such as CompUSA, create and manage their own auction sites. CompUSA's Auction Site (http://www.compusaauctions.com) is designed to allow small and mid-sized companies to bid on its inventory. Alternatively, companies can use existing general-purpose or industry-specialized auction sites such as eBay (http://www.ebay.com) or ChemConnect (http://www.chemconnect.com). Both Sun Microsystems and IBM have successfully adopted eBay as an auction site for surplus computer hardware, software, training courses, and refurbished equipment. eBay in this case is operating as a horizontally aligned intermediary between Sun Microsystems and the companies or individuals bidding on Sun's products. ChemConnect is an example of a specialized third-party auction site designed to manage the selling and buying of chemical and plastic products using an auction format. Unlike eBay, which is horizontally focused, ChemConnect has a vertical focus and concentrates on selling specifically to the chemical industry.
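The prenegotiated pricing that distinguishes B2B catalogs from consumer storefronts can be illustrated with a small data-structure sketch. The Java class below is purely hypothetical (the class, field, and account names are invented) and simply shows the idea of falling back to a list price when no customer-specific price has been negotiated.

import java.math.BigDecimal;
import java.util.HashMap;
import java.util.Map;

// Illustrative catalog item with prenegotiated, per-customer pricing.
public class CatalogItem {
    private final String sku;
    private final BigDecimal listPrice;
    // Prices negotiated with individual business customers, keyed by account id.
    private final Map<String, BigDecimal> negotiatedPrices = new HashMap<>();

    public CatalogItem(String sku, BigDecimal listPrice) {
        this.sku = sku;
        this.listPrice = listPrice;
    }

    public void negotiatePrice(String accountId, BigDecimal price) {
        negotiatedPrices.put(accountId, price);
    }

    // Returns the negotiated price for the account, or the list price if none exists.
    public BigDecimal priceFor(String accountId) {
        return negotiatedPrices.getOrDefault(accountId, listPrice);
    }

    public String getSku() {
        return sku;
    }
}

A catalog application might call priceFor with the buyer's account identifier when rendering a personalized catalog page, so that each authorized business sees only the prices it has negotiated.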
E-procurement
E-procurement is concerned with the automated purchasing of goods and services from suppliers. These applications are designed to facilitate the exchange of electronic information between trading partners by integrating a buyer's purchasing process with a seller's order entry process (Davis & Benamati, 2002). E-procurement systems can be either buy- or sell-side applications. Buy-side applications reside on the buyer's systems and control access to the ordering process, the authorization of the order, and possibly the list of trading partners allowed to receive the order. Sell-side applications reside on supplier's systems and allow authorized buyers to access and place orders directly on the system. Sell-side systems are often
implemented as extranets where access to the electronic catalogs and pricing information is carefully controlled. According to the Aberdeen Group, e-procurement is one of the main areas of e-B2B that is "delivering rapid and quantifiable results." They estimate that 80–90% of companies plan to use online procurement systems by 2003 (Aberdeen Group, 2001b). The sorts of products purchased by e-procurement systems are generally limited and of relatively low value. Strategis (http://www.strategis.gc.ca), an online branch of Industry Canada, reports that 42% of e-procurement purchases by businesses in Canada in 2001 were for office supplies, furniture, and office equipment, followed by IT hardware and software (29% of total) and travel services (15% of total) (Strategis, 2002). Similarly, Microsoft reports that 70% of its annual 400,000 procurement purchases are for requisitions of less than $1,000 (Microsoft, 2000). Early e-procurement implementations relied on fax, e-mail delivery, EDI, or real-time system processing to transfer data among trading partners. Newer e-procurement systems use Internet technologies to manage the ordering process. Recent surveys indicate that most commercial solutions now use XML and the Internet to share procurement data and provide access to online marketplaces as part of the offering (Aberdeen Group, 2001b). E-procurement systems can significantly benefit companies in a variety of ways. In 1999 Federal Express identified e-procurement as a key strategy in reducing costs and saving time in the purchase order process. FedEx purchased a B2B commerce platform from Ariba Inc. (http://www.ariba.com), which was implemented within a month and reportedly returned a positive ROI within three months. The new system manages about 20% of FedEx's 25,000 annual requisitions and has reduced purchasing cycle times by 20 to 70%, depending on the type of items purchased. The purchase of new PCs now takes 2 days rather than the 17–19 days it took using a traditional paper-based approach. FedEx also managed to reduce the purchasing department staff by half, allowing those staff members to be reassigned (Aberdeen Group, 2001a). Microsoft reports similar success with MS Market, a desktop procurement system designed to run from a Web browser over the company's corporate intranet. MS Market is deployed to 55 Microsoft locations in 48 countries and saves the company an estimated $7.5 million annually (Microsoft, 2000). In the United States, Microsoft uses MS Market to order almost 100% of the company's requisitions at an average cost of $5 per requisition.
E-collaboration
E-collaboration is a term broadly used to describe any form of shared e-business activity in which companies work together with the goal of providing a mutually more efficient business environment. E-collaboration can take many forms, such as supply-chain integration, the joint design and development of products, joint demand forecasting, and joint planning. General Motors, for example, shares specialized engineering software and data files with its suppliers in a drive to reduce product development time. As part of this joint-product design strategy, General Motors partially or fully underwrites the cost
of the software licenses for some of its suppliers in order to standardize the design platform across the supply chain. By standardizing and sharing design tools, General Motors reduced the design-to-production cycle for new products by 50% to just 18 months (Slater, 2002). Collaborative Planning Forecasting and Replenishment (CPFR) is a collaborative approach that allows suppliers and sellers to jointly forecast demand for products in an attempt to better coordinate value chain processes such as restocking, managing exceptions, and monitoring effectiveness. The major idea underlying CPFR is that better forecasting between trading partners will lead to improved business processes, which in turn will reduce costs and improve customer service and inventories. Wal-Mart’s RetailLink is an example of a CPFR system that operates over the Internet using EDI documents to share information. RetailLink allows Wal-Mart’s suppliers to receive detailed information about store sales, inventory, effects of markdowns on sales, and other operational information that allows the suppliers to effectively manage their inventory at the individual Wal-Mart stores. CPFR and similar types of supply-chain integration can result in numerous benefits for companies interested in collaborating with business partners. After interviewing 81 companies, Deloitte & Touche concluded that companies that “collaborate extensively” with their supply-chain partners while focusing heavily on customer loyalty and retention are almost twice as profitable as companies that are “below average” in the areas of supply-chain collaboration. Deloitte & Touche also found that companies who understand customer loyalty are much more likely to exceed their goals on shareholder returns, return on assets, and sales growth, and 12% of the companies studied were in this group (Rich, 2001).
E-markets
E-markets provide third-party integration services by supplying online applications that allow organizations to exchange goods and services using a common technology platform. In essence, these electronic marketplaces are designed to bring suppliers and buyers together in a common forum with a common technology platform. E-markets are usually vertically aligned for industries such as energy or automotive, but there are also horizontally aligned e-markets that service industries such as office supplies and information technology. Many e-markets are independent exchanges managed by a company that is neither a buyer nor a seller in the marketplace but provides third-party services that allow other businesses to collaborate through them. Alternatively, e-markets can be created and managed by a company or consortium of companies who are leaders in the industry being served. In the late 1990s third-party e-markets were the focus of a lot of new dot-com activity. Berlecon Research identified 2,500 independent exchanges on the Internet by 2000. Due to competition and the downturn in the economy, by late 2001 fewer than 150 of these exchanges were still in operation (Wichmann, 2002). Covisint (http://www.covisint.com) is an example of a global marketplace sponsored by a consortium of industry leaders rather than an independent third party. In
this case the industry leaders are DaimlerChrysler, Ford, and General Motors, among others. Covisint was jointly funded in 2000 to provide a global solution for the automotive industry with the goal to "improve the effectiveness of mission critical processes such as collaborative product development, procurement and supply chain management . . . through implementation and use of a secure and comprehensive online marketplace" (Covisint, 2002). Covisint's mission is to "connect the automotive industry in a virtual environment to bring speed to decision making, eliminate waste and reduce costs while supporting common business processes between manufacturers and their supply chain." Covisint provides three products: Covisint Auctions, Covisint AQP (Advanced Quality Planner), and Covisint Catalog. Covisint Auctions are online bidding events that provide rapid, real-time, Web-based negotiations in a secure environment for the purpose of supporting the sourcing of parts and components. Covisint AQP is an Internet-enabled application that provides an environment for collaboration, reporting, routing, and visibility of information important to developing high-quality standards for components being designed for vehicle production. Lastly, Covisint Catalogs are electronic purchasing environments for indirect and Maintenance, Repair, and Operations (MRO) material. This application allows users to shop online and provides a system to automate approvals and the creation of necessary supporting documentation such as purchase orders. The total value of transactions managed by Covisint in 2001 was more than $129 billion. During the year, General Motors used Covisint to buy $96 billion worth of raw materials and parts for future vehicle models. In a single four-day period in May, DaimlerChrysler used the exchange to procure $3 billion worth of parts—the single largest online bidding event to date. Ford reported that it used the exchange to save $70 million, which is more than its initial investment in the exchange, and it expects to save approximately $350 million in 2002 (Konicki, 2001). Recent research has questioned the validity of the savings claims made by exchanges in general, as they reflect the maximum theoretical savings that could be achieved at the close of the auction, while the amount of actual savings is likely to be less (Emiliani & Stec, 2002).
METHODS FOR IMPLEMENTING B2B
Integrating a business process in one company with a business process in another requires that both companies provide the necessary technology that allows the information systems to be integrated over a computer network such as the Internet. Depending on the type of activity, this could include interfacing with other companies' supply-chain management, procurement, or sourcing systems, or involve something less complex such as simply establishing an e-mail system or sharing electronic files on tape or other digital media. Newer businesses are more likely to have modern information systems specifically designed for integration and collaboration; older companies are more likely to have legacy systems, which were never designed to be used for data sharing and pose significant challenges for B2B. A variety of methods that allow
companies to overcome issues with legacy and incompatible business systems and allow them to effectively share business information have been developed.
Point-to-Point Transfer Methods
The point-to-point transfer of business information using computer data files dates back to the earliest B2B integration efforts. By the 1950s companies had already started to use computers to manage business transactions. Business information moving between companies at this time used paper forms, which had to be re-entered into the recipient computer systems using manual methods: a process that was unreliable, expensive, and redundant (Schneider, 2002). By the 1960s companies had started to use magnetic tape and punch cards as a medium to record business information for transfer. The encoded tapes or card decks would be transferred to a recipient computer system and automatically processed using card or tape readers. The advantage of these automated methods was the removal of the expensive and error-prone process of redigitizing the business information, as the data could be entered directly from the transfer medium. Tape and card decks were replaced by automated file-transfer methods such as Tymnet, UUCP, Kermit, and later by the ubiquitous file transfer protocol (FTP) for use on the Internet. Using these applications, one computer system exports business data from an application to a data file; the file is then transferred to the recipient computer system over a phone line, computer network, or the Internet (a brief illustrative sketch of this step appears at the end of this section). At the recipient end the file is imported into the application using specialized software. FTP is part of a suite of protocols specifically designed for the Internet and is implemented today by most computer systems, allowing for a seamless transfer of binary or text data from one computer system to another. This "built-in" ease of use has led FTP to largely replace other file transfer applications for the business-to-business transfer of information. The point-to-point business integration model using file transfer methods is simple to design and implement and works well in situations where there is little change in the applications sharing information and a high degree of collaboration between the business entities. Yee & Apte (2001) identify several disadvantages of this approach. With respect to the format of the data being shared, the sending and receiving applications must agree on a fixed file format, export and import logic is often restricted by the applications, and, lastly, the overall system is brittle and can fail if either system changes the data format. From a data management perspective, data transfer is conducted in batch mode rather than real time, which introduces latency into the system, and there are no methods for recovery or guaranteed delivery of the data. Sharing data with multiple business partners becomes difficult to manage as the impact of these disadvantages becomes multiplied with each additional business partner.
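As a rough illustration of the file-transfer step described above, the following sketch uses the Apache Commons Net FTP client, a commonly used open-source library, to upload a previously exported batch file to a trading partner's server. The host name, credentials, and file names are placeholders invented for the example.

import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;

public class BatchTransfer {
    public static void main(String[] args) throws Exception {
        FTPClient ftp = new FTPClient();
        try {
            // Connect and authenticate against the partner's server (placeholder values).
            ftp.connect("ftp.partner.example.com");
            ftp.login("acme", "secret");
            ftp.setFileType(FTP.BINARY_FILE_TYPE);
            ftp.enterLocalPassiveMode();

            // Upload the batch file that was exported in the prenegotiated format.
            try (InputStream in = new FileInputStream("orders-batch.edi")) {
                ftp.storeFile("/inbound/orders-batch.edi", in);
            }
            ftp.logout();
        } finally {
            if (ftp.isConnected()) {
                ftp.disconnect();
            }
        }
    }
}

In practice the export, transfer, and import steps would be scheduled as a batch job, which is precisely the latency limitation noted by Yee & Apte (2001).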
Database Integration Methods
Point-to-point file transfer methods are perhaps the simplest integration approaches where the only factors in common between the collaborating systems are a
prenegotiated file format and data content and a common communication method. Accessing data directly in the databases is another data integration method that can be relatively straightforward to implement given the correct set of preconditions. Database software such as Oracle's Enterprise Database, Microsoft's SQL Server, and IBM's DB2 can be thought of as layered applications. At the lowest layer is the data itself and a set of procedures designed to efficiently manage the organization of the data on the host computer system. At the top level is a user interface, which allows users to connect to the database, manage the data, and format reports. Between the data and the user interface layers exists a suite of applications that implement the data management and query capabilities of the database. This application layer typically uses a specialized computer language called structured query language (SQL) to manage all aspects of a database and to interface between the users and the data itself. Database connectivity middleware such as JDBC, ODBC, and ADO allows direct programmatic access to the application layer of remote databases using a variety of programming languages and environments and replaces the user-interface layer of the database by generating SQL commands directly from the programming environment. This direct access allows systems designers to create points of direct integration between an application and a database over a telecommunications system such as the Internet. These database connectivity technologies also tend to shield the collaborating applications from the specifics of the data storage, overcoming differences in storage formats between disparate databases and computer systems. For example, a business transaction could be extracted by a remote application from an Oracle database instance on a Windows Server and inserted into an IBM DB2 database instance on an OS/400 server without difficulty, relying only on an understanding of the organization of the recipient database and permission to access the requisite database resources. Newer business systems tend to decouple the application logic from the storage of the business data and store the data in a commercial database. This design makes the process of integration relatively simple, as the database can be accessed directly. Older business systems, however, tend to tightly couple the business logic to the data and are less "open" in systems terms. These types of systems may use proprietary data stores and/or data storage formats, which makes integration at the database level problematic. Linthicum (2001) suggests that integration with these older "closed" systems is best managed at the application level, as it is often impossible to deal with the database without dealing with the application logic as well. A recent trend among database vendors is to facilitate interapplication data integration by allowing database software to send and receive data using XML syntax. XML data can be processed by the databases and either stored and queried in native (XML) form or converted to more traditional database storage types. This approach further serves to open up information systems for integration and often provides a way in which legacy systems with closed database platforms can integrate with newer systems.
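As a brief, hypothetical illustration of the direct database access described above, the following Java fragment uses JDBC to open a connection to a partner-facing database and insert a purchase-order row. The connection URL, credentials, table, and column names are invented for the example and would be replaced by whatever the recipient database actually exposes.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class OrderInsert {
    public static void main(String[] args) throws Exception {
        // Placeholder JDBC URL and credentials; any JDBC-compliant driver could be used.
        String url = "jdbc:oracle:thin:@dbhost.example.com:1521:orders";
        try (Connection conn = DriverManager.getConnection(url, "b2b_user", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                     "INSERT INTO purchase_orders (po_number, supplier_id, total) VALUES (?, ?, ?)")) {
            stmt.setString(1, "PO-1001");
            stmt.setString(2, "SUPPLIER-77");
            stmt.setBigDecimal(3, new java.math.BigDecimal("4250.00"));
            stmt.executeUpdate();  // the row is now visible to the recipient application
        }
    }
}

Because JDBC hides the storage details of the target database, the same code pattern works whether the recipient system runs Oracle, SQL Server, or DB2, which is the main appeal of database-level integration.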
Many companies use multiple databases for storing and maintaining their business data. Each database could be associated with a single application or be a point of integration for two or more applications in use within the company. Integrating data from multiple data sources complicates normal direct database integration approaches, as these typically require a single data source to connect with. To overcome the issue of integrating a business system with multiple target data sources, database gateways can be used. Database gateways are middleware technologies that provide a query facility, often using SQL, against multiple target data sources. The middleware acts as a proxy and accepts requests for data from client systems, which it then translates into a form that can be executed against one or more connected databases. The database gateway middleware merges the underlying databases to form a "composite" or "virtual" database, which is a conjoining of all or selected parts of the underlying (managed) database schemas. IBM's Distributed Relational Database Architecture (DRDA) is an example of a database gateway built into IBM's DB2 enterprise database systems to facilitate interoperation of multiple databases within heterogeneous computing environments. Although database gateways are proficient at providing read-only access to business data for remote applications, Yee & Apte (2001) maintain that gateway systems have limitations for e-commerce systems. In particular, database gateways are inefficient when integrating multiple disparate systems, as queries against the virtual database must be recast to query the underlying data sources and the results merged by the middleware to form a set of results that can be returned to the client. Further, Yee & Apte argue that the database approach to integration bypasses the business rules implemented in the application code and may result in redundant business logic, which must be both developed and maintained at some cost.
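The "virtual database" idea behind such gateways can be approximated in application code. The hedged sketch below (not DRDA or any particular gateway product) simply runs the same SQL statement against several JDBC data sources and merges the rows, which conveys the concept while also hinting at why recasting and merging queries can be inefficient. The column name is invented for the example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class CompositeQuery {
    // Runs the same query against several databases and merges the rows in memory.
    public static List<String> queryAll(List<String> jdbcUrls, String sql) throws Exception {
        List<String> merged = new ArrayList<>();
        for (String url : jdbcUrls) {
            try (Connection conn = DriverManager.getConnection(url);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(sql)) {
                while (rs.next()) {
                    merged.add(rs.getString("po_number"));  // illustrative column name
                }
            }
        }
        return merged;
    }
}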
API Integration
As an alternative to integration through direct access to the data, applications can often share information via a set of interfaces "built in" to the application and designed to be accessed by other programs. These application programming interfaces (APIs), as they are called, allow external applications (clients) to share data and business logic with a host application (server), often via the Internet or other connection media. Depending on the type of technology used, the applications, and the form of the APIs, the two independent programs can share simple computer data composed of text strings and numbers and even complex structured data objects that represent an atomic information component such as a customer or purchase order. Some APIs allow the client system to execute business logic functions on the server, such as removing an element of data or performing some action. Integration via APIs is often more difficult in a heterogeneous computing environment, as differences in how systems can be accessed and in the representation of data components such as strings, numbers, dates, and complex higher-order objects can vary enough to render data generated on one system unintelligible on another. These
well-known system compatibility issues often require middleware applications, which can broker between the data representations on the different systems. Remote procedure calls (RPCs) are a middleware technology that provides a framework for translating system-specific data to and from a common data format, which can then be transferred without loss of representation between client and server systems over a computer network. RPC frameworks, initially developed to network UNIX applications together, are available on most computer systems used by businesses today and rely on a common interface definition language (IDL) that specifies the common representation to which all data must be translated before they can be sent over the network.

Other API technologies are in common use. Microsoft (http://www.microsoft.com) has extended its Component Object Model (COM) to allow computers to share data and methods between systems running different versions of Windows software over a network. This Distributed COM (DCOM), later extended as COM+, is available on all second-generation Microsoft Server products. The Common Object Request Broker Architecture (CORBA) is a system similar to Microsoft's DCOM that was developed as an open specification by the Object Management Group (http://www.omg.org), a consortium of over 800 member organizations. The initial CORBA specification was developed in 1991 and, although it has been overshadowed in recent years by Java and XML, is still in use, especially in the banking industry, which has adopted it as a standard method of integration.

Message-oriented middleware (MOM) was developed to address some of the issues associated with tightly coupled solutions such as RPC and COM. MOM applications transfer data between applications in the form of messages: application-defined units of data that may be binary or text-based. Most message-driven applications use an asynchronous processing model rather than the synchronous model used by most tightly coupled systems. In an asynchronous message-driven system, one application sends messages to another application. Unlike RPCs, which deliver the data immediately, the messages sent from one application to another using MOM are typically placed into a specialized piece of software that allows the messages to be stored and queued. When the recipient application is ready to process messages, it accesses the queue and extracts the messages addressed to it. This model allows the sending system to send messages, possibly to several systems at the same time, without having to wait for the recipient applications to process the data. This "fire-and-forget" model allows all systems to work independently and at different speeds. IBM's WebSphere MQ software (formerly MQSeries) is an example of a widely deployed MOM system.

Java is a computer language developed by Sun Microsystems (http://java.sun.com) in the 1990s specifically for use on the Internet. Unlike most other computer languages, Java uses a data format that is common to all Java applications and independent of the hardware/software environment, which negates the need for an intermediate language such as IDL to represent data that will be transmitted between applications over a network. This "one-size-fits-all" approach greatly decreases the cost of developing and maintaining networked applications,
although Java critics maintain that Java is slow compared to other programming languages because of its device-independent run-time architecture. Since its inception the Java language has grown rapidly to accommodate the evolving needs of the business community. Versions of the Java language are now available specifically to meet the needs of enterprise applications and mobile users. Java 2 Enterprise Edition (J2EE) provides a framework for developing robust distributed applications using specialized software platforms called application servers. J2EE application servers such as BEA Systems' WebLogic (http://www.bea.com/framework.jsp?CNT=index.htm&FP=/content/products/platform), IBM's WebSphere (http://www-3.ibm.com/software/info1/websphere/index.jsp), and Oracle's 9iAS (http://www.oracle.com/ip/deploy/ias/) provide frameworks for efficiently accessing databases, managing transactions, creating message-driven applications, and developing Web-based interfaces. Java 2 Micro Edition (J2ME) is a version of the Java language specifically designed for mobile devices such as Personal Digital Assistants (PDAs) or in-vehicle systems.

The recent adoption and proliferation of XML and XML-oriented middleware has provided an alternative means for sharing data between systems and is the basis of the new generation of interapplication methods called Web services. Web services are similar to RPCs in that middleware exists on both the client and the server that communicates and acts as a proxy for the client and server applications. Web services use a common XML document format based on the Simple Object Access Protocol (SOAP) specification, developed jointly by Microsoft, IBM, and others, to share data, and they can use commonly available Internet connections and protocols such as HTTP and TCP/IP to communicate. Web services further benefit system developers because dynamic discovery mechanisms and centralized service registries allow client applications to locate services, "discover" the interfaces and data formats those services offer, and integrate with them automatically, reducing the time and complexity required to build integration components.
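To make the asynchronous, fire-and-forget MOM model described above concrete, here is a minimal Java sketch that places a purchase-order message on a queue using the standard JMS API. The JNDI names ("ConnectionFactory", "queue/Orders") and the message content are illustrative assumptions; they would depend on the message broker actually deployed.

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;

    public class OrderSender {
        public static void main(String[] args) throws Exception {
            // Look up the broker's connection factory and queue; these JNDI names are illustrative only
            InitialContext jndi = new InitialContext();
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory");
            Queue queue = (Queue) jndi.lookup("queue/Orders");

            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);

            // Fire and forget: the sender does not wait for the receiving application,
            // which drains the queue whenever it is ready to process messages
            TextMessage message = session.createTextMessage("<purchaseOrder id=\"12345\"/>");
            producer.send(message);

            connection.close();
        }
    }

The receiving application would read from the same queue on its own schedule, which is what lets the two systems run independently and at different speeds.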
Process-Oriented Integration
Process-oriented integration focuses on the logical sequencing of events and the processing of information as it moves within and between business organizations, and it rests on a business-process rather than a technological foundation. The goal of process-oriented integration is for trading partners to share relevant business data with the intention of increasing competitive advantage through cooperation. This approach tends to be more strategic than tactical: the results are often hard to measure in terms of traditional investments because they involve developing trust relationships with suppliers and sharing private and often confidential information to realize more intangible benefits such as better products, increased customer satisfaction, and better supply-chain operation. Sharing production forecasts and schedules with suppliers, for example, allows business partners to better plan their own production activities, which in turn can lead to lower costs
overall, as the guesswork involved in anticipating demand can be removed and the likelihood of stock-outs diminished. Changes in production schedules can similarly be communicated, allowing suppliers to automatically adjust their own production schedules to match, thereby reducing waste and uncertainty. In its simplest form, process-oriented integration might be a group of companies agreeing on a common suite of products to use for their internal systems, such as SAP R/3 (http://www.sap.com) or Oracle's e-business applications suite (http://www.oracle.com/applications), and then deciding which processes they are willing to externalize as integration points. Establishing a common technology platform for the business systems also establishes a framework for sharing information, because the data moving between businesses are guaranteed to be compatible and interchangeable: most enterprise-scale systems have built-in APIs or messaging systems to facilitate the sharing of data within and across organizations.
Evaluating and Selecting Integration Approaches
The wide variety of methods available for B2B system integration provides companies in the planning phase of a B2B initiative with a number of alternatives to choose among. Key design characteristics that must be evaluated for new systems include open versus closed implementations; integration at the data level, using databases and/or middleware APIs, or at a systemic level, by adopting common business processes and systems; whether to use the public Internet as the data transport medium or to invest in private or semiprivate networks; and, lastly, the ease of integration with existing and planned internal systems. One of the key decisions is to determine the level of effort necessary for trading partners to couple to and decouple from the trading environment. Systems such as SAP R/3, which use proprietary technology and require specialized hardware and connectivity environments, can be expensive and difficult to implement. Hershey Foods, for example, spent three years and $115 million implementing a software system from SAP. The new system was designed to replace a number of older systems and tie into supply-chain-management software from Manugistics and customer relationship management products from Siebel. Glitches in the ordering and shipping systems, however, resulted in a 12.4% drop in sales during the company's busiest quarter (Osterland, 2000). ERP II is a new generation of Internet-centric enterprise resource planning (ERP) software designed specifically to address the kinds of implementation issues, such as those experienced by Hershey Foods, associated with sharing information across a supply chain. For companies that have already invested in ERP systems, ERP II will allow them to leverage their investments and move toward a collaborative planning system by upgrading their existing ERP systems over time (Bond et al., 2002). Companies without an existing investment in ERP will be able to adopt a system that provides both internal and external business process integration. In contrast to ERP II, lightweight technologies like XML/SOAP can provide points of integration between
business systems and facilitate relatively low costs of entry into a trading consortium. A possible disadvantage of this relatively low cost of entry and ease of implementation is that trading partners could move to a competitor's system with relative ease, thereby undermining the trust relationship between companies. Other key decisions will center on the robustness, security, and scalability of the technology being selected. Last but not least, companies should critically examine the record of systems vendors with respect to implementation success, after-sale support, and case studies demonstrating the advertised ROI.
B2B E-COMMERCE CHALLENGES
There are several challenges for companies planning an e-B2B strategy, including managing and valuing e-B2B projects, getting up to speed on the regulatory environment surrounding e-B2B, and meeting the technical challenges associated with selecting and implementing e-B2B technologies.
Management Challenges
One significant management challenge associated with developing and sustaining a viable e-B2B initiative is measuring the true tangible and intangible benefits of the investment. Intangible benefits are hard to isolate and quantify and can accordingly affect how an investment is perceived by the company and shareholders. Other challenges for management include developing and maintaining an information technology strategy in an environment that is rapidly changing and evolving and managing the trust and expectations of business partners.
Measuring Intangible Benefits
Businesses choose to develop an e-B2B strategy for a variety of reasons, such as increasing process efficiency, reducing costs, and integrating new suppliers and customers. Tangible benefits such as increased sales, decreased production time, and reduced waste can be estimated fairly accurately. However, the intangible benefits of e-B2B are hard to evaluate using purely economic measures. For example, increased customer satisfaction and stronger relationships with business partners are two benefits that are highly sought but difficult to measure. The potential benefits from increased global visibility through an electronic presence on the Internet might include better hiring prospects or more favorable investment opportunities. Again, such benefits are extremely difficult to quantify.
Managing Trust Relationships
Coupling business processes through technological frameworks requires that participants trust each other with valuable and often confidential business information such as new product specifications, purchasing patterns, and production forecasts. As businesses increasingly move toward a pattern of sharing information in real time, a bond of trust must be established and proactively managed to ensure continued and mutual benefits for all companies involved. This trust relationship involves not only guaranteeing the security and confidentiality of the
data being shared but also guaranteeing the accessibility and reliability of the systems being integrated. Managing trust relationships involves determining and maintaining predefined levels of system performance such as the speed and volume of transactions that can be processed, the stability of the information systems, and how systems should respond to erroneous or unprocessable data. Also important are agreements to establish responsibility and availability of the systems for routine maintenance and upgrade cycles.
Managing Information Technology Infrastructure
In an e-business, the information technology infrastructure of a company is the foundation for business success. The information technology infrastructure must be carefully managed to support the business goals of the company and should be perceived internally as a strategic asset. The Hurwitz Report (Hurwitz Group, 2001) identifies several ways in which the management and perception of the role of a company's IT organization are critical to success. In traditional companies, the IT group is responsible for managing internal processes and is used to support or maintain business functions. While IT groups are often treated as overhead and run as cost centers in traditional businesses, the IT organization in successful e-businesses is viewed as a revenue generator and treated as a competitive asset, allowing innovation within the organization to drive competitive advantage and help reduce costs. The reliability of the infrastructure is also paramount and must be carefully managed. As companies become more reliant on online processes, the potential consequences to a company and its business partners of even a temporary loss of service can be devastating. For example, a survey of companies in 2001 showed that for 46% of the companies surveyed, system downtime would cost up to $50,000 per hour; 28% of the respondents would incur a loss of up to $250,000 per hour; and 26% would lose up to, or over, $1 million per hour (Contingency Planning Research, 2001).
Managing Expectations
Simply developing the e-business technology is just the start of the electronic collaboration process. A key challenge in successfully implementing an e-business strategy is managing the expectations of the e-business and effectively communicating the benefits to business partners. At the request of the boards of the Grocery Manufacturers of America and Food Marketing Institute Trading Partners Alliance (TPA), A. T. Kearney developed an action plan to accelerate the degree of cooperation between TPA members after over $1 billion had already been invested in a variety of exchanges and electronic collaboration platforms within the industry. Central to the recommendations made by A. T. Kearney was the need for better communication among the partners to address common concerns over data synchronization, education about the benefits of collaboration and implementation best practices, and regular feedback through surveys and progress-tracking initiatives. Also identified was the need for individual companies to proactively encourage trading partners to join through training and the sharing of best practices (Kearney, 2002).
Monitoring and Regulation Challenges
There are many challenges associated with monitoring and regulating online businesses. These challenges are exacerbated by the increasing internationalization of business. Issues with taxation, security, and privacy are more difficult to manage when applied in a global environment where the laws and ethics governing business are often conflicting rather than complementary. Issues also exist with standardizing the accounting mechanisms for digital businesses and processes, as traditional accounting principles must adapt to the new business environment. Lastly, because of the global nature of the Internet, managing the security of digital systems and prosecuting those who attempt to disrupt the flow of digital information provide significant challenges for international legal organizations.
Internet Business
As a global phenomenon, e-B2B poses a complicated series of issues for those organizations charged with monitoring and regulating international business. The Internet compresses the natural geographic separation of businesses and the related movement of products and money between these businesses and allows business operations to span political borders literally at the speed of light. To date, the Internet is largely uncontrolled, and business on the Internet follows suit. It is the responsibility of individual countries to decide how to regulate, how much to regulate, and who should regulate the Internet and Internet-related business operations within their boundaries, as well as the degree to which they should accommodate the rules and regulations set up by other countries. Within any country there are opposing forces at work. Governments are trying to regulate the Internet in a way that stimulates their economies and encourages use. At the same time, governments must protect the rights of citizens and existing businesses. It is not surprising that few Internet regulatory laws have been passed even though e-B2B has been in place for many years. To further complicate the regulatory issue, the ubiquity of the Internet has resulted in the rapid expansion of businesses interoperating across international borders. Traditionally, only large, well-financed companies could perform international commerce; now there are no limits to how small a multinational company can be.
Confidentiality and Privacy
As businesses and industries in general move toward higher degrees of collaboration at a digital level, issues about the security of the digital information being collected, stored, and transmitted as part of normal business operations become more important. Many of the regulatory laws that do exist pertain to the protection of personal, confidential, or legally sensitive business data. For example, in the United States personal finances and healthcare records must now be protected from accidental or malicious exposure. The Health Insurance Portability and Accountability Act (HIPAA, 1996) is an attempt to regulate the movement, storage, and access to healthcare-related personal information through enforcement of privacy standards. HIPAA is a direct regulatory response within
the United States to several well-publicized breaches of doctor–patient trust including the errant transfer of prescription records on a computer disk sold to an individual in Nevada, medical records posted on the Internet in Michigan, and digital media containing medical records stolen from a hospital in Florida. These are just a few of the better-publicized cases. The HIPAA regulation is scheduled to go into effect in 2003.
Accounting for Digital Transactions and Digital Assets
The Financial Accounting Standards Board (FASB, 2001) identifies several challenges to the accounting industry associated with the transition from a traditional paper-driven economy to one where business is transacted using digital documents. From an accounting perspective, there is no standard measure for determining the short-term or long-term value of the technology, knowledge capital, and intellectual property associated with implementing and managing an e-business infrastructure. Yet companies are investing huge amounts of capital in these activities, and those investments must be accounted for fairly. Valuing these intangible assets would require that the accounting profession extend its province to nonfinancial areas of business operations and generate standard frameworks and metrics for reporting and tracking nonfinancial information.
Computer Crimes
There are many types of criminal activity associated with the Internet, ranging from petty acts of vandalism and copyright violations to organized, systematic attacks with malicious intent to cause damage, financial loss, or, in extreme cases, wholesale disruption of critical infrastructure. Each week, it seems, new attacks, break-ins, viruses, or stolen data are made public. Different countries, however, treat different types of computer attacks in different ways, with little consensus as to the degree of criminality and severity of punishment. Turban et al. (2002) provide a comparison of computer crime legislation in several countries and note that an activity such as attempting to hack (to gain unauthorized access to a system), while legal in the United States, is illegal in the United Kingdom. However, successful hacking that causes loss is criminal in both countries but is punished far more severely in the United States, where it carries a maximum penalty of 20 years. Security breaches and malicious attacks such as vandalizing Web sites and disabling computer servers using viruses, denial-of-service (DoS) attacks, and buffer-overflow techniques can prove costly to e-businesses by making the systems unavailable for normal operation, often resulting in incomplete transactions or lost business while the systems are incapacitated. A recent survey of 503 computer security practitioners in the United States reported that 90% of respondents detected security breaches in their systems within the 12-month reporting period. Eighty percent of the respondents attributed financial loss to these security breaches, amounting to approximately $500 million for the 44% who were able to quantify their loss (Computer Security Institute, 2000). The global nature of the Internet makes regulating this so-called cybercrime a significant issue, as attacks
against systems can be launched from countries with less stringent regulations and lower rates of enforcement. The United States has recently caused international concern by successfully detaining and convicting foreign nationals accused of crimes against systems in the United States even though the attacks were launched from non-U.S.-based systems (U.S. Department of Justice, 2002). Not all attacks are malicious, however; some are perpetrated to expose security holes in business systems and draw public attention to issues of vulnerability, with the intent of eliciting a general hardening of the systems by the systems' developers (Hulme, 2002).
Technological Challenges
Some of the most extreme information technology requirements found in the commercial business world are associated with designing and implementing large-scale e-business systems. Issues with the design and deployment of such systems include localizing the applications for international users, managing the quality of service, and protecting access to the data and applications.
Localization and Globalization
Like other forms of e-commerce, e-B2B is increasingly multinational, and differences in language and culture can affect the usability of systems. One technological challenge is the process of designing and developing software that functions in multiple cultures/locales, otherwise known as globalization. Localization is the process of adapting a globalized application to a particular culture/locale. A culture/locale is a set of rules and a set of data specific to a given language and geographic area. These rules and data include information on character classification; date and time formatting; numeric, currency, weight, and measure conventions; and sorting rules. Globalizing an application involves identifying the cultures/locales that must be supported by the application, designing features that support those cultures/locales, and developing the application so that it functions equally well in any of the supported cultures/locales. If business systems are to share information across national and cultural borders, the information they are sharing must be both syntactically correct and unambiguous for all the systems involved. Of particular importance to cross-cultural systems are the design and development of data storage and data processing systems that can accommodate the differences in data and translate the data between supported locales. Consider three companies operating in the United States, the U.K., and Japan, respectively, and sharing data about a shipment that will occur on the date specified by the string "07/08/02." In the United States this date string is interpreted as July 8, 2002. In the U.K. the day precedes the month, resulting in a locale-specific interpretation of August 7, 2002. In Japan the string is read as year/month/day, resulting in an interpreted shipment date of August 2, 2007. This is a simple example, but the cost associated with developing new or re-engineering existing systems to translate culture/locale-specific values such as simple date
strings across a variety of cultures/locales is high. There are similar issues associated with sharing data that contain time information, addresses, telephone numbers, and currency.
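The shipment-date ambiguity above can be made concrete with a short Java sketch that formats the same calendar date under U.S., U.K., and Japanese locales using the standard locale-sensitive formatting classes; the class and variable names are illustrative only.

    import java.text.DateFormat;
    import java.util.Calendar;
    import java.util.Locale;

    public class LocaleDateDemo {
        public static void main(String[] args) {
            // The shipment date from the example: July 8, 2002
            Calendar shipment = Calendar.getInstance();
            shipment.set(2002, Calendar.JULY, 8);

            Locale[] locales = { Locale.US, Locale.UK, Locale.JAPAN };
            for (Locale locale : locales) {
                // The SHORT style typically yields compact numeric forms such as
                // 7/8/02 (US), 08/07/02 (UK), and 02/07/08 (Japan)
                DateFormat df = DateFormat.getDateInstance(DateFormat.SHORT, locale);
                System.out.println(locale + ": " + df.format(shipment.getTime()));
            }
        }
    }

Parsing is the mirror image of this: the same string "07/08/02" handed to each locale's short-date parser yields three different dates, which is exactly the ambiguity that integration middleware must resolve, typically by exchanging dates in an unambiguous interchange form such as ISO 8601 (2002-07-08).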
Scalability, Reliability, and Quality of Service
The general planning considerations for engineering a service such as a B2B marketplace are to provide sufficient system functionality, capacity, and availability to meet the planned demand for use of the systems, which translates loosely into the number of transactions that can be processed in a given time period. The difficulty lies in estimating and planning the processing requirements for a system and engineering a system that can respond to periodic increases in use. In particular, system designers need to understand the demand on the system during "normal" operations and how the demand might vary over a period such as a single day, a week, a month, or the course of a year. Further, system designers need to plan for the effects of "special events," which might cause a short-term increase in the use of the system. Lastly, the system designers need to understand how the B2B interactions translate to workload against internal systems such as an ERP, and they need to match the capacity of the internal systems with that of the external-facing systems being designed so that the internal systems do not become the weak link in the processing chain. From a trading-partner perspective, the primary concerns will pertain to the quality of service (QOS) provisions for the system they are going to integrate against. The QOS concerns can be broken down into four key areas: system reliability, system security, system capacity, and system scalability. E-business companies need to set design goals for these key QOS requirements and plan the level of investment and predicted ROI accordingly.
Protection of Business Data and Functions
One result of growth in e-business activity is the associated increase in the transmission and storage of digital information and the corresponding increase in reliance on information systems to support business activities. This poses two major problems for information technology management: first, how to maintain the integrity and confidentiality of business information and, second, how to protect the information systems themselves from security breaches, malicious attacks, or other external factors that can cause them to fail. Potentially sensitive information shared between businesses is at risk not only during the transmission of the information from one system to another but also as it is stored on file servers and in databases accessible over the computer network infrastructure. The general trend toward open systems poses transmission security issues because open systems rely on text-based data formats such as EDI and XML. If intercepted, these documents can be read and understood using a variety of software publicly available on the Internet. Standards for encrypting XML documents are being developed but have yet to reach the mainstream. Encryption technologies for securing data moving over the Internet between business partners, such as VPNs, have been available for a decade or more but rely
on coordinating privacy schemes among businesses, sharing encryption keys, and, above all, developing effective implementation policies that must be constantly revised and tuned to adapt to changing events. Even using encryption systems is often not enough, as systems using 40- or 56-bit keys can now be broken by brute-force methods in a few minutes on a personal computer. On the other hand, strong encryption systems that protect data from these brute-force attacks often cannot be shared with partners outside of the United States. Maintaining the integrity and confidentiality of business data and computer systems is important. An often overlooked part of the e-business process, however, is the protection of access to the business systems themselves. The tragic events of September 11, 2001, in New York City have served to underline the fragility of electronic business systems and have provided new priorities for e-business companies. Rather than simply protecting business data stored in corporate databases by backing it up on tape, companies are now considering holistic approaches to protecting business operations, including protecting access to critical applications such as e-mail as well as protecting lines of communication to those businesses upon which they are dependent (Garvy & McGee, 2002).
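As a minimal sketch of symmetric encryption applied to a business document before transmission, the following Java example uses the standard javax.crypto API. The document content is invented, and the bare cipher shown is illustrative only; a production system would rely on a vetted protocol such as a VPN or TLS, an agreed key-exchange mechanism, and an explicitly chosen cipher mode.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;

    public class DocumentEncryptionSketch {
        public static void main(String[] args) throws Exception {
            // Generate a 128-bit AES key; in practice the key would be agreed with the
            // trading partner through a secure key-exchange or certificate-based mechanism
            KeyGenerator keyGen = KeyGenerator.getInstance("AES");
            keyGen.init(128);
            SecretKey key = keyGen.generateKey();

            String purchaseOrder = "<purchaseOrder id=\"12345\">...</purchaseOrder>";  // hypothetical document

            // Illustrative only: a real deployment would pin the mode and padding (e.g., CBC with an IV)
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] ciphertext = cipher.doFinal(purchaseOrder.getBytes("UTF-8"));

            System.out.println("Encrypted document is " + ciphertext.length + " bytes");
        }
    }

The key length chosen here (128 bits) is well beyond the 40- and 56-bit keys mentioned above, which is why key management and export considerations, rather than raw cipher strength, tend to dominate the practical difficulties of interbusiness encryption.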
B2B E-COMMERCE IN PERSPECTIVE
Whether or not the actual global value of e-B2B activity meets the predictions for 2005 mentioned in the Introduction, the current rates of adoption and the continuing pervasiveness of e-B2B activity across all types of business and industry indicate that e-B2B is going to remain a major driving force within the global economy. At the same time, e-B2B should be regarded as a nascent activity that is rapidly emerging and consequently exhibiting growing pains. Based on a survey of 134 companies around the world, the Cutter Consortium provides some interesting insight into the e-B2B implementation experience (Cutter Consortium, 2000). Asked to rank the obstacles to e-business, respondents chose "benefits not demonstrated" as the number-one obstacle, followed by financial cost and technological immaturity. Success with electronic supply-chain management is mixed, with half of those using it enjoying success rates of 76–100% and about a third experiencing success rates of 0–10%. Similarly, Deloitte & Touche examined data from 300 U.S.- and U.K.-based companies in a wide variety of industries and identified a highly conservative approach to e-business. Less than half of the companies examined expected their e-business strategy to involve transforming business processes, while the majority expected to engage in simple Internet-based buying and selling. More telling is that only 28% of the companies had actually developed a formal e-business strategy, the majority being in some stage of investigation (Rich, 2001). Undemonstrated returns on investment, unsuccessful implementations, and lack of knowledge about the
technology, the risks, and the rewards associated with e-B2B all combine to make businesses cautious as they plan ahead for a connected future. The Deloitte & Touche study reports that approximately 50% of respondents indicated that the major barriers to e-B2B lie in a lack of skills and training or in the existing business culture, compared to approximately 20% of respondents who perceive technological or security issues as the major barrier. These are issues that can be overcome with time as the global corporate knowledge base grows, as more case studies illustrating successful and profitable implementations become available, and as the technology framework becomes more robust, secure, and prevalent. The success stories available are highly compelling: General Motors, Federal Express, and Cisco are examples that clearly demonstrate the returns possible when e-business is implemented successfully. The bottom line that drives most organizations to evaluate and adopt e-B2B is that performing business electronically is both cheaper to execute and more efficient in terms of time than traditional means. An electronic transaction is also more accurate, which often draws trading partners into supply-chain integration efforts. Indeed, recent evidence indicates that businesses are still investing in e-B2B infrastructure, particularly for electronic supply-chain integration, including e-procurement, as well as for e-markets. Berlecon Research estimates that the number of B2B exchanges worldwide will grow 10-fold over the next two years, particularly in Europe, where there are several underserved industries that lack online B2B marketplaces (Wichmann, 2002). Clearly, implementing an e-business strategy is highly technical and involves many facets of information technology that are new to most companies and indeed new to the information technology industry as well. In essence, everyone is learning how to perform e-business, and the technical solutions are adapting to meet the evolving requirements of e-B2B. However, as one author puts it, "Business comes before the e" (Horne, 2002), reinforcing the idea that e-business is not an end in itself but should be regarded as an extension of a sound and well-constructed business plan.
GLOSSARY
ActiveX Data Objects (ADO) Middleware developed by Microsoft for accessing local and remote databases from computers running a Windows-based operating system.
Application programming interface (API) Components of an application that allow other applications to connect to and interact with the data and services of the application; usually published by the application developer as a formal library of functions.
Dot-com Companies that emerged during the Internet boom of the late 1990s with a focus on building applications for or selling products on the World Wide Web; usually funded by large amounts of venture and private equity capital.
E-business All types of business activity performed using electronic means; has a wider context than e-commerce and includes business exchanges that involve interorganizational support, knowledge sharing, and collaboration at all levels.
E-commerce A form of e-business that results in a business transaction, such as a buy or sell event, being performed using electronic means.
Electronic data interchange (EDI) One of the first forms of e-business, used to pass electronic documents between computer systems, often over private telecommunication lines or value-added networks; still used by many businesses today.
Electronic funds transfer (EFT) An early form of e-commerce used to transfer money electronically between banks.
Enterprise resource planning (ERP) A collection of applications designed to manage all aspects of a business. ERP systems are designed to integrate sales, manufacturing, human resources, logistics, accounting, and other enterprise functions within an organization.
Intermediary A company that adds value or assists other companies in performing supply-chain activities such as connecting suppliers with buyers.
Java Database Connectivity (JDBC) Middleware used for accessing remote databases from an application written in the Java programming language.
Message-oriented middleware (MOM) Middleware used to integrate applications using a system of messages and message queues; allows systems to share data using an asynchronous processing model.
Middleware Software that enables the access and transport of business data between different information systems, often over a computer network.
Open Database Connectivity (ODBC) Middleware specification originally designed by Microsoft for accessing databases from Windows platforms using a standard API; ODBC has since been ported to UNIX and Linux platforms.
Protocol A standard for the format and content of data passed between computers over a computer network; often maintained by independent organizations such as the World Wide Web Consortium.
Remote procedure call (RPC) Middleware technology originally developed for UNIX systems for sharing data and methods between applications over a network.
Structured query language (SQL) A computer language developed by IBM in the 1970s for manipulating data stored in relational database systems; became the standard language of databases in the 1980s.
Supply chain The end-to-end movement of goods and services from one company to another during a manufacturing process.
Transmission control protocol/Internet protocol (TCP/IP) Communication protocols originally developed as part of ARPANET that today form the basic communication protocols of the Internet.
Transaction A record of a business exchange such as a sell or a buy event.
Tymnet An early value-added network developed by Tymshare Inc. and used by companies for transferring computer files between computer systems; at one time the largest commercial computer network in the United States, it was later sold to MCI.
UNIX-to-UNIX-Copy (UUCP) A utility and protocol available on UNIX systems that allows two computers to share files over a serial connection or over a telephone network using modems.
Value-added network (VAN) A form of computer network connection, often between two companies performing e-business, that is managed by a third party; initially used to transfer EDI documents, although modern VANs can operate over the Internet.
Virtual private network (VPN) A form of network connection between two sites over the public Internet that uses encrypted data transmission to provide a private exchange of data.
CROSS REFERENCES
See Business-to-Business (B2B) Internet Business Models; Business-to-Consumer (B2C) Internet Business Models; Click-and-Brick Electronic Commerce; Collaborative Commerce (C-commerce); Consumer-Oriented Electronic Commerce; Electronic Commerce and Electronic Business; Electronic Data Interchange (EDI); Electronic Payment; E-marketplaces; Internet Literacy; Internet Navigation (Basics, Services, and Portals).
REFERENCES
Aberdeen Group (2001a). FedEx taps e-procurement to keep operations soaring, cost grounded. Retrieved November 16, 2002, from http://www.ariba.com/requestinfo/requestinformation.cfm?form=whitepaper
Aberdeen Group (2001b). E-procurement: Finally ready for prime time. Retrieved November 16, 2002, from http://www.aberdeen.com/abcompany/hottopics/eprocure/default.htm
Bond, B., Genovese, Y., Miklovic, D., Wood, N., Zrimsek, B., & Rayner, N. (2002). ERP is dead—Long live ERP II. Retrieved January 18, 2003, from http://www.gartner.com/DisplayDocument?id=314701
Colkin, E. (2002, September 16). Hastening settlements reduces trading risk. Information Week, 24.
Computer Security Institute (2000). CSI/FBI computer crime and security survey. Retrieved November 16, 2002, from http://www.gocsi.com/press/20020407.html
Contingency Planning Research (2001). 2001 cost of downtime survey. Retrieved January 18, 2003, from http://www.contingencyplanningresearch.com
Cutter Consortium (2000). E-business: Trends, strategies and technologies. Retrieved November 16, 2002, from http://www.cutter.com/itreports/ebustrend.html
Davis, W. S., & Benamati, J. (2002). E-commerce basics: Technology foundations and e-business applications. Boston: Addison Wesley.
Emiliani, M. L., & Stec, D. J. (2002). Aerospace parts suppliers' reaction to online reverse auctions. Retrieved January 18, 2003, from http://www.theclbm.com/research.html
Financial Accounting Standards Board (FASB) (2001). Business and financial reporting, challenges from the new economy (Financial Accounting Series Special Report 219-A). Norwalk, CT: Financial Accounting Foundation.
Gartner Group (2001). Worldwide business-to-business Internet commerce to reach $8.5 trillion in 2005. Retrieved November 16, 2002, from http://www3.gartner.com/5about/pressroom/pr20010313a.html
Greenstein, M., O'Leary, D., Ray, A. W., & Vasarhelyi, M. (in press). Information systems and business processes for accountants. New York: McGraw Hill.
HIPAA (1996, August 21). Health Insurance Portability and Accountability Act of 1996, Public Law 104-191. Retrieved November 16, 2002, from http://aspe.hhs.gov/admnsimp/pl104191.htm
Horne, A. (2002). A guide to B2B investigation. Retrieved November 16, 2002, from http://www.communityb2b.com/news/article.cfm?oid=620910D2-91EF-4418-99B3-11DBB99F937B
Hulme, G. (2002, August 8). With friends like these. Information Week. Retrieved November 16, 2002, from http://www.informationweek.com/story/IWK20020705S0017
Hurwitz Group (2001). E-business infrastructure management: The key to business success. Framingham, MA: Hurwitz Group.
Garvy, M. J., & McGee, M. K. (2002, September 9). New priorities. Information Week, 36–40.
Kearney, A. T. (2002). GMA-FMI Trading Partner Alliance: Action plan to accelerate trading partner electronic collaboration. Retrieved November 16, 2002, from http://www.gmabrands.com/publications/docs/ecollexec.pdf
Konicki, S. (2001, August 27). Great sites: Covisint. Information Week. Retrieved November 16, 2002, from http://www.informationweek.com/story/IWK20010824S0026
Linthicum, D. S. (2001). B2B application integration: E-business enable your enterprise. Boston: Addison-Wesley.
Microsoft (2000). MS market—Intranet-based procurement. Retrieved January 18, 2003, from http://www.microsoft.com/technet/treeview/default.asp?url=/technet/itsolutions/intranet/case/msmproc.asp
Norris, G., Hurley, J., Hartley, K., Dunleavy, J., & Balls, J. (2000). E-business and ERP: Transforming the enterprise. New York: Wiley.
Osterland, A. (2000, January 1). Blaming ERP. CFO Magazine. Retrieved November 16, 2002, from http://www.cfo.com/article/1,5309,1684,00.html
Rich, N. (2001). e-Business: The organisational implications. Retrieved November 16, 2002, from http://www.deloitte.com/dtt/cda/doc/content/Mannrebriefing.pdf
Schneider, G. P. (2002). Electronic commerce (3rd ed.). Canada: Thomson Course Technology.
Slater, D. (2002, April 1). GM shifts gears. CIO Magazine. Retrieved November 16, 2002, from http://www.cio.com/archive/040102/matters.html
Small Business Administration (SBA) (2000). Small business expansions in electronic commerce: A look at how small firms are helping shape the fastest growing segments of e-commerce. Washington, DC: U.S. Small Business Administration Office of Advocacy.
Strategis (2002). Electronic commerce in Canada. Retrieved November 16, 2002, from http://ecom.ic.gc.ca/english/research/b2b/index.html
Turban, E., King, D., Lee, J., Warkentin, M., & Chung, H. M. (2002). Electronic commerce: A managerial perspective (2nd ed.). Englewood Cliffs, NJ: Prentice Hall.
U.S. Department of Justice (2002). Russian computer hacker sentenced to three years in prison. Retrieved January 19, 2003, from http://www.cybercrime.gov/gorshkovSent.htm
Wichmann, T. (2002). Business-to-business marketplaces in Germany—Status quo, opportunities and challenges. Retrieved November 16, 2002, from http://www.wmrc.com/businessbriefing/pdf/euroifpmm2001/reference/48.pdf
Yee, A., & Apte, A. (2001). Integrating your e-business enterprise. Indianapolis, IN: Sams.
Business-to-Business (B2B) Internet Business Models
Dat-Dao Nguyen, California State University, Northridge
Nature of B2B E-commerce
B2B Business Models by Ownership
Direct Selling
Direct Buying
Exchange/Trading Mall
B2B Business Models by Transaction Methods
Electronic Catalogs
Automated RFQs
Digital Loyalty Networks
Metacatalogs
Order Aggregation
Auction
Bartering
Merits and Limitations of B2B E-commerce
Merits of B2B E-commerce
Limitations of B2B E-commerce and Possible Solutions
Critical Success Factors for B2B E-commerce
B2B E-commerce Enable Technologies and Services
Beyond Selling and Buying B2B Models
Glossary
Cross References
References
NATURE OF B2B E-COMMERCE
In general, B2B e-commerce can be classified according to the nature of the goods/services in transaction, the procurement policy, and the nature of the supply chain. Businesses conduct B2B e-commerce to sell or buy goods and services for production and/or nonproduction. Production materials, or direct materials, go directly into the production of goods or services. Usually they are not shelf items that can be readily purchased at any time in the marketplace. Their use is scheduled according to a production plan. They are purchased in large volume after negotiation and contracting with the sources to guarantee a continuous stream of input materials for the production process. Nonproduction materials, or indirect materials, are used in maintenance, repairs, and operations; they are also called MROs and consist mostly of low-value items. Although constituting 20% of purchase value, they amount to approximately 80% of an organization's purchased items. Depending on the strategic nature of the materials in production, a procurement policy may involve a long-term contract or an instant purchase/rush order. Strategic sourcing for a long-term contract results from negotiations between suppliers and buyers. Spot buying for an instant purchase/rush order concludes at a market price resulting from the matching of current supply and demand. Because of the strategic role of direct materials in production, a manufacturer wishes to secure consistent long-term transactions at an agreed-upon price with its suppliers. Most organizations spend a great deal of time and effort on the upstream procurement of direct materials, usually high-value items, and overlook low-value items, including MROs. Consequently, there are potential inefficiencies in the procurement process, such as delays in production due to insufficient MROs and/or overpayment for rush orders to acquire these MROs.
From a value chain and supply chain perspective, B2B e-commerce can take place in a vertical market or a horizontal market. A vertical market involves transactions between and among businesses in the same industry or industry segment. Usually this market deals with the production process or the direct materials necessary for the production of goods and services of firms in the same industry. A horizontal market involves transactions related to services or products of various industries. These materials are diversified and related to the maintenance, repair, and operation of a specific firm. Most of these materials do not contribute directly to the production of the goods and/or services offered by the firm. Before the advent of B2B e-commerce, companies used the following tendering process. A department in a company submits a requisition for the goods/services it needs. The purchasing agent prepares a description of the project, giving its specification, quality standards, delivery date, and required payment method. Then the purchasing department announces the project and requests proposals via newspaper/trade magazine ads, direct mail, fax, or telephone. Interested vendors/suppliers may ask for and receive detailed information by mail. These suppliers then prepare and submit proposals. Proposals are evaluated by several departments/agents at the buying company. Negotiation may take place, and the contract is awarded to the supplier that offers the lowest price. In this process, communication mostly takes place via letter or fax/phone. Similarly, on the selling side, a supplier announces in newspapers or trade magazines the inventory to be disposed of and invites interested parties to inquire and bid. A sales force may be set up to identify and make direct contact with potential buyers. The interested parties may request, by mail or fax, additional information about the goods to be delivered. They then submit sealed bids by mail. At the closing date, the bids are examined and the highest bid wins the auction.
This manual and paper-based process takes a long time and is prone to error. Electronic processes conducted via telecommunication systems are faster but require an investment in a dedicated private network. Recently, Web-based technologies have made business communication much less expensive and easier to administer. Transactions over the Internet also make it possible to reach a larger pool of business partners and to locate the best deal for a project. There are many classification schemes for B2B e-commerce business models (Laudon & Traver, 2001; Turban et al., 2002; Pavlou & El Sawy, 2002). The classifications provide details on the context and constraints of the various business models so that an interested business may select an appropriate strategic choice to gain a competitive advantage. In the following, B2B e-commerce business models are classified on the basis of business ownership and transaction methods.
B2B BUSINESS MODELS BY OWNERSHIP
Depending on who controls the marketplace and initiates the transactions, B2B e-commerce can be classified as a company-centric model or an exchange model. A company-centric model, representing a one-to-many business relationship, involves one business party that initiates transactions and deals with many other parties interested in buying or selling its goods and services. In a direct-selling model, a company does all the selling to many buyers, whereas in a direct-buying model a company does all the buying from many suppliers. In these models, the initiating company has complete control over the supporting information systems. However, a third party may serve as an intermediary to introduce buyers to sellers and vice versa, and to provide them with a platform and other value-added services for transactions. In many cases, buyers and suppliers with idle capacity on their Internet host sites for B2B e-commerce have served as intermediaries for other, smaller businesses. An exchange or trading model, representing a many-to-many business relationship, involves many buyers and many suppliers who meet simultaneously over the Internet to trade with one another. Usually there is a market maker who provides a platform for transactions, aggregates the buyers and sellers, and then provides the framework for the negotiation of prices and terms. A variation of the exchange model is a consortium trading exchange, in which a group of major companies provides industry-wide services that support the buying and selling activities of its members. The activities can be vertical/horizontal purchasing/selling.
Direct Selling
This is a company-centric B2B model focusing on selling, in which a supplier displays the goods and services it offers for sale in a catalog at its host site. The seller could be a manufacturer or a distributor selling to many wholesalers, retailers, and businesses.
121
In this model, a large selling company transacts over a Web-based, private-trading sales channel, usually over an extranet, to its business customers. A smaller business may use its own secured Web site. The company could use some transaction models, such as direct selling from electronic catalogs, requests for proposal (RFP), and selling via forward auctions, and/or one-to-one dealing under a long-term contract. The classification of these transaction methods in B2B e-commerce is discussed in detail in the next section. In the B2B direct selling, the involved parties may benefit from speeding up the ordering cycle and reducing errors processing. They also benefit from reducing order processing costs, logistics costs, and paperwork, especially the reduction of buyers’ search costs in finding sellers and competitive prices and the reduction of sellers’ search costs in advertising to interested buyers. Most major manufacturers have conducted B2B e-commerce with their business partners. For example, Dell.com, Cisco.com, IBM.com, Intel.com, and Staples. com, among others, have special secured sites for registered partners to provide them with information on products, pricing, and terms. At this site, business customers can browse the whole catalog, customize it, and create and save a shopping list/shopping cart for internal approval before placing orders. The sites have the tracking facility for customers to follow up on the status of their orders. These sites also have links to shipper’s Web site (UPS, FedEx, Airborne Express, etc.) to help customers keep track of delivery. In this model, depending on whether a manufacturer/distributor that hosts its own Web site and support provides complete transaction or not, the company may have to pay fees and commissions to intermediaries for hosting and value-added services. Usually, corporate buyers have free access to the e-marketplace after a free registration to the site.
Direct Buying
This is a company-centric B2B model focusing on buying, in which a company posts project specifications/requirements for the goods and services it needs and invites interested suppliers to bid on the project. In this model, a buyer provides a directory of open requests for quotes (RFQs) accessible to a large group of suppliers on a secured site. The buying company doesn't have to prepare requests and specifications for each potential tender. Suppliers can be notified automatically with an announcement of available RFQs, or even have the RFQs sent directly from the buyer's site. Independent suppliers can also use search-and-match agent software to find the tendering sites and automate the bidding process. Suppliers can then download the project information from the Web and submit electronic bids for projects. The reverse auction could be in real time or last until a predetermined closing date. Buyers evaluate the bids, negotiate electronically, and then award a contract to the bidder that best meets their requirements. A large buying company can also aggregate suppliers' catalogs at its central site for ease of access by its own branch offices. These affiliates will purchase from the
most competitive supplier. In this case, the suppliers will be notified directly with an invitation to tender or with purchase orders. This model streamlines and automates the traditional manual processes of requisition, RFQ, invitation to tender, issue of purchase orders, receipt of goods, and payment. It makes the procurement process simple and fast. In some cases, it increases productivity by authorizing purchases directly from the units/departments where the goods/services are needed, thereby bypassing some paperwork at the procurement department. The model helps reduce the administrative processing costs per order and lower purchase prices through product standardization and consolidation of orders. The model also contributes to improving supply chain management by providing information on suppliers and pricing. It helps to discover new suppliers and vendors that can provide goods and services at lower cost and on a reliable delivery schedule. It also minimizes purchases from noncontract vendors, which often come at higher prices and with uncontrolled quality. An example of this model is GE's Trading Process Network (TPN), where a company's sourcing department receives internal material requests and sends off RFQs to external suppliers. GE currently opens this network to other business partners at its gxs.com site. In this model, a buying company may set up its own Web site and may engage additional services from licensed intermediaries. Suppliers have to register at the host site and may have to pay an access fee.
Exchange/Trading Mall

The exchange model involves many suppliers and buyers meeting at a marketplace for transactions. The marketplace could be a dedicated site or a trading mall open to the public. Transactions in this marketplace involve spot buying as well as negotiation for long-term buying/selling contracts. In spot buying, a deal is concluded at a price based on supply and demand at any given time in the marketplace. In systematic sourcing, the exchange aggregates the buyers and sellers and provides them with a platform for the negotiation of prices and terms. In the exchange, a company lists a bid to buy or an offer to sell goods/services. Other sellers and buyers in the exchange can view the bids and offers, although the identities of the tenderer and the bidder are kept anonymous. Buyers and sellers can interact in real time, as in a stock exchange, with their own bids and offers to reach an exact match between a buyer and a seller on price, quantity, quality, and delivery terms. Third parties outside the exchange may provide supporting services, such as credit verification, quality assurance, insurance, and order fulfillment. The exchange provides an open marketplace so that buyer and seller can conclude/negotiate the transaction at a competitive price resulting from the supply/demand mechanism. It has the characteristics and benefits of a competitive market in the sense of classical economics. A buyer may benefit from lower costs due to the large volume of goods/services being transacted. A supplier may benefit from reaching a larger pool of new buyers than is possible when conducting business in a traditional market.
In this business model, some exchanges act purely as information portals by transferring the order/inquiry to the other party via hyperlinks so that the transactions take place at the seller/buyer sites. Others aggregate suppliers and/or buyers for the convenience of the trading parties. In supplier aggregation, the exchange standardizes, indexes, and aggregates suppliers' catalogs and then makes them available to buyers at a centralized host site. Or requests for proposals (RFPs) from participant suppliers are aggregated and matched with demand from participant buyers. In buyer aggregation, RFQs of buyers, usually small ones, are aggregated and linked to a pool of suppliers that are automatically notified of the existence of current RFQs. Then the trading parties can make bids. Another type of exchange is the consortium trading exchange, formed by a group of buyers or sellers. In buying consortia, a group of companies joins together to streamline the purchasing process and to pressure suppliers to cut prices and provide quality, standardized goods/services in vertical as well as horizontal supply chain transactions. An example of a buying consortium is Covisint.com, an automotive industry joint venture by General Motors, Ford, DaimlerChrysler, Renault, Peugeot Citroen, and Nissan. In selling consortia, suppliers in the same industry deal with other downstream businesses to maintain reasonable prices and controllable production schedules for goods/services in vertical trading. An example of a selling consortium is the Star Alliance, an alliance of major domestic and international airlines, consisting of Air Canada, Lufthansa, SAS, United Airlines, and others. These companies sell or exchange seats in their airplanes to one another to assure full booking for their fleets. The use of an exchange especially benefits smaller businesses, which don't have large customer bases or supplier sources. Transactions via exchange and intermediary sites don't require additional resources for information technology infrastructure, staffing, and other related costs. If the exchange is controlled by an intermediary, this third party often assumes the responsibility for credit verification, payment, quality assurance, and prompt delivery of the goods. There are intermediary exchanges, such as eBay.com and the Trading Information Exchange of GE Global eXchange Services (gxs.com), that provide an open marketplace for many suppliers/vendors and buyers. In other cases, manufacturers provide upstream and downstream partners with a service enabling them to do business with one another. One prominent example is Boeing's secured site MyBoeingFleet.com, at which Boeing's airline customers can access the PART page to order maintenance parts directly from Boeing's suppliers. This service significantly streamlines time and labor in the procurement process for all business partners. One no longer needs to go through archives to look for blueprints, specifications, and sources of the thousands of parts of an aircraft for the requisition of a specific item. As for the revenue models of exchanges, if a major partner of the supply chain owns the site, access to an exchange marketplace could be free of charge. In other cases, participants pay an annual registration fee and/or a transaction fee that is either a fixed amount or a percentage
of the transaction volume. The participants may also pay for added-value services, such as credit verification, insurance, logistics, and collection, provided by the exchange. Some exchanges generate extra revenue from online advertisements on the site.

Table 1 B2B E-commerce Business Models

BY OWNERSHIP
- Direct Selling
- Direct Buying
- Exchange

BY TRANSACTION METHODS
- Electronic Catalogs
- Automated RFQs
- Digital loyalty networks
- Metacatalogs
- Order aggregation
- Auction
- Bartering
B2B BUSINESS MODELS BY TRANSACTION METHODS

B2B business models can also be classified by the transaction methods a buying/selling company uses to conduct business with its partners in the e-marketplace. A company may use one or more transaction models suitable for its transactions.
Electronic Catalogs

Using this model, a supplier posts an electronic version of its catalog on a Web site for free access by interested parties. The company benefits from exposure to a large pool of potential buyers over the Internet without the costly creation and distribution of voluminous catalogs. The electronic catalog can be updated in a timely manner. Most companies use this model as a supplement to their paper-based catalogs to reach more customers outside their physical facilities. The transactions incurred may be handled with a traditional procurement process. In this passive and low-cost business model, a supplier could inform potential buyers of the existence of the catalogs via regular mail or e-mail. The supplier may also register the Web site in the directories of some exchanges or intermediaries. Using a search engine, interested buyers may discover the competitive offer and then contact the supplier directly for further information about products and services.
Automated RFQs

In this model, requests for quotes (RFQs) are automatically distributed from the buying company to its business partners via a private communication network. An example of this model is GE's Trading Process Network (TPN). At GE, the sourcing department receives the requisitions electronically from other departments. It sends off RFQs containing specifications for the requisitions to a pool of approved suppliers via the Internet. Within a few hours, potential suppliers around the world are notified of incoming RFQs by e-mail or fax, instead of within days or weeks, as in the traditional paper-based system. Suppliers
have a few days to prepare bids and to send them back over the extranet to GE. The bids are then routed over the intranet to the appropriate purchasing agents, and a contract could be awarded on the same day. GE reports that, using TPN, labor involved in the procurement process was reduced by 30% and material costs were reduced by 5% to 50% due to reaching a wider base of suppliers online. Procurement departments of GE's branches around the world can share information about their best suppliers. With TPN, the whole procurement process takes a few days instead of weeks, as before. Because the transactions are handled electronically, invoices are automatically reconciled with purchase orders, and human errors in data entry/processing are minimized accordingly. In this business model, sourcing cycle time in the acquisition process is reduced significantly, with the distribution of information and specifications to many business partners simultaneously. It allows purchasing agents to spend more time negotiating for the best deal and less time on administrative procedures. A company also consolidates its partnerships with suppliers by buying only from approved sources and awarding business based on performance. Consequently, it allows the company to acquire quality goods and services from a large pool of competitive suppliers around the world. With the advent of Web-based technology, networking becomes affordable and cost effective for interested businesses. A smaller company can engage an intermediary, or Web-service provider, to alleviate the cost of building and maintaining a sophisticated transaction network.
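As a rough illustration of the automated distribution step, the sketch below models an RFQ being assembled and announced to a pool of approved suppliers. It is a simplified assumption about how such a workflow might be structured and is not based on the actual TPN implementation; the supplier names and addresses are invented.

    import datetime

    # Hypothetical approved-supplier registry: supplier name -> contact e-mail
    APPROVED_SUPPLIERS = {
        "Acme Fasteners": "rfq@acme-fasteners.example",
        "Globex Metals": "bids@globex-metals.example",
    }

    def build_rfq(item, quantity, spec_url, close_days=5):
        """Assemble an RFQ record with a closing date a few days out."""
        return {
            "item": item,
            "quantity": quantity,
            "specifications": spec_url,
            "closes": (datetime.date.today()
                       + datetime.timedelta(days=close_days)).isoformat(),
        }

    def distribute_rfq(rfq, suppliers=APPROVED_SUPPLIERS):
        """Notify every approved supplier of the new RFQ.

        A real system would send e-mail or post to an extranet; here the
        notifications are simply printed to keep the sketch self-contained.
        """
        for name, address in suppliers.items():
            print(f"Notify {name} <{address}>: RFQ for {rfq['quantity']} x "
                  f"{rfq['item']}, closes {rfq['closes']}")

    rfq = build_rfq("stainless steel bolts", 10000, "https://example.com/spec/123")
    distribute_rfq(rfq)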
Digital Loyalty Networks

As in traditional business, highly valued business partners in B2B e-commerce may get special treatment. In this model, a B2B e-commerce Web site differentiates visitors by directing the valued ones to a special site, instead of trading in a public area open to other regular business partners. The system may also direct special requests/offers to a preferred group of business partners. Using this business model, different RFQs are sent to different groups of potential suppliers drawn from the company's approved supplier base. A business may differentiate between its suppliers based on past performance in terms of product quality, pricing, delivery, and after-sales service. Similarly, a selling company may reward its preferred buyers with special discounts and conditions on transactions.
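One way to picture this differentiation is as a simple scoring and segmentation step. The fragment below is only a sketch under assumed criteria (the weights and the tier cut-off are invented for illustration): suppliers are scored on past performance, and RFQs for critical items are routed to the preferred tier only.

    def performance_score(record):
        """Combine past-performance measures into a single score (0-100).
        The weights are illustrative assumptions, not an industry standard."""
        return (0.4 * record["quality"]          # % of shipments accepted
                + 0.3 * record["on_time"]        # % delivered on schedule
                + 0.3 * record["price_rating"])  # competitiveness rating

    def segment_suppliers(records, preferred_cutoff=85):
        """Split suppliers into a preferred tier and a regular tier."""
        preferred, regular = [], []
        for name, rec in records.items():
            (preferred if performance_score(rec) >= preferred_cutoff
             else regular).append(name)
        return preferred, regular

    suppliers = {
        "Supplier A": {"quality": 98, "on_time": 95, "price_rating": 80},
        "Supplier B": {"quality": 75, "on_time": 60, "price_rating": 90},
    }
    preferred, regular = segment_suppliers(suppliers)
    print("RFQs for critical items go to:", preferred)
    print("Public RFQs go to:", preferred + regular)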
Metacatalogs
In this model, catalogs of approved suppliers are aggregated and indexed so that buyers have the opportunity to deal with a large pool of suppliers of goods/services. These metacatalogs are usually kept at a central site for ease of access by potential buyers. Using this model, a global company may maintain a metacatalog of suppliers for the internal use of its branches, or a trading mall can keep a metacatalog for wide public access. For the internal use of a global company, the model aggregates items of all approved suppliers from their catalogs into one source. Buyers from affiliated firms or branches can find the items they need, check their availability and delivery time, and complete an electronic requisition form and forward it to the selected supplier. In this transaction, prices could be negotiated in advance. Potential suppliers tend to offer competitive prices, as they are exposed to a larger pool of buyers, in this case the worldwide affiliates/branches of the buying company. In addition, suppliers may become involved in a long-term relationship with a global company and its affiliates/branches. The listing in the metacatalog is free to the suppliers as a result of the negotiation of terms and prices for the goods/services to be provided to the buying company. For wide public access, an intermediary or a distributor will create metacatalogs and make them available to its clients. Because buyers have an opportunity to deal with a large source of suppliers, these suppliers are under pressure to compete with one another in terms of price, quality, and services to win business. In this model, the supplier may have to pay a fee for listing in the catalog and/or a commission as a percentage of the transaction value. The buyer may have free access or may pay a membership fee to the host/distributor.
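A metacatalog is essentially an aggregated, searchable index built over many supplier catalogs. The sketch below is a minimal, hypothetical illustration of that idea: a few invented catalogs are merged into one index keyed by item, with the most competitive offers listed first.

    from collections import defaultdict

    # Hypothetical supplier catalogs: supplier -> {item: unit price}
    catalogs = {
        "Supplier A": {"copier paper": 4.10, "toner cartridge": 61.00},
        "Supplier B": {"copier paper": 3.95, "desk lamp": 22.50},
    }

    def build_metacatalog(catalogs):
        """Aggregate and index all supplier catalogs by item name."""
        index = defaultdict(list)
        for supplier, items in catalogs.items():
            for item, price in items.items():
                index[item].append({"supplier": supplier, "price": price})
        # Within each item, list the most competitive offers first.
        for offers in index.values():
            offers.sort(key=lambda o: o["price"])
        return dict(index)

    metacatalog = build_metacatalog(catalogs)
    for offer in metacatalog["copier paper"]:
        print(f"{offer['supplier']}: {offer['price']:.2f} per unit")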
Auction

To reach a deal, business partners involved in B2B could use auction and/or matching mechanisms. A forward auction involves one seller and many potential buyers. A reverse auction involves one buyer and many potential sellers. In a double auction, buyers and sellers bid and offer simultaneously. In matching, the related price, quantity, quality, and delivery terms from the bid and ask are matched. In a buying-side marketplace, a buyer opens an electronic market on its own server, lists the items it needs, and invites potential suppliers to bid. The trading mechanism is a reverse auction, in which suppliers compete with one another to offer the lowest price. The bidder who offers the lowest price wins the order from the buyer. Other issues, such as delivery, schedule, and related costs, are also taken into account when awarding contracts to bidders. In a selling-side marketplace, a seller posts the information for the goods/services to be disposed of and invites potential buyers to bid. The trading mechanism is a forward auction, in which participating buyers compete to offer the highest price to acquire the goods/services they need. The transaction can also take place at an intermediary site, at which buyers post their RFQs and suppliers post their RFPs. Depending on the regulations of the auction site, bidders can bid either only once or many times. In the latter case, bidders can view current supply and demand for the goods/services and change their bids accordingly. The transaction concludes when bidding prices and asking prices are matched. The advantage of this model is that it attracts many buyers to a forward auction and many suppliers to a reverse auction. The auction can be conducted at the seller's/buyer's private trading site or at an intermediary site. The auction can be in real time or last for a predetermined period. If the auction is conducted at an intermediary site, the involved business parties may have to pay an access fee. In addition, sellers may have to pay a commission on the transaction value.
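The bid/ask matching mechanism mentioned above can be sketched in a few lines. The fragment below is a simplified, assumed version of a double auction in which the exchange repeatedly pairs the highest bid with the lowest ask as long as the bid price covers the ask price; real exchanges also match on quantity, quality, and delivery terms.

    def match_double_auction(bids, asks):
        """Match buy bids with sell asks in a simplified double auction.

        bids and asks are lists of (trader, price) tuples. A trade occurs
        whenever the best (highest) bid is at least the best (lowest) ask,
        and the trade is priced at the midpoint of the two.
        """
        bids = sorted(bids, key=lambda t: t[1], reverse=True)
        asks = sorted(asks, key=lambda t: t[1])
        trades = []
        while bids and asks and bids[0][1] >= asks[0][1]:
            buyer, bid_price = bids.pop(0)
            seller, ask_price = asks.pop(0)
            trades.append((buyer, seller, round((bid_price + ask_price) / 2, 2)))
        return trades

    bids = [("Buyer 1", 103.0), ("Buyer 2", 99.5), ("Buyer 3", 101.0)]
    asks = [("Seller 1", 100.0), ("Seller 2", 102.5), ("Seller 3", 98.0)]
    for buyer, seller, price in match_double_auction(bids, asks):
        print(f"{buyer} buys from {seller} at {price}")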
Order Aggregation

In this model, RFQs from buyers are aggregated and sent to a pool of suppliers as invitations to tender. The order aggregation could be internal or external. In an internal aggregation, company-wide orders are aggregated to gain volume discounts and save administrative costs. In an external aggregation, a third party aggregates orders from small businesses and then negotiates with suppliers or conducts reverse auctions to reach a deal for the group. Usually, an intermediary will aggregate RFQs of participant buyers and match them with requests for proposals (RFPs) from participant suppliers. In order aggregation, small buyers benefit from volume discounts through aggregation that could not be realized otherwise. Similarly, suppliers benefit from providing a large volume of goods/services to a pool of buyers and save the transaction costs incurred from dealing with many fragmented buyers. Order aggregation works well with well-defined indirect production materials and services that have relatively stable prices. In this model, if the order aggregation is undertaken by an intermediary, then the involved business parties may have to pay a flat fee and/or a commission on the transaction value.
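The aggregation step itself is easy to sketch. In the hypothetical fragment below, small orders from several buyers are combined per item so that a single, larger RFQ can be issued; the buyer names, items, and discount tiers are invented for illustration.

    from collections import defaultdict

    # Hypothetical small orders: (buyer, item, quantity)
    orders = [
        ("Branch East", "laser printer", 3),
        ("Branch West", "laser printer", 5),
        ("Branch North", "laser printer", 4),
        ("Branch East", "office chair", 10),
    ]

    VOLUME_DISCOUNTS = [(50, 0.15), (10, 0.08), (0, 0.0)]  # (minimum qty, discount)

    def aggregate_orders(orders):
        """Combine individual orders into one consolidated quantity per item."""
        totals = defaultdict(int)
        for _buyer, item, qty in orders:
            totals[item] += qty
        return dict(totals)

    def discount_for(quantity):
        """Return the first discount tier whose minimum quantity is met."""
        for min_qty, rate in VOLUME_DISCOUNTS:
            if quantity >= min_qty:
                return rate
        return 0.0

    for item, qty in aggregate_orders(orders).items():
        print(f"RFQ: {qty} x {item} (volume discount {discount_for(qty):.0%})")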
Bartering

In this model, a company barters its inventory for goods/services it needs by announcing its intention in a classified advertisement. In practice, a company rarely finds an exact match by itself. The company will have a better chance if it joins an e-commerce trading mall, as it could reach a larger pool of interested parties over the Internet. An intermediary can create a bartering exchange, at which a company submits its surplus to the exchange and receives credits. It can then use these credits to buy the items it needs from the stock of goods/services listed for bartering at the exchange. An example is the bartering site of Intagio.com (formerly Bartertrust.com), whose owner claims to be the market leader in facilitating corporate trading in which goods and services are exchanged between businesses without using cash. Business parties using an intermediary site may have to pay a membership fee and/or a commission on the transaction volume.
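The credit mechanism of a bartering exchange can be pictured as a small ledger: a company earns credits when it submits surplus goods and spends credits when it takes items listed by others. The following fragment is only a sketch of that bookkeeping under assumed valuations and is not modeled on Intagio's actual system.

    class BarterExchange:
        """Minimal credit ledger for a hypothetical bartering exchange."""

        def __init__(self):
            self.credits = {}    # company -> credit balance
            self.listings = {}   # item -> (owner, credit value)

        def submit_surplus(self, company, item, value):
            """Accept surplus goods and credit the submitting company."""
            self.credits[company] = self.credits.get(company, 0) + value
            self.listings[item] = (company, value)

        def acquire(self, company, item):
            """Spend credits to take an item listed by another member."""
            owner, value = self.listings[item]
            if self.credits.get(company, 0) < value:
                raise ValueError(f"{company} lacks credits for {item}")
            self.credits[company] -= value
            del self.listings[item]
            return f"{company} acquired {item} from {owner} for {value} credits"

    exchange = BarterExchange()
    exchange.submit_surplus("Printer Co.", "surplus toner (100 cases)", 5000)
    exchange.submit_surplus("Hotel Group", "50 room-nights", 4000)
    print(exchange.acquire("Printer Co.", "50 room-nights"))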
MERITS AND LIMITATIONS OF B2B E-COMMERCE

Along with other business models in the e-marketplace, B2B e-commerce has been welcomed as an innovative means of conducting transactions over the Internet. These business models promise not only effective and efficient business operations/transactions, but also competitive advantages to early adopters. Companies have adopted B2B e-commerce more slowly than predicted, but even conservative projections estimate that B2B transactions will top $3 trillion by 2004 (mro.com, 2002). It has been predicted that 66% of bids for MRO goods will be solicited over the Internet and 42% of MRO orders will be electronic (Fein & Pembroke Consulting, 2001). An example of cost savings in B2B e-commerce is the story of Suncor Energy Inc. of Canada. In 2000, the company was working with 1,000 suppliers, with a total MRO budget of $192 million. About 70% of its expenditures went to 40 suppliers, yet its purchasing staff spent 80% of their time manually processing transactions with the 900 smallest suppliers. The switch to e-procurement was predicted to generate savings of $32 million directly from e-procurement, $64 million from the redeployment of the purchasing workforce, and $10 million in inventory reductions (mro.com, 2002). It is noteworthy that these savings come from process automation, not from forcing price concessions from distributors. Although they have certain merits, these business models encounter some limitations that hinder the effective and efficient implementation and operation of a sustainable business. However, there are many possible solutions for overcoming these limitations.
Merits of B2B E-commerce

B2B e-commerce in general exposes a selling/buying company to a larger pool of suppliers and corporate buyers. Transactions over the Internet help overcome the geographical barrier, bringing business partners from all over the world to the e-marketplace. A company may benefit from transactions with business partners beyond the local market. The Web-based technology of e-commerce helps minimize the human error found in paper-based activities and supports timely, if not real-time, communication between and among partners. Unlike traditional, costly telecommunications networks, Web-based technology makes transactions over the Internet affordable to most businesses involved in the e-marketplace. Also, the existence of many intermediaries provides interested businesses with low-cost solutions for implementing a B2B e-commerce model. B2B e-commerce models address the concerns about the effectiveness and efficiency of the supply chain management of business partners—suppliers as well as corporate buyers. Supply chain management coordinates business activities from order generation and order taking to order distribution of goods/services for individual as well as corporate customers (Kalakota & Whinston, 1997). Interdependencies in the supply chain create an extended boundary that goes far beyond an individual firm, so
that individual firms can no longer maximize their own competitive advantage, and therefore profit, simply by cutting their own costs/prices. Material suppliers and distribution-channel partners, such as wholesalers, distributors, and retailers, all play important roles in supply chain management. B2B e-commerce models address the creation of partnerships with other parties along the supply chain, upstream as well as downstream, to share information of mutual benefit about the needs of final customers. The key issue is that all upstream and downstream business activities should be coordinated to meet the demand of final customers effectively. Each partner in the stream should coordinate its own production/business plans (order fulfillment, procurement, production, and distribution) with those of the other partners so that sufficient streams of goods/services will reach customers in the right place at the right time. B2B business models also address issues of customer relationship management (Kalakota & Whinston, 1997), the front-end function of a supply chain. An effective business model helps in creating more loyal customers who are not inclined to shop for lower prices but rather pay for quality and service, in retaining valued customers, and in developing new customers by providing them with new quality products and services. The customer base could be segmented on the history of performance in sales/purchases. This information serves as a basis for promotions and discounts that promote the loyalty of current customers.
Limitations of B2B E-commerce and Possible Solutions

Some limitations of B2B e-commerce have been identified, such as conflicts with the existing distribution channel, cost/benefit justification for the venture, integration with business partners, and trust among business partners (Laudon & Traver, 2001; Turban et al., 2002). Most suppliers have existing distribution networks of wholesalers, distributors, and dealers. If a company decides to do business over the Internet directly with interested partners, it may cause conflict in terms of territory agreements and pricing policies on product lines. A possible solution could be redirecting these potential customers to the appropriate distributors and having the company handle only new customers outside the current sales territories of these distributors. Another alternative could be the company handling specific products/services not available within the traditional distribution channel. Or orders could be taken at the central site, with a distributor providing downstream added-value services (delivery, maintenance, support) to the new customers of the company. Another limitation is that the number of potential business partners and the sales volume must be large enough to justify the implementation of a Web-based B2B system. A selling-side marketplace for B2B e-commerce is promising if the supplier has a sufficient number of loyal business customers, if the product is well known, and if price is not the critical purchasing criterion. For the buying side, the volume of transactions should be large enough to cover the investments and costs in the B2B e-commerce venture. In many cases, the interested business could participate
in an exchange by paying a fixed fee or a commission on the volume of transactions. Using an intermediary could be feasible, as the company would not need to invest in and maintain the expensive and sophisticated infrastructure of B2B e-commerce systems. From a technical perspective, unless a B2B e-commerce site has implemented a comprehensive network/system architecture, integration with a variety of business partners' systems (Oracle, IBM, or other ERP systems) may cause operational problems. These business partners should be able to transact on compatible network platforms and protocols of communication. Sometimes the conversion implies additional investments and requires an extra cost/benefit analysis for the project. Also, the technology should handle global transactions, such as multiple currencies and multiple languages from multiple countries, multiple terms of contract, and multiple product quality standards. Commerce One has been offering a "Global Trading Web" solution to address these issues. Because transactions over the Internet are not face-to-face, most business partners are unknown to each other. Consequently, the issue of trust in B2B is the same as in B2C e-commerce transactions. Many B2B exchanges have failed because they did not assure the credibility of the involved business partners. Trust in e-commerce could be enhanced with quality assurance services and warranty seal programs, such as WebTrust and SysTrust of the American Institute of Certified Public Accountants (AICPA) (Nagel & Gray, 2001). In these programs, a third party (such as a CPA) audits the e-commerce transactions and infrastructure of a company to assure that it implements and follows procedures and policies that guarantee the security of its online transactions and its integrity in fulfilling its obligations toward its business partners and honoring their privacy. Once the company meets the prescribed criteria, it is awarded a warranty seal to post on its Web site, informing potential business partners of the security and quality of its online transactions.
CRITICAL SUCCESS FACTORS FOR B2B E-COMMERCE

From the performance of current B2B e-commerce entities, one can highlight some critical success factors that have an impact on sustainable business and competitive advantage (Laudon & Traver, 2001; Turban et al., 2002). A company is under pressure to cut the costs and expenses of the traditional paper-based procurement process related to vendor and product searches, vendor performance and cost comparison, opportunity costs, and the errors of a manual system. B2B e-commerce provides ample opportunities and alternatives to optimize the procurement process. In this circumstance, the company has an incentive to be involved in an effective and efficient cost-saving venture using Web-based technology. In addition, top management will be interested in sponsoring and advocating the project. Another success factor would be for a company to have experience with EDI or other non-Web-based business-to-business electronic transactions and be willing to integrate
its current systems with new technologies in B2B. This would create a favorable climate supporting technology innovation. This factor is important in evaluating the technical feasibility of the B2B e-commerce project. It helps assess the readiness of the company, in terms of its technological maturity, to nurture an innovative system, and the availability of the technical expertise needed to develop, operate, and maintain the system. Another factor is an industry in which selling and buying are fragmented among many suppliers and buyers, making it difficult to bring the parties together. The larger pool of buyers and sellers offered by B2B e-commerce would provide a company in such an industry with opportunities to optimize its supply chain management. In this context, the potential economic and operational benefits would justify involvement in a B2B e-commerce venture. Large initial liquidity is needed, in terms of the number of buyers and sellers in the market and the volume and value of transactions, to attract early participants to the venture. In any economic feasibility analysis of a new venture, one needs to assess the cost of development and the payback period of the system. A large initial liquidity would justify the initial investment in a sustainable business. A full range of services, such as credit verification, insurance, payment, and delivery, is needed to attract small and medium businesses. The market maker also needs available domain expertise for these services. The added-value services would facilitate the transactions of smaller businesses. These business partners, without a sophisticated infrastructure and expertise, will need one-point access to the e-marketplace to conduct one-stop transactions in B2B e-commerce. Business ethics should be respected to nurture trust among business partners and ensure fairness to all parties, especially in non-face-to-face transactions over the Internet. Security measures should be implemented to protect the privacy and trade secrets of the business entities involved in an open networked marketplace. To address the issue of trust, the company may include some quality assurance services and seal programs, such as WebTrust and SysTrust of the AICPA. Another factor is being able to successfully manage channel conflict, to avoid any impact on the short-term revenue of supply chain partners. This conflict of interest is one of the limitations of B2B e-commerce, and possible solutions to it were discussed in the previous section.
B2B E-COMMERCE ENABLING TECHNOLOGIES AND SERVICES

The hands-off nature of such Internet technologies as the communication protocols TCP/IP and HTTP and the markup languages HTML and XML enables the progress of e-commerce. These technologies assure interoperability across businesses using various platforms, which is necessary for global communications and transactions. XML, as a widely distributed standard, is compact and easy to program and permits businesses to describe documents and transactions over the Internet more completely.
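As a concrete illustration of describing a transaction in XML, the short Python sketch below builds a purchase-order document with the standard library. The element names are invented for illustration; real B2B exchanges rely on agreed document standards (industry XML schemas or EDI mappings) rather than this ad hoc structure.

    import xml.etree.ElementTree as ET

    def build_purchase_order(po_number, buyer, supplier, lines):
        """Build a simple purchase-order XML document (hypothetical schema)."""
        po = ET.Element("PurchaseOrder", number=po_number)
        ET.SubElement(po, "Buyer").text = buyer
        ET.SubElement(po, "Supplier").text = supplier
        items = ET.SubElement(po, "Items")
        for sku, qty, price in lines:
            item = ET.SubElement(items, "Item", sku=sku)
            ET.SubElement(item, "Quantity").text = str(qty)
            ET.SubElement(item, "UnitPrice").text = f"{price:.2f}"
        return ET.tostring(po, encoding="unicode")

    print(build_purchase_order(
        "PO-2003-0042",
        "Example Manufacturing Inc.",
        "Example Components Ltd.",
        [("BOLT-M8", 10000, 0.12), ("NUT-M8", 10000, 0.07)],
    ))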
Involvement in B2B e-commerce does not necessarily require intensive investment in hardware, software, or staffing for a sophisticated telecommunications network and database. There are many application and service providers that offer cost-efficient solutions for businesses interested in participating in this innovative marketplace. These services may support the complete value chain processes/activities of a business, from its upstream suppliers to its final customers. Below are reviews of some service providers in B2B e-commerce. GE's Global eXchange Services (GXS) (gxs.com) offers many services to B2B businesses. Its Trading Information Exchange (TIE) is a global extranet service with features such as online information publishing, dynamic delivery of supply chain data, and promotions-management workflow applications, enabling partners to share detailed operational information and to jointly manage key business processes. Its Source-to-Pay Services suite includes facilities for posting and responding to RFQs, online auctions, catalog purchases, invoice tracking, and payment. This provider has served more than 100,000 trading partners conducting about 1 billion transactions worth $1 trillion annually. GE itself has used these services internally since 1999 to connect with about 36,000 of its suppliers and has conducted about 27,000 auctions worth close to $10 billion (isourceonline.com, 2002). Commerce One promotes the Global Trading Web to link buyers, suppliers, service providers, and e-marketplaces in a single, global community. This network is based on an open architecture—without closed and proprietary applications—and widely disseminated standards to assure technical and operational interoperability. The standards on data, content, and document format are established across the community. Technical interoperability assures that files, data, and applications can be transferred across platforms. Operational interoperability assures that e-marketplace processes and procedures can operate in unison. There are about 10,000 buyers and suppliers participating in the Global Trading Web. In addition, nearly 100 e-marketplaces of all sizes and industries make up the membership that forms the backbone of the trading community (commerceone.com, 2000). Ariba (ariba.com) products and services have enabled B2B e-commerce processes for more than 100 leading companies around the world, including over 40 of the Fortune 100, in diverse industries. Ariba products have been implemented on more than 3,750,000 desktops around the world (ariba.com, 2002). Web-based marketplaces powered by Ariba unite fragmented value chains operated by interdependent trading partners, bringing together buyers, suppliers, and service providers in Internet-speed trading communities. Ariba provides solutions for the rapid deployment and configuration of online procurement portals and automates end-to-end commerce processes, including catalog searches, requisitioning, purchasing, and invoicing. Integration with the Ariba Supplier Network provides the infrastructure and third-party services companies need to transact, manage, and route orders in real time. It connects companies to e-procurement on-ramps, supplier-hosted catalogs, and other marketplaces. Ariba Marketplace integrates seamlessly and comprehensively with dynamic sourcing
and RFQ capabilities, providing market makers with cost-efficient trading models, such as auctions, reverse auctions, bid/ask exchanges, and negotiation features. It can also handle real-time multicurrency translation for increased payment capabilities. i2 (i2.com) has over $1 billion in revenues and more than 1,000 customers. It has delivered about $30 billion in audited value to customers (i2.com, 2002). i2 Global Network is an Internet collaboration space that enables buyers, suppliers, and marketplaces to rapidly connect to each other and use i2 Network Services for content, collaboration, and commerce. These services allow the enterprise to extend its e-procurement and collaboration initiatives beyond tier 1 suppliers. i2 industry solutions include preconfigured industry templates, packaged role-based workflows, integration capabilities, product configurations, and example models and scenarios built specifically for an industry. From this starting point, companies can easily modify the template to meet their unique needs. i2 Supplier Relationship Management (SRM) supports the partnership with suppliers by coordinating processes across product development, sourcing, supply planning, and purchasing within a company and across companies. i2 Supply Chain Management (SCM) manages the supply chain within a company and across companies in the value chain as well. It provides multienterprise visibility, intelligent decision support, and execution capability utilizing open, real-time collaboration with trading partners. i2 Demand Chain Management (DCM) synchronizes customer front-end processes with operations, enhancing the responsiveness of the supply chain to maximize customer profitability and loyalty. MRO Software Inc. (mro.com) provides fully hosted, off-the-shelf online services for Web storefronts, security, catalog management, supplier administration, customer relationship management, order management, transaction processing, and integration with other online B2B services in the e-marketplace. This provider has served more than 8,000 customers (mro.com, 2002).
BEYOND SELLING AND BUYING B2B MODELS

B2B e-commerce extends to activities other than just selling and buying. For example, partners in a value chain could be involved in collaborative commerce (c-commerce) in a Web-based system to meet final consumer demand by sharing information on product design, production planning, and marketing coordination. Once consumer demand is identified, the quantity on hand of the raw materials and semifinished and finished products of one partner will be made visible to others, avoiding bottlenecks along the value chain and supply chain (Laudon & Traver, 2001). In this type of business, some partners act as value chain integrators while others are value chain service providers. This business model assures the production of goods/services that effectively meet consumer demand through the collaboration between manufacturers and retailers. The product design and production cycle is then efficiently shortened through the collaboration between manufacturers and upstream suppliers.
GLOSSARY
Aggregation of orders and/or RFQs  The compilation of small orders and RFQs from many businesses into a larger package to gain volume discounts and economies of scale.
Bartering  A trading method in which business partners exchange their surplus goods/services with one another without using cash.
Company-centric B2B e-commerce  A business model representing a one-to-many relationship in B2B e-commerce, in which a company engages in direct selling or direct buying of goods and services with many business partners.
Digital loyalty network  A business model that offers special treatment to valued/preferred business parties in the value chain or supply chain in terms of priority, pricing, and contract conditions.
Exchange/trading mall  A business model representing a many-to-many relationship in B2B e-commerce. In this marketplace, many buyers transact with many suppliers for goods and services.
Metacatalog  A compilation and index of the goods and services offered by many small businesses into one source for ease of access by the public or interested parties.
MRO  Nonproduction or indirect materials used in maintenance, repairs, and operations.
Request for proposal (RFP)  A tendering system in which a seller lists materials for disposal and asks potential buyers to bid on the contract. The buyer offering the highest bid (forward auction) wins the contract.
Request for quote (RFQ)  A tendering system in which a buyer lists the materials it needs and asks potential suppliers to bid on the contract. The supplier offering the lowest bid (reverse auction) wins the contract.
REFERENCES

Ariba marketplace. Retrieved August 2002, from http://www.ariba.com
Conquering the catalog challenge: A white paper. Retrieved October 2001, from http://www.mro.com
A distributor's road map to e-business: A white paper. Retrieved February 2002, from http://www.mro.com
Fein, A. J., & Pembroke Consulting (2001). Facing the forces of change: Future scenarios for wholesale distribution. Washington, DC: National Association of Wholesalers.
GE GXS enters the sourcing area. Retrieved May 2002, from http://www.isourceonline.com
The global trading Web—Creating the business Internet: A white paper. Retrieved November 2000, from http://www.commerceone.com
i2 corporate overview. Retrieved August 2002, from http://www.i2.com
Intelligent supply chain. Retrieved May 2002, from http://www.gegxs.com
Kalakota, R., & Whinston, A. B. (1997). Electronic commerce: A manager's guide. Reading, MA: Addison-Wesley.
Laudon, K. C., & Traver, C. G. (2001). E-commerce: Business, technology, society. Boston: Addison-Wesley.
Nagel, K. D., & Gray, G. L. (2001). CPA's guide to e-business: Consulting and assurance services. San Diego: Harcourt Professional Publishing.
Pavlou, P. A., & El Sawy, O. A. (2002). A classification scheme for B2B exchanges and implications for interorganizational e-commerce. In M. Warkentin (Ed.), Business to business electronic commerce (pp. 1–21). Hershey, PA: Idea Group Publishing.
Trading information exchange electronically links retailers with their suppliers. Retrieved May 2002, from http://www.gxs.com
Turban, E. F., King, D., Lee, J., Warkentin, M., & Chung, H. M. (2002). Electronic commerce 2002: A managerial perspective. Englewood Cliffs, NJ: Prentice Hall International.
WebTrust principles and criteria. Retrieved January 2001, from http://www.aicpa.org/assurance/webtrust
CROSS REFERENCES

See Business-to-Business (B2B) Electronic Commerce; Business-to-Consumer (B2C) Internet Business Models; Click-and-Brick Electronic Commerce; Collaborative Commerce (C-commerce); Consumer-Oriented Electronic Commerce; Electronic Commerce and Electronic Business; E-marketplaces.
Business-to-Consumer (B2C) Internet Business Models

Diane M. Hamilton, Rowan University
Introduction
Internet Business Models (B2C)
The Retail Model
The Auction Model
The Reverse Auction Model
The "Name-Your-Price" Model
The Flea Market Model
The Group-Buying Model
The Shopping Bot (Buyer Advocate) Model
The Full-Service Market-Making Model
The Online Currency Model
The Free-Information Model
The Service-for-a-Fee Model
The Free-Service Model
Internet Strategy
Competitive Advantage Through the Internet
Lessons Learned
Conclusion
Glossary
Cross References
References

INTRODUCTION

For as long as commerce has existed, there have been diverse business models. A business model, very simply stated, is the method by which a firm manages to remain a going concern, that is, the way in which a company earns sufficient revenue to remain in business. By far the most often encountered business model is one in which an organization sells a product or provides a service in exchange for currency. Consider the manufacturing business model and, as an example, Dell Computer. Dell is in the business of building personal computers and selling them either to other businesses or to the general public. With the retailing business model companies also sell products in exchange for cash payment—for example Zales Jewelers. Zales does not manufacture the jewelry it sells; it provides a retail outlet, that is, a physical location, where potential customers can engage in the purchase of the goods that Zales has to offer. A variant on the retail model is the catalog business model. Catalog companies often don't have physical facilities (although some do); rather, they offer their goods for sale via their catalog and then ship the goods to the purchaser. An example of a catalog retailer is Figis, which sells gourmet snacks and gifts. The service business model has been growing significantly over the past several decades. Service-type businesses are quite varied, including, for example, beauty salons, attorneys, plumbers, and physicians. What they all have in common is that a service is rendered in exchange for a cash payment. Not all business models involve providing a product or service in exchange for cash. Consider, for example, the way some health clubs provide child care to their members. In some cases, the health club provides a room where children can remain happily occupied while their parent exercises. In order to avail themselves of the facilities for their children, health club members must be willing to personally provide the supervision for a fixed number of hours. That is, health club members who take advantage of the room by leaving their children there when they exercise pay for this benefit by working a few hours themselves
in the child-care room. This service is paid for, then, not by cash but by a corresponding service. For any business model to be successful, it must generate some type of revenue or "value" that will enable the organization to continue in operation. The arrival of the Internet's World Wide Web has brought about myriad new business models as well as variations on business models that already existed, most notably the retailing model. These new business models take advantage of the Internet in many ways: to change a delivery system (e.g., digital delivery of software instead of physical shipment of the product); to improve customer service (e.g., online tracking of orders through the United Parcel Service); or to reach a wider audience (all geographical and time constraints are removed). Regardless of the model, however, the principle remains the same. A successful business model is one in which an exchange occurs between entities (companies or individuals) such that the organization can be self-sustaining through the receipt of revenue or something else of value, and it is worth noting that no level of sophisticated technology can make up for the lack of a good business model. The next section is devoted to the definition and illustration of the most popular business-to-consumer (B2C) Internet business models. Later, some important tenets for Internet strategy are presented, along with lessons learned as a result of the quick rise and fall of many dot-coms.
INTERNET BUSINESS MODELS (B2C)

Some businesses have migrated to the Internet without changing their business model at all. These companies built Web sites, aptly called brochureware, which simply provide information about the company—as it exists in the physical world. They don't attempt to provide another sales channel, that is, to engage in virtual commerce. Other businesses, called brick-and-clicks, moved to the
Internet by offering their current products for sale online. Still others didn't exist prior to the advent of the World Wide Web and couldn't operate without it. There are clear advantages to electronic commerce from the perspective of both the business and the consumer. For example, an online business is available to a much larger set of customers than would be possible if the customer had to physically visit the business establishment. Also, this larger potential market allows a business to sell its product with lower marketing costs. Further, online businesses can actually improve customer service, for example through online help desks. Finally, an online business can better utilize human resources and warehouse/retail space. Benefits also accrue to consumers who shop online. Customers can now shop at their convenience—any day and any time of day. They have a much larger selection to choose from; that is, it is possible to visit many more online retailers than would be feasible if they had to get in their car and drive from store to store. Price comparisons are more easily made online, especially with the aid of shopping bots, as is described later. It is rare, however, to find a business environment that provides advantages without also having potential problems. Electronic commerce is no different. Problems are possible for both the online business and the online consumer. For example, online businesses suffer from a much higher rate of credit card fraud than their real-world counterparts, according to Visa—24% for online transactions compared with 6% overall for all transactions (Mearian, 2001). In the following sections, the most popular business-to-consumer (B2C) business models are described and illustrated. It should be noted that although there are many ways to classify business models, no generally accepted categorization scheme for Internet business models currently exists. The breakdown provided in this paper attempts to show the diversity of business models operating on the Web according to the type of transaction (or activity) engaged in, coupled with the way revenue is earned. Revenue can be earned, for example, through traditional sales purchases, as a commission on auction purchases, or through advertising that supports free information. Some of these business models, for example auctions and reverse auctions, existed in the physical world prior to the advent of the World Wide Web. However, the Web has allowed these business models to be redefined in virtual space, taking advantage of the huge Internet community. There are other business models that only came into existence after the advent of the World Wide Web, for example shopping bots. The way each of these unique models operates on the Internet is described in the following sections.
The Retail Model

Businesses adopting the retail model sell a product in exchange for cash. These retailers can be further differentiated according to whether the company exists solely online (online-only storefronts) or whether the Internet serves as just one of multiple sales channels (brick-and-click retailers). Online-only storefronts are more popularly
called dot-coms; these businesses did not exist prior to the advent of the World Wide Web. As a matter of fact, it was the World Wide Web that provided the conditions allowing for the emergence of the dot-coms—companies that exist only on the Internet. Perhaps the most well-known dot-com company is Amazon.com, which started as a bookseller and migrated into a marketplace that now sells many diverse product lines, such as electronics, toys and games, music, cars, and magazine subscriptions. Amazon titles their home page "Amazon.com—Earth's Biggest Selection." They have clearly ascribed to the "reach" strategy as espoused by Evans and Wurster (1999) as they have grown their company. (This strategy, along with several others, is explained in the final section of this chapter.) Somewhat less well known than Amazon is Pets.com, one of the more famous "failures" in the sea of dot-coms, going out of business late in 2000. Amazon owned a large share in Pets.com. Unfortunately for Amazon, this was the second company they backed that went out of business (Living.com was the first). Organizations conducting business in the physical as well as the virtual world are often referred to as brick-and-click companies (and sometimes click-and-mortar). In contrast to dot-coms, these firms had an established business before the advent of the World Wide Web, and after the Web was created, an online presence was added. More recently, firms in this category have found that the most successful strategy is to use the Internet as a way of supporting their primary channel—in most cases, the retail store. Although some companies may have created a Web site that's simply brochureware (information without any transaction ability), brick-and-clicks are capable of conducting retail business over the Web. Consider two extremely well-known, yet very different, brick-and-click companies, Lands' End and Wal-Mart. Lands' End appeared on the Internet very early, in 1995, and today boasts the world's largest apparel Web site. Well known as a catalog retailer, Lands' End also sells through outlet stores and the Internet. One reason for their Internet success is likely their initial catalog experience. Catalog stores have much in common with electronic commerce and have, therefore, proved to be successful when the Internet channel is added. Another reason for Lands' End's success is the way in which they have utilized the Web to add value for their customers. For example, "My Virtual Model" allows customers to "try on" clothes after creating a virtual model of themselves by entering their critical measurements. Lands' End also provides a feature they call "My Personal Shopper," wherein customers answer a series of questions about themselves, thus allowing a virtual personal shopper to make various recommendations. Another one of their features, "Lands' End Custom," provides custom tailored apparel on request, and they also provide online chat facilities with customer service. Wal-Mart became the nation's number one retailer in the early 1990s and the nation's largest employer in 1997. It is famous for its huge superstores, where a consumer can find just about anything at an everyday low price. Walmart.com, a wholly owned subsidiary of Wal-Mart Stores, Inc., came into existence in January 2000, making it a somewhat late entry in the e-commerce market.
Wal-Mart has been successful on the Web because they have exploited their channels, rather than keeping them separate. For example, a consumer can make a purchase on the Web and return it to a store, if desired. Consumers can see the current Wal-Mart circular or the current advertised sales by visiting the Web site, and they can request e-mail notice of special values.
The Auction Model

The auction business model has existed for a very long time; consider, for example, Sotheby's, which auctions valuable items to the public in large halls where many prospective buyers can assemble. Although Sotheby's has now added online auctioning to their services, eBay has clearly revolutionized the auction business model. eBay calls itself "The World's Online Marketplace." Their mission is to "help practically anyone trade practically anything on earth" (http://www.ebay.com). eBay has reinvented the auction by including every conceivable type of item—from the most expensive (e.g., automobiles and rare collectibles) to the least expensive (e.g., junk souvenirs) to professional services (e.g., Web design or accounting). eBay is one of the very few dot-coms that managed to make a profit almost immediately after it appeared on the Internet. It earns revenue in several ways: from "posting" fees, from "special feature posting" fees, and from commissions on item sales. That is, sellers who wish to join the auction pay a very nominal posting fee (ranging from a few cents to a few dollars) in return for eBay providing space to show their item. If the seller wishes to use a special feature that might catch the attention of prospective buyers (e.g., highlighting an entry or listing an item at the "top" of a search), he or she pays another small fee. These two types of fees are paid by the seller whether or not the product eventually sells. However, because they are such nominal fees, there is very little at risk to the seller when the item is placed for bid. The final type of fee is collected by eBay if anyone bids on the item and it eventually sells. This fee is based on the amount of the sale; that is, the seller would pay more for a car that is auctioned than for a used book or CD that sells. What eBay provides is the mechanism for completing a transaction between a buyer and a seller. It provides the complete software interface for the transaction, and it does so splendidly. Its proven dependability and accuracy have attracted almost 40 million registered users and made it the leading online marketplace for the sale of goods and services—almost $15 billion was transacted on eBay in the year 2002. Many reasons exist for eBay's success, not the least of which is the flawless operation of its Web site, or the low cost for putting items up for bid. In addition to these basic items, however, eBay has provided added value in several other ways. For example, "Buy it Now" is a feature that allows a buyer to purchase an item immediately, at a stated price, if no one has yet placed a bid on the item, without waiting for the full auction time (typically a week) to expire. eBay even has a feature that allows bidders to participate (real time) in auctions that are currently being conducted in the world's leading auction houses. Other Web sites offer auctions (e.g., Yahoo and Amazon), but their volume and success is far less than that achieved by eBay since it came into existence, in 1995.
The Reverse Auction Model

Like the auction model, the reverse auction business model was not born on the Internet; businesses have utilized this approach for many years through requests for proposals (RFPs). Whereas an auction offers a good or service for sale and invites interested parties to bid higher and higher to obtain it, a reverse auction specifies the demand for some good or service and invites sellers to offer the item for sale at lower and lower prices. Although there are many reverse auction sites available on the Net, no individual Web site in the B2C market has become a household name. Examples of reverse auction sites include Respond and PillBid. Respond offers consumers the opportunity to request either a product or a service in widely diverse categories, such as automobiles, computers, legal services, travel, and home improvement. As a matter of fact, about 60% of all online requests are actually for services, rather than for hard goods, according to Respond. The most often requested service categories are hotels/motels, auto insurance, apartment/house rentals, DSL service providers, and day spas (Greenspan, 2002). Respond earns their revenue from two basic sources: affiliates and product/service providers. Any site can become an affiliate by adding a link to Respond's site, and affiliates are paid $1 every time a visitor clicks through and makes a request at Respond. Product/service providers can choose from one of two payment options: a subscription plan or a straight fee schedule. Under the subscription plan, providers are given access to all leads along with lead management and scheduling tools. Under the fee schedule, providers pay a small amount for each lead to which they choose to respond. A niche-oriented reverse auction occurs at PillBid, where consumers post an interest in filling a prescription. PillBid serves as the intermediary between the consumer and pharmacies and offers an additional, value-added service: a search of the PillBot database. PillBot searches online pharmacies and advises the consumer about the lowest prices available on the Net. What the Internet provides for any Web site utilizing this type of business model is the potential for a huge community of participants, which is important for the ultimate success of a reverse auction. The ability to facilitate auctions without human intervention and the potential for a very large number of participants are the factors provided by the Internet that allow for the sale of low-price items and the ability of a company to achieve profitability by charging very low commissions. This would not be possible in the physical world.
The “Name-Your-Price” Model Many people consider Priceline to be a reverse auction, and it is often referred to as such in the literature. However, Priceline claims that it is not a reverse auction because sellers do not bid lower and lower for the ability to fill the demand for an item (www.priceline.com). Instead Priceline has patented a technology that they call “Name Your Own Price.” Their system works as follows. Consumers make purchase offers with a fixed price for such things as airline tickets and hotel rooms. Additionally, they agree to some level of flexibility with respect to
vendor, travel date, number of connecting flights, and so forth. Finally, they agree in advance to make the purchase (by supplying credit card information) if their price and flexibility levels can be matched by a vendor. Priceline then takes these offers and attempts to find a vendor that will provide the product or service for the terms specified. This business model exploits the Internet by improving information about supply to buyers and information about demand to sellers. That is, Priceline provides added value by serving as an information intermediary. Like other market makers, Priceline obtains its revenue as the difference between the price of acquiring the product or service and the price at which it turns the product or service over to the consumer (Mahadevan, 2000). Priceline, now a household name, has seen its share of success and failure. Its stock has traded at as high as $158 and as low as just a few dollars. It has found tremendous success in the sale of airline tickets; as a matter of fact, Priceline has claimed to be the largest seller of leisure air tickets in the United States (Anonymous, 2000). On the other hand, it lost so much money attempting to sell groceries and gasoline that it had to exit these markets soon after entry. This would lead one to conclude that some categories are far better suited to the name-your-price business model than others.
The Flea Market Model Each of the three previous models (auction, reverse auction, and name your price) belong to the broad category of models categorized as the “broker” model by Rappa (2001). In its simplest form, a broker is a site that brings buyers and sellers together, and that facilitates buying transactions by providing a location and a system where the buying can take place. Like the previous models, the flea market model acts as a broker. However, it differs from the other models because the purchase transaction more clearly takes place between the buyer and the seller, rather than through some type of auctioning system. The interaction “feels” more like a typical flea market purchase. This type of model is employed at Half, which is a subsidiary of eBay. Sellers can post items for sale at Half, and when buyers search for them, they see the selling price and data about the current “condition” of all items available. Once the buyer selects an item, the sale is enacted immediately at the posted selling price. Half, acting as a broker, accepts payment from the buyer, deducts its net commission (based on selling price and shipping costs), and gives the proceeds to the seller. Half also earns revenue by inviting other Web sites to become affiliates (similar to the affiliate program explained earlier for Respond.) An even more typical flea market transaction takes place at iOffer. iOffer has an interface that closely resembles eBay, but it is not an auction. At iOffer, a prospective buyer can search for an item and see a listing of items that match the search criteria. (If nothing matches the buyer’s criteria, the buyer can post a free “want ad.” These want ads are then used to provide relevant sellers with potential sales leads.) Once the buyer finds an interesting item, he/she can either buy the item at the selling price or make an offer. This second choice allows the buyer and seller to interact directly, making offers and counteroffers, until either the sale is transacted or the buyer “walks away.” As
with other broker sites, the seller pays a fee based upon the selling price of the item.
The Group-Buying Model Manufacturers and distributors have always provided price breaks to buyers as the number of units ordered increased. That is, a company would expect to pay less per unit for an order of 1,000 units than for an order of one unit. The group-buying business model attempts to provide individual consumers with this same advantage. Operationally, items for sale appear on the group-buying Web site for a period of time, generally several days to a week. As time passes and more buyers join the “group” (i.e., place an order at the current price), the price goes down, reflecting the large volume discount. At the end of the buy cycle, all group members are billed for the final price. Group members who have placed an order are likely motivated to attempt to get others to join the group, causing the ultimate price to go down even further. This sounds like a good idea; however, in practice, this business model has failed in the B2C market. In January 2001, both of the well-known consumer group-buying sites, Mercata and MobShop, closed their doors to consumers. Mercata had performed well since its opening, in May 1999, and was the top-ranked online buying service in Spring 2000, according to Gomez Advisors. MobShop, which opened in October 1998, had been attempting to shift to the business-to-business (B2B) and government markets when it discontinued consumer service. MobShop’s software is now used by the U.S. General Services Administration and some B2B marketplaces (Totty, 2001).
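The pricing mechanic can be illustrated with a minimal sketch. The volume tiers below are invented for illustration; a real group-buying site would negotiate its own schedule with the manufacturer or distributor.

```python
# Hypothetical volume-discount tiers: (minimum group size, unit price in dollars)
TIERS = [(1, 100.00), (10, 90.00), (25, 80.00), (50, 70.00)]

def final_unit_price(group_size):
    """Unit price every group member pays at the end of the buy cycle."""
    price = TIERS[0][1]
    for min_size, tier_price in TIERS:
        if group_size >= min_size:
            price = tier_price
    return price

# With 30 orders placed during the cycle, every member pays the
# 25-unit tier price of $80 under these assumed tiers.
print(final_unit_price(30))
```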
The Shopping Bot (Buyer Advocate) Model A “bot” (short for robot) is a software tool that can search through tons of data on the Internet, return the found information, and store it in a database. Shopping bot programs crawl from Web site to Web site and provide the information that will ultimately populate the databases of search engines and shopping sites. Shopping bots can provide prices, shipping data, product availability, and other information about products available for sale online. This data is then aggregated into a database and is provided to a consumer when he or she is interested in making a purchase online. Most shopping bots were started by small, independent organizations and quickly sold to large companies for proprietary use on their site. For example, Amazon bought the technology supporting Junglee in 1998 and MySimon was purchased by Cnet in 2000. Shopping bot technology is incorporated into many sites; for example, DealTime, is incorporated into AOL’s shopping section and that of other portals, such as Lycos and iWon. Initially, bots were often blocked by retail sites for fear of uncompetitive prices. However, bots are now considered advantageous because they drive consumer traffic to the site (buyers can “click through” to the desired retailer from the bot site.) As a matter of fact, bots now partner with shopping sites for revenue enhancement. Multiple revenue streams exist in this business model. For example, merchants can pay for advertising space on the bot site to promote special incentives and sales. Some bot sites,
such as mySimon and DealTime, allow merchants to pay for a higher position in the retailer listing, and mySimon also obtains revenue for placing some products in a list of so-called “recommended” items (White, 2000). Other bot sites, rather than accept revenue for a higher place in the listing, charge all retailers simply to list on the site (Borzo, 2001). It will be interesting to watch the success of this business model, which earned quick popularity by exploiting a strategy of affiliation with the consumer, but which has systematically moved away from affiliation with the customer and closer to affiliation with the retailers.
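A toy sketch of the aggregation step a shopping bot performs is shown below: quotes gathered from several merchants are collected into one structure and ranked by total delivered cost. The merchant names, prices, and fields are invented for illustration and do not reflect any particular bot's data model.

```python
# Hypothetical quotes a bot might have gathered for one product.
quotes = [
    {"merchant": "StoreA", "price": 24.99, "shipping": 4.00, "in_stock": True},
    {"merchant": "StoreB", "price": 22.50, "shipping": 6.50, "in_stock": True},
    {"merchant": "StoreC", "price": 21.00, "shipping": 5.00, "in_stock": False},
]

# Rank available offers by total delivered cost, cheapest first.
ranked = sorted(
    (q for q in quotes if q["in_stock"]),
    key=lambda q: q["price"] + q["shipping"],
)
for q in ranked:
    print(q["merchant"], q["price"] + q["shipping"])
```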
The Full-Service Market-Making Model Market makers don’t sell products they personally produce. Rather, they bring together potential buyers and sellers and provide a virtual marketplace where all parties can discuss and engage in transactions. When the products and services are all related, a full-service site emerges. Revenue generally comes from two areas—program fees and advertising. Autobytel, for example, improves the vehicle purchasing process by providing data for the initial research and comparison of models, the actual auto purchase, and the ultimate auto resale. It also supports car ownership by listing data about recommended service schedules and vehicle recalls, and it gives users the ability to schedule service and maintenance appointments online. Finally, the site provides featured articles and links to myriad auto parts and accessories retailers. Autobytel describes itself as an “Internet automotive marketing services company that helps retailers sell cars and manufacturers build brands through efficient marketing and customer relationship management tools and programs.” Autobytel is the largest syndicated car buying content network, and its four branded Web sites received nine million unique visitors in just the fourth quarter of 2001 (http://www.autobytel.com). Like a good many other Internet business models, Autobytel earns revenue from a combination of sources, including dealer program fees, advertising, enterprise sales, and other products and services.
The Online Currency Model Initially, online currency was designed to fill a desire for consumer privacy. That is, many thought that consumers might want to make online purchases where the detail wouldn’t show up on a credit card statement. In other cases, e-cash was designed to enable payment of micropurchases. Qpass online currency, for example, can be used to make micropurchases (e.g., data on mutual funds) at Morningstar online. Currently, there are very few opportunities for micropurchases on the Net; however, as revenues from online advertising continue to decline, micropurchases may yet emerge successful for the purchase of small information items, such as news and medical information. Online currency has also been used to encourage surfers to visit particular sites, and their visits were rewarded with small amounts of online currency. This has not proved to be a successful business model, however. Cybergold, for example, paid consumers for various actions, such as reading ads, answering surveys, and
registering at Web sites. Beenz acted similarly and attempted to fill a niche by providing e-cash in various international currencies. Cybergold and Beenz have both gone out of business. Flooz, another failed brand, was marketed as a gift certificate for Internet shopping. The model used for all these online currencies was to take a small percentage from transactions at their participating retailers, that is, the same as the credit card model. Paypal, a successful brand, with an initial public offering in early 2002, mainly facilitates eBay transactions and processes payments via major charge cards or direct checking account access. It now offers multiple currencies to facilitate global transactions. In some ways it appears that generic online currency is a solution still waiting for a problem.
The Free-Information Model Free information of all types is ubiquitous on the Internet. For example, virtually all major news organizations maintain Web sites with regularly updated news items. When people don’t have access to a newspaper or a television set (and even when they do), they can visit their favorite news site to find out what’s happening in the world, what the weather is expected to be, and how their favorite sports team is faring. CNN, USA Today, and others use these sites to strengthen their brand and earn revenue through advertising. News isn’t the only type of information available on the Web, however. Diverse types of free information can be found, for example, at MapQuest, Nolo, and Encarta. MapQuest allows users to get driving directions between any two addresses in the United States—at no charge whatsoever. Nolo provides free advice about various legal issues, for example as related to creating a will or obtaining a divorce, and Encarta, provided by the Microsoft Network (MSN), is an online encyclopedia. These sites collect revenue in various ways. For example, MapQuest licenses its technology to thousands of business partners and provides advertising opportunities using banner ads and electronic coupons for hotels, restaurants, and the like, which target consumers as they travel. Nolo sells legal software and books on its site, and the free information that drives users to the site could also entice a user to make a purchase there. The Encarta site provides a great deal of information for free, but the full Encarta Reference Library is available for sale on the site, as are many other products. (This is the MSN, after all.) Just how long so much free information will remain free is unknown. In the future, these types of sites might be excellent candidates for micropurchases, should that concept ever become popular.
Search Engines Search engines provide free information. However, they are unique enough and important enough to be discussed as a subcategory of the free-information model. Search engines serve as indexes to the World Wide Web. At a search engine site, people use keywords to indicate the type of information they are looking for, and the site returns matches considered relevant to the user’s request. The data provided by a search engine comes from their
database, which is continually updated. These databases are populated and updated through a continual search of the Web by programs called spiders. A number of search engines exist, for example Google, Lycos, and AltaVista. Because the spider code differs from search engine to search engine, as does the capacity of the database, the results one gets will also differ. Portals (discussed below) sometimes partner with a search engine in order to provide search facilities; for example, Comcast utilizes Google on its home page and Yahoo uses Google as its back-end search engine (Notess, 2002).
Directories Directories are different from search engines in that a user typically selects from among a choice of categories, which have multiple levels of subcategories, to find the desired data. Yahoo was the first directory on the Web, and in addition to grabbing territory early, it has continually grown and reinvented itself. Yahoo now provides both directory and search engine facilities. It provides a tremendous amount of content, as well as services (e.g., e-mail), and shopping. As a result of the immense capability of its site, users spend increasingly large amounts of time there, and this, in turn, provides a great deal of information to Yahoo. This information is used to target advertising, which yields significant revenue, because targeted ads can sell for up to 60 times more than untargeted ones (Anonymous, 2001).
Portals Although portals don’t necessarily have to provide free information, in reality they generally do, so they are best discussed as a subcategory of the free-information model. A portal is a place of entry into the virtual world, that is, the site that a user chooses as his or her “home” in a browser, and most portals are, in fact, search engines or have search engine capabilities built in. Yahoo, for example, is one of the most popular sites used as a Web portal. Many other sites serve as the portal of choice for Web users, most notably those provided by Internet service providers, such as AOL or MSN. Because these general-purpose portals appear by default on the desktop when a browser is launched, they have a distinct competitive advantage. News sites such as USA Today or CNN are also popular portals. General-purpose portals typically provide current news items, shopping, and the ability to configure the page according to personal interest. Many also provide e-mail capabilities, online calendars, and Web-based storage. Some Web users prefer a portal targeted to their specific interests, such as Healtheon/WebMD, funtrivia.com, or MavenSearch (a portal to the Jewish world). Regardless of the type, portals derive their revenue from site advertising and partnership agreements, or in the case of Internet service providers (discussed below), subscription fees help to defray the costs of maintaining their portal.
The Service-for-a-Fee Model Although products can and are purchased on the Web, services are available for online purchase as well. The service-for-a-fee model operates in the same fashion as the
retail model in that consumers pay for services rendered, and these fees account for the great majority of earned revenue. At Resume.com, for example, professionals will write, revise, or critique a resume. They will also prepare cover letters and thank you letters or even serve as a “personal career agent.” The price for these services ranges from $50 for a cover letter, to $99 for a “classic” resume, to $1,499 to act as a personal career agent. In the same basic industry, Monster provides services to both job seekers and employers. Monster’s slogan is “Work. Life. Possibilities.” They accept job postings from both employers and potential employees and allow either group to browse the available listings. The price for a job posting starts at $120, which is much less than many large, urban newspapers will charge, yet the audience reach is significantly broader. They also provide a number of job-related services, such as the “Personal Salary Report,” which gives job seekers an idea of their potential worth in the current job market, and the “Inside Scoop,” which purports to tell job seekers what interviewers “really want to know.” Monster is the largest employment site on the Internet, operating in at least 21 countries, thus providing a huge marketplace for employers and employees to meet. They are a real success story among the dot-coms, having about 30 million unique visits and over 12 million resumes. Yet they continue to seek avenues for new growth. For example, Monster recently joined with AOL, providing it with the largest consumer base on the Net, over 30 million members (www.tmpw.com), and it began providing online training and employee development services in August 2001 (Moore, 2001). This site is especially useful for employees who are willing to relocate or for employers who want the broadest possible geographical exposure for their positions. A search function using keywords is available for both employers and employees, to simplify the search process. E*Trade, an online brokerage and banking firm, was launched in 1983 as a service bureau before the World Wide Web came into existence. It linked with AOL and CompuServe in 1992, becoming Etrade.com, one of the first all-electronic brokerage firms. It is now a global leader in personal financial services, providing portfolio tracking, free real-time stock quotes, market news, and research. E*Trade Bank, now the largest purely online bank, was added to its portfolio in 2000 to offer a variety of financial services, including checking, savings and money market accounts, online bill paying, mortgage and auto loans, and credit cards. In 2001, E*Trade announced its first proprietary mutual fund, which is to be solely managed by E*Trade Asset Management (www.etrade.com).
Internet Service Providers (ISPs) Internet service providers are a little different from other types of services available on the Web, because they are not generally thought of as a “destination” on the Internet. Rather, ISPs provide consumers with the ability to access the Internet, and to send and receive electronic mail. Some of the most well known of these ISPs are AOL, CompuServe, and MSN. More recently, cable providers (e.g., Comcast and Cox) and DSL providers (e.g., Verizon) have earned and continue to increase market share by
providing a high-speed alternative to modem access. As of early 2002, 7% of U.S. households were using one of these broadband alternatives for Internet access and the percentage is growing (Associated Press, 2002). ISPs charge a monthly fee for their service, which accounts for the vast majority of their revenue. NetZero is the most well-known free ISP, but in return for being “free,” it requires users to allow their surfing and buying behavior to be snooped. Thus, although the user is not paying for this service with cash, he or she is paying for it by providing a large amount of personal information. This information is worth a great deal to NetZero, who can turn it into cash by selling the information to other businesses. NetZero also fills the user’s screen with more than the usual amount of advertising banners. Some believe that the free ISP model is not sustainable in the long run (Addison, 2001), and it should be noted that even NetZero has two types of service—free and “platinum.” Platinum costs less than most well-known ISPs (currently $9.95/month), but it offers better service than the free subscription does, and it removes the plethora of banner ads from the screen. Other examples of the free-service model are mentioned below. NetZero is an example of both the service-for-a-fee model and the free-service model. It is included here because it is an ISP.
The Free-Service Model It is often difficult to distinguish between free information and free services. For example, are the driving directions MapQuest provides a service or information? For purposes of categorization in this chapter, a service that actually provides information is considered as belonging to the free-information model. Only services providing something distinctly different from information are categorized as service models. One type of free service is the provision of games, such as games of chance. At Freelotto, for example, visitors can play an online lottery, which is supported by online advertisements. Napster provided a much publicized example of an extremely popular, free-service site. At Napster, users could trade music files; however, in reality, no trading was actually required. Members provided music files free to anyone who wanted to download them, and they could download as many files as they desired. Napster was shut down as a result of a lawsuit brought by music providers. There are similar sites still operating (e.g., www.imesh.com), but none of these sites is anywhere near as successful as Napster. It is worth noting that the music industry has attempted to replace the popular Napster service by offering “fee-for-download” sites, such as RealOne and Pressplay. However, these sites are relatively expensive (about $10/month) and have alliances with only a subset of the large record companies, resulting in members being able to download only music sold by alliance companies. Most sites that offer a free service earn their revenue through site advertising and/or reselling of consumer information.
INTERNET STRATEGY Creation of a successful Internet business model must also include the clarification of a firm’s strategy, including the
consideration of how to best exploit the features of the Internet. It should be noted that Internet strategy is not the same as a business model. The strategy outlines the specifics of how a company will implement a given model, and any business model can be implemented by an almost unlimited variety of strategies. The fact that some companies succeed when using a particular business model (e.g., auction) while others fail clearly attests to this fact. During the process of delineating an Internet strategy, one must clearly define how the day-to-day operations of the organization will allow the firm to become self-sustaining within a reasonable period of time. The amazing proliferation of dot-coms, and the incredible losses that accrued to some, provide us with many “lessons learned,” which are also briefly discussed in this section.
Competitive Advantage Through the Internet Evans and Wurster (1999) suggest that success on the Internet will accrue to those organizations that exploit one of the following: reach, affiliation, or richness. Reach is defined along two dimensions: (a) how many customers a firm can reach and/or (b) how many products/services can be provided to a customer. The advantage of reaching a large potential customer base is obvious, and that’s one of the main reasons why many businesses set up shop on the Internet. Customers from virtually anywhere in the world can shop online 24 hours a day, 7 days a week. On the Internet a business can also provide a huge number of products, significantly more than can be provided in a finite physical space. For example, Amazon, one of the Internet’s pioneers, offers 25 times more books for sale than the largest bookstore anywhere in the world. Competitive advantage can also be achieved through affiliation—specifically, through affiliation with the consumer. Consumers will have more trust in an organization that appears to be on “their side.” For example, if someone is going to buy an automobile, whose opinion will carry more weight with the buyer, the auto manufacturer (e.g., Ford or Toyota) or an unbiased third party (e.g., Edmunds.com or Consumer Reports)? The Internet has seen the rise of new types of business models that affiliate themselves with the consumer, for example search engines such as Yahoo or Google. Consumers can visit these sites, request data, have their requests honored, and pay nothing for the service. The final dimension suggested by Evans and Wurster is richness. They define richness as (a) how much detail the organization provides to the consumer or (b) how much information about the consumer is collected by the organization. In the first case, firms can achieve competitive advantage by providing significant detail to the consumer—more than the amount provided by their competition. This strategy will be most effective when the product is technical and/or changes frequently. Examples of the types of products that might benefit from a strategy of richness are computers, wireless phones, and digital cameras. A firm can also achieve competitive advantage if it holds a significant amount of information about its customers, because information is an asset when used effectively. Many online companies use information about
their customers’ buying habits to customize the site for each visitor and also to provide suggestions about additional products the visitor might be interested in.
Lessons Learned The advent of the World Wide Web brought with it a rush of businesses hoping to be first to achieve market share online. During this frenzy of the mid-1990s, venture capitalists were willing to invest in almost any type of online organization. The belief seemed to be that just about any idea could bring a fortune online. Nothing appeared to be too extreme. However, by the late 1990s, many of the companies that were still losing money years after launch began to look far less attractive, and much of the venture capital funding that had been so easily obtained earlier was drying up. One phenomenon worth considering is the reliance on Web advertising for revenue. In the early days of Internet surfing, the banner ad was something new and intriguing, and many people were motivated to click through, hoping to discover a “terrific deal” or to find out what they “won.” However, this initial curiosity was followed by consumer disinterest, as a result of both previous disappointments and the sheer numbers of ads popping up everywhere. Business models that relied solely on advertising revenue were among those fighting for their virtual life. Considering history and the current environment, a variety of recipes for online success have been espoused. Patton (2001) suggests several rules for online success. One is be diverse. Travelocity, for example, has found success by moving from its initial business model, which was to sell airline tickets and collect revenue from online advertising, to its revised model, which no longer depends so much on advertising, relying instead on its extensive customer database to sell things other than airline tickets (e.g., hotel rooms, membership in travel clubs, and suitcases). This is also an example of using “richness” to achieve competitive advantage. A second rule is exploit channels. Retailers have historically used multiple channels—such as retail stores and catalogs (e.g., Victoria’s Secret or L.L. Bean)—for selling their merchandise. The mistake made by some retailers was to consider the Web as a separate, competing channel, so that what a person bought in one channel, for example online, could not be returned to a different channel, such as the retail store. This practice was met with consumer frustration and dissatisfaction. On the other hand, the more successful enterprises exploited their channels, using one channel to promote another and treating all channels equally. This practice strengthened their brands and increased customer loyalty. Hamel (2001) suggests that there are three routes to imminent online failure, which he labels “dumb,” “dumber,” and “dumbest.” He claims that it is a dumb idea to miscalculate timing. That is, there are circumstances that require speed, such as where customer benefits are substantial and competitors are likely to appear quickly. On the other hand, there are circumstances that call for a slower approach, that is, where complementary products or new customer behaviors are required in order to take advantage of the product. Even dumber is to overpay for market share. For example, Pets.com paid $180 for every customer it ultimately acquired. This huge cost, plus its inability to differentiate itself, led to its speedy failure (Patton, 2001). In order to be effective, a company must acquire customers at a discount, not at a premium. The dumbest idea, according to Hamel, is to come to market without a good business model, and he claims that there are two fundamental flaws that will kill a business model. They are (a) misreading the customer (are there really customers who actually want what you are going to sell?) and (b) unsound economics (are there really enough customers available to render your business profitable?).
CONCLUSION A business model is the method by which a firm manages to remain a going concern, and a business strategy helps to define the goals of the particular business model chosen. With the advent of the Internet, some existing business models were revised and other new models were invented. Many of the revised business models simply incorporated the addition of a new sales channel—the World Wide Web. Companies utilizing these models are called brick-and-clicks. However, simply adding this new sales channel has provided no guarantee of success. Rather, companies that added this channel successfully did so through their specific implementation strategy. For example, Lands’ End added value to its Internet channel by incorporating special features and services that would entice people to shop on its Web site, and Wal-Mart utilized its Web site to strengthen its primary channel—the Wal-Mart store. These Internet strategies (i.e., value-added features or services and channel exploitation), along with others described above, helped aspiring brick-and-click companies achieve Internet success. Some companies revised an existing business model by adding value that could only be provided by the Internet. For example, eBay reinvented the auction model by providing an efficient and effective online marketplace, where sellers could market their goods with very little risk and for a very reasonable expenditure level. This has resulted in a volume so large as to exploit both the reach and affiliation strategies as espoused by Evans and Wurster (1999). Another example, the free-information model, is sometimes a variation on the traditional broadcast (television or radio) model. Companies such as CNN and USA Today provide free information to the public, and they earn their revenue through advertising. In other cases, the free-information model more closely resembles an ongoing “promotion” using free samples. Consider, for example, the information provided on the Nolo site, which might entice the user to purchase the related software package. Finally, the Internet has provided the catalyst for some completely new models, and these models would not exist without the Internet. Examples in this category are search engines, Internet service providers, and portals. These brand new business models exist for the purpose of helping people effectively access the World Wide Web. That is, ISPs provide the technology necessary to connect to the Web, portals provide a place to start activity once access to the Web is gained, and search engines help users find the right path to Web sites of interest. Portals and
search engines exploit an affiliation strategy, and the most successful ones provide the quality and quantity of information and services desired by the consumer, thereby exploiting richness. Regardless of whether a business model merely includes the Internet as a separate sales channel or adds value to an existing model in order to exploit the unique features of the Internet, or whether a model was invented specifically for the Internet, it has been shown that no firm can be successful without a good business model coupled with sound strategy, resulting in sufficient revenue to allow the firm to remain a going concern. It is also clear that technology alone, no matter how sophisticated it is, cannot overcome the problems inherent in a business model that, for example, misreads the customer or for which insufficient customers actually exist. The days of unlimited venture capital are over; future funding will require sensible models united with a profitable strategy.
GLOSSARY
Bot: Programs that can autonomously search through data on the Internet and return the data for storage in a database (short for robot; also called spider).
Brick-and-Click: Companies conducting business in both the physical and the virtual world.
Business Model: Method by which a firm manages to remain a going concern.
Click-and-Mortar: Companies conducting business in both the physical and the virtual world.
Dot-com: Companies that exist solely on the Internet.
E-cash: Special-purpose currency that can be spent when shopping online.
Internet Service Provider (ISP): Provides Internet services such as access to the World Wide Web and e-mail.
Micropurchase: An online purchase costing from just a few cents to a dollar or two.
Portal: A place of entry into the virtual world; the site a user selects as “home” in a browser.
Search Engine: An index to the World Wide Web; allows users to enter keywords that help to find desired information.
Spider: Programs that can crawl from Web site to Web site and retrieve data that is stored in the database of a search engine.
CROSS REFERENCES See Business-to-Business (B2B) Electronic Commerce; Business-to-Business (B2B) Internet Business Models; Click-and-Brick Electronic Commerce; Collaborative Commerce (C-commerce); Consumer-Oriented Electronic Commerce; Electronic Commerce and Electronic Business; Emarketplaces.
REFERENCES
Addison, D. (2001). Free web access business model is unsustainable in the long term. Marketing, August 9, p. 9.
Amazon.com (n.d.). Retrieved July 26, 2002, from http://www.amazon.com
Anonymous (2000). E-commerce: In the great web bazaar. The Economist, 354(8159), S40–S44.
Anonymous (2001). Internet pioneers: We have lift-off. The Economist, 358(8207), 69–72.
Associated Press (2002). FCC report: High-speed net access is growing. Retrieved February 8, 2002, from http://www.hollywoodreporter.com/hollywoodreporter/convergence/article display.jsp?vnu content id=1320918
Autobytel. Retrieved April 2, 2002, from http://www.autobytel.com
Borzo, J. (2001). A consumer’s report—Searching: Out of order?—You may think you’re getting the cheapest merchant when you use a shopping bot; but instead, you may just be getting the biggest advertiser. Wall Street Journal, September 24, p. R13.
Ebay. Retrieved March 15, 2002, from http://www.ebay.com
Encarta. Retrieved March 20, 2002, from http://encarta.msn.com
Etrade. Retrieved February 21, 2002, from http://www.etrade.com
Evans, P., & Wurster, T. (1999). Getting real about virtual commerce. Harvard Business Review, 77(6), 84–94.
FreeLotto. Retrieved March 19, 2000, from http://www.freelotto.com
Greenspan, R. (2002). Service seekers stay in the ‘hood. Ecommerce-guide.com. Retrieved July 21, 2002, from http://ecommerce.internet.com/news/insights/trends/article/0,,10417 1135801,00.html
Half. Retrieved July 21, 2002, from http://www.half.com
Hamel, G. (2001). Smart mover, dumb mover. Fortune, 144(4), 191–195.
Imesh. Retrieved March 18, 2002, from http://www.imesh.com
Ioffer. Retrieved July 21, 2002, from http://ioffer.com
Lands’ End. Retrieved March 17, 2002, from http://www.landsend.com
Mahadevan, B. (2000). Business models for internet-based e-commerce: An anatomy. California Management Review, 42(4), 55–69.
MapQuest. Retrieved March 12, 2002, from http://www.mapquest.com
Mearian, L. (2001). Visa purchases online security on merchants, banks. Retrieved July 21, 2002, from http://www.cnn.com/2001/TECH/industry/05/15/visa.security.idg/?related
Monster. Retrieved March 22, 2002, from http://www.monster.com
Moore, C. (2001). IBM, Monster.com kick start corporate e-learning. Retrieved December 29, 2001, from http://iwsun4.infoworld.com/articles/hn/xml/01/08/20/010820 hnelearn.xml
Napster. Retrieved March 17, 2002, from http://www.napster.com
Nolo. Retrieved March 19, 2002, from http://www.nolo.com
Notess, G. R. (2002). Review of Google. Retrieved March 19, 2002, from www.searchengineshowdown.com/features/google/index.shtml
Patton, S. (2001). What works on the Web. CIO Magazine, 14(23), 90–96.
Paypal. Retrieved July 26, 2002, from http://www.paypal.com
PillBid. Retrieved March 18, 2002, from http://www.pillbid.com
Pressplay. Retrieved March 14, 2002, from http://www.pressplay.com
Priceline. Retrieved March 21, 2002, from http://www.priceline.com
Qpass. Retrieved July 26, 2002, from http://member.qpass.com/MACHelpCenter.asp?ReturnUrl=%2Fmacwelcome%2Easp&BrandingID=0
Rappa, M. (2001). Business models on the web. Retrieved December 12, 2001, from http://digitalenterprise.org/models/models.html
RealOne. Retrieved March 15, 2002, from http://www.realone.com
Respond. Retrieved July 21, 2002, from http://respond.com
Resume. Retrieved March 21, 2002, from http://www.resume.com
TMP Worldwide. Retrieved March 14, 2002, from http://www.tmpw.com
Totty, M. (2001). Openers—Changing clients: How some e-tailers remade themselves as B-to-B businesses. Wall Street Journal, May 21, p. R6.
Travelocity. Retrieved July 26, 2002, from http://www.travelocity.com
Wal-Mart. Retrieved March 22, 2002, from http://www.walmart.com
White, E. (2000). The lessons we’ve learned—Comparison shopping: No comparison—Shopping ‘bots’ were supposed to unleash brutal price wars; why haven’t they? Wall Street Journal, October 23, p. R18.
Yahoo. Retrieved July 26, 2002, from http://www.yahoo.com
Capacity Planning for Web Services
Robert Oshana, Southern Methodist University
Introduction
Planning Considerations
Service Level Agreements
Determining Future Load Levels
Capacity Planning Methodology
Business Planning
Functional Planning
Resource Planning
Customer Behavior Planning
Understanding the System and Measuring Performance
Determining Response Time
Interaction Performance
The Performance and Capacity Process
Model for Capacity Planning
Usage Patterns
Modeling Parameters for Web-Based Systems
Modeling Approaches for Capacity Management
Software Performance Engineering—Managing the Process
Definition of Software Performance Engineering
The Software Performance Engineering Process
SPE Assessment Requirements
SPE for Web-Based Applications
Availability Modeling and Planning
Process
People
Product
Availability Specification
Measuring Availability
Design Principles for System Availability
Tools to Support Capacity Planning
Examples of Capacity Planning Tools
Capacity Management—Performance and Stress Testing of Web-Based Systems
Types of Performance Testing
Stress Testing Model for Web-Based Systems
Glossary
Cross References
References

INTRODUCTION The Internet is continuing to grow rapidly throughout most of the world. The most popular Internet applications continue to be the World Wide Web (Web), electronic mail, and news. These applications use a client–server paradigm: Systems called servers provide services to clients. Client–server describes the relationship between two computer programs in which one program, the client, makes a service request from another program, the server, which fulfills the request. Although the client–server idea can be used by programs within a single computer, it is a more important idea in a network. In a network, the client–server model provides a convenient way to interconnect programs that are distributed efficiently across different locations. The client–server model has become one of the central ideas of network computing. Most business applications written today use the client–server model. So does the Internet’s main protocol suite, TCP/IP (transmission control protocol/Internet protocol). It is critical that these servers are able to deliver high performance in terms of throughput (requests per second) and response time (time taken for one request). As new capabilities and services make their way to the Web, the ability to forecast the performance of integrated information technology networks will be critical to the future success of businesses that provide and rely on these capabilities and services.
The explosive growth of Internet sites and e-commerce has presented new challenges in managing performance and capacity. Also, the ability to estimate the future performance of a large and complex distributed software system at design time can significantly reduce overall software cost and risk. These challenges have led to the development of a discipline of managing system resources called capacity planning, which Menasce and Almeida (2000, p. 186) described as “the process of predicting when future load levels will saturate the system and of determining the most cost-effective way of delaying system saturation as much as possible.” Performance management is an important part of the capacity planning process. The goal of performance management is to proactively manage current system performance parameters such as throughput and latency to provide the user adequate response time as well as minimal downtime.
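A useful back-of-the-envelope relation for this kind of planning is Little's law, N = X × R, which ties together throughput (X), response time (R), and the average number of requests concurrently in the system (N). The sketch below applies it with assumed target numbers.

```python
# Little's law: N = X * R, where X is throughput (requests/second),
# R is response time (seconds), and N is the average number of
# requests concurrently in the system.
def concurrency(throughput_rps, response_time_s):
    return throughput_rps * response_time_s

# Assumed targets: sustaining 200 requests/s at a 0.5 s response time
# means roughly 100 requests are in progress at any instant.
print(concurrency(200, 0.5))
```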
Planning Considerations Determining future load levels can be difficult if a disciplined process is not used. There have been many examples of systems that suffered economic and reputation damage as a result of not planning correctly and adequately for growth and capacity. Building systems “on Internet time” appears to have been a way to get around disciplined system development processes in some cases.
Figure 1: Modern information technology systems are complex networks with many stakeholders and performance requirements (the schematic shows system databases, servers, enterprise systems, switches, routers, load balancing, messaging, network security, and the intranet linking customers, strategic partners, suppliers, and the e-business itself).

Modern information technology systems are complicated and involve a number of stakeholders and users (Figure 1). New capabilities are continuously being added to further complicate these already taxed systems. Planning the growth and evolution of these systems and managing them effectively cannot be a part-time job. There are too many interrelated factors that prevent it from being such, and businesses have come to be extremely dependent on these systems for their livelihoods. Nevertheless, the determination of future load levels should consider how the workload for the system will evolve over time. If a Web system is just coming online, supplemented by an increasing advertising campaign, developers should expect an increasing workload due to an increasing number of visits to the site. If there are plans to deploy a system in increments, adding new capabilities periodically during a phased deployment schedule, this should be considered in the planning of future load levels. Finally, changes in customer behavior should be considered. Whether the change is due to a sale, a world event, or seasonal functions being added to the site, it should be predicted and managed accordingly. Regardless of the driver for future load level changes, the goal should be to move toward a predictive pattern rather than rely solely on experimentation.
Service Level Agreements Service level agreements (SLAs; Figure 2) should be put in place with the customer (internal or external) to establish the upper and lower bounds of performance and availability metrics. For example, the customer may need to have the site availability greater than 99.95% for security reasons. This should be stated in the SLA. If the server side response time to information requests must be less than 5 seconds per request, this should also be stated in the SLA. SLAs vary by organization. The creation of an SLA is a management function with feedback from the user as well as the information technology developers. If any of the IT functions are outsourced, the SLA becomes more important and should be the basis for subcontractor management.
Determining Future Load Levels Determining future load levels of a system requires careful consideration of three major factors. The first is the expected growth of the existing system workload. Consideration should be given to how the company or business will evolve over time. The second is the plan for deploying new system services and new or upgraded applications. These new capabilities will require more memory, processing power, and I/O (input/output). These capabilities must also be measured against how often they will be used in order to determine the overall system impact when they are deployed. For example, for a new semiconductor design workbench capability added to its Web site, an estimate of the processing resources required to run the new application as well as the number of new users drawn to the site that will be using the new capability must all be considered. Finally, the third factor to consider is changes in customer behavior. This includes surges in site traffic due to world events, news stories, sales and advertisements, and other events that draw people to a specific site. The site must be able to sustain these temporal changes to load levels. If the semiconductor design workbench capability is accompanied by a large advertising campaign, the system should be built to handle the surge in new users registering for this capability, each of which will need system resources to operate their virtual design workbench.
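These three factors can be combined into a rough first-cut forecast: grow the existing workload, add the expected steady-state demand of new services, and overlay a surge multiplier for campaign- or event-driven peaks. All of the growth rates and request figures below are placeholder assumptions, not measurements.

```python
def forecast_peak_load(current_rps, annual_growth, new_feature_rps, surge_factor):
    """First-cut peak-load forecast one year out (all inputs are assumptions).

    current_rps     : today's request rate
    annual_growth   : expected organic growth of the existing workload (e.g., 0.40)
    new_feature_rps : added steady-state load from planned new capabilities
    surge_factor    : multiplier for advertising- or event-driven spikes
    """
    baseline = current_rps * (1 + annual_growth) + new_feature_rps
    return baseline * surge_factor

# 150 req/s today, 40% growth, 30 req/s from a new design workbench,
# and a 3x campaign surge: plan for roughly 720 req/s peaks.
print(forecast_peak_load(150, 0.40, 30, 3.0))
```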
Figure 2: Capacity planning is driven by several system parameters: customer service level agreements (e.g., response time < 1 sec, availability > 99.7%), specified technologies and standards (e.g., NT servers, Oracle DB), and cost constraints (e.g., startup cost < $5 million, maintenance cost < $500K/year), all feeding into adequate capacity management.
In determining adequate capacity for a site, there are several factors that must be considered (Figure 2; Menasce & Almeida, 2000, p. 187):

- System performance. This includes factors such as performance and availability metrics for the system. Examples of performance metrics to be considered are server-side response time and session throughput. (A session in this context will normally consist of several Hypertext Transfer Protocol [HTTP] requests performed by a user during the period the user is browsing, shopping, etc.) Examples of availability metrics include site availability. Site availability is driven by many other factors including backup time, time to install new hardware and software, reboot time for failed applications, system maintenance time, and so on.
- System technologies. Internet solutions can draw from a number of technologies in the industry. Servers are available from many vendors that have provided certain advantages for specific types of applications (for example, transaction-based applications). There are many database solutions available as well. The choice of these technologies and the decisions about how to integrate them have a significant effect on the overall site performance.
- System cost. System performance and system technologies are ultimately affected by the cost constraints for the system. Cost drives many of the decisions with respect to the type of servers, the number of servers, the technology used, training on the new technology, and so on.
CAPACITY PLANNING METHODOLOGY Menasce and Almeida (2000) described a capacity planning methodology as consisting of four core planning processes: business, functional, resource, and customer behavior planning.
Business Planning The main goal of this process is to generate a business model that describes the type of business conducted. Web-based businesses can be business to business (B2B), which is a business that sells products or services to another business; business to consumer (B2C), which is a business that sells products or services to a targeted set of end user consumers; and consumer to consumer (C2C), in which untrusted parties sell products and services to one another. Business planning also considers the type of delivery associated with the business model. For example, selling books requires a fulfillment model that is different from distributing an upgrade to a software application that may be a digital delivery directly over the Internet. Business planning must also consider the use of third-party services that can vary from such functions as fulfillment and delivery, to site maintenance, to customer support. Other quantitative measures are also considered, ranging from the number of registered users, which may be important for an online subscription product, to consumer buying patterns, which may be important to the advertisers sponsoring some or all of the site development.

Functional Planning A functional model of the system can be created that includes important information about the functions provided by the system including how the user interacts with the system, the Web technology used, the type of authentication used for e-business sites, and so on. The functional model is required before resource allocation can be estimated. The functional model can vary by business type and needs and will have a considerable impact on the overall resource allocation. The selection of the search engine, for example, will have a direct impact on the processing resources required to execute the particular search algorithms (which vary from search engine to search engine). The type of database model will dictate the type and form of query required, which will also drive resource demands. An online workbench for designers will require a certain amount of processing resources and memory to support this type of activity.

Resource Planning Resource planning is used to map resources based on the customer behavior models and the functional and business planning. Resource planning requires the development staff to characterize the current information technology (IT) environment and model it for performance analysis. This model is then analyzed and calibrated, if necessary, iteratively until the model is validated against the requirements of the system and SLA. Because few organizations have infinite cost resources to accommodate all wishes, the performance model must be consistent with the cost model for the project. The iterative nature of this phase attempts to develop a system, as closely as possible, that matches both the performance model as well as the cost model of the system. Because of this goal, a lot of “what if” analysis is usually performed in this phase to achieve an optimal solution.

Customer Behavior Planning
Customer behavior models are also required to perform adequate capacity planning. These models are used to determine the possible navigation patterns by the user. This information is then used to determine required or needed site layout adjustments as well as new services. For example, if the navigation pattern data showed users spending a majority of time performing information queries, the database server will need to be designed to accommodate this pattern. If the navigation patterns showed customers navigating to a function that displayed various product images, this information can be used to provide adequate performance in image presentation. The customer behavior characterization leads to workload characterization and forecasting. Based on the navigation patterns of the user and where the user spends most of the time when navigating a Web site, the workload can be estimated to provide a goal for resource allocation. A well-thought-out and planned methodology for capacity planning will lead to more effective systems integration and execution. Effective capacity planning also depends on accurate predictive performance models.
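As a sketch of how navigation-pattern data feeds workload characterization, the hypothetical session profile below weights each page type by how often a typical session visits it and by an assumed processing cost per request.

```python
# Hypothetical session profile: (page type, visits per session, CPU-seconds per request)
SESSION_PROFILE = [
    ("home",     1.0, 0.010),
    ("search",   3.0, 0.050),
    ("product",  4.0, 0.030),
    ("checkout", 0.3, 0.080),
]

def cpu_demand_per_second(sessions_per_hour):
    """Approximate CPU-seconds of work generated per wall-clock second."""
    per_session = sum(visits * cost for _, visits, cost in SESSION_PROFILE)
    return sessions_per_hour / 3600.0 * per_session

# 10,000 sessions/hour of this mix keeps roughly 0.84 CPUs busy on average.
print(cpu_demand_per_second(10_000))
```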
Figure 3: Layered infrastructure for Web-based systems: the HTTP server sits on top of TCP/IP, which runs over the operating system, which runs on the hardware (processors, disks, network interfaces, etc.).
UNDERSTANDING THE SYSTEM AND MEASURING PERFORMANCE A complex infrastructure supports the delivery of Web services. The main components of any Web-based architecture are the hardware (central processing units [CPUs], disks, network interface cards, etc.), the operating system, the communication protocol (TCP/IP, UDP [user datagram protocol], etc.), and the HTTP server (Figure 3). The bottleneck in performance can be anywhere along the chain of delivery through this infrastructure. It is therefore important to understand what to measure and how to measure it in order to model and predict future needs accurately. Web services infrastructure consists of not only servers that crunch user requests, but also local area networks (LANs), wide area networks (WANs), load balancers, routers, storage systems, and security systems. To make matters worse, many of these resources are shared among many other Web services, groups, companies, and individuals.
Determining Response Time The overall response time can be broken down into two major components: the network time, which is the time spent traveling through the network, and the Web site time, which is the time the request spends being processed at the Web site. The network time consists of the transmission time of the request. This is the time it takes for the required information to be sent from the browser of the user to the Web site. This can vary depending on the technology the user has (modem, digital subscriber line [DSL], cable, etc.) as well as how much data is being sent (which dictates how many packets of information will be sent). Even the TCP/IP stack on the user side has an impact on the overall performance. There are many commercial and custom implementations of TCP/IP stacks. Benchmarking must be performed to get the true performance. The other component of network time is the latency, which is a measure of how many round-trip messages are required to be sent from the user to the Web site. Again, this varies and is dependent on many factors. For example, if the Web site requires the use of a cookie to be placed on the user machine, then each time the Web site is visited, the cookie exchange will take additional overhead to complete and must be added to the overall response time. The main components of the Web site time include the service time and the queuing time. Each of these
components is dependent on the hardware available to handle these functions—the CPU, the storage disks, and the LAN. The service time is the time spent getting service from the CPU, storage device, and so on. Modern programming models protect these resources using mechanisms such as semaphores to prevent problems such as the shared data problem, in which more than one task can corrupt data structures based on their calling sequence. Mechanisms such as semaphores inherently imply that while one task is using a resource like a CPU, other tasks that also want to use that resource must wait until the resource is free. This leads to a queuing model, which must be considered and modeled because it can add a substantial amount to the overall Web site time. Engineers must understand what the specific queuing model is (operating system, ping-pong buffer, etc.) and consider its impact on the overall performance numbers. In some network devices such as routers, the queuing mechanism may simply be a hardware buffer where all requests get stored and processed. Configuration of network resources has a significant impact on the overall performance estimate. For example, adding fast memory (random access memory, or RAM) to a server will improve performance by some amount (which must be measured). Access time to RAM is much better (by orders of magnitude in some cases) than access time to magnetic media like a hard disk. The trade-off is usually cost, and the cost-versus-performance target must be known in advance (it is usually described in the SLA). Even the configuration of the user's browser can have an impact on overall performance. Cache size settings in the browser, for example, can have an impact. Because this is a parameter set by the user, the model should either use the default values in the browser or be set to some average industry setting. Web page download time must also be considered because this varies considerably based on the application. Modern Web pages contain markup language as well as embedded images and other embedded objects. To estimate the average download time for a Web page (independent of network traffic and other factors not associated with the Web page itself, although these factors must be considered in the final analysis), analysis must be performed on various computer configurations and settings. Models must be developed that take into consideration the number of embedded objects, the size of the embedded objects, the HTTP header size, and the number of segments per object. The more elaborate the Web page, the more processing is required to get the page to the user and provide the response time called for in the SLA. Keep in mind that the response time for a Web page with complicated embedded objects will vary considerably depending on the connection type. If many users will be accessing the site from home, dial-up modem performance should be considered, not only the performance assuming a T1 line or other high-performance connection.
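To make the decomposition concrete, the following sketch adds up the response-time components described above. Every number in it is an assumed placeholder that would have to be replaced by measured values for a real site and connection type.

    # Sketch: overall response time = network time + Web site time.
    # All inputs are assumed example values, not measurements.
    page_bytes = 60_000            # markup plus embedded objects
    link_bits_per_sec = 56_000     # dial-up modem; a DSL or T1 user would differ greatly
    round_trips = 4                # e.g., connection setup plus a cookie exchange
    latency_per_round_trip = 0.08  # seconds per round trip

    transmission_time = (page_bytes * 8) / link_bits_per_sec
    network_time = transmission_time + round_trips * latency_per_round_trip

    service_time = 0.15            # CPU, disk, and LAN service at the Web site
    queuing_time = 0.25            # waiting for busy resources (semaphores, buffers)
    web_site_time = service_time + queuing_time

    print(f"network time  : {network_time:.2f} s")
    print(f"Web site time : {web_site_time:.2f} s")
    print(f"response time : {network_time + web_site_time:.2f} s")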
Interaction Performance
As with most user interface systems, it is not just the performance of the system itself that must be considered, but also the performance of the person interacting with the Web site. When analyzing any user interface
for performance-related issues, there is a delay associated with the user's think time. This is the period of time in which the user is perceiving the information and deciding what to do next. If the Web page is complicated and difficult to navigate, this "think" time will increase. Web sites that are easy to understand and navigate reduce the think time. Designers of Web sites need to be careful when using the latest Web development technology to create fancy animated images, multiple colors, and so on, because these can distract the user from the task and lead to unnecessary delays in the overall interaction.
[Figure 4: The Web infrastructure consists of many components, all of which must be considered for accurate capacity planning: the last mile (cable, DSL, dial-up), the ISP, the intranet/Internet, a router and firewall (DMZ), and successive layers of load balancers and Web servers, application servers, and database servers (e.g., mainframes).]
A Web infrastructure contains many interacting components (Figure 4). Servers, Internet service providers, firewalls, several levels of servers, load balancers, and so on combine in different ways to achieve a certain performance level. There can be a significant performance difference depending on whether the user is accessing information from inside or outside a firewall, for example. With the growing popularity of wireless technology and the Wireless Application Protocol (WAP), the complexity will continue to grow. This presents significant challenges with respect to performance analysis and capacity planning. The randomness with which the thousands or millions of users interact with a company intranet or extranet, or the Internet in general, makes the forecasting and capacity planning job extremely difficult. The first step is to understand the various components involved in the deployment of a system and model the current and predicted workloads as accurately as possible.
THE PERFORMANCE AND CAPACITY PROCESS
Menasce and Almeida (2002, p. 178) defined adequate Web capacity as having been achieved "if the Service Level Agreements (SLAs) are continuously met for a specified
technology and standards, and if the services are provided within cost constraints.” This assumes that the organization has an SLA. If not, adequate capacity will normally be defined by users who complain or stop using the service if they do not consider the performance adequate.
Model for Capacity Planning
There are many models for capacity planning. Figure 5 is one simple model that describes the major factors involved in capacity planning. Regardless of the model, before improvements can be made and plans for the future can be drawn up, there needs to be a way to assess current performance.
[Figure 5: The performance and capacity planning process: determine existing performance; determine future performance; model and analyze capacity requirements (using customer behavior, cost, functional/resource, and business models); determine options; develop the capacity plan; and implement and manage.]
[Figure 6: Modeling a Web system produces a platform by which to estimate future performance. The real workload run on the real system gives measured performance, which is used to validate the modeled workload run on the modeled system; the projected workload run on the projected system then yields the predicted (projected) performance.]
This requires understanding the system and environment and estimating its use. To understand the environment adequately requires an understanding of the client and server technology, the applications running on the servers and clients, the network connectivity used, the access protocols in the system, and the usage patterns of various types of customers and clients (e.g., when do they shop, when do they log on, what pages are frequented most often, etc.).
Usage Patterns
Once the main components of the system are understood, the usage patterns should be mapped onto the system model to determine the overall workload characterization. This provides the capacity planner an estimate of the workload intensity allocated to each of the main components in the system. This is useful for determining bottlenecks and knowing where to focus the effort in terms of overall system performance. Each of the components of the system can be measured in terms of the following parameters:
- Number of client visits,
- Average message size,
- Total CPU use,
- Number of transactions per unit time,
- Number of e-mails sent and received per day, and
- Number of downloads requested.
The actual parameters depend on the component being measured, and other parameters in addition to these can come into play as well. They are different depending on whether the component is an e-mail system, a search engine, an interactive training module, and so on. The capacity planner must determine what makes sense to measure for each of these components. Once the components have been selected and the important parameters have been chosen for each component, the capacity planner must collect the appropriate data and perform a series of benchmarks to measure the performance on the actual physical machine. The main
benchmarking technique is to run a set of known programs or applications on the target system and measure the actual performance of the system in relation to the parameters chosen in the analysis phase. It is not important to run a truly representative application, as long as the workload is well defined on a given system so that an accurate comparison can be made. For Web-based systems the major benchmarking measures are time and rate. Time is from the user's point of view; how long it takes to perform a specific task is important to this user group. From a management perspective, the main measure is the rate that determines how many users can be processed per unit time, which relates to overall productivity, revenue, or cost, depending on the application. The process of benchmarking and measuring true performance on a real system is an important step and one that must be completed before proceeding to the step of modeling and predicting future performance. As shown in Figure 6, the real workload should first be run on the real system and the performance measured. The next step is to model the workload so that an accurate projection can be made about the future. This modeled workload must be run on the "system of the future," which is unknown at this time. To perform the required "what if" analysis, the system must also be modeled. This allows the modeled workload to be run on a modeled system. Measurements are made to ensure that the modeled system is an accurate representation of the real system. These measurements are made directly between the real system and the modeled system (of the real system). Any differences between the modeled measurements and the real measurements must be explained and understood. The modeled workload should be run through the modeled system to get the predicted performance. As a final validation step, the predicted performance produced by the modeled system should be compared with the measured performance of the actual system. Again, any differences should be understood. Only after the model of the system has been validated can the effort of developing a projected workload and a projected system model be made. The projected workload should come from a variety of sources, including the SLA.
[Image not available in this electronic edition.]
Figure 7: Deciding how to model the Web system depends on the questions that need to be answered. From Scaling for E-Business (p. 289), by D. A. Menasce and V. Almeida, 2000, Upper Saddle River, NJ: Prentice Hall.
Modeling Parameters for Web-Based Systems
The key parameters associated with modeling a Web-based system are workload, performance, and configuration. These parameters can be used in various ways to answer different questions. For example, analyzing workload plus configuration gives a measure of performance, which helps in predicting the future system requirements. Likewise, configuration and performance give a measure of workload, which helps to determine saturation points of the system. Finally, performance and workload help determine system configuration, which helps determine the sizing required in the future system.
Modeling Approaches for Capacity Management
Capacity planning can be simplified substantially by creating representative models of the real world using a simplified capacity model. This model should be based on the availability of the critical or bottleneck resource and on the demand placed on that resource alone to determine the overall likely output. This technique can provide a rough order-of-magnitude verification that demand and capacity are in balance. The phrase "all models are wrong but some are useful" is accurate in the sense that models, by definition, are abstractions of reality and therefore not 100% reflective of and accurate with respect to reality. As the level of abstraction increases, the accuracy of the model decreases. System-level models are higher levels of abstraction but can nevertheless be useful for modeling complex systems like Web-based systems. They can provide meaningful information to the capacity planner when making decisions about how to build and deploy future systems. Depending on the data needed for analysis, the question that needs
to be answered is in what detail the system (existing and proposed) needs to be modeled (Figure 7). Of the many modeling approaches available, I focus here on several proposed by Menasce and Almeida (chapter 4), which are applicable to Web-based systems. The client–server interaction model is one approach that helps one to understand the interaction between the clients and servers in the system. In a multitiered system, this model can be useful in showing the important interactions at each tier. This can be used for future workload estimates. As shown in Figure 8, each e-business function of importance can be modeled in this way to show all possible interactions with that function. Figure 8a represents the interactions at the different computing tiers in a model that resembles a UML (unified modeling language) sequence diagram, where time increases from top to bottom. This provides a time-based model of the interactions. Figure 8b shows the interactions between the different servers (application server, database server, and Web server) in a Markovian model. The Markovian model can be thought of as consisting of states (server nodes), arcs (arrows connecting the nodes showing interaction between the servers), and probabilities that represent the navigation patterns for the specific e-business function. This information effectively represents the customer behavior when interacting with the e-business function. Web sites can be optimized by applying modeling approaches such as the Markov model to the analysis of Web logs (Venkatachalam & Syed, n.d.). Finally, the message sizes can also be represented and are shown in the diagram as well. With this information, a mathematical model can be developed that estimates the workload, message traffic, and other meaningful information about the system. Keep in mind that, like all other Markovian models, the model is only as accurate as the probability data assigned to the arcs.
[Image not available in this electronic edition.]
Figure 8: A client–server interaction diagram showing all possible interactions for an e-business function: (a) the interactions over time; (b) the interactions showing navigation patterns and message sizes. From Scaling for E-Business (pp. 74–75), by D. A. Menasce and V. Almeida, 2000, Upper Saddle River, NJ: Prentice Hall.
Relatively accurate data can be obtained from analyzing current systems, the logs on these systems for existing applications, estimates from other sources such as prototypes, and other products in the field. These models are also hierarchical, which allows them to be decomposed into lower level, more detailed models when necessary. At the next lower level of detail (the component level), additional detail can be added. For example, queuing effects can be analyzed as service requests pass through the various levels of a multitiered application. Figure 8 can be represented from a queuing perspective as shown in Figure 9, in which each symbol represents a queuing function. Combinations of these queuing functions form a queuing network in which each queue represents a system resource, such as a CPU or a disk drive, and the requests waiting to use that resource. Average response times can be applied to each queue. Each of the system resources may have a different queue characteristic: use of the resource may be load independent, which means the service time is not dependent on the queue length (e.g., a disk access); it may be load dependent, where the service time is a function of the queue length (e.g., a CPU); or the queue may be a simple finite delay element (e.g., a network). Regardless of the modeling approach used and which parts of the system are modeled, the result should be answers to the important questions raised during the planning process and used to drive the decisions on what and where to improve to meet future demands.
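A minimal sketch combining these two ideas appears below. Navigation probabilities are reduced to expected visits per tier, and each tier is then approximated as a single open queue rather than solved as a full queuing network. The transition-derived visit counts, service times, and arrival rate are invented for illustration only.

    # Sketch: expected visits per tier (from a Markov-style navigation model) plus a
    # simple single-queue approximation per tier (residence = visits * S / (1 - U)).
    # All figures are assumptions for illustration.
    sessions_per_sec = 20.0
    visits_per_session = {"web_server": 5.0, "app_server": 3.0, "db_server": 1.5}
    service_time_sec = {"web_server": 0.004, "app_server": 0.010, "db_server": 0.020}

    total_response = 0.0
    for tier, visits in visits_per_session.items():
        arrival_rate = sessions_per_sec * visits              # requests/second at this tier
        utilization = arrival_rate * service_time_sec[tier]   # utilization law
        if utilization >= 1.0:
            print(f"{tier}: saturated (utilization {utilization:.2f})")
            continue
        residence = visits * service_time_sec[tier] / (1.0 - utilization)
        total_response += residence
        print(f"{tier}: utilization {utilization:.0%}, residence {residence * 1000:.1f} ms")
    print(f"estimated Web site time per session: {total_response * 1000:.1f} ms")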
SOFTWARE PERFORMANCE ENGINEERING—MANAGING THE PROCESS
Many Web-based systems must meet a set of performance objectives. In general, performance is an indicator of how well a software-intensive system or component meets a set of requirements for timeliness. Timeliness can be measured in terms of response time, the time required to respond to some request, and throughput, which is an indicator of the number of requests that can be processed by the system in some specified time interval. Scalability is another important dimension of such a system. Scalability is a measure of the system's ability to continue to meet response time or throughput requirements as the demand for the system increases. Choosing the right server, network, software, and so on for the job means nothing without proper performance management through the development life cycle. The consequences of performance failures can be significant, from damaged customer relations, to lost income, to overall project failure and even loss of life. Therefore, it is important to address performance issues throughout the life cycle. Managing performance can be done reactively or proactively. The reactive approach addresses performance issues by using a bigger server, dealing with performance only after the system has been architected, designed, and implemented, and waiting until there is actually something to measure before addressing the problems. Proactive approaches to managing performance include tracking and communicating performance issues throughout the life cycle, developing a process for identifying performance jeopardy, and training team members in performance processes.
[Image not available in this electronic edition.]
Figure 9: A queuing model of the different service layers from Figure 8. From Scaling for E-Business (p. 289), by D. A. Menasce and V. Almeida, 2000, Upper Saddle River, NJ: Prentice Hall.
Definition of Software Performance Engineering
Software performance engineering (SPE) is a proactive approach to managing performance. SPE is a systematic, quantitative approach to constructing software-intensive systems that meet performance objectives (Smith & Williams, 2002). SPE is an engineering approach to performance, which avoids the "fix it later" mentality in
designing real-time systems. The essence of SPE is using models to predict and evaluate system tradeoffs in software functions, size of hardware resources, and other resource requirements.
The Software Performance Engineering Process
The SPE process consists of the following steps (Smith & Williams, 2002):
- Assess performance risk. What is the performance risk of the project? The answer to this question helps determine the amount of effort to put into the SPE process.
- Identify critical "use cases." Use cases are an informal, user-friendly approach for gathering functional requirements for a system (Booch, Rumbaugh, & Jacobsen, 1999). Critical use cases are those use cases that are important from a responsiveness point of view to the user.
- Select key performance scenarios. These represent those functions that will be executed frequently or are critical to the overall performance of the system.
- Establish performance objectives. In this step, the system performance objectives and workload estimates are developed for the critical use cases.
- Construct performance models. A relevant model is developed for the system to measure performance. This can be an execution graph, a rate monotonic resource model (Rate Monotonic Analysis, 1997), or another relevant model to measure performance.
- Determine software resource requirements. This step captures the computational needs from a software perspective (e.g., number of messages processed or sent).
- Add computer resource requirements. This step maps the software resource requirements onto the amount of service required from key system devices in the execution environment (e.g., a server processor, fast memory, hard drive, router).
- Evaluate the models. If there is a problem, a decision must be made to modify the product concept or revise the performance objectives.
- Verify and validate the models. Periodically take steps to make sure the models accurately reflect what the system is really doing.
SPE Assessment Requirements
The information generally required for an SPE assessment for network systems is as follows:
- Workload—the expected use of the system and applicable performance scenarios. It is important to choose performance scenarios that provide the system with the worst case data rates. These worst case scenarios can be developed by interfacing with the users and system engineers.
- Performance objectives. This represents the quantitative criteria for evaluating performance. Examples include server CPU utilization, memory utilization, and I/O bandwidth. The choice, in part, depends on the customer requirements.
- Software characteristics. This describes the processing steps for each of the performance scenarios and the order of the processing steps. One must have accurate software characteristics for this to be meaningful. This data can come from various sources such as early prototype systems using similar algorithm streams. Algorithm description documents, if available, also detail the algorithmic requirements for each of the functions in the system. From this, a discrete event simulation can be developed to model the execution of the algorithms.
- Execution environment. This describes the platform on which the proposed system will execute. An accurate representation of the hardware platform can come from a simulator that models the I/O peripherals of the device as well as some of the core features. The other hardware components can be simulated as necessary.
- Resource requirements. This provides an estimate of the amount of service required for the key components of the system. Key components can include CPU, memory, and I/O bandwidth for each of the software functions.
- Processing overhead. This allows the mapping of software resources onto hardware or other device resources. The processing overhead is usually obtained by benchmarking typical functions (search engine, order processing, etc.) for each of the main performance scenarios. One example of a flow used to develop this data is shown in Figure 10.
The model is only as accurate as the data used to develop it. For example, key factors that influence the processor throughput metric are as follows:
- The quantity of algorithms to implement,
- Elemental operation costs (measured in processor cycles),
- Sustained throughput to peak throughput efficiency, and
- Processor family speed-up.
The quantity of algorithms to perform is derived from a straightforward measurement of the number of mathematical operations required by the functions in the algorithm stream. The number of data points to be processed is also included in this measurement. The elemental operation costs measure the number of processor cycles required to perform typical functions. The sustained throughput to peak throughput efficiency factor derates the "marketing" processor throughput number to something achievable over the sustained period of time a real-world code stream requires. This factor allows for processor stalls and resource conflicts encountered in operation. The processor family speed-up factor can be used to adjust data gained from benchmarking on a previous-generation processor.
[Figure 10: Performance metric calculation flow, in which systems engineering tasks (algorithm document, algorithm prototyping, algorithm sizing spreadsheet, discrete event simulation) and software engineering tasks produce algorithm metrics and real-time adjustment factors that feed the final performance metrics.]
Key factors that influence the memory utilization metric are as follows:
- Size and quantity of intermediate data products to be stored,
- Dynamic nature of memory usage,
- Bytes/data product,
- Bytes/instruction, and
- Size and quantity of input and output buffers based on worst case system scenarios (workloads).
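The sketch below shows one way the processor throughput factors listed above might be combined into a rough sizing estimate. The operation counts, cycle costs, derating factors, and request rate are placeholders, not benchmark data, and a real assessment would derive them from the measurements and prototypes just described.

    # Sketch: derive a required processor budget from the throughput factors above.
    # Every input is a hypothetical assumption.
    operations_per_request = 2.0e6        # quantity of algorithms to implement
    cycles_per_operation = 1.2            # elemental operation cost
    sustained_to_peak = 0.6               # derates the "marketing" throughput number
    processor_speedup = 1.5               # newer processor family vs. the benchmarked one
    requests_per_sec = 400

    cycles_per_request = operations_per_request * cycles_per_operation
    required_hz = cycles_per_request * requests_per_sec / sustained_to_peak
    required_hz /= processor_speedup
    print(f"required processor budget: {required_hz / 1e9:.2f} GHz (sustained)")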
SPE for Web-Based Applications
For Web-based applications, SPE calls for using deliberately simple models of software processing. These should be easy to build and solve (like the ones discussed in the previous section) and provide the right feedback on whether the proposed software architecture will be sufficient to meet the performance goals. The SPE approach relies on execution models, which then get mapped to system resources. The goal is to provide enough data to make the capacity planner comfortable with the following (Smith & Williams, 2002, p. 132); a simple sketch of such a model follows the list:
- The placement of objects on the right processors and processes,
- The frequency of communication among the objects,
- The forms of synchronization primitives required for each communication between the software objects,
- The amount of data passed during each communication, and
- The amount of processing performed by each software object.
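The following is a minimal, hypothetical sketch of such a simple execution model. The objects, message counts, per-message CPU costs, and message sizes are invented for illustration and would normally come from the key performance scenarios identified earlier in the SPE process.

    # Sketch: a deliberately simple software execution model for one performance scenario.
    # Each step: (object, messages sent, CPU seconds per message, bytes per message).
    # All values are illustrative assumptions.
    scenario_steps = [
        ("browse_catalog", 3, 0.002, 2_000),
        ("search_engine",  1, 0.015, 4_000),
        ("render_page",    1, 0.005, 30_000),
    ]
    cpu_demand = sum(n * cpu for _, n, cpu, _ in scenario_steps)      # seconds per scenario
    bytes_moved = sum(n * size for _, n, _, size in scenario_steps)   # bytes per scenario
    print(f"CPU demand per scenario : {cpu_demand * 1000:.1f} ms")
    print(f"data passed per scenario: {bytes_moved / 1000:.1f} KB")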
AVAILABILITY MODELING AND PLANNING
When online systems are down, productivity and revenue are lost. The magnitude of the loss varies, depending on the type of system and how it is used, but the numbers can be in the hundreds of thousands of dollars per hour, or even per minute. Online systems operating 24 hours a day, seven days a week, 365 days per year for international business opportunities have become a key mechanism for delivering services and products to customers. Downtime is just as serious as a store being closed in the middle of a business week. Customers unable to access online systems are likely to take their business elsewhere, resulting in long-term revenue loss as well. Although all downtime (also referred to as outage) is a potential loss of productivity or revenue (or both), some downtime is unavoidable. There must be "planned" downtime for system and application upgrades, new hardware
and software, and backups. It is the “unplanned” downtime which must be minimized. Unplanned downtime occurs because of hardware failures, software crashes, computer viruses, and hacker attacks. The solution to online system availability is not adding more hardware and other system resources. The system platform accounts for about 20% of the total system availability. The other 80% comes from a combination of people, process, and product (McDougall, 1999, p. 2).
Process
The industry has developed many processes over the last couple of decades to increase overall system availability. Some of the well-proven processes include system installation standards, change control, release upgrade processes, and backup and recovery testing. Just as standard software development processes are in place to allow quicker development of quality software, so are good processes and techniques important for maintaining online systems.
People
Availability should be considered an attitude instead of a priority. The staff responsible for maintaining the system should be trained properly to deal with backup and recovery techniques, as well as the standard processes for conducting business.
Product
The system platform itself contributes to overall downtime, but not as much as the process and people factors described earlier. The system includes the hardware, the network infrastructure, the network operating system, other required software, and hardware support. The investments made in the product (hardware and software) will add to the overall availability, but the right system configurations must be tested to ensure reliability of all the system parts working together. Given the large number of different configurations, proper planning for this form of system testing is paramount to prevent unanticipated system surprises.
Availability Specification
Before beginning to address system availability, there must exist a set of requirements that define the key system goals for availability. An example availability specification may have the following statements:
- "During the peak hours of the system, 90% of queries will be completed in less than 1 second on average, with up to 100 users online. No queries will exceed three seconds" (modified from Cockcroft & Walker, 1999, p. 31).
- "The order processing system for the XYZ online bookstore will be capable of sustaining 3,000 transactions per second during normal business hours."
Measuring Availability
As with any kind of process improvement effort, there must be a way of knowing whether the investment in
process or product improvement is working. Achieving additional availability can be extremely expensive, so knowing when to stop investing is important. Availability is usually measured in percentage terms. "Five nines" availability (99.999%) is an often-cited goal but can be difficult to achieve. Before beginning to address system availability, there must exist a set of requirements that define the key system attributes:
- Coverage—the normal business hours of the system. If the system is reporting stock exchange transactions, for example, the normal business hours will be a certain part of the business day and week. Availability should always be measured in terms of the coverage time only.
- Outage—the number of minutes per month that the system can be down.
Availability is then measured as (coverage - outage) / coverage × 100. Availability can also be measured in terms of mean time between failures (MTBF—how long the system is up) and mean time to repair (MTTR—how long the system is down):
Availability = MTBF / (MTBF + MTTR)
As can be seen from this availability equation, as the MTTR approaches zero, the availability approaches 100%. As the MTBF gets larger, the MTTR has less impact on availability. Availability must be measured at several levels, including the hardware platform level, the network level, and the application level—there can be failures in any one of these layers that can bring down the entire system.
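A small worked sketch of these two availability calculations follows. The coverage, outage, MTBF, and MTTR values are invented for illustration and would come from the availability specification and from operational records in practice.

    # Sketch: availability from coverage/outage and from MTBF/MTTR.
    # Inputs are assumed example values.
    coverage_minutes = 22 * 60 * 30          # e.g., 22 hours/day over a 30-day month
    outage_minutes = 40                      # unplanned downtime within coverage
    availability_pct = (coverage_minutes - outage_minutes) / coverage_minutes * 100
    print(f"coverage-based availability: {availability_pct:.3f}%")

    mtbf_hours = 700.0                       # mean time between failures
    mttr_hours = 0.5                         # mean time to repair
    availability = mtbf_hours / (mtbf_hours + mttr_hours)
    print(f"MTBF/MTTR availability     : {availability:.5f} ({availability * 100:.3f}%)")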
Design Principles for System Availability
There are several general design principles that should be used when designing systems to meet availability goals. These include but are not limited to the following (Zuberi & Mudarris, 2001):
- Select reliable hardware,
- Use mature and robust software,
- Invest in failure isolation,
- Test everything,
- Establish service level agreements,
- Maintain tight security,
- Eliminate single points of failure, and
- Use availability modeling techniques.
There are approaches to modeling systems to produce availability estimates, but this is a difficult process because many of the system components that must be modeled are interrelated. The common modeling approaches used to determine system availability include Markov models and Monte Carlo simulation techniques (Gilks, Richardson, & Spiegelhalter, 1995). These approaches model the system as a series of known system states and state transitions. Additional information, such as the time taken to go between system states and the probability of moving between system states, is included in the model to improve accuracy.
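As a hedged illustration of the simulation approach, the sketch below estimates availability for a two-state (up/down) model by Monte Carlo sampling of exponentially distributed failure and repair times. The rates are assumptions, and a real availability model would include many more states and transitions.

    # Sketch: Monte Carlo availability estimate for a simple up/down state model.
    # Failure and repair rates are hypothetical.
    import random

    random.seed(1)
    mean_time_to_fail_h = 700.0
    mean_time_to_repair_h = 0.5
    up_time = down_time = 0.0
    for _ in range(100_000):                 # simulate many failure/repair cycles
        up_time += random.expovariate(1.0 / mean_time_to_fail_h)
        down_time += random.expovariate(1.0 / mean_time_to_repair_h)
    print(f"simulated availability: {up_time / (up_time + down_time):.5f}")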
TOOLS TO SUPPORT CAPACITY PLANNING
Capacity-planning software was created to help network executives and capacity planners plan for long-term growth and implement new initiatives. These same planning tools can also be used to help companies make their existing assets stretch further. A common thought with capacity planning is "I want to buy more," but this is a flawed understanding; capacity planning is often not about buying more. It is no surprise that as the importance of capacity planning has grown, commercial tools have become available to help manage the process, collect data, and visualize it in useful ways. When deciding on a capacity planning tool, it is important to understand how the tool will be used in the capacity planning process. Selecting the right tool for the job remains just as important as it always has been.
Examples of Capacity Planning Tools
Sun Microsystems' Resource Management Suite focuses on capacity planning for storage systems. Some of the common storage-related problems are as follows:
- The inability of companies to keep pace with increasing storage demands;
- Storage that is too expensive and complex to manage effectively;
- The inability to accurately plan, budget, and justify future storage needs; and
- The lack of data to support new storage architectures.
When planning for storage capacity, the common questions to answer are as follows:
- How much storage do I have today?
- How can I prevent storage-related crashes?
- How can I predict future capacity needs accurately?
The tool provides the necessary infrastructure to forecast storage growth, manage heterogeneous storage, create and enforce storage policies, track usage, and create plans for future upgrades. Trend graphs are available to characterize usage patterns over selected time intervals.
Compaq's Enterprise Capacity and Performance Planner is a modeling tool used to predict the performance of both stand-alone and clustered systems. The tool is used to determine system performance levels for various workloads and system configurations. The tool also collects and analyzes data collected on the various platforms. Performance predictions are made with the tool using analytic queuing network models. A graphical component allows for "what if" analysis. A baseline model is developed from the data collected from the existing system and becomes the starting point for assessing the impact of changes to the system configuration or user-estimated workloads.
The "what if" analysis forms the foundation for the overall capacity planning effort. Performance statistics are provided in detailed reports that encompass the main capacity planning performance statistics:
- Resource utilization,
- Response time,
- Throughput,
- Queue length, and
- Service time.
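These statistics are related by simple operational relationships, which is what makes "what if" projections possible. The sketch below applies the utilization law (utilization = throughput × service time) and Little's law (queue length = throughput × response time) to an assumed baseline measurement and then scales the workload to a projected level; all numbers are illustrative assumptions, and the response-time step uses a simple single-open-queue approximation.

    # Sketch: relate the report statistics for a baseline and a projected workload.
    # All inputs are assumed example values.
    def report(throughput_rps, service_time_s):
        utilization = throughput_rps * service_time_s          # utilization law
        if utilization >= 1.0:
            return utilization, None, None
        response = service_time_s / (1.0 - utilization)        # single open queue estimate
        queue_length = throughput_rps * response               # Little's law
        return utilization, response, queue_length

    baseline = report(throughput_rps=50, service_time_s=0.012)
    projected = report(throughput_rps=75, service_time_s=0.012)   # 50% growth "what if"
    for label, (u, r, n) in (("baseline", baseline), ("projected", projected)):
        if r is None:
            print(f"{label}: saturated at utilization {u:.2f}")
        else:
            print(f"{label}: utilization {u:.0%}, response {r * 1000:.1f} ms, queue {n:.2f}")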
CAPACITY MANAGEMENT—PERFORMANCE AND STRESS TESTING OF WEB-BASED SYSTEMS
The only true way to assess the performance of a product is to try it out. The same holds true for a Web-based system. Performance testing is done to gain an understanding of the specific services being offered under specific and actual workload conditions. The capacity planning team should work in cooperation with other groups—development, production, and management staff—to plan, execute, and follow up on the results of this process.
Types of Performance Testing
There are several types of performance testing, including load testing, which measures the performance of the system under the load conditions called out in the SLA. The load can be actual or simulated, depending on the system and process used. Spike testing is another useful performance test that tests a Web service under specific circumstances and, just as the name implies, subjects the system to a heavy spike of traffic to determine how the system will react under this specific load condition. Probably the most important type of performance testing is stress testing. Stress testing attempts to expose and help address the following important concerns:
- Stability issues—unexpected downtime and poorly written Web objects,
- Performance problems—locating bottlenecks and whether the application will handle peak loads, and
- Capacity planning—how many machines are needed to support usage.
Stress testing, if performed correctly, can help find and fix problems to avoid financial losses as well as ensure customer satisfaction. Stress testing is effective at locating the following bottlenecks:
- Memory,
- Processor,
- Network,
- Hard disk, and
- COM (Component Object Model) components.
Stress Testing Model for Web-Based Systems
A traditional stress test model is shown in Figure 11.
Figure 11: A traditional stress test model for Web-based systems.
The Web server is the system under test and is connected to a number of stress clients that simulate the workload by performing certain tasks, requests, operations, and so on from the Web server. The test is controlled by a controller stress client that directs the other stress clients as to the workload patterns while the test runs and collects the information needed to analyze the results of the testing process. The basic approach to stress testing involves the following steps:
- Confirm that the application functions under load,
- Find the maximum requests per second the application can handle,
- Determine the maximum number of concurrent connections the application can handle, and
- Test the application with a predefined number of unique users.
Based on the information obtained from the testing process, certain types of performance can be calculated. For example, the following formula can be used to measure performance to aid in future capacity planning:
MHz cost = N × S × avg(PT) / avg(Rps),
where N = the number of processors, S = the speed of the processors (in MHz), PT = the percentage of total processor time (a measure taken from the actual system servers), and Rps = requests per second, reports view (this comes from an analysis of the results of a test and is an Active Server Pages [ASP]-based measurement). As an example, consider a test using a four-processor (500-MHz) Web server that achieved 750 requests per second, with the processors 80% utilized. This works out to
4 processors × 500 MHz = 2 GHz
80% processor utilization: 2 GHz × 0.80 = 1.6 GHz used
750 ASP (Active Server Page) requests per second
1.6 GHz / 750 requests per second ≈ 2.1 million cycles per ASP request
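A short sketch of the same calculation follows so the formula can be reapplied to other test runs. The inputs simply mirror the worked example above; because 1 MHz sustained for one second is one million cycles, the MHz cost is numerically the same as millions of cycles per request.

    # Sketch: MHz cost of a request, per the formula above.
    # Inputs mirror the worked example (four 500-MHz processors, 80% busy, 750 rps).
    def mhz_cost(num_processors, processor_mhz, avg_processor_time_pct, avg_requests_per_sec):
        return (num_processors * processor_mhz
                * (avg_processor_time_pct / 100.0) / avg_requests_per_sec)

    cost = mhz_cost(num_processors=4, processor_mhz=500,
                    avg_processor_time_pct=80, avg_requests_per_sec=750)
    print(f"MHz cost: about {cost:.2f} (roughly {cost:.2f} million cycles per ASP request)")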
There are commercial tools available to aid the capacity planner in performing different types of performance testing, including stress testing. One such tool from Microsoft, called Application Center Test (ACT), automates the process of executing and analyzing stress tests for Web-based systems.
The explosive growth of Internet sites and e-commerce has presented new challenges in managing performance and capacity. Also, the ability to estimate the future performance of a large and complex distributed software system at design time can significantly reduce overall software cost and risk. Capacity planning is the disciplined approach of managing system resources and defines the process of predicting when future load levels will saturate the system and of determining the most cost-effective way of delaying system saturation as much as possible.
GLOSSARY
Capacity planning The process of predicting when future load levels of an Internet-based system will saturate the system and the steps required to determine a cost-effective way of delaying system saturation as much as possible.
Client–server A communication model between computer systems in which one system or program acts as the client and makes requests for service from another system or program, called the server, that fulfills the request.
E-commerce The buying and selling of goods and services on the Internet.
Markov model A model consisting of a finite number of states and a probability distribution relating to those states, which governs the transitions among those states.
Network time The time spent for a packet of information to travel through a network.
Performance management An integral part of the capacity planning process in which system performance is managed proactively to ensure optimum efficiency of all computer system components so that users receive adequate response time.
Service level agreement An agreement between the customer and the system developer that outlines the acceptable levels of performance and capacity requirements for the system.
Service time The time spent getting service from the central processing unit, storage device, and so on.
Software performance engineering A proactive, systematic, and quantitative approach to constructing software-intensive systems that meet performance objectives.
Stress testing The attempt to create a testing environment that is more demanding of the application than it would experience under normal operating conditions.
Unified modeling language (UML) A general purpose notational language used to specify and visualize software-based systems.
Web site time The time that a request for information spends being processed at a Web site.
CROSS REFERENCES
See Client/Server Computing; E-business ROI Simulations; Electronic Commerce and Electronic Business; Return on Investment Analysis for E-business Projects; Risk Management in Internet-Based Software Projects; Web Services.
REFERENCES
Booch, G., Rumbaugh, J., & Jacobsen, I. (1999). The unified modeling language user guide. Reading, MA: Addison-Wesley.
Cockcroft, A., & Walker, B. (2001). Capacity planning for Internet services. Santa Clara, CA: Sun Microsystems Press.
Gilks, W. R., Richardson, S., & Spiegelhalter, D. J. (Eds.). (1995). Markov chain Monte Carlo in practice. Boca Raton, FL: CRC Press.
McDougall, R. (1999, October). Availability—what it means, why it's important, and how to improve it. Sun BluePrints Online. Retrieved August 2002 from http://www.sun.com/solutions/blueprints/1099/availability.pdf
Menasce, D. A., & Almeida, V. (2000). Scaling for e-business: Technologies, models, performance, and capacity planning. Upper Saddle River, NJ: Prentice Hall.
Menasce, D. A., & Almeida, V. (2002). Capacity planning for Web services: Metrics, models, and methods. Upper Saddle River, NJ: Prentice Hall.
Rate monotonic analysis keeps real time systems on schedule. (1997, September). Electronic Design News, 65.
Smith, C. U., & Williams, L. G. (2002). Performance solutions: A practical guide to creating responsive, scalable software. Reading, MA: Addison-Wesley.
Venkatachalam, M., & Syed, I. (n.d.). Web site optimization through Web log analysis. Retrieved July 2002 from http://icl.pju.edu.cn/yujs/papers/pdf/HMMS.pdf
Zuberi, A., & Mudarris, A. (2001, March). Emerging trends for high availability. Presented at the Chicago Uniforum Chapter. Retrieved from http://www.uniforum.chi.il.us/slides/highavail
Cascading Style Sheets (CSS)
Fred Condo, California State University, Chico
Introduction; Structure and Presentation in HTML; Benefits of CSS; CSS Standards; Document Validation; Media Types; CSS Box Model; Inheritance and the Cascade; Inheritance; Cascade; CSS Selectors; CSS1 Selectors; CSS2 Selectors; Grouping; CSS Properties; Value Types, Units, and Representations; Practical CSS in Action; Linking Styles to Web Pages; W3C Core Styles; Alternate Style Sheets; Examples; Best Practices; Recommendations of Lie & Bos (1999a); Other Best Practices; Evolution of CSS; Glossary; Cross References; References
INTRODUCTION
Structure and Presentation in HTML
Hypertext markup language (HTML) provides for embedding structural information into text files. User agents (Web browsers) contain a default style sheet that controls the display of HTML. In the mid-1990s, the makers of Web browsers, principally Netscape and Microsoft, introduced presentational extensions into HTML, without regard to the wisdom of such extensions. These extensions gave Web page producers the ability to override the default browser styles. Examples of these extensions include the font element for specifying type styles and colors, and the center element for centering text and graphics horizontally. The result of using these presentational extensions is a commingling of presentation information with the structural markup and content of Web pages, or, worse, a substitution of presentational markup for structural markup. Such a result may not be intuitively disadvantageous, but there are several problems that arise from the practice of mixing content with presentational directives. For example, there is no way to express the idea that all level-one headings (h1) throughout a Web site should be centered, set in a sans serif font, in red. Instead, the same directives must be specified along with every instance of an h1 tag. The result is that every page in the site carries redundant presentational information, which the server must transmit over the network each time. Maintaining the site becomes error prone and inefficient. If the designer decides to change the presentation of h1 elements to a serif typeface, for example, then someone must edit every page of the Web site. A person editing many pages is likely to miss a few pages or a few instances of the h1 tag; hence, the appearance and consistency of the site will degrade. Because a real site will have many more style rules than just this one example, the complexity and risk
of error in maintenance are compounded. In the worst case, a naïve Web page producer may omit the h1 tags entirely. Such a practice makes it impossible for user agents to detect headlines, which has its most serious impact on vision-impaired users, who depend on user agents to list headlines by voice. Consider an HTML page that uses font and b tags for headings: <font face="Arial" color="#F00"><b>Text of heading</b></font>. This code devotes more text to presentation than to content, and the text concerning presentation has to appear with every instance of a heading. With style sheets, the HTML code becomes much simpler: <h1>Text of heading</h1>. The HTML code is easier to understand, and it carries with it the important information that the text is a major heading. In the corresponding style sheet, this code appears (but only once for the entire Web site): h1 {font-weight: bold; font-size: 125%; font-family: Arial, sans-serif; color:#F00;}. Cascading style sheets enable HTML authors to change the default specifications in a style sheet rather than in HTML code, so that new style rules may be applied to all instances of a specified HTML context throughout a site. In addition, cascading style sheets help preserve the structural context of HTML that indicates how pieces of content relate to one another. For instance, it is possible to grasp the structural difference between content within level-one heading tags and content within level-two heading tags. Level-one headers take precedence over level-two headers. It is not possible to grasp with certainty the structural differences between content wrapped in one set of presentation tags and content wrapped in another set of presentation tags by evaluating the HTML code. For instance, HTML authors will often forgo the list and list item elements in order to control the spacing of list items and the images used as bullets. This usually results in HTML code that cannot be recognized as describing a list. As well
as helping authors maintain a separation of structure and presentation, CSS affords other benefits that were never available under HTML extensions.
Benefits of CSS
Separation of Presentational Information From Structure and Content
Much of the trouble associated with the embedded presentational directives of HTML is attributable to the commingling of content with presentational information. Because the only way to associate presentational HTML tags and attributes with their targets is to place them in close proximity in the same file, there is no way to disentangle them. CSS provides a mechanism whereby the association of content and presentation can exist without physical proximity. Designers can, as a result, physically and logically separate the content from its associated presentational styles. The most important benefits of CSS arise from this partition of content and style.
Centralization of Presentational Information
Because Web page producers no longer need to reduplicate style information for every page, they can collect general style rules into a single file or small group of files. HTML 4.01 and XHTML 1.0 provide mechanisms for associating a page with one or more style sheets. Once that link exists, it is no longer necessary to edit each Web page to effect style changes. Instead, the designer edits the central style sheet, and the change immediately takes effect throughout all pages that are linked to the style sheet. The process of changing a central style sheet is vastly simpler and less error prone than performing a complex set of edits on every page in a Web site.
Adaptability to Multiple Media
It is no longer safe to presume that users access the Web solely by means of a graphical Web browser such as Mozilla, Opera, or Internet Explorer. Web-access devices, like human users of the Web, are more diverse than ever. The Web needs to adapt to presentation via handheld computers, cell phones, computer-generated speech, and printers.
Designer Control Superior to Presentational HTML Extensions
Designers, too, need control of Web presentation. Their demands motivated early browser makers to pollute (as it turned out) HTML with presentational controls. For example, the font element enables Web producers to influence the face, size, and color of type. The drawback is that such presentation control is intermingled with the rest of the Web page such that changing the presentation requires tedious and error-prone editing. Moreover, presentation controls embedded in the page increase disk space requirements and network transmission times and obscure the structural relationships between the HTML elements of a page.
User Control of Display
Users of the Web, too, need control of presentation. This idea makes no sense in traditional print media, which are
permanent, rigid, and fixed at the time of printing. Users need to adjust the Web’s presentation to accommodate their special needs. For example, some users need to use large print or accommodate a deficiency in color perception. Such users benefit if they can override the designer’s presentation styles; they would be excluded from the Web without such a capability.
Goes Beyond HTML
Throughout this chapter, reference is made to HTML elements and attributes, because HTML styling is the principal application for CSS. CSS, however, is not limited to HTML. With the appropriate software and hardware, CSS can style any markup language. For an example of a non-HTML style sheet, see the style sheet for RDF at W3C, http://www.w3.org/2000/08/w3c-synd/style.css.
CSS Standards
The World Wide Web Consortium has promulgated two recommendations for CSS. These are cascading style sheets level 1 (CSS1; Lie & Bos, 1999a), and cascading style sheets level 2 (CSS2; World Wide Web Consortium, 1998). CSS2 adds to CSS1 support for different media types, element positioning, and tables and for a richer set of selectors.
DOCUMENT VALIDATION
For a browser such as Mozilla or Internet Explorer to apply styles to the appropriate part of an HTML document, it must be able to analyze unambiguously both the HTML document and the style sheet. This requirement imposes a modest burden on authors of Web pages: Their pages must follow the actual rules of the HTML specification, and the style sheets must conform to the CSS specifications. Fortunately, the computer can test pages and style sheets for conformance to the standards, and it can even correct some HTML errors automatically. Using a code generator rather than coding "by hand" in a general text editor also helps prevent errors. Both the World Wide Web Consortium (2001) and Quinn (2002) provide online HTML validators. The latter will recursively or batch validate up to 100 pages at a time. Both validators are Web front ends for Clark's (n.d.) nsgmls software, which runs on the Unix or Windows command line. The World Wide Web Consortium (2002a) provides a CSS validator for CSS levels 1 and 2. Finally, HTML Tidy (n.d.), software for many platforms, automatically cleans up common errors in HTML and can convert presentational HTML into embedded styles. Using such tools helps ensure that standards-conformant browsers will render pages as expected.
MEDIA TYPES
Media types apply styles according to the kind of display device on which a user is viewing (or hearing) your page. Media types are part of CSS2. There are four dimensions that characterize media types. Although many style capabilities work across media, some styles are specific to a particular medium. For example, there are no font styles for audio media, nor is there stereo separation for print,
but both print and screen media support background colors. Each of the discrete points on the four dimensions labels a media group. The four dimensions that characterize media types are as follows: continuous or paged; visual, aural, or tactile; grid or bitmap; interactive or static. Continuous media consist of uninterrupted streams, such as a scrolling window. Paged media have discrete segments, such as paper sheets for print, or stepped screens on a handheld system. Visual, aural, and tactile media are, respectively, seen, heard, and felt. Grid media consist of a grid or array of discrete character positions. Examples include traditional computer terminals and character printers. Bitmap media can freely represent graphics, letterforms, and glyphs, like the screen of a modern personal computer. Interactive media permit the user to interact with the display, as in traditional screen-based Web browsing. Static media do not allow interaction. Printed pages are an example of a static medium. To associate a media type with a group of styles, you can create a separate style sheet file for each media type, or you can group them in a construct such as the following: @media screen {...}. All the styles between the braces would apply to the screen medium. When creating separate style sheet files, specify the media type in the link or style tag in the HTML document with a media attribute, such as media="screen". There are nine media types in CSS2, plus a tenth type, all, which makes no distinction based on medium. Many style sheets comprise a base sheet associated with all media types, with media-specific style sheets that override styles or provide additional styles. The nine media types are, alphabetically, as follows: aural, braille, emboss, handheld, print, projection, screen, tty, and tv. CSS2.1 (World Wide Web Consortium, 2002b) will drop the aural media type and add speech and will split the aural media group into speech and audio. In the sections below, each media type is briefly described and is characterized according to media group on the four dimensions mentioned above (using the CSS2.1 designations). If a dimension is not listed for a particular media type, the media type belongs to no group on that dimension. Some media types may occupy more than one group on a dimension, depending on the context or particular display device.
Speech. Speech styles apply to speech synthesis. Users of speech synthesizers include the visually impaired, those who must not divert their visual attention, such as the drivers of automobiles, and those who cannot use written words. In terms of the four dimensions, aural media are continuous, speech, and interactive or static.
Braille. Braille is for dynamic braille tactile devices, not for braille embossed in a fixed medium. Braille is continuous, tactile, grid, and interactive or static. Emboss. Emboss is for braille printers. Emboss is paged, tactile, grid, and static.
Handheld. Handheld is for handheld devices. The CSS2 specification characterizes such devices as having small,
monochrome screens and limited bandwidth. Already, the emergence of color handheld devices has overtaken the standard. No doubt some future revision of the standard will catch up with technological changes. Handheld is continuous or paged, visual, audio, or speech, grid or bitmap, and interactive or static.
Print. Print is for printing on traditional opaque media, most commonly paper. Print is paged, visual, bitmap, and static. Projection. Projection
is for paged presentations, whether on a computer projector or printed onto transparencies. It is not for the general case where a standard computer screen is enlarged via a projector (screen is the media type in that case). Projection is paged, visual, bitmap, and interactive.
Screen. Screen is for common computer screens. Screen is continuous, visual or audio, bitmap, and interactive or static.
Tty. Tty is for devices, such as teletypes, terminals, or some handheld displays, that use a fixed-width character grid. The pixel unit is not allowed for this media type, as such displays are not pixel addressable. Tty is continuous, visual, grid, and interactive or static.
Tv. Tv is for low-resolution displays with sound available, but with weak or clumsy scrolling controls. Tv is continuous or paged, visual or audio, bitmap, and interactive or static.
CSS BOX MODEL Every element displayed on a visual medium has an associated box with various parts that influence the display. A diagram of the box model appears in Figure 1. The box model defines the following regions: content, padding, border, and margin. Only the content region is not optional. Each region defines a set of properties. Each region is described below, working from the center outward. The content is the region where the element is displayed. Its dimensions correspond to the width and height properties of the element. This means that the box may be larger than the content-width and content-height dimensions. The overall height or width of the box is the sum of the height or width property and the adjacent padding, border, and margin. The padding is the region surrounding the content. No foreground text or graphic appears in the padding, but background properties of the element, such as color, do appear in the padding region. The border is the region where border properties are drawn. Some browsers may incorrectly continue the background into the background of the border, so beware. The specification says that the border’s appearance shall depend solely on the element’s border properties. Even though you might think of a border as a line of zero width, in the box model, it may have any arbitrary width. The margin is a transparent region outside the border region. It separates the visible parts of an element’s box
from those of any adjoining boxes. Because the margin is transparent, the background of any enclosing element shows through. Because HTML elements may contain other elements (as the body element contains a sequence of p elements), each box is nested within the box of its parent element. The entire box, including the padding, border, and margin, of the inner box, is inside the content region of the outer box. Thus, CSS boxes resemble nested Russian dolls. Boxes do not overlap, with two exceptions. First, a box whose position has been altered through positioning properties may overlap other boxes on the page. Second, adjacent vertical margins, such as those of two paragraphs in sequence, collapse. You see only the larger of the two margins. Without the latter exception, the space between blocks of text would be excessive and tedious to control.
Figure 1: CSS box model. The diagram shows the content region, whose dimensions are the element's width and height properties, surrounded in turn by the padding, the border, and the margin; the inner edge bounds the content and the outer edge bounds the margin.
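A quick arithmetic check of these regions may help; the specific values in this hypothetical rule are arbitrary:
div { width: 200px; padding: 10px; border: 5px solid black; margin: 20px; }
Such a box occupies 200 + (2 × 10) + (2 × 5) + (2 × 20) = 270 pixels of horizontal space, even though the width property says 200px.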
INHERITANCE AND THE CASCADE In any given instance, several style rules may be “competing” to apply to a particular element of a Web page. This situation may seem chaotic, but allowing multiple style sheets to participate in styling Web pages is a major feature of CSS. It enables designers to create modular style sheets that are easy to maintain. It enables designers to override general styles for special cases. And it enables users to override designers so that the Web may accommodate special needs (for large type, for example). A typical use of the cascade is to have two site-wide style sheets to which each HTML page has links in its head section. The first link refers to the style sheet for all media, and the second link refers to the style sheet for print. The print style sheet contains only overrides of the all-media style sheet. The remaining styles “flow” into the print style sheet (the term “cascading” is meant in analogy to a series
of waterfalls). A page in the site that needs unique styles may have a style element in its head section. That style tag must appear after the link tags that refer to the overall style sheets for the site. By appearing last, the style rules in the style element can override the site-wide styles. Finally, a context requiring unique treatment may have a tag with a style attribute. The style attribute (not tag) is the most specific context of all and provides a local override of all other styles. Figure 2 shows a (contrived) HTML page that contains links to style sheets for multiple media, a style tag, and a style attribute. When a browser assigns a value to a property for an element it is rendering, it goes through a process that, in principle, is quite simple. First, if the cascade (see below) yields a value, use that. Next, if the property in question is one that is inherited (see below), use the inherited value. Otherwise, use the initial, or default, value defined in the CSS specification.
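A sketch of a page along the lines of Figure 2 (link, style-tag, and style-attribute contexts in the cascade) follows. The file names and the inline rule are illustrative assumptions; the title, heading text, and the h1 rule are taken from the figure.
<html>
<head>
<title>How Cascading Works</title>
<link rel="stylesheet" type="text/css" media="all" href="site.css">
<link rel="stylesheet" type="text/css" media="print" href="print.css">
<style type="text/css">
h1 { font-family: "Times New Roman", Times, serif; color: yellow; }
</style>
</head>
<body>
<h1 style="color: blue; font-family: Arial, sans-serif;">THIS HEADING1 TEXT IS BLUE ARIAL</h1>
</body>
</html>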
Inheritance Inheritance is based on an analogy to a family tree. Each HTML element that appears within some other element is said to be a child of the enclosing element. For example, in the HTML fragment of Figure 3, the emphasis element is a child of the paragraph element, and the paragraph is the child of the body. For properties that the specification says are inherited, descendents to any arbitrary level of descent (child, grandchild, great-grandchild . . .) inherit the property, unless the cascade overrides it. Elements that are children of the same parent element are called siblings. Some properties are not inherited. This is for the convenience of the style sheet author. For example, it would be inconvenient in most cases if border properties were inherited, so they are not. Consider how tedious it would
be to have to turn off the inherited borders on each element within a paragraph with a border! Because descent and inheritance are ambiguous when elements overlap, overlapping elements are not permitted in HTML. This is one of the reasons you need to validate your HTML code.
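The fragment of Figure 3 (Containment) illustrates these relationships with two sibling paragraphs inside the body; in this minimal rendering, the placement of the em element is an assumption:
<body>
  <p>This is <em>very</em> important.</p>
  <p>I am a sibling of the paragraph above.</p>
</body>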
Cascade Style Sheet Origin Each style sheet has one of three origins: the style sheet’s author, the user (who may have a personal style sheet), or the browser. Except when the !important weight is in play, the author’s styles take precedence over the user’s styles, which take precedence over the default styles of the browser. Browsers may not have !important rules.
Cascading Order The cascade determines which style rule applies to a given situation by means of a four-step procedure. 1. Find all matching declarations from all style sheets involved. If there is but one matching declaration, use it and stop.
2. Otherwise, sort the matching declarations by origin and weight. Author styles override user styles, and user styles override browser styles. Any !important declaration overrides a normal declaration regardless of origin. If both author and user have an !important style, the user wins in CSS2 (the author won in CSS1, but this was an impediment to accessibility). If only one declaration is a clear winner, use it and stop. 3. Otherwise, sort by how specific the selector (see CSS Selectors, below) is. The more specific selectors override the less specific ones. If one style rule has the most specific selector, use it and stop. 4. Otherwise, if there is still a tie, the last rule wins. Apply it.
CSS SELECTORS Every style rule consists of a selector and a set of declarations, each consisting of a property and a value, in this format: selector { property: value; }. There may be any number of declarations between the braces. The selector determines the context to which the declarations apply. Selectors are patterns that match particular contexts in HTML documents. When a selector matches a context and makes it through the cascade, its style declarations apply to the whole context. For example, a selector “body” would match the context of the body element of an HTML document. For this reason, it is common practice to assign default values for the style sheet to the body selector. Through inheritance, those properties propagate to all the descendants of the body element, unless the cascading rules override them. In addition to element names such as body, selectors may use the values of id or class attributes. Further, the specifications define a set of pseudoclass and pseudoelement selectors that behave like class and element selectors. They differ from real classes and elements in that
they arise from a variable context or from user-generated events. For example, the :first-line pseudoelement depends on the width of the browser’s display, and the :visited pseudoclass depends on the user’s browsing history. In addition, selectors may be grouped to apply a given style to several disjoint contexts. Finally, selectors may be arranged in various ways to match very specific contexts. CSS1 defines a basic selector model that supports element, class, id, and descent contexts, including some pseudoelements and pseudoclasses. To this model CSS2 adds additional pseudoelements and pseudoclasses, as well as more precise contexts based on sibling and descent relationships between elements.
CSS1 Selectors
CSS1 defines the following kinds of selector: type selectors, descendant selectors, class selectors, ID selectors, link pseudoclasses, and typographical pseudoelements.
Type Selectors
Type selectors are so designated because they match all elements (tags) of a single type. A more intuitive way to regard such selectors is as redefiners of the appearance of HTML tags. Any HTML element, such as body, may act as a type selector.
body { background-color: #FFFFFF; }
Descendant Selectors
A space-separated list of selectors matches if the context named on the left contains the context named on the right. The sample code below sets emphasized text in red, but only if the emphasized text is in a level-two outline heading.
h2 em { color: #FF0000; }
Class Selectors
A class selector is introduced with a dot and matches the value of an HTML tag's class attribute. Example 1 below applies a border to any element whose class is "note"; example 2 does the same, but only if the element is a paragraph.
Example 1:
.note { border-width: thin; border-style: solid; }
Example 2:
p.note { border-width: thin; border-style: solid; }
ID Selectors
ID selectors work the same way class selectors do, but they match the id attribute. In the style sheet, they are introduced with the # sign rather than a dot. Bear in mind that IDs must be unique in the HTML document; often a class is more convenient.
Example 1:
#note1 { border-width: thin; border-style: solid; }
Example 2:
p#note1 { border-width: thin; border-style: solid; }
Link Pseudoclasses
The link pseudoclasses are :visited and :link. They correspond to links that the user has or has not followed. These pseudoclasses replace the link and vlink attributes of the HTML body element.
a:link { color: #0000FF; }
a:visited { color: #FF00FF; }
Typographical Pseudoelements
The typographical pseudoelements in CSS1 are :first-letter and :first-line. They apply styles to the corresponding parts of an element. In the case of :first-line, it dynamically adapts to changes in the window size or font. The example below creates a drop-cap on a paragraph designated with the id "initial".
p#initial:first-letter { float: left; font-size: 3em; }
CSS2 Selectors
CSS2 adds to CSS1 the following kinds of selectors: child selectors, the universal selector, adjacent-sibling selectors, dynamic pseudoclasses, the language pseudoclass, text-generating pseudoelements, and attribute selectors.
Child Selectors
Child selectors are pairs of element names separated by >, in which the right-hand element must be a child of the left-hand element. The first example sets the line height of paragraphs that are children of the body (child paragraphs of div would be unaffected). The second example sets ordered lists to have uppercase Roman numerals as markers, and ordered lists nested one level deep in other ordered lists to have uppercase alphabetic markers.
Example 1:
body > p { line-height: 1.5; }
Example 2:
ol { list-style-type: upper-roman; }
ol > li > ol { list-style-type: upper-alpha; }
Universal Selector
The universal selector * is a wild-card selector. It matches any HTML element name. The example below sets in red any emphasized text that is exactly a grandchild of the body element, regardless of what the parent of em is.
body > * > em { color: #FF0000; }
Adjacent-Sibling Selectors
Adjacent siblings are elements that have the same parent and no other elements between them. The general form of an adjacent-sibling selector is x + y, where x and y are elements. The style applies to y. The example applies a drop-cap to a paragraph if and only if it immediately follows a level-one heading.
h1 + p:first-letter { float: left; font-size: 3em; }
Dynamic Pseudoclasses
The dynamic pseudoclasses are :hover, :active, and :focus. Hovering occurs when the user points at but does not activate an element, as when pointing but not clicking on a link. An item is active while the user is activating it, such as during the time between clicking and releasing the mouse button on a link. Focus occurs when an element is able to receive input from the keyboard (or other text input device). In the example below, links are blue when unvisited, red when visited, yellow when pointed at, and green during the mouse click. Note that the order of the style rules is significant, because some of the selectors are equally specific. This cluster of style rules should appear in the order shown or some of the styles will never show up.
a:link { color: blue; }
a:visited { color: red; }
a:hover { color: yellow; }
a:active { color: green; }
Language Pseudoclass
The language pseudoclass is :lang(C), where C is a language code as specified in the HTML 4 standard (Raggett, Le Hors, & Jacobs, 1999), or RFC 3066 (Alvestrand, 2001). (The CSS specification actually refers to the obsolete RFC 1766, but implementations should use the current definition of language tags.) Some instances of language codes are en for English, en-us for United States English, and fr-ch for Swiss French. The pseudoclass selector matches according to a liberal algorithm such that :lang(en) would match either of the English codes shown above, just as :lang(fr) would match the Swiss French code. The language of an element may be determined from any of three sources in HTML: a lang attribute on the element, a meta tag, or HTTP headers. Other markup languages may have other methods of specifying the human language of a document or element. The example below is quoted from the CSS specification section 5.11.4 (Bos, Lie, Lilley, & Jacobs, 1998). It sets the appropriate quotation marks for a document that is either in German or French and that contains quotations in the other language. The quotation marks are set according to the language of the parent element, not the language of the quotation, which is the correct typographic convention.
HTML:lang(fr) { quotes: '« ' ' »' }
HTML:lang(de) { quotes: '»' '«' '\2039' '\203A' }
:lang(fr) > Q { quotes: '« ' ' »' }
:lang(de) > Q { quotes: '»' '«' '\2039' '\203A' }
Text-Generating Pseudoelements
The :before and :after pseudoelements inject text before and after the content of the selected element. The sample code below inserts the bold label "Summary:" at the beginning of every paragraph whose class is "summary." Note the space between the colon and the closing quotation mark.
p.summary:before { content: "Summary: "; font-weight: bold }
Attribute Selectors
Attribute selectors match according to an element's attributes in one of four ways: [attribute] matches when the element has the specified attribute, regardless of value; [attribute=value] matches when the element's attribute has the exact value specified; [attribute~=value] matches when the attribute can be a space-separated list and one of the members of the actual list is the specified value; [attribute|=value] matches the value according to the rules for matching language codes, as specified in RFC 3066 (Alvestrand, 2001). The example below applies a dotted border below any abbreviation or acronym element that has a title attribute. In addition, it asks the browser to change the mouse pointer to the help cursor when hovering over the element. Few browsers successfully render all these styles, however.
abbr[title], acronym[title] { border-bottom: black; border-width: 0 0 1px 0; border-style: none none dotted none; }
abbr[title]:hover, acronym[title]:hover { cursor: help; }
Grouping To apply the same style to a set of distinct selectors, separate the selectors with commas. The example below applies a green text color to all six levels of heading. h1, h2, h3, h4, h5, h6 { color: #006666; background-color: transparent; }
CSS PROPERTIES Every style rule comprises a selector and one or more declarations. Each declaration has a property and a value. The values permitted for each property depend on the domain of that property. For example, the margin property accepts only length values, and the font-family property accepts only a list of font names as its value. Many properties have two forms, a specific form and a combined, or shorthand, form. For example, the border property sets the width, style, and color on all four sides, so it is much more compact to write than it is to write all the individual properties pertaining to borders. The shorthand form, however, has a subtle yet important impact on the cascading rules. Any properties that you do not set in the shorthand form act as though you had explicitly set the initial, or default, value in the style rule. Recall that although the cascading rules give low priority to default values, explicit rules (even implicitly explicit ones) take on the priority of the associated selector. You need to be aware of this subtle difference when you choose to use shorthand properties. The CSS specifications define over 100 properties, and space does not permit them to be listed here. The property definitions are readily available online (Lie & Bos, 1999b; Bos et al., 1998) and in exhaustive references, such as Meyer (2000) and Lie & Bos (1999a). If you do refer to the specifications themselves, be sure to refer to the associated errata as well.
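A hypothetical pair of rules illustrates the shorthand subtlety:
p.note { border-color: red; }
p.note { border: 1px solid; }
The second, shorthand rule does not merely add a width and style; because it omits a color, it also resets the border color to its initial value (the element's own text color), silently overriding the first rule.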
Value Types, Units, and Representations
Color
Color has four representations in style sheets: color names as defined in HTML 4; hexadecimal color codes (long and short forms); percentage RGB codes; numeric RGB codes. The following examples all represent the same color.
yellow
#FFFF00
#FF0
rgb(100%, 100%, 0%)
rgb(255, 255, 0)
URL
Any URL (uniform resource locator) may appear in the notation url(...), with the address placed between the parentheses. Either absolute or relative URLs may appear; relative URLs are resolved relative to the style sheet itself, not to the document that uses it.
Length You may express absolute lengths in terms of inches (in), centimeters (cm), millimeters (mm), points (pt), or picas (pc). When writing a length, put no space between the number and the unit, as in this example: 3mm. You may express relative lengths in terms of ems (em), exes (ex), or pixels (px). The em unit is equal to the current font size. The ex unit is equal to the height of the lowercase letter x in the current font, although browsers often set it equal to 0.5 em. The pixel unit is usually a screen pixel, but for printing, the specification calls for the pixel unit to
be rescaled so that it has the same approximate viewing size as it does on the screen.
Percentage Percentages appear as a number followed by the percent sign, as in this example: 150%. Percentage values for lengths are relative to some other length, usually some dimension of a parent element.
Key Words
All other values discussed in this chapter take key words or lists of key words as their values. For example, this style rule asks the browser to set the type of a page in a serif font, preferably Palatino or Times: body { font-family: Palatino, Times, serif; }.
PRACTICAL CSS IN ACTION
Linking Styles to Web Pages
There are two basic ways to associate a style sheet with a Web page. The first uses the HTML link element; the second uses the style element. Every page in a Web site needs to be associated with the site's style sheet, but once the association exists, maintaining the styles is easy. Whether you use the link or the style element, it must be a child of the head element of your HTML page. The first example below uses link; the second uses style (with an @import of css/screen.css). Both examples are in HTML 4.
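Minimal versions of the two examples follow; the attribute values other than the css/screen.css path are common choices rather than requirements of the technique.
Example 1:
<link rel="stylesheet" type="text/css" href="css/screen.css">
Example 2:
<style type="text/css">
@import url(css/screen.css);
</style>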
W3C Core Styles The W3C core styles are eight prewritten style sheets that are available for anyone to use with their HTML documents (Bos, 1999). They are called Chocolate, Midnight, Modernist, Oldstyle, Steely, Swiss, Traditional, and Ultramarine. You can interactively preview the eight style sheets at http://www.w3.org/StyleSheets/Core/preview. To use one of these style sheets, use a link element like the one in the example, which specifies the Modernist style sheet.
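A link of that form, pointing at the Modernist style sheet on the W3C server, might look like the following; the exact URL should be checked against Bos (1999):
<link rel="stylesheet" type="text/css" href="http://www.w3.org/StyleSheets/Core/Modernist">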
Alternate Style Sheets Every Web page that uses style sheets has up to three different kinds of style sheet: persistent, preferred, and alternate, none of which is mandatory (Raggett, Le Hors, & Jacobs, 1999). The alternate style sheets are mutually exclusive, and modern browsers such as Mozilla afford
the user a way to choose which alternate style sheet to use. The preferred style sheet is the one that the browser loads initially. The browser always combines the currently selected preferred or alternate style sheet with the persistent style sheet, if any. The persistent style sheet is a good place to put basic styles that the designer wishes to appear at all times. Then only the variant parts of the style sheet need to appear in the preferred and alternate style sheets. There may be any number of alternate style sheets but no more than one persistent style sheet and no more than one preferred style sheet. The designation of a style sheet as persistent, preferred, or alternate depends on the particular combination of rel and title attributes on the link tag that associates a Web page with the style sheets: The link to a persistent style sheet has rel="stylesheet" and no title attribute at all. The link to the preferred style sheet has rel="stylesheet" and a title attribute of your choosing. The link to an alternate style sheet has rel="alternate stylesheet" and a distinct title of your choosing. Not all Web browsers provide a user interface for switching style sheets. To provide a switching facility to users of such browsers, you may use HTML form elements, such as buttons or pop-up menus, and some JavaScript code (Sowden, 2001; Ludwin, 2002).
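Expressed as markup, the three kinds of link might look like the following sketch; the file names and titles are placeholders:
<link rel="stylesheet" type="text/css" href="base.css">
<link rel="stylesheet" type="text/css" href="default.css" title="Default">
<link rel="alternate stylesheet" type="text/css" href="large.css" title="Large type">
The first link (no title) is persistent, the second (rel="stylesheet" plus a title) is preferred, and the third (rel="alternate stylesheet" plus a distinct title) is an alternate.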
Examples The following examples are conservative. They are intended to show that interesting results can issue from quite simple CSS code. For more elaborate and cuttingedge examples, see the bibliography for items by Meyer (2001a; 2001b; 2002a; 2002b).
Logo Fixed to the Top Left Corner
The example in Figure 4 assumes that you have a 72-pixel-wide logo in a file logo.png that you wish to affix to the top, left corner of your page. The left margin is set in pixel units to move text to the right so that it does not overlap the logo. The margin is set somewhat larger than the width of the graphic so that there is some space between the logo and the text.
body {
  margin-left: 82px;
  background-position: left top;
  background-attachment: fixed;
  background-repeat: no-repeat;
  background-image: url(logo.png);
  color: #000000;
  background-color: #ffffff;
}
Figure 4: Code for a fixed background image.
Two-Column Layout Without Tables
The examples in Figures 5 and 6 constitute a simplified version of the technique of Zeldman (2001). The goal here is to create a two-column layout with a list of links on the left and the page's principal content on the right. To achieve this layout, the HTML code in Figure 6 uses two div tags to organize the menu links and the content. The division with the content has an id of "content," to which the CSS code in Figure 5 refers. That code primarily employs the underutilized float property to float the content to the right side of the page. Margins and padding are adjusted for esthetics, and a border helps the eye distinguish the content from the menu links.
div#content {
  float: right;
  border-left: 1px solid #000;
  border-bottom: 1px solid #000;
  width: 70%;
  padding-top: 0;
  padding-right: 0;
  padding-left: 3em;
  padding-bottom: 3em;
  margin: 0;
}
Figure 5: CSS for two-column layout.
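The HTML side of the layout (Figure 6) pairs the content division with a list of menu links. In the sketch below, only the id "content", the placeholder text, and the four link labels come from the figure; the remaining markup, the href values, and the menu div's id are assumptions:
<div id="content">
  <p>Content goes here.</p>
</div>
<div id="menu">
  <ul>
    <li><a href="a.html">link a</a></li>
    <li><a href="b.html">link b</a></li>
    <li><a href="c.html">link c</a></li>
    <li><a href="d.html">link d</a></li>
  </ul>
</div>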
BEST PRACTICES Modern Web browsers are fairly good implementations of CSS (Web Standards Project, n.d.), and the CSS specifications offer a broad range of perhaps alluring stylistic capabilities. Nonetheless, adhering to certain best practices will minimize the risk that one or another of the standards-compliant browsers will fail to render a page as the designer expected. In addition, many of these practices result in fluid documents that adapt well to different window or screen sizes and user preferences for font size. Some of the practices avoid programming errors that exist in popular Web authoring agents. We examine best practices in two groups: (a) a list of practices by the inventors of CSS (Lie & Bos, 1999), and (b) some new practices that I recommend.
Recommendations of Lie & Bos (1999a) Use Ems to Specify Lengths Designs that use ems are scalable, fluid, and adaptive. The em unit, traditionally the width of the glyph M, in CSS is equal to whatever the height of the current font is. Because
the font height is the basis of this unit, it is a relative unit that scales proportionally as the font size changes. When you set margins and padding in terms of ems, the margins and padding grow or shrink in proportion to the changes in the font on the display device. This preserves the balance and overall appearance of the design while accommodating a variety of display devices and user needs and preferences for font size.
Use Ems to Specify Font Sizes This recommendation may seem oddly circular at first glance. What does it mean to specify font sizes in terms of font sizes? The em unit for font size is relative to the default font size of the Web browser or other user agent. For example, 2em is equal to twice the default font size. Using ems, rather than traditional type units such as points or picas, results in pages whose font size adapts to the needs and preferences of the end user, or to the limitations of the display medium. Using absolute units for font size is a common reason that type on Web pages appears too big or, worse, too small to read.
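For instance (a hypothetical pair of rules, not from the text), a page might establish its sizes this way:
body { font-size: 100%; margin: 0 8%; }
h1 { font-size: 2em; margin-bottom: 1em; }
If the reader raises the browser's default font size, the heading, body text, and em-based spacing all grow together, while the percentage margins track the window instead.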
Use % When You Wish Not to Use Ems
There are elements for which em is not the appropriate unit. For example, the margins of the page as a whole (the body element) are usually more pleasing if they are based on the size of the display window, rather than on the font size. The % (percent) unit is the appropriate unit for such cases.
Use Absolute Units Only When You Know the Size of the Output Medium
First, a caveat: It is rare to know the dimensions of Web-related output media. Even in print, there are regional variations in the dimensions of the paper. The United States generally uses paper that is 8.5 by 11 or 17 inches for office productivity applications. In Europe, the A4 metric size is common. Bear in mind as well that even in print, some users may need larger than usual type, and others may prefer very small type. If, however, you do know the dimensions of the medium, and if your design has very tight tolerances, CSS provides a menagerie of absolute units: points, picas, centimeters, millimeters, inches.
Float Objects Rather Than Using Layout Tables
In presentational HTML, it is possible to float images or tables to the left or right margin; other content wraps around the floated elements. Designers often resort to a table to move other, nonfloatable elements to one side of the page, creating a multicolumn layout. With CSS, any element can float if you assign the float property to it, so it is not necessary to use tables for layout. The two principal advantages of CSS floating over layout tables are faster rendering and better accessibility. Tables, especially complex layout tables, often take much time for the browser to render. In addition, the World Wide Web Consortium (1999) recommends against using tables for layout.
Arrange Content in Logical Order
Another way of stating this guideline is don't rely on CSS to position your content in the order you wish users to read it. Arranging the content of a page in its logical sequence ensures that the page continues to make sense even in browsers that cannot render the styles.
Test Pages in the Absence of CSS
Even in a modern CSS-compliant browser, the style sheet may become temporarily unavailable due to a network failure or other cause. For this reason, you should test your pages for legibility with style sheets off. Failure of pages to degrade gracefully in the absence of style sheets causes serious problems of accessibility as well.
Test Pages in Relevant Browsers
Despite the emergence of browsers that adhere to the HTML and CSS standards, it is still necessary to test designs in several relevant browsers. This is due to ambiguities in the standards, differences in interpretation of the standards, or outright errors in implementations. If a significant proportion of your site's visitors uses older browsers, testing is imperative.
Use a Generic Font Family
In print, designers can choose from a multitude of typefaces and can specify the font that best solves a communication problem. On the Web, only the typefaces installed on the end user's computer are available. Typically, these are the fonts installed by default in the computer's operating system. Nonetheless, you may wish to specify fonts for various contexts in your Web pages. When you do, always list the appropriate generic font family as the last item in the list of font families for your style rule. For example: body { font-family: Palatino, "Times New Roman", serif }. There are five generic font families: Serif fonts have small extensions (serifs) on the ends of strokes; sans-serif fonts lack serifs (from the French sans, meaning "without"); monospace fonts have glyphs that are all the same width; cursive fonts look like handwriting; fantasy fonts are miscellaneous display fonts for special uses. Many computers lack cursive or fantasy fonts, in which case the browser will substitute another font.
Know When to Stop Listen to your graphic designer and your usability expert. CSS gives you broad power to control the formatting of Web pages. Such power calls for a trained eye and great restraint. You can create a page with 10 fonts and every word in a different color, but such a page would be unreadable. Usually two or three fonts suffice to distinguish the various kinds of text. Using color sparingly makes the color you do use stand out with great emphasis (but be sure that color is not the only way your page conveys information).
P1: JDT WL040-16
WL040/Bidgoli-Vol I-Ch-14
October 3, 2003
17:56
162
Char Count= 0
CASCADING STYLE SHEETS (CSS)
Other Best Practices Use External Style Sheets Create style sheets as discrete text files that you link to your Web pages. This practice eliminates unnecessary redundancy in the Web site and enables you to establish alternate style sheets. Use embedded styles only when you need to override your external style sheet for a single, exceptional page. If you find that you are often overriding the external style sheets, it is time to redesign the external style sheets. Finally, reserve inline styles with the style attribute only for situations where it is absolutely necessary to accomplish an effect, such as dynamically moving layers around on the page. This practice is particularly important because it minimizes maintenance tasks. The less style information is dispersed throughout a site’s pages, the less effort needs to go into maintaining the site’s style sheets. By keeping as much style information as possible in centralized external style sheets, you will reduce the likelihood that a style you want to change is neglected.
Use @import to Hide Styles from Netscape 4
Version 4 of the Netscape browser is now badly obsolete, yet it still has many users. Its CSS implementation is both nonstandard and erroneous. It is possible to create valid HTML and CSS code that is completely unusable in Netscape 4, such that links become inactive. To avoid having to work around the problems in Netscape 4, eliminate styles for that browser by taking advantage of Netscape 4's ignorance of the @import directive. Zeldman (2001) recommends screening style sheets from Netscape 4 with an embedded style element whose only content is an @import rule, such as
@import "/styles.css";
For XHTML, the @import rule is additionally wrapped in comment markers so that the page validates under the XHTML document type definitions. You may need to decline this practice if your audience has a high proportion of Netscape 4 users. The trade-off is that you will be limited in what aspects of CSS you can use and you will suffer exposure to the risk that trivial changes in your CSS will break your site for Netscape 4. Rigorous testing is required under such circumstances. Because of the severity of the trade-offs, you may decide to politely encourage your users to move to browsers that support standards. The Web Standards Project (n.d.b) provides techniques for doing so.
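A sketch of the two variants follows; only the @import line is given in the text, so the surrounding style element and the exact comment wrapping are assumptions:
<style type="text/css" media="all">@import "/styles.css";</style>
<style type="text/css" media="all">/*<![CDATA[*/ @import "/styles.css"; /*]]>*/</style>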
Use a W3C Core Style If you are not a graphic designer or do not wish to take the time to develop your own style sheets, use one of the
W3C core styles mentioned above. They provide a variety of basic style sheets appropriate for general use, they have already been written and tested for you, and they work (as much as possible) in Netscape 4. If you use a core style, you may omit the preceding practice.
Judiciously Use and Name Classes Use HTML classes to distinguish elements, such as paragraphs, that serve different purposes in your documents. Name the classes according to their function, not according to their appearance. You may change your mind as to the appearance, and it would make no sense for a class called “blue” to appear in green. But a class called “summary” will make sense regardless of the styles you apply to it.
Avoid Workarounds One of the benefits of Web standards is that Web page creators can avoid working around quirks and bugs in browsers. Nonetheless, there are bugs and quirks even in modern browsers. Searching the Web for “CSS workaround” will confirm this. Many of the workarounds have to do with layouts that require pixel accuracy, and whose designers consider a browser’s misinterpretation of the box model to “ruin” the design. To avoid such bugs, many examples on the Web use bizarre syntax that risks failure when repaired versions of browsers emerge. Rather than work around the bugs, it is best not to insist on control down to the pixel. Heed the practices above that encourage fluid, adaptive documents, and heed the next practice, as well.
Be Flexible to Keep It Simple CSS does not offer the precision of presentation control that exists in print. It would be counterproductive for the Web to have such a rigid level of control, because of the diversity of users who, and output devices that, access the Web. The best design practice is to create flexible presentation styles. Simplicity is best. It is not necessary to exercise every nuance of the box model or work around every bug in Internet Explorer. If you are willing to accept some deviation from your ideal design, your style sheets will be simple and easy to maintain.
EVOLUTION OF CSS The World Wide Web Consortium (2002b) continues to develop the CSS standards. Most recently, it released a draft specification for CSS2.1. This update corrects several errors in the published standard for CSS2. More important, it amends the standard to conform with practice in the following ways: (a) maintains compatibility with widely implemented and accepted parts of CSS2; (b) modifies the specification where implementations have overwhelmingly contradicted CSS2; (c) omits features of CSS2 that have never (or almost never) been implemented; and (d) omits CSS2 features that are to be superseded in CSS3. CSS3 is still under development. A key feature of CSS3 is that it is being developed in modules. This method of developing the latest CSS specifications will enable implementers and users to accept or reject parts of CSS3
without necessitating revisions to the entire specification. It would not be surprising if only small portions of CSS3 ever see widespread implementation. Nonetheless, it does address the needs of linguistic and other communities whose ways of communicating were ignored in CSS1 and CSS2.
GLOSSARY
Border The region of the box model immediately outside the padding.
Box Model A part of the CSS specification that defines where the content and other parts of an HTML element will appear on the display medium.
Cascade The rules or logic by which multiple style sheets participate in an orderly manner to affect the appearance of a document.
Child An HTML element that is part of the content of an enclosing HTML element.
Content The region of the box model where an HTML element's text or graphic appears.
Declaration The assignment of a value to a property.
Inheritance The propagation of a style from a parent to a child or other descendant.
Margin The transparent region of the box model surrounding the border.
Padding The region of the box model between the content and the border; background properties appear within it.
Parent An HTML element that contains another HTML element.
Property An aspect of style that CSS can control.
Selector The part of a style rule that expresses the context to which the style should apply.
Style Rule A complete expression in CSS that specifies styles and the context to which they should apply; consists of a selector and declarations.
User Agent Software or a device that acts on behalf of a user. The most common user agents are graphical Web browsers.
Validator Software that tests code for syntactical conformity to a standard.
Value A specific setting chosen from a range of possibilities.
CROSS REFERENCES See Client/Server Computing; DHTML (Dynamic HyperText Markup Language); HTML/XHTML (HyperText Markup Language/Extensible HyperText Markup Language); XBRL (Extensible Business Reporting Language): Business Reporting with XML.
REFERENCES
Alvestrand, H. (2001). Tags for the identification of languages [RFC 3066]. Retrieved June 22, 2002, from ftp://ftp.isi.edu/in-notes/rfc3066.txt
Bos, B. (1999). W3C core styles. Retrieved June 7, 2002, from http://www.w3.org/StyleSheets/Core/
Bos, B., Lie, H., Lilley, C., & Jacobs, I. (1998). Cascading style sheets, level 2. Retrieved May 20, 2002, from http://www.w3.org/TR/REC-CSS2
Clark, J. (n.d.). SP. Retrieved May 14, 2002, from http://www.jclark.com/sp/
Höhrmann, B. (n.d.). W3C validator FAQ. Retrieved May 13, 2002, from http://www.websitedev.de/css/validatorfaq.html
HTML Tidy (n.d.). Retrieved June 15, 2002, from http://tidy.sourceforge.net/
Lie, H. W., & Bos, B. (1999a). Cascading style sheets: Designing for the web (2nd ed.). Harlow, UK: Addison-Wesley.
Lie, H. W., & Bos, B. (1999b). Cascading style sheets, level 1. Retrieved May 20, 2002, from http://www.w3.org/TR/REC-CSS1
Ludwin, D. (2002). A backward compatible style sheet switcher. Retrieved February 10, 2002, from http://www.alistapart.com/issues/136/
Meyer, E. (2001a). Complexspiral demo. Retrieved May 14, 2002, from http://www.meyerweb.com/eric/css/edge/complexspiral/demo.html
Meyer, E. (2001b). Liberty! Equality! Validity! Retrieved February 4, 2003, from http://devedge.netscape.com/viewsource/2001/validate/
Meyer, E. (2002a). Images, tables, and mysterious gaps. Retrieved February 4, 2003, from http://devedge.netscape.com/viewsource/2002/img-table/
Meyer, E. (2002b). CSS: Going to print. Retrieved May 13, 2002, from http://www.alistapart.com/stories/goingtoprint/
Quinn, L. (2002). WDG HTML validator. Retrieved June 29, 2002, from http://www.htmlhelp.com/tools/validator/
Raggett, D., Le Hors, A., & Jacobs, I. (1999). HTML 4.01 specification. Retrieved May 17, 2002, from http://www.w3.org/TR/html401/
Sowden, P. (2001). Alternative style: Working with alternate style sheets. Retrieved November 2, 2001, from http://www.alistapart.com/issues/126/
Web Standards Project (n.d.a). Browser upgrade campaign. Retrieved June 20, 2002, from http://www.webstandards.org/act/campaign/buc/
Web Standards Project (n.d.b). Browser upgrade campaign: Tips. Retrieved June 29, 2002, from http://www.webstandards.org/act/campaign/buc/tips.html
World Wide Web Consortium (1999). Web content accessibility guidelines 1.0. Retrieved June 29, 2002, from http://www.w3.org/TR/WCAG10/
World Wide Web Consortium (2001). HTML validation service. Retrieved June 29, 2002, from http://validator.w3.org/
World Wide Web Consortium (2002a). W3C CSS validation service. Retrieved June 29, 2002, from http://jigsaw.w3.org/css-validator/
World Wide Web Consortium (2002b). Cascading style sheets, level 2, revision 1. Retrieved September 12, 2002, from http://www.w3.org/TR/2002/WD-CSS21-20020802/
Zeldman, J. (2001). A web designer's journey. Retrieved March 14, 2001, from http://www.alistapart.com/stories/journey/5.html
C/C++ Mario Giannini, Code Fighter, Inc., and Columbia University
What Is C/C++?
A History of C and C++
Programming Languages
Getting Started with C/C++
Comments
Functions
Simple Data Types
Classes: An Introduction
Flow Control
Expressions
Looping and Nonlooping
WHAT IS C/C++?
C and C++ are closely related programming languages used to create computer programs. Using these languages, programmers write documents called source files and then use a program called a compiler to convert the source files into program files that can be executed by a computer. According to CareerInfo.net (http://www.acinet.org), computer software engineers are projected to be the fastest growing occupation, rising from 380,000 jobs in 2000 to an estimated 760,100 by 2010. The growing demand for technologically skilled software developers makes it a popular field. The programs developed using C and C++ cover a wide range of types: games, office products, utilities, server programs, just about anything can be developed using the C and C++ languages. C and C++ are not considered "high-level" programming languages; unlike other languages such as COBOL or Visual Basic, they are designed to apply to a number of varied tasks, rather than specific ones.
A History of C and C++ The C language was originally designed by Dennis Ritchie for the DEC PDP-11 computer system and the UNIX operating system, between 1969 and 1973. It was directly influenced by the BCPL and B programming languages. In 1969, Bell Telephone Laboratories was in the process of abandoning its Multics operating system, believing that its strategy would prove too expensive to completely implement. Ken Thompson led a group to investigate the creation of a new operating system, which would eventually evolve into the UNIX operating system. Thompson wrote the original version of UNIX using assembly language, a complex and low-level language. He decided that UNIX would need a “system language,” a programming language that would serve as the primary language for developing applications. Initially, he created his own language, named B. Thompson used BCPL (created in the mid 1960s by Martin Richards) as his inspiration for creating B. 164
Advanced Data Types
Structures and Unions
Classes
Constructors and Destructors
C and C++ and the Internet
CGI Programming
Client–Server Programming
Open Source
Glossary
Cross References
Further Reading
B underwent several changes to improve several shortcomings and was renamed NB (for "New B"). As more changes were added, however, the language started to change considerably from its B predecessor. By 1973, the language was essentially completed and had been named C. At the time, there was no "official" definition of C. The language had not undergone any standardization approval and was therefore open to interpretation by any company that wished to produce a C compiler. In 1978, Brian Kernighan and Dennis Ritchie wrote The C Programming Language. This was the most detailed description of the language and would soon be used to describe a version of the language named K&R C, after the authors' names. In 1983, the documentation from Kernighan and Ritchie's book was submitted to the American National Standards Institute (ANSI), to get their approval for an exact and standard definition of exactly what C was. Once approved, the C language would carry with it the extra clout of having an official standard, which compiler producers would need to meet in order to sell their products as an official C compiler. By 1989, after several changes, the ANSI X3J11 committee approved what is now called ANSI C. This is the language still in use today. While C was undergoing its many changes for standardization, C++ appeared around 1980 and continued to grow in popularity. C++ was originally termed "C With Classes" and was created by Bjarne Stroustrup at AT&T Bell Labs. Stroustrup was using Simula67, which followed an object-oriented approach to software development. Unhappy with Simula67's performance, Stroustrup decided to enhance the C language to implement classes and objects. Since its introduction, C++ has been considered a superset of C; everything in C is pretty much still in C++. C++ adds a number of features, primarily in the area of object-oriented programming. It is still possible to compile a C program using a C++ compiler, but not vice versa. The name C++ is a take on the ++ operator of C, which
means "add one." So, C++ can be considered as "C + 1," or the next evolution of C. Stroustrup created C++ in an informal fashion. There was no formal committee or project dedicated to its creation. It more or less promoted itself into development, offering the performance of C, but with object-oriented enhancements and patterns to improve development. Because of this informal process, there is not a great deal of information about the evolution of its specification. Despite this fact, and as with the early years of C with no standard definition, C++ grew quickly in popularity. By 1987, it was decided that C++ should be submitted to the ANSI body for official standardization. By 1989, Stroustrup had completed the documentation required for standardization, and Hewlett-Packard had initiated the process for ANSI submission. In 1989, ANSI created the X3J16 committee, which was to serve as the review body for standardization approval. During the review process, C++ underwent a number of changes. An official standard stream (or input–output) class hierarchy was defined, as was a set of standard "template" classes. By 1995, a proposal was made that would eventually be approved. By 1998, ANSI had approved and published a C++ standardization, which is the language still in use today.
Programming Languages
Everything a computer does is the result of a program that runs on it. Even when a computer seems to be idle, waiting for a user to press a key, it is in fact doing many things. A program is defined as a set of instructions that the computer executes. A computer follows an ordered pattern of instructions, just as a person reads a recipe to bake a cake and executes the recipe's instructions in order. Each computer (or central processing unit [CPU] inside the computer) understands a specific set of instructions. The Intel CPUs found inside Windows computers execute a different instruction set than the Motorola CPUs found inside an Apple Macintosh computer. The instructions the CPU knows how to handle are called its instruction set. Even though the Intel and Motorola CPUs have a different instruction set, many of the instructions are similar. For example, both provide instructions to do simple math such as addition and subtraction, as well as to move data from one memory location to another or compare the results of some mathematical operation. The instructions executed by the CPU are also called Machine Code, because the machine can execute it directly. A CPU's instruction set represents everything it understands and can execute. Unfortunately, the instruction set is defined as a set of numbers, and each number tells it to perform a certain operation. This makes creating a program using the exact instructions quite complex. For example, the following numbers (in hexadecimal) on an Intel CPU would tell it to add "1" to memory address "100":
C6 06 64 00 01
Using this format can become complicated, given that each instruction performs only a small part of a program's
function (even a very small program, such as Windows’ Notepad program, contains about 50,000 bytes of instructions). To simplify the creation of a program, machine code instructions need to be made less complex. The first step in achieving this is to introduce a set of mnemonics for the instructions. These mnemonics are a representation of the machine code that is more readily understood by humans, called Assembly or Assembly Language. An example of the same set of instructions, written in assembly, would look like this: ADD BYTEPTR[100],1 Now, by just looking at the mnemonics, one can begin to understand that the program is attempting to add 1 to a memory address (memory address 100). The computer doesn’t understand mnemonics, however; it understands the numbers in its instruction set. Therefore, an Assembler program is used to convert Assembly Language into actual CPU instructions. Assembly is defined as a low-level language. This means that it offers programmers complete control over the programming process, but they must take responsibility for every operation, no matter how minor. Higher level languages were developed, like C and C++, to simplify the task of creating programs. A single statement or instruction in C or C++ may translate into several machine code instructions. For example, the following code adds “1” to a variable in C or C++: Count = Count + 1; This is much easier to read and understand than the assembler mnemonics.
Compilers and Linkers A compiler is a program that translates instructions and statements from one language, such as Fortran or C, into machine code (code that the computer can directly execute). The output of the compilation process is called an object module and is made up of the translated machine code. A linker takes one or more of these object modules and links them together into a single file called an executable. The idea behind linking is that if a program has a set of instructions to do something useful, such as printing a string, the programmer will want to be able to reuse that set of instructions in this and other programs. If the programmer compiled these instructions into an object module, then he or she simply needs to link it into a new program to reuse it. All programming languages are defined as either interpreted or compiled. Interpreted means that the program is not directly executed by the computer, but by a program on the computer. Compiled languages are ones that are compiled (or translated) directly into machine code, which can run directly on the computer. They are typically faster than interpreted programs, but interpreted programs are generally easier to create and debug. PERL, Java (with the JVM), and PHP are all examples of interpreted languages. C and C++ are examples of compiled languages.
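As a concrete illustration of the compile-and-link cycle, a two-file program might be built as follows on a Unix-style command line; the file names and the choice of the GNU compiler are assumptions, not part of the text:
g++ -c main.cpp                       # compile main.cpp into the object module main.o
g++ -c utility.cpp                    # compile utility.cpp into utility.o
g++ main.o utility.o -o myprogram     # link the object modules into an executable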
To simplify programming, developers can write their programs using the C language, following its rules and syntax. They then compile the program and link it to whichever object modules they need to create an executable program file. The computer is then able to execute that file directly. To create an executable program using the C and C++ languages, programmers need the following items:
- A text editor, to type in the C or C++ statements
- A compiler, to convert the C or C++ statements into machine code (object modules)
- A linker, to join one or more object modules into an executable file
Most operating systems come with some form of text editor, such as vi or Emacs in Unix or Notepad in Windows. Unix and Linux operating systems also come with a built-in C and C++ compiler and linker, but Windows and MacOS do not. When developing programs, however, most programmers will use an integrated development environment (IDE). These are actually suites of programs that combine text editor, compiler, linker, and other helpful tools such as a debugger into a single application. For Windows, common IDEs include Visual C++ from Microsoft and C++ Builder from Borland. For the Macintosh, Metrowerks sells CodeWarrior, a C and C++ IDE.
C Versus C++
As noted earlier, C++ is often considered a superset of C; everything C can do, C++ can do as well, but not vice versa. With a C++ IDE, one can still create "pure C" programs (programs that do not take advantage of the benefits of C++), but C++ adds several useful attributes to C, including the following:
- C++ introduces classes and a syntax to create and use them. Classes are a syntactical method by which the programmer can group data and functions into a simple package or container.
- C++ introduces a set of predefined classes and functions called the standard template library (STL) to reduce development time. STL increases functionality, for example, by offering a means to manage data collections and sort or search those collections.
- C++ introduces function overloading (the ability to write several functions with the same name, as sketched at the end of this section). This permits a more intuitive means for naming functions to manipulate different types of data for similar tasks (e.g., three "clear" functions that clear three different entities: a string, a file, or a list of data).
C++ provides developers a way to make smaller, more manageable "objects" to work with data in their programs, rather than trying to string together a set of similar but unrelated functions. This, combined with a string library of classes such as STL, permits faster program development, a key benefit of C++. The remainder of this chapter provides examples in C++. Some of these will work in a "Pure C" program, but others will not. In certain cases, such as printing a string to
a console, each language has its own way of performing a task, but only the C++ version is given here (except where noted).
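The following short sketch (not from the original chapter; all names are illustrative) shows the three additions just listed: a small class that packages data with functions, an STL vector sorted with std::sort, and two overloaded clear functions.

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// A class packages data (name, salary) and the functions that use it.
class Employee
{
public:
    Employee(const std::string& name, int salary) : name_(name), salary_(salary) {}
    const std::string& name() const { return name_; }
    int salary() const { return salary_; }
private:
    std::string name_;
    int salary_;
};

// Function overloading: two "clear" functions, chosen by argument type.
void clear(std::string& s) { s.erase(); }
void clear(std::vector<Employee>& v) { v.clear(); }

// Comparison function used by std::sort to order employees by salary.
bool bySalary(const Employee& a, const Employee& b) { return a.salary() < b.salary(); }

int main()
{
    std::vector<Employee> staff;                 // an STL collection
    staff.push_back(Employee("Jane Smith", 42000));
    staff.push_back(Employee("John Doe", 34000));

    std::sort(staff.begin(), staff.end(), bySalary);   // an STL algorithm

    for (std::vector<Employee>::size_type i = 0; i < staff.size(); ++i)
        std::cout << staff[i].name() << " earns $" << staff[i].salary() << std::endl;

    clear(staff);   // calls the vector overload; clear on a string would call the other
    return 0;
}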
GETTING STARTED WITH C/C++
I begin by demonstrating a simple C and C++ program that will display the text "Hello World" on the console (screen) and then terminate. Although relatively simple, upon completion you will have written a small program and compiled, linked, and executed it. First, create a source file to contain the program and its C++ statements. Depending on the compiler, a programmer would normally add a .cpp extension to the file name, although some compilers prefer .cxx or some other variation. Here, assume that the source file is called helloworld.cpp. Place the following code in the source file:

//Include header file so we can use the cout object
#include <iostream>
//This next line simplifies using the std STL library classes and
//objects, like cout:
using namespace std;
/* main is where the program starts */
int main()
{
    //Use the cout object to display a string to the console.
    cout << "Hello World" << endl;
    return 0;
}
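Assuming a Unix-like system with the GNU toolchain (IDE users would use their Build and Run commands instead), the program can then be compiled, linked, and run from the command line; the commands below are an illustrative assumption, not part of the original chapter.

g++ helloworld.cpp -o helloworld    (compiles and links in one step)
./helloworld                        (runs the program and prints "Hello World")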
DATABASES ON THE WEB
Line 4 picks up the requested publisher's name from an HTML request form. Line 5 creates a record set object and names it "pub." Line 6 embeds the requested publisher's name in an SQL statement and stores the statement in a variable called sqltext. Line 7 opens the pub object and executes the SQL statement. An engine on the Web server (in this case, an ASP engine) sends the request to the database, which returns the correct publisher information. The rest of the page creates HTML statements out of the data and returns the page to the user's browser. Queries based on SQL may also be created ahead of time and stored as objects within the database using the stored procedures method. Instead of including the query in the requesting page, the page asks the database system to run the stored procedure and return the resulting data for display.
EXTENSIBLE MARKUP LANGUAGE (XML)
Table 1 Comparison of HTML vs. XML

Uses
  HTML: Used to format text, images, and other media for rendering and display in a browser or other user agent
  XML: Used to organize data in a hierarchical structure
Has tags (elements), attributes, and values
  HTML: Yes
  XML: Yes
Has a fixed, finite set of tags
  HTML: Yes
  XML: No
Has a strictly enforced set of semantic and syntactic rules
  HTML: No, laxly enforced to varying degree depending on browser
  XML: Yes
Can be created in a text editor
  HTML: Yes, HTML files are plain text
  XML: Yes, XML files are plain text
Is contained within a file
  HTML: Yes, HTML code resides in an .htm or .html file
  XML: Yes, XML code resides in a .xml file but may additionally exist only temporarily as a structure in memory
Here is the simple XML example from the beginning of this chapter, a small payroll document:

 1  <?xml version="1.0"?>
 2  <!-- A simple XML example: the Acme payroll -->
 3
 4  <payroll company="Acme">
 5    <employee id="23">
 6      <firstname>John</firstname>
 7      <lastname>Doe</lastname>
 8      <salary>$34,000</salary>
 9    </employee>
10    <employee id="24">
11      <firstname>Jane</firstname>
12      <lastname>Smith</lastname>
13      <salary>$42,000</salary>
14    </employee>
15  </payroll>

Line 1 is the XML declaration. Technically, XML documents do not need to start with the XML declaration, but W3C recommends it (Holzner, 2001). Line 2 is a comment. Note that XML uses the same syntax for comments as HTML. Line 3 is blank, or white space. It is included simply to make the code more readable. Line 4 is the root element. The root element is the first element in an XML document and contains all the other elements nested between its start and end tags. The root element may include an attribute. Here, the attribute "company" has the value "Acme." Line 5 begins an element ("employee") that contains an attribute-value pair (id="23"). Note that in the strict XML hierarchy, there are parent-child relationships between all elements. Here, <payroll> is the parent, <employee> the child. Lines 6-8 contain three more elements, all children of <employee>. Line 9 closes the element begun on Line 5. Lines 10-14 describe another employee of the company. More employees could easily be added by following the same structure. Line 15 closes the root element begun on Line 4 and ends the document.
So now that we have an XML file, what good does it do us? The short answer is, "none." The longer answer is, "plenty, given the right tools to further process the file or format it for display." Our simple XML file is in fact a data source that is accessible programmatically. We can query, manipulate, and display XML data using any of several programming languages, in conjunction with an XML processor, or parser, the application that reads and interprets our XML.
Parsers
We could of course write code ourselves to parse the contents of an XML file; after all, it is just a text file. However, there is no need to reinvent the wheel, as free parsers are readily available for download. Most likely, one or more parsers came with software that is already installed on our systems. Parsers have the intelligence to read, write, and manipulate data in XML files, but they have one limitation: they cannot read our minds. To tell a parser what to do, we use an API (application programming interface). Many parsers come complete with their own API tools. Alternatively, we might use one company's parser while accessing it via an API from a different party. We will discuss parsers in more detail later in this chapter. First we need to take a look at well-formed and valid XML documents, the language's syntax and semantics.
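As an aside, the "write the parsing code ourselves" option mentioned above might look like the following minimal C++ sketch. It is not from this chapter, the file name payroll.xml is an assumption, and it only prints whatever appears between < and >; a real parser also handles attributes, entities, CDATA, and well-formedness checking.

#include <fstream>
#include <iostream>
#include <iterator>
#include <string>

int main()
{
    // Read the whole file into one string (naive, but fine for a small example).
    std::ifstream in("payroll.xml");
    std::string text((std::istreambuf_iterator<char>(in)),
                     std::istreambuf_iterator<char>());

    // Print the contents of every tag found between '<' and '>'.
    std::string::size_type open = text.find('<');
    while (open != std::string::npos)
    {
        std::string::size_type close = text.find('>', open);
        if (close == std::string::npos)
            break;
        std::cout << text.substr(open + 1, close - open - 1) << "\n";
        open = text.find('<', close);
    }
    return 0;
}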
CREATING WELL-FORMED XML DOCUMENTS
What Are Well-Formed XML Documents?
Definition: XML with correct syntax is well-formed XML. Parsers check XML documents for well-formedness. If a document is not well formed, the parser discontinues processing and returns an error message that indicates where the malformed syntax occurred.
Rules for Well-Formed XML
A well-formed XML document must conform to W3C semantic and syntactic rules (W3C, 2000). The following rules specify the conditions an XML document must meet to be considered well formed.
Semantics
W3C specifies that a well-formed XML document has three parts: a prolog (which can be empty), a root element, and various miscellaneous components.
The Prolog. The prolog comes at the very beginning of an XML document. The prolog can contain an XML declaration, processing instructions, a document type definition (DTD), and comments. A document does not absolutely require a prolog to be considered well formed, but W3C recommends including at least an XML declaration. If an
XML declaration is included, it must appear on the very first line. An XML declaration looks like this:

<?xml version="1.0"?>

To date, the version attribute is always equal to "1.0," although W3C issued a Candidate Recommendation of the XML 1.1 specification in October 2002 and expects to issue a finalized Recommendation in 2003. Other optional attributes of the XML declaration include the following:

Encoding specifies the character set used in the document. It defaults to UTF-8. A document may also use Unicode sets such as UCS-2 and UCS-4 or ISO character sets such as ISO-8859-1 (Latin-1/West European).

Standalone specifies whether the document requires an external file. Setting standalone to "yes" means that the XML document does not require an external DTD or schema or any other external file. We would set this to "no" if we were referencing an external DTD or schema file. DTDs and schemas are discussed in the next section of this chapter.

Besides the XML declaration, the prolog may contain additional processing instructions. Processing instruction tags start with <? and end with ?>. Processing instructions are directives to the XML processor, or parser (the application that reads and interprets our XML). Any processing instructions that we use must be understood by our parser; they are not built into the XML Recommendation. A commonly used processing instruction links a stylesheet to the XML document and is understood by the parsers in both Internet Explorer 5 and Netscape 6; it takes the form

<?xml-stylesheet type="text/css" href="mystyle.css"?>

where the href value names the stylesheet file. The prolog may contain three more things: comments and white space, which are discussed in Syntax, below, and the document type definition (DTD), which is discussed in the next section of this chapter.
The Root Element. The first element that comes after the prolog is known as the root element. All XML documents must have root elements. The root element consists of a tag pair and includes everything between its starting tag and its ending tag. All other elements in the XML document are nested between the starting and ending tags of the root element. In the simple XML example shown at the beginning of this chapter, <payroll company="Acme"> is the starting tag of the root element and </payroll> is the ending tag. It's like saying, "This document is a payroll."

Miscellaneous Parts. Optional miscellaneous elements in an XML document may consist of comments, processing instructions, and white space. We have already met processing instructions in the prolog. There are other processing instructions that may be used anywhere in an XML document. Again, the constraint is that the processing instruction must be understood by the parser.

Syntax
W3C specifies that a well-formed XML document must conform to the following syntax rules.

Tags are delimited by "greater than" and "less than" brackets. Element tags start with < and end with >.

Elements consist of a starting tag, an ending tag, and everything in between. Just as the root element must have a closing tag, so too must all other elements. The closing tag consists of the name of the opening tag, prefixed with a slash ("/"). Attributes, if present in the opening tag, are not repeated in the closing tag. For example,

<salary>$42,000</salary>

If the element does not contain text or other elements, we may abbreviate the closing tag by simply adding a slash ("/") before the closing bracket in our element. For example,

<bonus/>

Here, <bonus/> is a so-called empty element, as is <bonus></bonus>.

Elements must be properly nested. In the strict XML hierarchy, there are parent-child relationships between all elements. The root element may contain one or more child elements. Each child element, in turn, may act as parent to its own children. Child elements must be correctly nested within their parent elements. For example, in HTML we can get away with this:

<b><i>This text is bold and italic</b></i>

But XML requires that we close child elements before closing their parents:

<b><i>This text is bold and italic</i></b>

XML tags are case sensitive. Whereas <salary>...</salary> is well-formed, <salary>...</Salary> and <SALARY>...</salary> are not.
Attribute values must be enclosed in quotation marks. <employee id="23"> is well-formed, but <employee id=23> is not.
Comments in XML. The syntax for writing comments in XML is the same as that used in HTML. For example,

<!-- This is a comment -->

XML preserves white space within elements. HTML ignores white space. An HTML statement such as

<p>Hello.
How are you?</p>

looks like this when rendered by a browser:

Hello. How are you?

XML parsers, however, preserve the element's white space:

Hello.
How are you?

On the other hand, parsers ignore the vertical and horizontal white space used between elements to make code more readable, such as the indentation of child elements within their parents.

XML converts CR/LF to LF. In Windows applications, a new line of text is stored as two characters: CR LF (carriage return, line feed). In Unix applications, a new line is stored as a single LF character. Some applications use only a CR character to store a new line. In XML, a new line is always stored as LF.

Only five entity references are predefined. Like HTML, XML uses entities, or escape sequences, to allow us to use special characters within XML elements. For example, say we need to state a numerical relationship, such as a > b, within an element. We are trying to say, "a is greater than b," but we run into a problem because XML parsers interpret the "greater than" symbol (">") as the end of an element. Instead, we have to use the following entity reference:

a &gt; b

Here, "&gt;" is the entity reference for the "greater than" symbol. In XML, entity references always start with an ampersand ("&") and end with a semicolon (";"). The W3C spec defines, and XML parsers recognize, only the following five entity references:
&amp;    The & character.
&lt;     The < character.
&gt;     The > character.
&apos;   The ' character.
&quot;   The " character.
If we wish to use any other entity references, we must define them in a DTD or schema. Data contained within XML elements are either CDATA or PCDATA. PCDATA (parsed character data) are just normal, everyday data, such as the "$34,000" in <salary>$34,000</salary>. Data contained within element tags are always considered PCDATA unless specifically declared as CDATA. PCDATA are text that will be parsed by a parser: any tags inside the text will be treated as markup, and entities will be expanded. CDATA (character data) are text that will NOT be parsed by a parser. Tags inside the text will NOT be treated as markup, and entities will not be expanded. CDATA are typically used for large blocks of text. Their syntax looks like this: