ISSN 0307-4358
Volume 34 Number 2 2008
Managerial Finance Effects of computer innovation on financial practice Guest Editor: Drew B. Winters
www.emeraldinsight.com
CONTENTS

Access this journal online _________________________ 74
Editorial advisory board ___________________________ 75
Guest editorial ____________________________________ 76
The hidden financial costs of ERP software
James T. Lindley, Sharon Topping and Lee T. Lindley ________________ 78
Y2K: is there a lesson in the bug that did not bite?
Ernest W. King and Drew B. Winters ______________________________ 91
The role of computer usage in the availability of credit for small businesses
Chaitanya Singh and Mark D. Griffiths ____________________________ 103
Internet auctions as a means of issuing financial securities: the case of the OpenIPO
Steven L. Jones and John C. Yeoman ______________________________ 116
Using instant messenger in the finance course
Stuart Michelson and Stanley D. Smith _____________________________ 131

Access this journal electronically
The current and past volumes of this journal are available at www.emeraldinsight.com/0307-4358.htm. You can also search more than 175 additional Emerald journals in Emerald Management Xtra (www.emeraldinsight.com). See the page following the contents for full details of what your access includes.
www.emeraldinsight.com/mf.htm

As a subscriber to this journal, you can benefit from instant, electronic access to this title via Emerald Management Xtra. Your access includes a variety of features that increase the value of your journal subscription.

How to access this journal electronically
To benefit from electronic access to this journal, please contact [email protected]. A set of login details will then be provided to you. Should you wish to access via IP, please provide these details in your e-mail. Once registration is completed, your institution will have instant access to all articles through the journal's Table of Contents page at www.emeraldinsight.com/0307-4358.htm. More information about the journal is also available at www.emeraldinsight.com/mf.htm.

Our liberal institution-wide licence allows everyone within your institution to access your journal electronically, making your subscription more cost-effective. Our web site has been designed to provide you with a comprehensive, simple system that needs only minimum administration. Access is available via IP authentication or username and password.

Emerald online training services
Visit www.emeraldinsight.com/training and take an Emerald online tour to help you get the most from your subscription.

Key features of Emerald electronic journals

Automatic permission to make up to 25 copies of individual articles
This facility can be used for training purposes, course notes, seminars etc. It applies only to articles of which Emerald owns copyright. For further details visit www.emeraldinsight.com/copyright

Online publishing and archiving
As well as current volumes of the journal, you can also gain access to past volumes on the internet via Emerald Management Xtra. You can browse or search these databases for relevant articles.

Structured abstracts
Emerald structured abstracts provide consistent, clear and informative summaries of the content of the articles, allowing faster evaluation of papers.

Key readings
This feature provides abstracts of related articles chosen by the journal editor, selected to provide readers with current awareness of interesting articles from other publications in the field.

Non-article content
Material in our journals such as product information, industry trends, company news, conferences, etc. is available online and can be accessed by users.

Reference linking
Direct links from the journal article references to abstracts of the most influential articles cited. Where possible, this link is to the full text of the article.

E-mail an article
Allows users to e-mail links to relevant and interesting articles to another computer for later use, reference or printing purposes.

Additional complimentary services available

Xtra resources and collections
When you register your journal subscription online you will gain access to additional resources for Authors and Librarians, offering key information and support to subscribers. In addition, our dedicated Research, Teaching and Learning Zones provide specialist ‘‘How to guides’’, case studies and interviews, and you can also access Emerald Collections, including book reviews, management interviews and key readings.

E-mail alert services
These services allow you to be kept up to date with the latest additions to the journal via e-mail, as soon as new material enters the database. Further information about the services available can be found at www.emeraldinsight.com/alerts

Emerald Research Connections
An online meeting place for the world-wide research community, offering an opportunity for researchers to present their own work and find others to participate in future projects, or simply share ideas. Register yourself or search our database of researchers at www.emeraldinsight.com/connections
Choice of access
Electronic access to this journal is available via a number of channels. Our web site www.emeraldinsight.com is the recommended means of electronic access, as it provides fully searchable and value-added access to the complete content of the journal. However, you can also access and search the article content of this journal through the following journal delivery services:
EBSCOHost Electronic Journals Service: ejournals.ebsco.com
Informatics J-Gate: www.j-gate.informindia.co.in
Ingenta: www.ingenta.com
Minerva Electronic Online Services: www.minerva.at
OCLC FirstSearch: www.oclc.org/firstsearch
SilverLinker: www.ovid.com
SwetsWise: www.swetswise.com

Emerald Customer Support
For customer support and technical help contact:
E-mail: [email protected]
Web: www.emeraldinsight.com/customercharter
Tel: +44 (0) 1274 785278
Fax: +44 (0) 1274 785201
EDITORIAL ADVISORY BOARD
Professor Kofi A. Amoateng, NC Central University, Durham, USA
Professor Felix Ayadi, Jesse E. Jones School of Business, Texas Southern University, USA
Professor Mohamed E. Bayou, The University of Michigan-Dearborn, USA
Dr Andre de Korvin, University of Houston-Downtown, USA
Dr Colin J. Dodds, Saint Mary's University, Halifax, Nova Scotia, Canada
Professor John Doukas, Old Dominion University, Norfolk, Virginia, USA
Professor Uric Dufrene, Indiana University Southeast, New Albany, Indiana, USA
Professor Ali M. Fatemi, De Paul University, Chicago, Illinois, USA
Professor Iftekhar Hasan, New Jersey Institute of Technology, USA
Professor Suk H. Kim, University of Detroit Mercy, Detroit, USA
Professor John Leavins, University of Houston-Downtown, USA
Professor R. Charles Moyer, Wake Forest University, Winston-Salem, North Carolina, USA
Dr Khursheed Omer, University of Houston-Downtown, USA
Professor Moses L. Pava, Yeshiva University, New York, USA
Professor George C. Philipatos, The University of Tennessee, Knoxville, Tennessee, USA
Professor David Rayome, Northern Michigan University, USA
Professor Alan Reinstein, Wayne State University, Detroit, Michigan, USA
Professor Ahmed Riahi-Belkaoui, The University of Illinois at Chicago, USA
Professor Mauricio Rodriguez, Texas Christian University, Fort Worth, Texas, USA
Professor Salil K. Sarkar, Henderson State University, Arkadelphia, Arkansas, USA
Professor Atul A. Saxena, Mercer University, Georgia, USA
Professor Philip H. Siegel, Monmouth University, New Jersey, USA
Professor Kevin J. Sigler, The University of North Carolina at Wilmington, USA
Professor Gordon Wills, International Management Centres, UK
Professor Stephen A. Zeff, Rice University, Texas, USA
Managerial Finance Vol. 34 No. 2, 2008 p. 75 # Emerald Group Publishing Limited 0307-4358
MF 34,2
76
Guest editorial

Introduction and executive summary
This issue of Managerial Finance examines various issues on the role of computers in business financial decisions. Computers have become an important component of our daily lives, both at work and at home, by providing assistance in our regular daily activities. That is, computers were designed to help us gather and process data more quickly, with the intended result that we make informed decisions more quickly. Accordingly, my expectation is that computers should not alter the decision-making process, but should provide for more informed decisions. However, introducing computers to the decision-making process may have unintended effects on that process.

The five papers in this issue cover the following topics: two papers related to capital budgeting, one paper on debt availability, one paper on internet IPOs, and one paper on using instant messaging to manage people in remote locations. This set of papers mirrors the primary decisions a business has to make, i.e. what assets to acquire, how to pay for those assets, and how to manage its people. The five papers are listed below, each title followed by a brief summary:

(1) The hidden financial costs of ERP software. Strategic planning and implementation require flexibility. The paper identifies and discusses a serious threat to the flexibility of firms and ultimately to the value of the firm: enterprise resource planning (ERP) systems. As configured, these systems create major distortions in the corporate decision-making process by raising the cost of making value-enhancing decisions and negatively influencing the overall capital-budgeting process.

(2) Y2K: is there a lesson in the bug that did not bite? This paper examines the market's reaction to Y2K progress by banks during 1999 and finds no market reaction to bank progress on solving Y2K. The results suggest that required capital projects are best purchased from experts unless new products will result from the required project.

(3) The role of computer usage in the availability of credit for small business. The purpose of this paper is to provide a descriptive analysis of the role of computer usage in determining the credit score for small business owners. Our results suggest that computer usage, with one puzzling exception, has no effect whatsoever on the determination of credit policy by financial service providers to these firms. Rather, it appears that standard credit analysis and risk measures dominate the decision-making process.
Managerial Finance Vol. 34 No. 2, 2008 pp. 76-77 # Emerald Group Publishing Limited 0307-4358
(4) Internet auctions as a means of issuing financial securities: the case of the OpenIPO. The investment bank WR Hambrecht+Co (Hambrecht) takes companies public using its proprietary OpenIPO, an internet-based, online auction process that offers equal access to any investor with a Hambrecht brokerage account. The purpose of this paper is to analyze the OpenIPO process, vis-à-vis traditional bookbuilding, and evaluate the suitability of the OpenIPO for various types of companies, market conditions, and assets. We conclude that the OpenIPO's main advantage is that it precludes many of the abuses recently observed in investment banking; however, it is not well suited for complex businesses that are either difficult to value or far removed from the public eye. Thus, we foresee the OpenIPO as supplementing, rather than supplanting, the traditional bookbuilding method.

(5) Using instant messenger in the finance course. In this paper, we investigate the benefits of using instant messaging (IM) in finance courses. We discuss the advantages and disadvantages of using IM and how to implement it, and provide personal examples of using IM in our classes. Almost 100 per cent of the students who use IM for class feel that it is useful. Through its increasing usage, IM has solved many communication problems in corporations.

The first two papers point out some unintended consequences of using computers on the asset side of a business's balance sheet. The first paper shows that investment in ERP systems can inhibit changes and thus interfere with the selection of value-adding projects. The second paper examines the Y2K problem in computer software, where software needed changes to handle the transition from 1999 to 2000 (two-digit years would otherwise roll over to 1900). The authors examine this change for banks, so the investment in the changes is not a new product for the banks but simply allows banks to continue business as usual. The authors find no market reaction to banks' level of preparation for Y2K. The authors suggest that banks would have been better off buying the solution to the problem than developing the solution internally.

The third and fourth papers deal with financing a business's assets. The third paper examines whether computer usage by small businesses affects their access to debt. The question is whether computer usage proxies for qualities that lenders find desirable in their business customers. The authors find that computer usage does not alter lender credit decisions.

The fourth paper examines a new internet-based IPO process for raising equity capital for businesses. The authors examine whether this new process could replace the existing process, which is fraught with conflicts of interest and the potential for insider dealings. The authors conclude that the new internet IPO process is best suited as a supplement to the current process rather than as a replacement for it.

The fifth paper moves away from issues related to assets, debt, and equity to look at the management of people. The authors examine the use of IM in finance courses. They find that it improves communication outside of class hours and therefore suggest that businesses could improve communication among employees in different locations through the use of IM.

The conclusion I draw from these five papers is that computers are tools to enhance how a business operates. Computers do not change the core operations of a business nor how those operations are funded, but carry the caveat that major investments in computer hardware and software can influence how a business responds to opportunities and problems.

Drew B. Winters
The current issue and full text archive of this journal is available at www.emeraldinsight.com/0307-4358.htm
The hidden financial costs of ERP software

James T. Lindley and Sharon Topping
University of Southern Mississippi, Hattiesburg, Mississippi, USA, and
Lee T. Lindley
Information Systems Consultant, Mechanicsville, Virginia, USA

Abstract
Purpose – The purpose of this paper is to detail how the adoption of enterprise resource planning (ERP) systems creates major distortions in the corporate decision-making process.
Design/methodology/approach – The approach is to focus on the distortion in the capital-budgeting process of corporations emanating from the rigidity of ERP software. The rigidity negatively influences decision-making because ERP software often dictates that the firm must change its core business procedures and processes to fit the software.
Findings – Lack of flexibility limits the introduction of new products or the targeting of a new customer segment by increasing costs and imposing delays in implementation.
Research limitations/implications – Firms would benefit from performing detailed analysis of the impact of ERP systems on their ability to make operational decisions.
Originality/value – This paper focuses on the problem of decreased flexibility in making changes in the production and accounting components of the firm when purchasing and installing ERP systems that cannot accommodate minor or major changes in the corporation.
Keywords Manufacturing resource planning, Capital budgeting, Decision-making
Paper type Conceptual paper
Managerial Finance Vol. 34 No. 2, 2008 pp. 78-90 # Emerald Group Publishing Limited 0307-4358 DOI 10.1108/03074350810841277
Introduction
Strategic restructuring, innovation implementation, and continuous improvement are strategies used by firms in their quest for greater efficiency and lower costs to increase firm value. Successful implementation of these strategies requires a firm to make marginal changes across a multitude of dimensions of the firm (see Kanter et al., 1992; Nadler and Tushman, 1997; Tushman et al., 1997; Tushman and Romanelli, 1985). Marginal changes in production, sales, human resources, or accounting can lower costs, increase efficiency, or increase sales. A crucial component in making these changes is flexibility. Decision makers with a larger opportunity set for making changes have a greater possibility of making modifications that enhance the value of the firm.

In this paper, we identify a serious threat that is affecting decision flexibility in firms and ultimately their ability to increase value. We argue that the adoption of enterprise resource planning (ERP) systems, as they have been configured, has created major distortions in the corporate decision-making process by raising the cost of making value-enhancing decisions and negatively influencing the overall capital-budgeting process[1]. This, in turn, has led to lower firm value. The increased cost of change due to ERP systems leads to rejection of many cost-saving or efficiency-enhancing projects that would otherwise be adopted. For example, when changes to software are required to accommodate a cost-saving change in an accounting procedure, the benefits in accounting are weighed against the present and future costs of changing the software. Additionally, other important and valuable changes in other parts of the firm are not undertaken because the cost of software changes becomes the paramount concern and driving force in decision-making.
Over the past two decades, the exponential increase in computing power has provided multiple opportunities for firms to innovate and increase firm efficiency. Increased computing power also has increased firms' ability to act and react to market change and to make internal changes that increase the value of the firm. Historically, such important innovations have increased firm productivity and decreased costs for long periods without significant tradeoffs. Over time, these cost-benefit tradeoffs (with positive financial outcomes) become marginal: economic benefits decrease while costs increase. Some of this narrowing of benefits relative to costs becomes readily apparent before major investments are made. However, some is discovered only after major investments and changes have been undertaken. Investment decisions in which it is apparent early that costs outweigh benefits (negative net present value [NPV] projects) are not implemented. Those discovered long after a project is accepted impose long-term costs on the firm, leading to a decline in firm value.

After a decade of utilizing computer software that expanded the integration of information across functional components of the firm, it has become doubtful whether further integration can escape diminishing marginal returns (Kerstetter, 2003). More important, the marginal costs of integrating information increase sharply while the complexity of the software system reduces the degrees of flexibility for the firm to change and innovate. The integration of information across functional components of the firm is appealing because it provides greater control and accountability. The initial productivity gains from integration of information were large; however, ERP software firms have since developed and sold software that takes integration to a much higher level.
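The NPV screen mentioned above can be made concrete with a short sketch. The discount rate and cash-flow figures below are purely hypothetical illustrations (they are not drawn from the paper): a large ERP-style outlay at time zero followed by a stream of annual savings, evaluated with the standard accept/reject rule.

```python
def npv(rate, cash_flows):
    """Net present value of a series of cash flows.

    cash_flows[0] is the initial outlay at time zero (negative);
    cash_flows[t] is the net cash flow at the end of year t.
    """
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: $1m outlay now, $300k in savings per year for 4 years.
flows = [-1_000_000, 300_000, 300_000, 300_000, 300_000]

project_npv = npv(0.10, flows)  # 10 per cent discount rate (assumed)
accept = project_npv > 0        # classic NPV accept/reject rule
```

At the assumed 10 per cent rate the discounted savings fall short of the outlay, so the project is rejected; the paper's point is that ERP rigidity effectively raises the cost side of this comparison for otherwise value-enhancing projects.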
ERP programs integrated the firm's data and systems into one package, and provided ‘‘best business practices’’ integration of information across manufacturing, financial, and human resources operations[2]. ERP systems have paid off for many firms, particularly firms whose existing structure was amenable to accommodating an ERP system (Johnston and Cotteleer, 2002). Nevertheless, even when implemented successfully, ERP systems that integrate data across the major functions of a large organization are not benign: they can have a negative impact on the functional aspects of the firm. Integration of data across functional areas reduces the options available to decision makers to make changes within a functional area because information systems are rigid (Brown and Hagel, 2003). Decision makers cannot make changes that improve efficiency or reduce costs without altering the ERP software. Because integrated software systems are difficult and expensive to alter, the costs of innovation and change are higher. The conflict between data integration and the ability to make changes occurs because ERP systems require a level of integration that conflicts with basic economic and management principles. Namely, ERP systems distort the authority relation that should exist between line and staff functions. That is, decision-making authority is superseded or even transferred from a line or core business function to an ancillary or auxiliary service. As such, the staff or auxiliary department becomes dictating, rather than facilitating or advisory. Information technology (IT) or management information system (MIS) departments are but one of many ancillary functions in a firm, including human resources, planning, accounting, and legal departments. Although crucial to the success of the firm, these staff functions are only indirectly involved in the core business of producing and selling a product or service.
As such, they should serve in an advisory or facilitating role in regard to the line function and should be only indirectly involved in decision-making related to the core business.
Conversely, the philosophy behind ERP systems is to bend the firm's core business procedures and processes to fit ERP software (Turbit, 2003). As a result, ERP system requirements become lexicographically ordered to dominate all other firm activities. Firms, however, have multifactor production functions and cannot have a lexicographic ordering of priorities if they are to be successful. Integration of MIS functions cannot take precedence over all other activities; the MIS department cannot be the driving force behind all decision-making. Profit maximization suggests that inputs be utilized to the point at which marginal revenue product equals marginal cost. Although this is obvious when stated, the conditions under which ERP systems are sold, implemented, and serviced create an environment in which firms find their software system has become a major impediment when changes are considered in core functions[3]. In contrast, successful firms are not those in which most or all activities of the firm are dictated by the human resource department, planning department, accounting department, legal department or IT department. History has demonstrated that firms that have allowed staff or auxiliary functions to dictate consistently to line functions have suffered[4]. For instance, planning departments often accrue decision-making authority to the point of dominating the strategic future of the firm. One classic case is General Motors (GM) in the late 1970s, when its complex planning process led to serious delays in new product development and higher unit costs than its competitors, putting the company at a disadvantage in the market. The planning process was so intrusive throughout the firm that to produce the Saturn, GM had to circumvent the planning department's authority and create a new stand-alone division (Miles and Snow, 1994). Another example is IBM after the six-year antitrust suit brought against the company by the Justice Department in the 1970s.
Following from that, the legal department became such a driving force that no decisions – product development or otherwise – were made without the company attorney's approval (Carroll, 1993). IBM was brought to a virtual standstill, missing important innovations and product improvements.

To examine the hidden costs of ERP and the threat to decision flexibility, we present a brief review of ERP software systems, and we then provide details on how ERP systems impede change and innovation in a firm. We provide a summary of our argument in the conclusion.

The history and impact of ERP systems
ERP software systems have mixed performance results. Early on, the failure rate (i.e. not fully implemented after 36 months) was estimated to be 70 per cent (Gillooly, 1998). Headline failures include Hershey Foods Corporation, Whirlpool, Gore-Tex (Calogero, 2000), and to a lesser extent Dow Chemical, Boeing, Dell Computer, Apple Computer, and Waste Management (Osterland, 2000). Nonetheless, corporations, government organizations, and universities have continued to implement ERP systems (Donovan, 2001), although there are notable exceptions, such as Wal-Mart and Microsoft, which have relied on internally generated systems (Iansiti, 2003). Some firms also have added a second software package to maintain a legacy system using the data collected by their ERP system. In addition to missing implementation deadlines, ERP systems often came in over budget and were accompanied by serious turmoil and disruption in the firm. The disappointing performance and outright failures were routinely blamed on the host firm rather than attributed to software failure or limitations. Examples of host firm failures include a failure to train employees properly, a failure to maintain
accurate data records, and a reluctance to change the host firm's ‘‘bad business practices’’. In keeping with the concept of bending the firm's core business procedures and processes to fit ERP software, bad business practices were defined as practices that do not conform to the software rather than practices that are inefficient or ineffective (Southwell, 2003; Turbit, 2003). Occasionally, there is an admission that the firm was a mismatch with the software – a situation that indirectly implicates the software vendor as either uninformed about its own software or knowingly selling a product that would not meet expectations (Osterland, 2000). Seldom are implementation failures attributed to the deficiencies of the software or the underlying strategy of bending the firm to fit the software.

The illusion of cost reduction
A selling point for ERP systems was that they would lower long-run costs. In the short run, costs were expected to be high due to implementation and training. The future cost savings were expected to result from less on-site software development, plus the efficiencies created in other areas such as production, sales, inventory, and accounting. ERP vendors suggested that the firm could reduce the amount and quality of in-house staff currently involved in development and maintenance of software. Vendors presented a plausible argument that development and maintenance of software were not the purchaser's core competency and that it would be more efficient and effective to buy the expertise. Upper-level management may have found this argument persuasive based on their past experiences. IT staffs often were considered expensive, technically focused, and less amenable to compromise than other staff. Moreover, they often brought truth in the form of bad news to management, using terms and concepts that were only vaguely familiar to others. Into this environment arrived the sales staff of a major software provider.
They seldom brought bad news about what could not be done; instead, they intimated that adopting their software would fix all of the MIS problems, including many heretofore unknown to management. The sales pitch was presented by people selected for their communication skills, who revealed, along with other pleasant news, that adoption of the software package would reduce the need for in-house IT personnel. The crest of the ERP adoption wave was also driven by the much over-hyped year 2000 fears, with the consequent apparent need to make a major investment under a hard deadline. Given this scenario, it is easy to see how attractive it was for firm executives to decide to purchase the ERP system, which promised to make them more efficient and reduce their dependence on in-house staff. The issues presented supporting the purchase of ERP packages were persuasive, and future problems associated with making changes in processes over time were not made clear. Also unanticipated were implementation costs[5]. Because the ERP vendor had incentives not to raise these issues, any warning would have had to come from inside the firm. As discussed later, the level at which decisions were made and the contentious relationship with IT staff reduced the possibility of these objections being heard or heeded.

How software became a decision maker
Major corporations did not consciously set out to elevate computer software to a position in which it would dominate decision-making across the firm. Firms discovered the extent to which software dictated decisions in an incremental fashion, on a decision-by-decision basis. In retrospect, firm executives operated on insufficient information
and background knowledge at the time the implementation decision was made. At least three factors contributed to this lack of information and knowledge.

One factor was the speed with which computer and software changes occurred. The rapid advancement in the speed of machines and the complexity of software required decision makers, whose expertise was in other areas, to make more and more decisions about acquiring and managing IT processes. The IT world changed much faster than the learning curve of most organizations.

A second factor was the composition of IT staff and their level of prominence in the decision-making process. By the very nature of the functions IT staff perform, IT employees differ in background and culture from the more traditional management groups in large firms. Thus, clashes occurred between the traditional corporate culture and the less conforming culture of narrowly focused IT technicians. Often, computer and software activities were outsourced to reduce the costs of what were perceived to be expensive employees and activities. As a result of outsourcing, the core group with computer and software expertise within the firm was often either gone or so reduced in stature that it was a small factor in the decision process (Outsourcing Not a Company Cure-All, 2003).

The third factor was the change in the relative percentage of the firm's one-time resources that were necessary to purchase an ERP system. Because ERP systems were comprehensive, a very large investment was made at time zero instead of being spread over time. The outlay was much more visible than the smaller incremental costs previously incurred, and the larger investment cost led to the decision being made at a higher managerial level than before. In most cases, there was less expertise at the chief executive officer level. A situation developed in which the cost was high, expertise was limited, future costs were uncertain, and ERP sales personnel promised large savings.
Corporate-level decision makers paid less attention to the contents of the software product and relied more on information provided by software vendors[6]. Because purchasing an MIS system is time-consuming and difficult, it is reasonable that CEOs (or managers) attempt to economize on the decision process. However, the decision could have future negative implications if it turns out badly. Both the risk of being wrong and the burden of evaluating reams of information make it attractive for decision makers to find short cuts in the process. One of those short cuts is to monitor the actions of other firms in the field and adopt or mimic their practices (DiMaggio and Powell, 1983; Schrage, 2002). Similar to a bandwagon effect, managers mimic practices that are implemented by a large number of firms in the field, thereby taking for granted the legitimacy or efficiency of the behavior without further evaluation (Haunschild and Miner, 1997). Moreover, managers imitate the actions of larger or more nationally known firms in the field, mistaking public visibility for legitimacy of the practice (Kostova and Roth, 2002). Specific to this paper, if a large successful company adopts a particular ERP software system, this legitimizes the practice and serves as a signal to decision makers in other firms, which, in turn, mitigates their risk exposure. In the event of a bad outcome, management decisions are less likely to be criticized if other major firms in the field have also implemented the ERP system.

CEOs also face agency problems that may not be evident during the decision-making process. One agency problem is the result of a conflict between the goals of ERP users and the goals of ERP providers. The goal of ERP users is to continue using existing versions of software if they serve their purpose and have few ‘‘bugs’’. ERP users also wish to continue to use any modifications to the system that are of importance to the firm (i.e.
the firm does not want to bear the cost of reinventing
modifications). Conversely, because ERP companies have specialized in developing proprietary software code, their revenue is enhanced by creating and selling new software at short, predetermined intervals. Revenues are further enhanced if older versions of the software can be made obsolete by no longer providing service or advice on old software. A second agency problem occurred because software providers severed the connection between software development and sales from the service and training segment of the process. Sales were handled directly by the software provider, and installation, maintenance, and fix-ups (changes to the software) were handled by consultants who were independent operators recommended by the software provider. This separation of sales from installation and maintenance insulated the software provider from the client with deleterious results. First, information feedback from the client about problems with the software was not presented to the software developer or, at best, was filtered. Second, the separation created an incentive for the software developer that was not in the client's best interest. This incentive was to create new and different software (called upgrades) that generated sales with only secondary regard to the needs and interests of the customer. Although the customer was free not to purchase an upgrade, software developers usually reduced support for older versions. Thus, the software firm benefited most from selling and altering the software often and from making it difficult for the customer to retain and maintain the existing software. Consultants benefited from the complexities of the software, modifying the software, and training employees. Large training costs, lost productivity from time away from the job, and a decline in overall employee morale were not evident in the beginning. The upshot was that software clients became entrapped due to the large initial outlays and the ballooning consulting costs.
For most firms, implementation took longer than anticipated, and productivity declined during this phase. Because ERP software did not integrate easily with other software and was not accommodating to changes in data entry or retrieval, the firm became a captive of the software provider. ERP software systems went from being an anticipated cost-saving or revenue-enhancing undertaking to an NPV abandonment exercise to determine if the firm could escape.

How ERP systems impede change and innovation
Sources of conflict between the goals and objectives of core business functions and ERP software occur when information is inputted into the IT system, when information is processed, and when information output is retrieved from the system. Most core functions provide information used elsewhere in the firm and utilize information inputted elsewhere in the firm. ERP software constrains each core function because the software prescribes precisely what data can be inputted and the format by which that data can be entered or retrieved. As noted earlier, deviation from the prescribed process may require costly programming that must be recreated when a new software version is developed. The constraint on entering and retrieving data forces the user to enter and process information in a ''straight-jacketed'' manner that is referred to as ''best business practices''. There is no accommodation for how the firm previously collected and inputted information, or how it may wish to do so in the future. Correspondingly, information output is likewise restricted. An original selling point of ERP software packages was that they conformed to the ''best practices'' criteria, which would cut costs and increase efficiency. This premise was based on the notion that a standard exists that transcends businesses and
The hidden financial costs of ERP software 83
MF 34,2
84
firms without regard to how they are organized or what products or services they provide. If these specific practices are not currently in place in the firm, processes and employees have to go through major changes that are not limited to the usual IT functions. At a minimum, the firm would have to change how it collected, processed, reported, and structured information. These changes were undertaken without sufficient evaluation of the negative impact on many aspects of the firm. Some of these impacts surfaced quickly, particularly those that affected customers or suppliers. Other impacts, such as the toll on the human capital assets of the firm, were less obvious and appeared with a lag[7]. From a financial perspective, ERP software made the innovations and changes that would increase efficiency and lower costs less likely. Because innovation and lower costs routinely are the result of changes in processes (i.e. how the business is run), prospective changes seldom are localized. Thus, changes affected the rest of the firm via the ERP software. Changes in a functional area that required changes in data input or data output were not possible or were possible only after costly changes were made to the software program. Consequently, the ability to make those marginal changes throughout a firm became limited. Particularly limited were small changes that, taken alone, were not large enough to cover the costs of software change. However, the sum of many small improvements over a period would have a significant effect on the bottom line of the firm.

How ERP software affected decision-making
ERP software restricted how data were entered and outputted for each function in the firm, and once adopted, each core function of the firm had to conform its IT interaction to that of other core functions of the firm.
Conforming may have brought more efficiency and lower costs, but it also meant that the firm could not make marginal adjustments to its processes that involved IT interaction without incurring additional costs. Also, core functions that interfaced with stakeholders outside of the firm became more difficult and costly than they had been previously. Brown and Hagel (2003) succinctly describe the dilemma facing firms with ERP systems. Introducing a new product or service, adding a new channel partner, or targeting a new customer segment – any of these can present unseen costs, complexities, and delays in a business that runs enterprise applications. The expense and difficulty can be so great that some companies abandon new business initiatives rather than attempt one more change to their enterprise applications. Far from promoting aggressive near-term business initiatives, enterprise architectures stand in their way. Several examples of how the rigidity of ERP software can affect decision-making in various functions are presented in the following discussion. The examples are for expository purposes and do not cover the complete spectrum of functional areas affected. These examples demonstrate the conflict between making innovative changes and the inflexibility of ERP software in handling problems in areas of sales, finance and accounting, inventory, human resources, and production scheduling. Conflict with sales. Consider that one or more customers have approached a major supplier about providing an interface so that customers can place their orders electronically and be billed in the same manner. Customers indicate that their order level would increase if they could use this method to purchase routine items, and research shows that this capability would attract additional customers. Customers vary in the types of software used and their level of sophistication, so the interface would need to be sufficiently general to accommodate several different customers. The
supplier firm finds that the ERP software has a rigid and complex interface for orders and billing that cannot accommodate this change without significant modification[8]. Consultants recommended by the ERP provider say they can develop some additional software that will accommodate this request, but the price tag for the software change exceeds the expected increase in profits from the increased sales. In addition, the consultants tell the firm that when the new upgrade comes out in 18 months they will have to repeat the process to make the additional software compatible with the new and improved ERP software. Given the cost, the NPV of the project often is negative and the project is rejected. Report generation problems. An important component for strategic change in any organization is the need for information in the form of reports and statements that are produced routinely in an acceptable format. However, information often is needed and sought that is not part of the routine reporting mechanism of the ERP software – information that may well have been an integral part of the internally developed systems replaced by the ERP system. For instance, troubleshooting when unusual problems occur, special attention required for new products, formulating strategic change, and marketing research for future products all require information. This essential information previously was made available using the pre-ERP ‘‘home-grown’’ firm-specific software. If this information does not fall into the category of best business practices dictated by the ERP system, costly and temporary modifications would have to be made[9]. The requests for unique or unusual information cannot be accommodated without incurring significant additional costs. Issues with human resources. Assume that a new union contract includes various wage payment options or deductions that are not standard in the ERP software. 
The ERP software will have to be adapted by consultants, at substantial cost, to accept these changes. Because the options are not standard, the consultants will be required to make adaptations each time the software is upgraded. The cost of the initial change and future changes should be considered as costs of the new agreement. However, it is more likely that these costs were not even considered during negotiations. Accounting/finance conflict. Customers are promised a $25 gift certificate to fill out a survey and 1,000 customers are surveyed. The sales group submits a list of customer names receiving the certificates to accounting as one transaction. The sales group assumes that the simplest way to handle the financial transaction is to batch process rather than submitting 1,000 individual transactions. However, the ERP software requires each certificate to be entered separately by customer name. The sales group will have to bear the cost of submitting 1,000 separate requests, and accounting will bear the cost of processing each of these requests. Inventory problems. For many companies, inventory levels are kept to a minimum by forecasting anticipated inventory needs and maintaining only a small level of backup inventory, thereby reducing the firm's capital investment in inventory. Imperative in keeping a low level of inventory are accurate records plus quick and accurate communication within the company and with suppliers. The problems encountered with rigid ERP software can make it more difficult to maintain low levels of inventory, particularly when doing so depends on communication among many departments and many outside suppliers. In each of the preceding cases, the opportunity cost of utilizing an ERP application compared to an alternative application was evident after implementation rather than in the planning stages. This cost is ex post the ERP purchase decision and only shows up as each innovation is evaluated. Real implementation costs were greatly
underestimated because the lack of information led CEOs to ignore the opportunity costs of ERP systems, which, in turn, led to a reduction of future innovation and change. It would be difficult to calculate the reduction in productivity and economic growth that has occurred over the aggregate of firms adopting ERP systems. However, the more important question for firms is how long they want to continue with ERP systems and lose opportunities to innovate and change. Also, how will the firm modify its interaction with ERP systems so as to reduce the barrier to innovation? As noted earlier, ERP systems violate the principle of separation of line and staff functions. In addition, ERP systems conflict with the principles of the theory of the firm. Coase's (1937) seminal article on the theory of the firm specifies that a firm is an organization that combines resources and activities to produce output at a cost lower than that provided by the market. Conversely, a firm will avoid producing within the firm those components that can be purchased at a price lower than the firm's internal cost. Such an arrangement is not a static condition; rather, firms change and evolve to take advantage of each situation that renders an internal cost lower than the corresponding market price. An example in which firms find it cheaper to purchase inputs instead of producing them internally would be automobile producers who purchase vehicle tires rather than producing their own. Another example is when large (over the road) truck manufacturers purchase truck engines from Cummins or Caterpillar rather than producing their own. Many factors contribute to situations in which firms change processes because internal costs exceed market prices. One example is a change in technology that lowers costs internally, so that the firm reduces its dependence on buying an input in the market and produces it internally.
The converse would be a situation in which increasing labor costs or lower-priced inputs from foreign markets create an incentive for the firm to purchase the input. In other cases, inefficient management or bad decision-making could lead to inefficiencies or create barriers to change. ERP software implementation falls into the latter category rather than the former. As noted earlier, however, the ''bad decision'' became apparent only after the implementation of the software. The rigidity of the software impeded the ability of the firm to make many of the marginal changes that would allow it to continue to have internal costs that are lower than market prices. It should be noted that it is not that market prices decreased, but that internal costs increased.

Conclusions
In this paper, we have argued that the adoption of ERP systems, as they have been configured, has created major distortions in the corporate decision-making process by raising the cost of making value-enhancing decisions and negatively influencing the overall capital-budgeting process. ERP systems have led to a rejection of many cost-saving or efficiency-enhancing projects that otherwise would have been adopted. Because ERP systems are structured such that the firm must bend its core business procedures and processes to conform to the ERP software, ERP system requirements take precedence over all other firm activities. This dominance of software over the decision process leads to suboptimal strategic outcomes and is neither profit maximizing nor firm value enhancing. ERP systems were expected to lower long-run costs, although in the short run, costs were expected to be high due to implementation and training. Many firms were successful in lowering costs and improving the bottom line because the software was
implemented efficiently and the system ''fit'' the firm. However, even these successful firms are not immune from the problems of inflexible and rigid software if the software does not fit the future firm. Innovation and lower costs routinely are the result of changes in processes (i.e. how the business is run), and these changes often affect other parts of the firm through the ERP software. To accomplish the high level of integration of data across functions, ERP software dictates that the firm must change its core business procedures and processes to fit the software. Consequently, the ancillary IT function becomes dictating rather than facilitating. From a finance perspective, ERP software raises the cost of capital budgeting. Although many large projects have cash flows sufficiently large to have a positive NPV, many of the cost-savings and efficiency-enhancing activities in firms come from numerous small marginal changes in processes. Over time, the compounding of the absence of these marginal changes will leave the firm less competitive. ERP systems have provided positive benefits despite their lack of flexibility. Thus, it is unlikely that ERP systems will be abandoned. The challenge will be to retain the benefits of ERP systems while providing flexibility to accommodate changes at low cost. To some degree, the sheer weight of increases in computing power lowers the cost of incremental changes. Lower computing costs together with the increased power and flexibility of the tools available to IT will make it possible in the future for firms to develop solutions that are not cost effective today. For instance, developing a ''practical'' solution to the gift certificate example described previously may have required the services of multiple highly skilled engineers to deliver a solution that works on the current platform using the available tools. On faster systems, a solution using less compute-efficient programming methods may be practical.
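The capital-budgeting effect described above can be illustrated with a simple NPV sketch. All figures below are hypothetical: each small process improvement is positive-NPV on its own, but a per-change ERP modification fee pushes each one below zero, and the aggregate value forgone by rejecting them all is substantial.

```python
# Hypothetical illustration: many small process improvements, each worth a
# modest annual saving, evaluated with and without a per-change ERP
# modification fee. All figures are invented for exposition.

def npv(cash_flows, rate):
    """NPV of a list of cash flows, where cash_flows[0] occurs at time 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

RATE = 0.10
YEARS = 5
ANNUAL_SAVING = 8_000        # each small change saves this much per year
OWN_COST = 5_000             # cost of making the change itself
ERP_MOD_FEE = 30_000         # consultant fee to alter the ERP software
N_CHANGES = 20               # number of candidate small changes

# NPV of one change without the ERP fee: clearly positive.
base = npv([-OWN_COST] + [ANNUAL_SAVING] * YEARS, RATE)

# NPV once the ERP modification fee is added: turns negative, so each
# change is rejected in isolation.
with_fee = npv([-OWN_COST - ERP_MOD_FEE] + [ANNUAL_SAVING] * YEARS, RATE)

# Aggregate value the firm forgoes by rejecting all of them.
forgone = N_CHANGES * base

print(f"NPV per change, no ERP fee:   {base:,.0f}")
print(f"NPV per change, with ERP fee: {with_fee:,.0f}")
print(f"Aggregate NPV forgone:        {forgone:,.0f}")
```

With these (invented) numbers, each change is worth roughly +25,000 on its own but about -4,700 once the modification fee is included, so twenty rejected changes forgo on the order of half a million in value.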
New tools and platforms abstract and hide much of the complexity, thus enabling greater productivity and requiring less technical skill from IT staff. Lower computing costs mean that firms need be less concerned with computational efficiency and can focus instead on increased IT productivity. This trend lowers the cost of incremental process improvement. Another effect of the exponential decline in the price of processing speed and storage capacity is the rise of the data warehouse. The highly tuned databases that underlie ERP systems are designed to facilitate transactions, not reporting. Much of the data that flows through the system is aged out and discarded or is otherwise inaccessible because of the adverse impact that retrieving it can have on system operations. Consequently, much of the information the firm painstakingly enters into the system is not readily available for analysis. As costs have plummeted, it has become possible to build huge databases using the historical data from the ERP systems. The data warehouses are built to support business analytics, the detailed and highly segmented analysis of the firm's past performance. Yet these information stores also make it possible to perform troubleshooting, strategic change planning, and marketing research for future products – functions that ERP systems may not have performed as well as the systems they replaced. One trend that is causing excitement in IT is systems architectures based on services – specifically, Web services based on open standards such as the HTTP protocol, XML, and SOAP. This architecture is more open and flexible than the current architecture that underlies the ERP systems. In today's environment, if a firm wants to make a connection between an external partner and its internal ERP system, the connection is a custom, one-of-a-kind connection. Each connection is specifically coded for its purpose. Any change to a system requires all of the different connections to be
reworked. Under the services architecture, connections are ''published'' as loosely coupled transactions with defined interfaces. The promise of this architecture is that as long as the user conforms to the published interface, changes to the underlying system can be made that are transparent to the consumer of the service. Upgrades can be made without reworking all of the connections. If ERP vendors migrate their products toward this architecture, the barrier to innovation is reduced. The services architecture is not a panacea. The underlying protocols and standards leave much room for variation. Companies and vendors must agree on vocabularies, security, and authentication. Not all changes are transparent, but the underlying protocols and standards make it much easier to manage changes. There is also a question of whether ERP vendors will truly support the open standards. Still, this model shows significant promise for increasing flexibility and driving down the cost of change for the firm. If technology can overcome some of the impediments inherent in ERP systems, the internal cost of change will decrease and there will be greater compatibility between the goals of the firm and its software systems. The higher capital budgeting costs imposed by the system will decline. However, technology will not correct the agency problems now inherent in ERP software systems (i.e. the incentive to create proprietary software that is upgraded, often creating planned obsolescence of previous versions). History shows that market forces work to reduce or eliminate sources of monopoly power. If so, then markets likely will reward ERP software firms that can incorporate the flexibility necessary to accommodate their clients' needs. Cost-reducing technology also may encourage firms to return to greater emphasis on in-house development of systems.
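The ''published interface'' idea behind the services architecture can be sketched in a few lines. The service names and versions below are hypothetical; the point is that the consumer depends only on the published contract, so the provider's internals (or ERP version) can change without any rework on the consumer's side.

```python
# A minimal sketch of loosely coupled, published interfaces: consumers code
# against a stable contract, so the provider can rework its internals
# without breaking callers. All names here are hypothetical.
from abc import ABC, abstractmethod

class OrderService(ABC):
    """The published contract: all a consumer may depend on."""
    @abstractmethod
    def place_order(self, customer_id: str, sku: str, qty: int) -> str:
        """Return an order id."""

class ErpV1OrderService(OrderService):
    def place_order(self, customer_id, sku, qty):
        # one internal implementation, e.g. writing to the current ERP tables
        return f"V1-{customer_id}-{sku}-{qty}"

class ErpV2OrderService(OrderService):
    def place_order(self, customer_id, sku, qty):
        # the upgraded internals; the published contract is unchanged
        return f"V2-{customer_id}-{sku}-{qty}"

def consumer(service: OrderService) -> str:
    # written once against the contract; unaffected by the V1 -> V2 upgrade
    return service.place_order("C42", "SKU-7", 3)

print(consumer(ErpV1OrderService()))  # works before the upgrade
print(consumer(ErpV2OrderService()))  # and after, with no rework
```

The same separation is what SOAP/XML Web services aim to provide across firm boundaries: the custom, one-of-a-kind connection is replaced by code written against a published interface.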
Even if in-house systems are more expensive, they may have greater value to the firm if they facilitate the ability to change – the hallmark of successful firms.

Notes
1. The phrase capital budgeting as used in this paper covers any decision that affects potential cash flows, whether large or small.
2. Some of the major vendors of ERP software systems are J.D. Edwards, Oracle, PeopleSoft, SAP AG, and Baan. More recently, there have been mergers and consolidation of software firms resulting in fewer, but larger, providers.
3. ''This inflexibility is endemic today. Big suites of enterprise-wide applications like those in ERP suites, designed to integrate disparate corporate information systems, dominate client-server architectures. Unfortunately, the one-time, ''big-bang'' and tightly defined way in which these applications have been implemented, as well as their massive bodies of difficult-to-modify code, mean the enterprise applications integrate business only by limiting the freedom of executives'' (Brown and Hagel, 2003).
4. We recognize that ancillary functions by their nature often constrain core activities. Examples are GAAP constraints imposed by the accounting function, lawful practices required by the legal department, or risk constraints imposed by the risk management function. However, the aforementioned ancillary functions are general in nature and advisory rather than controlling. In contrast, ERP systems are much more invasive into the processes of the firm. ERP systems touch almost all areas of the firm and most of the employees directly or indirectly. Indeed, ERP systems are designed to encompass the total firm.
5. ''Costs associated with ERP implementation are extensive and often unanticipated. Most companies fail to identify the true cost of ownership because they do not include many of the associated costs from hardware and software upgrades, inefficiencies moving to the new application, and other hidden costs'' (Best Software Inc., 2003).
6. In contrast, Ross and Weill (2002, 2003) suggest that more IT decisions should be made by top management.
7. Although out of the scope of this paper, two examples of the impact of ERP systems on human capital are serious morale issues and the alibi syndrome, in which employees can avoid legitimate changes (i.e. work) by claiming that the software will not allow it.
8. Electronic data interchange interfaces to ERP systems are notoriously complex, and interfaces usually were implemented only for the firm's largest customers because of the cost.
9. For an example in which historical records became an issue, see ''Briggs and Stratton: harnessing the power of its ERP systems'' (SAS Institute, Inc., 2002). After installing SAP in 1988, they lost a portion of their reporting layer that gave managers data in the specific form they needed to manage their areas. To continue utilizing these reports, they had to set up a second system using SAS that took the information from SAP and created a separate reporting system.
References
Best Software Inc. (2003), ''Industry trend: from tier one to mid-market solutions'', available at: www.bestsoftware.com/vertical/manufacturing/industry_trend.cfm (accessed 21 January 2005).
Brown, J.S. and Hagel, J. III (2003), ''Flexible IT, better strategy'', The McKinsey Quarterly, Vol. 4, available at: www.mckinseyquarterly.com/article_abstract.aspx?ar=1346&L2=13&L3=13 (accessed 21 January 2005).
Calogero, B. (2000), ''Who is to blame for ERP failures?'', Server World, June, available at: www.serverworldmagazine.com/sunserver/2000/06/erp_fail.shtml (accessed 21 January 2005).
Carroll, P. (1993), Big Blue: The Unmaking of IBM, Crown Publishing, New York, NY.
Coase, R.H. (1937), ''The nature of the firm'', Economica, New Series, Vol. 4 No. 16, pp. 386-405.
DiMaggio, P.J. and Powell, W.W. (1983), ''The iron cage revisited: institutional isomorphism and collective rationality in organizational fields'', American Sociological Review, Vol. 48, pp. 147-60.
Donovan, R.M. (2001), ''No magic cure will fix all ERP ills'', Advanced Manufacturing Magazine, available at: www.advancedmanufacturing.com/January01/implementing.htm (accessed 21 January 2005).
Gillooly, C. (1998), ''Enterprise management disillusionment'', Information Week, 16 February, available at: www.informationweek.com/669/69iudis.htm (accessed 21 January 2005).
Haunschild, P.R. and Miner, A.S. (1997), ''Modes of interorganizational imitation: the effect of outcome salience and uncertainty'', Administrative Science Quarterly, Vol. 42, pp. 472-500.
Iansiti, M. (2003), ''Integration: the right way, the wrong way'', CIO Magazine, 15 March, available at: www.cio.com/archive/051503/integration.html (accessed 21 January 2005).
Johnston, S.J. and Cotteleer, M.J. (2002), ''ERP: payoffs and pitfalls'', HBS Working Knowledge, 14 October, available at: hbsworkingknowledge.hbs.edu/item.jhtml?id=3141&t=strategy (accessed 21 January 2005).
Kanter, R.M., Stein, B.A. and Jick, T.D. (1992), The Challenge of Organizational Change, The Free Press, New York, NY.
Kerstetter, J. (2003), ''Commentary: business software needs a revolution'', Business Week Online, 23 June, available at: www.businessweek.com/magazine/content/03_25/b3838630.htm (accessed 21 January 2005).
Kostova, T. and Roth, K. (2002), ''Adoption of an organizational practice by subsidiaries of multinational corporations: institutional and relational effects'', Academy of Management Journal, Vol. 45 No. 1, pp. 215-33.
Miles, R.E. and Snow, C.C. (1994), Fit, Failure, and the Hall of Fame, The Free Press, New York, NY.
Nadler, D.A. and Tushman, M.L. (1997), Competing by Design: The Power of Organizational Architecture, Oxford University Press, New York, NY.
Osterland, A. (2000), ''Blaming ERP'', CFO Magazine, 1 January, available at: www.cfo.com/article.cfm/2987370 (accessed 21 January 2005).
Outsourcing Not a Company Cure-All (2003), Business Day, 20 October, available at: www.bday.co.za/bday/content/direct/1,3523,1461499-6078-0,00.html (accessed 21 January 2005).
SAS Institute, Inc. (2002), ''Briggs and Stratton: harnessing the power of its ERP systems'', available at: www.sas.com/offices/europe/austria/html/download/success/briggs_stratton.pdf (accessed 21 January 2005).
Schrage, M. (2002), ''Mimetic management'', Technology Review, Vol. 105 No. 5, p. 20.
Southwell, M. (2003), ''Poor project planning leads to ERP failure'', Arabic Computer News, July, available at: www.itp.net/features/details.php?id=930&tbl=itp_features (accessed 24 January 2005).
Turbit, N. (2003), ''ERP implementation: the trials and tribulations'', World Trade, Vol. 16 No. 7, p. 39.
Tushman, M.L., Newman, W.H. and Romanelli, E. (1997), ''Convergence and upheaval: managing the unsteady pace of organizational evolution'', in Tushman, M.L. and Anderson, P. (Eds), Managing Strategic Innovation and Change, Oxford University Press, New York, NY, pp. 583-94.
Tushman, M.L. and Romanelli, E. (1985), ''Organizational evolution: a metamorphosis model of convergence and reorientation'', in Staw, B.M. and Cummings, L.L. (Eds), Research in Organizational Behavior, JAI Press, Greenwich, CT, pp. 171-222.

Corresponding author
James T. Lindley can be contacted at:
[email protected]
Y2K: is there a lesson in the bug that did not bite?
Ernest W. King Department of Finance and Economics, College of Business Administration, University of Southern Mississippi, Hattiesburg, Mississippi, USA, and
Drew B. Winters Finance Department, Rawls College of Business, Texas Tech University, Lubbock, Texas, USA

Abstract
Purpose – The purpose of the paper is to determine if banks that solved the Y2K problem early created value for their shareholders.
Design/methodology/approach – The method of analysis is an event study.
Findings – The primary finding of the analysis is that solving the Y2K problem did not create value for bank shareholders. That is, announcements of solving the Y2K problem were not accompanied by positive stock price reactions.
Research limitations/implications – While the paper does not find support for a positive reaction to solving Y2K, it does find some evidence of concerns about banks that were having trouble solving Y2K. However, the sample size of banks with problems was small and therefore we caution readers about generalizing these results.
Practical implications – All banks needed to solve the Y2K problem, but those solving Y2K do not appear to create value for their shareholders. From this we conclude that it is better to buy the solution to a required project than to develop it internally.
Originality/value – This paper is of interest to anyone in a capital budgeting decision-making process that includes required projects.
Keywords Capital budgeting, Banks, Data security
Paper type Research paper
1. Introduction
Capital budgeting is one of the fundamental topics in any corporate finance course. We teach that after properly calculating the NPV of all the available projects, we should fund all the projects with a positive NPV because all positive NPV projects increase shareholder wealth. However, we also recognize that some companies impose capital spending constraints, which may limit the projects funded. Under spending constraints, we recommend that firms undertake the combination of projects with the highest total NPV that fits under the spending constraint. However, we warn that required projects must be funded first and that in the presence of required projects the funds available for discretionary projects are less than the spending constraint. On the topic of required projects, we seldom say much more than that these projects are often mandated by law, and provide an example, such as EPA-mandated reductions in harmful smoke stack emissions. However, there are potential strategic elements in required projects, and it is the goal of this paper to highlight possible strategic elements in required projects using one very well-known required project, the updating of technology to handle the Y2K problem. The authors thank Tom Lindley for his insightful comments.
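The funding rule described above (required projects first, then the highest-NPV combination of discretionary projects that fits under the remaining budget) can be sketched as follows. Project names and figures are invented for illustration, and a brute-force search over subsets is adequate for a handful of projects.

```python
# A sketch of capital rationing with required projects: fund required
# projects first, then pick the discretionary combination with the highest
# total NPV that fits under what remains. Figures are hypothetical.
from itertools import combinations

projects = [
    # (name, cost, npv, required)
    ("Y2K remediation",   400, -50, True),   # required even though NPV < 0
    ("Branch refit",      300, 120, False),
    ("New loan platform", 500, 200, False),
    ("ATM upgrade",       250,  90, False),
]
BUDGET = 1_000

required = [p for p in projects if p[3]]
discretionary = [p for p in projects if not p[3]]
remaining = BUDGET - sum(p[1] for p in required)

best, best_npv = (), float("-inf")
for r in range(len(discretionary) + 1):
    for combo in combinations(discretionary, r):
        if sum(p[1] for p in combo) <= remaining and \
           sum(p[2] for p in combo) > best_npv:
            best, best_npv = combo, sum(p[2] for p in combo)

print("Remaining budget after required projects:", remaining)
print("Fund:", [p[0] for p in best], "| total discretionary NPV:", best_npv)
```

Here the required Y2K project consumes 400 of the 1,000 budget, so the best affordable discretionary set is the branch refit plus the ATM upgrade (total NPV 210), even though the loan platform has the largest NPV on its own.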
Managerial Finance Vol. 34 No. 2, 2008 pp. 91-102 # Emerald Group Publishing Limited 0307-4358 DOI 10.1108/03074350810841286
Y2K, or the millennium bug, is the technology-based problem associated with changing from the year 1999 to the year 2000. Many technology-based applications require dates for reference, and historically the date has been represented as YYMMDD. This form of the date presents a problem because the years 1900 and 2000 take the same form, 00, and technology-based applications will always operate as though 00 is 1900 instead of 2000 because mathematically 00 is less than 99. Thus, any application that uses the date for reference and uses a two-digit year in the date will not function correctly when we cross the millennium to the year 2000. So, it is clear that Y2K was a real problem and that it clearly fits the definition of a required project. At this point we have not identified the strategic elements of a required project, in general, or Y2K, in particular. Before we discuss the strategic elements, we want to mention our sample to provide context for the discussion of the strategic elements of solving Y2K. We examine a sample of banks. Banks provide a convenient yet interesting sample group for studying Y2K. First, banks are an interesting sample because the primary business of banks is financial services, so a major disruption in the banking system would impact the entire economy. Accordingly, if the market is concerned about Y2K disruptions, then we should see an effect in bank returns. Second, banks are a convenient sample for analyzing Y2K because the SEC required banks to disclose their Y2K progress throughout 1999 in their 10Q reports. This allows us to track individual bank progress toward solving Y2K and divide our sample of banks into three groups:
(1) banks that solved Y2K;
(2) banks that think they will solve Y2K before 12/31/99; and
(3) banks that appear unlikely to solve Y2K by 12/31/99.
So, for banks, what are the strategic elements of solving Y2K?
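The two-digit-year failure described at the start of this section can be reproduced in a few lines, together with the ''windowing'' repair that many remediation projects used. This is an illustrative sketch; the pivot year of 50 below is an arbitrary choice, not a universal standard.

```python
# A minimal reproduction of the Y2K two-digit-year bug, plus the common
# "windowing" repair. Dates are stored as YYMMDD strings, as in the text.

def years_between_naive(start: str, end: str) -> int:
    """Buggy: treats the two-digit year literally, so '00' reads as 1900."""
    return int(end[:2]) - int(start[:2])

def expand_year(yy: int, pivot: int = 50) -> int:
    """Windowing fix: two-digit years below the pivot map to 20xx, the rest to 19xx."""
    return 2000 + yy if yy < pivot else 1900 + yy

def years_between_fixed(start: str, end: str) -> int:
    return expand_year(int(end[:2])) - expand_year(int(start[:2]))

# An account opened 31 Dec 1999, evaluated 1 Jan 2000:
print(years_between_naive("991231", "000101"))  # -99: time appears to run backward
print(years_between_fixed("991231", "000101"))  # 1: the century is recovered
```

Any calculation built on the naive comparison (interest accrual, account ageing, scheduling) inherits the same reversal at the millennium boundary, which is why remediation was a required project rather than an optional one.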
First, if Y2K is difficult to solve, then the first bank to successfully address Y2K has a first mover’s advantage. That is, at the point that the first bank solves Y2K, that bank is the only bank that can ensure uninterrupted access to its customers’ accounts. This allows that bank an opportunity to attract customers or to buy other banks having difficulty solving Y2K. Second, if Y2K is difficult to solve, then investors and creditors will have concerns about the bank as a ‘‘going concern’’ until it solves Y2K, which should decrease stock prices and increase the cost of debt. Third, a business that implements a required project potentially has a new product to sell. For most banks this is unlikely because the technology solutions for Y2K are likely system specific and therefore not easily saleable. Using a sample of banks, we conduct two tests to determine if positive stock returns were associated with banks that solved the Y2K problem before the end of 1999. The first analysis is an event study on the first bank to solve Y2K to determine if this bank achieved a first mover’s advantage when it solved Y2K. The first bank to solve Y2K was UMB Financial Corporation. Our event study shows insignificant stock returns associated with the announcement that UMB Financial Corporation solved Y2K. The second analysis divides the banks into groups based on how well the banks handled the Y2K problem and examines Y2K announcement returns for the group of banks that announced that they had solved their Y2K problem. The announcement returns for the group of banks that solved Y2K are not significantly different from zero. Thus, our results find no evidence of a first mover’s advantage in Y2K nor of a Y2K discount in the stock price of banks that solved Y2K.
So, what can we learn from the solution of an event that was estimated to cost the US economy more than $100 billion? Our primary lesson from Y2K is that the banks in our sample could not capture any strategic advantage from solving Y2K early. This suggests that the market assumed that, regardless of how difficult Y2K was to solve, the banking industry would find a solution in time for the change to the year 2000 and remain viable. Do our results suggest that in reality there is no strategic element to required projects? We do not think so. Instead, we suggest that businesses must be very careful with their strategic analysis of a required project. For example, it appears that no strategic opportunities from Y2K were available for the average bank. Banks that focus on data processing may have had some strategic opportunities with Y2K (although we do not test this in our paper).

2. Can Y2K be easily solved by all banks?
We test for stock price changes in response to announcements that banks solved the Y2K problem. Since solving Y2K means changes to existing systems and therefore does not provide any new products or services for the average bank, we would only expect a stock price reaction to an announcement of solving Y2K if solving Y2K is difficult. That is, the market will only attach value to solving Y2K if it believes that solving Y2K will be difficult. If solving Y2K is difficult, then two kinds of positive reactions are possible for announcements by banks of successfully solving Y2K. The two possible positive reactions can be described as:
(1) a first mover’s advantage; and
(2) going concern value.
A first mover’s advantage can only result in financial gains if the move is difficult to follow. If the move is easy to follow, then the competition will react quickly to eliminate any first mover’s advantage. An example from banking of a move that is easy to follow would be a bank lowering its Prime lending rate.
A lower lending rate relative to the competition would allow a bank to attract new customers. However, reducing lending rates is something that the competition can match quickly, so the first mover’s advantage is quite limited in this example. Solving the Y2K problem may have a first mover’s advantage because, at the time, major concerns existed about the ability of all banks to solve the Y2K problem. Failure to solve Y2K could result in a bank going out of business. Since the Y2K problem affected the ability of a bank’s technology to continue functioning properly after the change to the year 2000, the Y2K problem potentially affected a bank’s ability to provide customers access to their accounts. Inability of a bank to provide its customers access to their accounts would threaten the survival of the bank. Thus, if solving Y2K is difficult, then the market should discount a bank’s stock price until the bank announces that it has solved Y2K, thereby ensuring that the bank is a going concern into the year 2000. An announcement of solving Y2K then signals that the bank is a going concern and would be accompanied by a positive stock price change as the market removes the Y2K discount. Both types of positive stock price reactions require that the market believe that solving Y2K is difficult, and evidence suggests that at the time solving Y2K was believed to be difficult. For example, an 18 September 1998 American Banker article titled ‘‘Banks said to make progress on Y2K, but the job is not done’’ makes several references to concerns over banks’ ability to solve the problem. The most telling is that bank regulators had begun to assemble an acquirers list of banks that regulators
Lesson from Y2K
believe will solve Y2K and will be in position to acquire banks that do not. This suggests that bank regulators had major concerns about the survival of banks that could not successfully address the Y2K problems in their technology. In addition, the SEC required all publicly traded banks to disclose their Y2K progress throughout 1999 in their quarterly 10Q reports to ensure that the investing public was properly informed.

3. The first mover’s advantage in Y2K: the case of UMB Financial Corporation
Solving the Y2K problem makes a bank a viable going concern into the next millennium. It might also give a bank solving Y2K a first mover’s advantage. Solving Y2K would provide a bank with the opportunity to acquire, at a bargain price, other banks that could not solve the Y2K problem, because banks that do not solve Y2K may not be going concerns. Buying banks at bargain prices should increase the value of the surviving banks. In this section, we examine the economic impact on UMB Financial Corporation from being first to solve the Y2K problem. UMB Financial Corporation (UMB) announced that on 10 November 1998 it had completed the first successful customer test for year 2000 readiness by a major bank. In an American Banker article (11/13/98) on the UMB test, the UMB CEO stated that the bank was viewing the year 2000 strategically because management believed that an opportunity existed for UMB to acquire banks that do not solve the Y2K problem[1]. To determine if solving the Y2K problem creates firm value for UMB, we use the event study methods discussed in Cornett and Tehranian (1990). Cornett and Tehranian use a standard market model to control for general market movements, with 0/1 dummy variables added to the model to capture the value of firm-specific announcements. The specific model used to test the UMB announcement of solving the Y2K problem in its customer systems is the following:

return_t = α + β(market_t) + γ(event_t) + ε_t
(1)
where return_t = the daily return for UMB Financial Corporation on day t, market_t = the daily return for the NASDAQ equal-weighted index on day t[2], and event_t = a 0/1 dummy variable that equals 1 on 11/11/98 and 0 on all other days[3]. Equation (1) was estimated over a 60-day period from 10/10/98 through 12/10/98. The test period was chosen to minimize any effect from the well-documented turn-of-the-year stock market anomalies and to minimize any effect from quarter-end window dressing done by banks[4]. The results from an OLS estimation of equation (1) are:

return_t = 0.0024 + 0.5231(market_t) + 0.0166(event_t)

The regression model has an adjusted R-square of 0.0618 and an F-statistic of 2.349, which has a p-value of 0.1088. The model intercept has a t-statistic of 0.786 (p-value = 0.4368) and the market index parameter estimate has a t-statistic of 2.059 (p-value = 0.0462). The t-statistic for the event dummy parameter estimate is 0.877 (p-value = 0.3857), which means the parameter estimate is not different from zero.
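Equation (1) is a standard market model augmented with an event dummy, and it can be estimated by ordinary least squares. A minimal sketch follows, using simulated returns rather than the paper’s data; all parameter values and variable names are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins for the paper's data: 60 trading days of returns for
# the bank and for the market index (all values here are made up).
n_days = 60
market = rng.normal(0.0005, 0.01, n_days)   # index return on day t
event = np.zeros(n_days)
event[30] = 1.0                             # dummy = 1 on the announcement day
bank = 0.001 + 0.5 * market + rng.normal(0.0, 0.01, n_days)

# OLS estimation of return_t = alpha + beta*market_t + gamma*event_t + e_t
X = np.column_stack([np.ones(n_days), market, event])
coef, _, _, _ = np.linalg.lstsq(X, bank, rcond=None)
alpha, beta, gamma = coef

# Conventional t-statistic for the event-dummy estimate (gamma)
resid = bank - X @ coef
sigma2 = resid @ resid / (n_days - X.shape[1])
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
t_event = gamma / se[2]
print(f"gamma = {gamma:.4f}, t = {t_event:.2f}")
```

With a single event day, gamma is simply the event-day return in excess of the market-model fit, which is why an insignificant gamma means no detectable announcement effect.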
The results from estimating equation (1) suggest that solving the Y2K problem did not create value for UMB Financial Corporation. One possible explanation for our results is that we used the wrong announcement date. Accordingly, we test two other possible event dates:
(1) 11/13/98, which is the date of the American Banker article about UMB’s successful test; and
(2) 11/16/98, which is the release date of UMB’s 10Q showing that the firm had solved the Y2K problem.
Neither date has a parameter estimate that is significantly different from zero. Thus, we conclude that the market did not react when UMB Financial became the first bank to solve Y2K.

4. Does the market react positively to banks that solve Y2K?
The lack of positive returns associated with UMB Financial Corporation being the first bank to solve Y2K can have a few possible sources. First, the UMB announcement that it was the first bank to solve the Y2K problem could have been anticipated, and therefore the financial benefit of being first could already have been in UMB’s stock price. We believe that this is unlikely because, although it was widely known that banks were working on the problem, we find no public information to indicate that the market anticipated that UMB would be the first to solve Y2K. Second, the market may not have believed that UMB could exploit its first mover’s advantage, so buying pressure did not occur with the UMB announcement. Third, the market may have believed that most, if not all, banks would solve the Y2K problem in time for the beginning of the year 2000, so being the first to solve Y2K was not an advantage. In other words, if the market believed that most banks would solve Y2K, then bank stock prices would not include a Y2K discount. To address the issue of whether the market included a Y2K discount in bank stock prices, we examine the daily returns of a group of banks that announced solving Y2K in 1999 for positive event returns on each bank’s announcement.

5. Sample and methods
Our sample is drawn from the banks listed on Yahoo/Finance under the heading of regional banks at the end of 1998. We chose this list of banks for two reasons.
First, we concluded that regional banks are large enough to be publicly traded on a reasonably active basis, which provides the necessary time series of stock prices for our study. Second, using the banks listed on Yahoo/Finance provides a listing of publicly traded banks that are required by the SEC to file quarterly 10Q reports. This provides ready access to the banks’ Y2K progress because during 1999 the SEC required banks to report their Y2K readiness in their 10Q reports. We examined the 10Q reports for each bank for the first three quarters of 1999 and tracked each bank’s progress toward solving its Y2K problem. The reason that we stopped tracking Y2K progress with the third quarter 10Q is that it is the last 10Q report before the end of the year. Third quarter 10Q reports are typically filed and made public during the first half of November. From this examination we are able to divide the sample into three groups:
(1) banks that solved the Y2K problem before year-end;
(2) banks that think they will solve the Y2K problem before the year-end; and
(3) banks that appear unlikely to solve Y2K before the year-end[5].
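The classification into three groups was done by reading the Y2K-readiness language in each 10Q. A hypothetical keyword heuristic (our illustration, not the authors’ procedure; the phrases are invented examples of typical filing language) might look like:

```python
def classify_y2k_status(filing_text: str) -> int:
    """Return 1 (solved), 2 (expects to solve), or 3 (unlikely to solve)."""
    text = filing_text.lower()
    if "completed" in text or "fully compliant" in text:
        return 1  # bank reports remediation finished
    if "expects to complete" in text or "on schedule" in text:
        return 2  # bank expects to finish before year-end
    return 3      # no completion language: treat as unlikely to solve

print(classify_y2k_status("The Bank has completed its Year 2000 remediation."))   # 1
print(classify_y2k_status("Management expects to complete testing in Q4 1999."))  # 2
print(classify_y2k_status("Remediation of core systems remains behind plan."))    # 3
```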
After dividing our sample into these three groups, we limit the sample to banks that traded throughout 1999. This provides a sample of 246 banks, with 217 in the group that solved Y2K, 27 in the group that thinks it will solve Y2K, and 2 in the group that appears unlikely to solve Y2K. We did not require that the banks’ stock trade daily, but we require that it trade on the Y2K announcement date and on the following day[6]. This reduces the number of banks in the group that solved Y2K from 217 to 168 for our event study tests. We continue to use the event study methods of Cornett and Tehranian (1990) with the model defined as:

return_j,t = α_j + β_j(market_t) + γ_j(event_j,t) + ε_j,t
(2)
where return_j,t = the return for bank j on day t, market_t = the daily return for the NASDAQ equal-weighted index on day t, and event_j,t = a 0/1 dummy variable that equals 1 on the announcement date for bank j and the following day and 0 otherwise. The announcement date for each bank is the filing date of the 10Q that states that the bank solved Y2K. For each bank in the sample, we searched LEXIS/NEXIS and the Wall Street Journal index for public announcements that the bank solved Y2K. We did not find a Y2K announcement for any bank in our sample in this search. This is not entirely surprising since our sample is of regional banks, which typically do not receive much national news coverage. Accordingly, the filing date of the 10Q would be the first date of wide public disclosure of each bank’s Y2K progress. We note that by using the 10Q filing date for Y2K announcements we do not have clean events. That is, there is likely other important information about the bank that is disclosed to the public for the first time in the 10Q that announces Y2K preparedness. However, we believe that if the market has discounted bank stock prices due to concerns about surviving Y2K, then announcements about successfully addressing Y2K, and thus ensuring the bank is a going concern, should be greeted, on average, with a positive reaction.

6. Results from banks announcing successfully addressing Y2K
We estimate equation (2) for each of the 168 banks in the reduced sample of banks that solved Y2K. The cumulative abnormal return (CAR) for the 168 banks is 0.54. This result is clearly not a positive reaction to these banks solving Y2K. In fact, our results from announcements of banks solving Y2K can best be described as a non-event. To demonstrate this, Figure 1 provides a bar chart of the t-statistics for the banks. Figure 1 shows that the distribution of t-statistics is reasonably normally distributed with the distribution centered near zero.
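The bank-by-bank estimation of equation (2) and the resulting distribution of event-dummy t-statistics can be sketched as follows. The data are simulated under the null of no announcement effect; the event dates, parameter values, and sample sizes are our assumptions, chosen only to mirror the setup described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, n_banks = 250, 168
market = rng.normal(0.0005, 0.01, n_days)  # shared index return series

t_stats, abnormal = [], []
for j in range(n_banks):
    # Each bank has its own 10Q filing date; dummy = 1 on that day and the next.
    event = np.zeros(n_days)
    d = rng.integers(10, n_days - 2)
    event[d:d + 2] = 1.0
    ret = 0.0005 + 0.6 * market + rng.normal(0.0, 0.012, n_days)  # no true effect

    # Per-bank OLS of equation (2): intercept, market beta, event dummy.
    X = np.column_stack([np.ones(n_days), market, event])
    coef, *_ = np.linalg.lstsq(X, ret, rcond=None)
    resid = ret - X @ coef
    sigma2 = resid @ resid / (n_days - 3)
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())
    t_stats.append(coef[2] / se[2])
    abnormal.append(coef[2] * 2)  # gamma over two event days -> CAR contribution

t_stats = np.array(t_stats)
print("share |t| > 2:", np.mean(np.abs(t_stats) > 2))  # roughly the nominal size
print("CAR:", sum(abnormal))
```

Under the null, only about 5 per cent of the per-bank t-statistics should exceed 2 in absolute value, which is the benchmark against which the four significant banks out of 168 are judged below.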
In addition, only four t-statistics (out of 168) are greater than two in absolute value (approximately significant at the 5 per cent level in a two-tailed test), and of these, two are positive and two are negative. This figure clearly indicates no significant reaction to the announcement of solving Y2K, which suggests that the market had not included a Y2K price discount in bank stocks. At this point our results appear to suggest that the concern over the Y2K problem was much ado about nothing or, more to the point, that investors believed that whatever technology problem existed would be solved before the end of 1999. It is
Figure 1. Bar chart of the t-statistics for the banks
important to check to see if any investor uncertainty existed about the resolution of the Y2K problem as the turn of the millennium approached, which we do in the next section.

7. Y2K analysis as the end of 1999 approached
To this point our analysis finds no market reaction for the banks that solved Y2K, suggesting no market impact from Y2K. Thus far, we have looked only for a positive reaction for banks that solved Y2K. In this section, we examine whether investors became concerned about the future of banks that were still having difficulties in solving Y2K as the end of the year approached. To accomplish this analysis, we form an equal-weighted portfolio from each group of banks and compare the portfolios as the end of the year approaches.

7.1 December volatility
Our results suggest that banks that solved Y2K did not capture a financial gain from the process. However, as the year-end approaches, investors may become concerned/uncertain about the future of banks still having difficulties in completing their Y2K conversions. Uncertainty on the part of investors often results in increased volatility. In this section, we test whether banks that appear unlikely to solve Y2K experience an increase in stock return volatility at the year-end. To examine volatility, we estimate the following SUR model that uses the absolute value of returns as a proxy for volatility. The model is:

R_a,t = α_a + β_a|market_t| + γ_a(D) + ε_a,t
R_b,t = α_b + β_b|market_t| + γ_b(D) + ε_b,t        (3)
R_c,t = α_c + β_c|market_t| + γ_c(D) + ε_c,t

where R_x,t = the daily return at time t with x = (a, b, or c), which denotes the different portfolios (a = banks that solved Y2K, b = banks that think they will solve Y2K, and c = banks that appear unlikely to solve Y2K), market_t = the daily return for the NASDAQ equal-weighted index on day t,
D = a 0/1 dummy variable that equals 1 during December 1999 and 0 otherwise, and ε_x,t = the error term at time t with x = (a, b, or c).
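Because the three equations in (3) share identical regressors, the SUR point estimates coincide with equation-by-equation OLS (Zellner’s classic result; SUR adds efficiency for the cross-equation tests). The December-dummy estimates can therefore be sketched with simple OLS on simulated data; the portfolio labels and volatility parameters below are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_days = 250
december = np.zeros(n_days)
december[-21:] = 1.0                                # ~21 trading days in December
abs_mkt = np.abs(rng.normal(0.0005, 0.01, n_days))  # |market_t| volatility proxy

X = np.column_stack([np.ones(n_days), abs_mkt, december])
estimates = {}
# Simulate a larger December volatility shift for the "unlikely" portfolio.
for name, extra_dec_vol in [("solved", 0.002), ("likely", 0.003), ("unlikely", 0.011)]:
    vol = 0.008 + extra_dec_vol * december           # volatility rises in December
    r = np.abs(0.3 * abs_mkt + rng.normal(0.0, 1.0, n_days) * vol)
    coef, *_ = np.linalg.lstsq(X, r, rcond=None)     # OLS of |R| on |market|, D
    estimates[name] = coef[2]
    print(f"{name}: December dummy estimate = {coef[2]:.4f}")
```

The December dummy coefficient picks up any level shift in absolute returns during December, which is how the model separates year-end uncertainty from ordinary market-driven volatility.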
Figure 2. Results from estimating the SUR model on the absolute value of returns of the three portfolios of banks
The results from estimating the SUR model on the absolute value of daily returns from all of 1999 with the dummy variable equal to 1 for the month of December are reported in Figure 2. The December dummy variable parameter estimate for the portfolio of banks that appear unlikely to solve Y2K is different from zero at the 1 per cent level and it is about 5 1/2 times larger than the parameter estimate for the portfolio of banks that solved Y2K. The null hypothesis that all dummy variable parameter estimates are equal can be rejected at the 5 per cent level. In addition, the null hypothesis that the dummy variable parameter estimates are equal for the portfolio of banks that think they will solve Y2K and the portfolio of banks that appear unlikely to solve Y2K can also be rejected at the 1 per cent level. However, we cannot reject the null hypothesis of equal
parameter estimates for the portfolio of banks that solved Y2K and the portfolio of banks that think they will solve Y2K. These results suggest that during December the banks that appeared unlikely to solve Y2K experienced significantly more stock return volatility than the other two portfolios of banks. This suggests that the inability to solve Y2K created a great deal of uncertainty about the future of these banks as going concerns. This increased uncertainty about future cash flows means increased risk for investors, which represented a clear downside to shareholders due to the perceived inability to successfully handle necessary projects in a timely manner. While the result of increased volatility/risk for the banks that appear unlikely to solve Y2K fits expectations, we remind the reader to be cautious about generalizing this result because of our small sample size (n = 2). We do note that the volatility differences between the portfolios are unlikely to be from a difference in diversification. Diversification results from combining different assets in a portfolio. However, in this study the portfolios are combinations of similar assets (regional banks), so increasing the number of banks in a portfolio is likely to provide little, if any, diversification. Also, we are unable to distinguish between the levels of volatility for the portfolio of banks that solved Y2K (n = 217) and the portfolio of banks that think they will solve Y2K (n = 27). Thus, it is extremely unlikely that different levels of diversification could cause volatility differences of the magnitude we find in our analysis.

7.2 Year-end returns
The event study results from banks solving Y2K show no reaction. We do find, however, that investors appear to be more uncertain about the future prospects of banks that are having difficulties in solving Y2K relative to other banks.
Uncertainty about a bank’s future relative to the Y2K problem has a finite time window because Y2K problems will be revealed on the first business day of the year 2000. Thus, investors have until the end of the last trading day of 1999 (12/31/99) to adjust their bank stock holdings prior to the revelation of remaining Y2K problems. Accordingly, we compare returns on 12/31/99 for the three portfolios. For this analysis, we use the equal-weighted portfolios from each group of banks and then analyze the daily portfolio returns in the following SUR model:

R_a,t = α_a + β_a(market_t) + γ_a(D) + ε_a,t
R_b,t = α_b + β_b(market_t) + γ_b(D) + ε_b,t
(4)
R_c,t = α_c + β_c(market_t) + γ_c(D) + ε_c,t

where R_x,t = the daily return at time t with x = (a, b, or c), which denotes the different portfolios (a = banks that solved Y2K, b = banks that think they will solve Y2K, and c = banks that appear unlikely to solve Y2K), market_t = the daily return for the NASDAQ equal-weighted index on day t, D = a 0/1 dummy variable that equals 1 on 12/31/99 and 0 otherwise, and ε_x,t = the error term at time t with x = (a, b, or c). The regression results, using daily returns from all of 1999 (reported in Figure 3), show a statistically insignificant parameter estimate on the 12/31/99 dummy variable for the portfolio of banks that solved the Y2K problem, a statistically insignificant parameter
Figure 3. Results from estimating the SUR model on the returns of the three portfolios of banks
estimate for the banks that think they will solve Y2K, and a negative and significant (at the 1 per cent level) parameter estimate for the banks that appear unlikely to solve Y2K. The hypothesis tests show that we are able to reject the null hypotheses that all the dummy variable parameter estimates equal zero, that all the dummy variable parameter estimates are equal, and that the dummy variable parameter estimates are equal for the solved-Y2K portfolio and the unlikely-to-solve-Y2K portfolio. These results suggest that at the end of 1999 investors were still uncertain about how the technology would react to the change to the new year and that the uncertainty led to selling pressure on the group of banks that appeared unlikely to successfully address all Y2K time-based issues in their technology.

8. Conclusion
We know that Y2K was a technology non-event, as the vast majority of our technology-based systems moved smoothly from 1999 to 2000. Our results suggest that Y2K created financial gains neither for early movers nor for banks that clearly solved the problem before the end of 1999, but it did create increased risk for those that appeared
unlikely to solve Y2K in a timely manner. So, the question is, what can we learn from our analysis of Y2K in the banking industry? Solving the Y2K problem was a necessary project for the industry because any bank not solving Y2K would potentially go out of business. The industry invested millions of dollars in the problem as each bank went about solving Y2K. Yet, we find no market reaction for banks that solved Y2K. This suggests that the market expected banks to solve the Y2K problem, and thus there was no market reaction to solving Y2K because that was the anticipated result. Since it appears that the market expected banks to solve Y2K, we should find that the market penalized those banks that appeared unlikely to solve Y2K. While our evidence suggests that the market penalized these banks (price declined and volatility increased), our sample size of banks that appeared unlikely to solve Y2K is too small (n = 2) to draw general conclusions about the market’s response to banks that appeared unlikely to solve Y2K. So, what can bank managers (and more generally corporate managers) learn from Y2K? We believe our results suggest that the best approach for handling a major technology problem (when the technology is not your product) is for the firm to find an expert that has a solution for the technology problem and buy the solution from the expert (in a timely manner) rather than build the solution internally. In the case of banks solving Y2K, this means that most banks should buy the solution from the banks that specialize in data processing after the data processing banks develop solutions that make their hardware and software Y2K compliant. This appears to be a better solution for the typical bank than each bank spending time and valuable resources in duplicate efforts when the market seems to believe the technology problems will be handled successfully in a timely manner.

Notes
1.
For further details on UMB’s Y2K readiness at the point of the announcement, see UMB’s 10Q report for the nine-month period ending 9/30/98, which is dated 11/16/98.
2. UMB Financial Corporation is a NASDAQ-traded company. We estimate equation (1) using other market indices and find that our results are qualitatively similar across indices.
3. UMB conducted the tests of its customer systems on 11/9/98 and 11/10/98. On the first day of the two-day test, the computer system used the date of 12/31/99, and on the second day of the test the computer system date was 1/4/2000. Thus, the information that a successful test was conducted would be available on 11/11/98, which is the event date we test.
4. Keim (1983) and Lamoureux and Sanger (1989) are two good examples of the turn-of-the-year effect literature. Allen and Saunders (1992) discuss bank window dressing.
5. The description of a bank solving its Y2K problems could appear in any of the three quarterly 10Q statements during 1999. However, the descriptions for the banks that think they will solve Y2K by the end of 1999 and for the banks that appear unlikely to solve Y2K by the end of 1999 appear only in the third quarter 10Q statements for 1999. For brevity, we did not include examples of the typical Y2K preparation statements that we used to classify the banks into one of the three groups. However, these examples are available upon request.
6. Scholes and Williams (1977) and Dimson (1979) show that asynchronous trading creates problems with estimating beta in a market model. Accordingly, we use several different event study methods, including some that do not estimate market model betas, and find similar results across the methods, which suggests our results are not driven by biased beta estimates.
References
Allen, L. and Saunders, A. (1992), ‘‘Bank window dressing: theory and evidence’’, Journal of Banking and Finance, Vol. 16, pp. 585-623.
Cornett, M. and Tehranian, H. (1990), ‘‘An examination of the impact of the Garn-St. Germain depository institutions act of 1982 on commercial banks and savings and loans’’, Journal of Finance, Vol. 45, pp. 95-110.
Dimson, E. (1979), ‘‘Risk measurement when shares are subject to infrequent trading’’, Journal of Financial Economics, Vol. 7, pp. 197-226.
Keim, D. (1983), ‘‘Size-related anomalies and stock returns seasonality: further empirical evidence’’, Journal of Financial Economics, Vol. 12, pp. 13-32.
Lamoureux, C. and Sanger, G. (1989), ‘‘Firm size and turn-of-the-year effects in the OTC/NASDAQ market’’, Journal of Finance, Vol. 44, pp. 1219-45.
Scholes, M. and Williams, J. (1977), ‘‘Estimating betas from nonsynchronous data’’, Journal of Financial Economics, Vol. 5, pp. 309-27.

Corresponding author
Drew B. Winters can be contacted at:
[email protected]
The role of computer usage in the availability of credit for small businesses Chaitanya Singh
Graduate School of Management, University of Dallas, Irving, Texas, USA, and
Mark D. Griffiths
Miami University, Oxford, Ohio, USA

Abstract
Purpose – The purpose of this paper is to provide a descriptive analysis of the role of computer usage in determining the credit score for small business owners.
Design/methodology/approach – Bitler, Robb, and Wolken report that over three-quarters of all small businesses use computerized systems. Since such systems are faster, more accurate, and less fallible than manual systems, we investigate whether increased use of computers leads to better credit ratings and access to larger credit lines.
Findings – The results suggest that computer usage has virtually no effect in the determination of credit by financial service providers. Credit analysis and risk measures dominate the decision-making process.
Research limitations/implications – The 3,561 surveyed firms are skewed toward very small firms, with approximately 64 per cent having less than five employees and a further 20 per cent having less than ten employees. Forty per cent of the firms have annual sales of less than $100,000, while only 1.8 per cent of the firms have sales in excess of $10 million.
Practical implications – Computerization is an operational necessity, but credit worthiness is a function of the overall management characteristics of the firm and not the tools employed by the management team.
Originality/value – A minority of small businesses do not use computerization, and these tend to be among the smallest in size and with the lowest levels of credit-worthiness. Nonetheless, credit worthiness is determined by standard credit analysis and risk measures rather than operational efficiency.
Keywords Small enterprises, Credit rating, Computer applications
Paper type Research paper
Introduction
The purpose of this paper is to provide a descriptive analysis of the role of computer usage in determining the credit score for small business owners. To the best of our knowledge, this topic has not been previously investigated. Research on lending to small businesses generally focuses on either the lender or the borrower. Because widely available data such as the commercial banking Call Reports and Thrift Financial Reports provide detailed information about lenders and only limited aggregate information about borrowers, a substantial amount of lender-oriented research has developed. For example, DeYoung et al. (1999) use regulatory data to show that commercial banks tend to reduce lending to small businesses as they age. Similarly, Berger et al. (1998) show that bank consolidations, through mergers and acquisitions, have the effect of reducing small business lending by the affected institutions, but that local competitors tend to react by increasing their small business lending activity. Stein (2002) explores various reasons why larger banks tend to reduce small business lending, suggesting that it may involve making decisions using qualitative information that is better suited to the decentralized approach used by
Managerial Finance Vol. 34 No. 2, 2008 pp. 103-115 # Emerald Group Publishing Limited 0307-4358 DOI 10.1108/03074350810841295
small regional banks than the centralized structure of large banks. Frame et al. (2001) look beyond the standard regulatory data by employing data from a phone survey of 200 lending institutions to ask whether lenders’ use of credit scoring models affects their small business lending. In general, borrower-centered research is concentrated in mortgage lending because of the Home Mortgage Disclosure Act’s (HMDA) reporting requirements. The body of data created by HMDA allows researchers to investigate which items, from a broad set of applicant characteristics, a lender uses to make a credit decision (see, for example, Munnell et al., 1996; Hunter and Walker, 1996). Firm-specific data on commercial lending are much less available, although some researchers have been able to collect samples from a small number of lenders. For example, Blackwell and Winters (1997) employ a hand-collected sample of 174 lines of credit from six banks to examine the effects of loan monitoring and relationship banking on loan pricing. A far richer body of research has developed as a result of the Federal Reserve Bank’s Surveys of Small Business Finances. First conducted in 1987 and again in 1993 and 1998, these surveys provide detailed information about the banking relationships, borrowing patterns, and other financial behaviors of thousands of small businesses. The 1987 survey was the focus of a number of important studies, including Petersen and Rajan (1994), who find that small businesses benefit from building strong business relationships with a financial institution; Petersen and Rajan (1995), who find that concentration in the market for financial institutions adversely affects the amount of institutional finance received by firms early in the corporate life cycle; and Berger and Udell (1995), who find that borrowers with more established banking relationships pay lower interest rates and are less likely to pledge collateral.
The 1993 survey was employed by Berger and Udell (1998), who show how capital structure and sources of capital vary with firm size and age; Avery et al. (1998) compare the 1987 and 1993 survey responses and find that the role of personal guarantees by small business owners in the credit allocation process is growing. In 1998, the Federal Reserve Board conducted its third survey on the state of small business in the USA. This survey of small business finances (SSBF), like the two previous surveys, appears to be the only comprehensive source on the finances of small businesses, including data on income, expenses, assets, and liabilities. Our interest in this matter centers on the implications of Bitler et al. (2001, Table 3), which is reproduced in its entirety as Table I. As can be seen, over three-quarters of all small businesses use computerized systems to some degree (we expand on this point in Table III), primarily for the purpose of managing the operations of the firm. Much of this administration involves e-mail, inventory control, internet sales, general administration, and bookkeeping functions. Arguably, since such systems are faster, more accurate, and less fallible than manual systems, and since the SSBF provides detailed information on the various types of credit, credit cards, trade credit, equity injections, debt levels, and access to credit, we were interested to know whether increased use of computers leads to better credit ratings. We find the answer to be generally in the negative. Our results suggest that computer usage, with one puzzling exception, has no effect whatsoever on the determination of credit policy by financial service providers to these firms. Rather, it appears that standard credit analysis and risk measures dominate the decision-making process.
Category                                Used         Used computers for specific tasks
                                        computers    Internet/WWW access   Banking   Administration
All firms                               76.3         59.0                  15.2      73.9
No. of employees
  0-4                                   70.8         53.8                  13.6      68.0
  5 or more                             88.5         70.6                  18.8      86.8
Fiscal year sales ('000s of dollars)
  Less than 100                         62.7         48.1                  11.6      60.3
  100 or more                           85.6         66.6                  17.7      83.2
End of year assets ('000s of dollars)
  Less than 100                         70.6         54.0                  12.7      68.0
  100 or more                           85.6         67.2                  19.2      83.7
Years under current ownership
  0-4                                   78.4         62.7                  16.7      75.9
  5-9                                   78.6         64.1                  16.6      75.7
  10 or more                            74.4         55.6                  14.1      72.2

Source: Bitler et al. (2001)
Our reduced form model finds variables representing whether the firm is delinquent on a business obligation, whether a recent loan application has been approved or denied, the amount of business experience, and whether computers are used for an e-mail connection to be the significant determinants of the firm's Dun & Bradstreet (D&B) score. We find the use of e-mail as a significant variable in the credit determination somewhat peculiar, especially as additional tests (described later) indicate that the variable does not appear to be a proxy for other computer-related variables collected in the survey[1]. We find these results generally unsurprising for several reasons. First, computerized systems for business applications have been in existence for a considerable period of time. As a consequence, they are widely available and competitively priced. As our results show, a minority of firms are non-users, and these tend to be among the smallest in size and credit-worthiness. Second, over three-quarters of all firms use e-mail for communication and co-ordination purposes and a large number (between 32 and 41 per cent of firms in any given region) are engaged in internet sales. Hence, computerization is an operational necessity. It is reasonable to expect that credit worthiness is a function of the overall management characteristics of the firm and not the tools employed by the management team. Our results confirm this hypothesis. This paper proceeds in the following fashion. In the next section, we describe the data and our methods. The results are then presented and we conclude in the final section.

Data and methods
The intent of the SSBF was to collect information about small business financial behavior, the use of financial services, and the financial service providers used by the firms. Data for the fiscal year-end 1998 were collected from November 1998 through January 2001.
The 3,561 surveyed firms, comprising for-profit businesses with fewer than 500 employees, exclude agricultural firms, financial institutions, and government entities.
Computer usage for small businesses 105
Table I. Reproduction of Bitler et al. (2001, Table 3): percentage of small businesses that used computers, by selected category of firms, 1998
The sample is skewed toward very small firms in that approximately 64 per cent of the firms have fewer than five employees and an additional 20 per cent have fewer than ten employees. Forty per cent of the firms in the sample have annual sales of less than $100,000, while only 1.8 per cent of the firms have sales in excess of $10 million. Almost half (49.4 per cent) of the businesses are proprietorships, seven per cent are partnerships, 23.9 per cent are S corporations, and 20 per cent are C corporations[2]. Bitler et al. (2001) find that computer usage varied somewhat with size, with larger firms being more likely than smaller firms to use computers. As they state, ''[f]or example, 89 per cent of firms with more than four employees used computers for business purposes, compared with 71 per cent of firms with four or fewer employees'' (p. 187). To facilitate our analysis and to ensure data sufficiency for statistical purposes, we re-stated three of the survey's original measures. For the variable years of owner's experience, which varies from 0 to 72 years, a new variable was constructed with the values:
• less than 11 years;
• 11-20 years;
• 21-30 years;
• 31-40 years;
• 41-50 years;
• 51-60 years;
• 61-70 years; and
• more than 70 years.
Similarly, for the variable age of firm in years, which varies from 0 to 104 years, a new variable was constructed with the values:
• less than 11 years;
• 11-20 years;
• 21-30 years;
• 31-40 years;
• 41-50 years;
• 51-60 years;
• 61-70 years; and
• more than 70 years.
The two new collapsed variables are combined to form the categorized business experience variable used in our study[3]. Additionally, the five D&B scores were collapsed into two categories representing high and low credit risk: low credit risk is represented by D&B ratings 1 and 2, while ratings 4 and 5 represent businesses with high credit risk.
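The recoding described above can be sketched in a few lines with pandas. The data frame and column names below are hypothetical illustrations, not the SSBF's actual field names:

```python
import pandas as pd

# Hypothetical survey extract; column names are illustrative, not the SSBF's.
df = pd.DataFrame({
    "owner_experience_years": [3, 15, 27, 45, 72],
    "firm_age_years": [2, 12, 30, 55, 104],
    "db_score": [1, 2, 3, 4, 5],  # Dun & Bradstreet credit score, 1 (best) to 5 (worst)
})

# Collapse the 0-72 / 0-104 year ranges into the paper's decade-style categories.
bins = [-1, 10, 20, 30, 40, 50, 60, 70, float("inf")]
labels = ["<11", "11-20", "21-30", "31-40", "41-50", "51-60", "61-70", ">70"]
df["experience_cat"] = pd.cut(df["owner_experience_years"], bins=bins, labels=labels)
df["firm_age_cat"] = pd.cut(df["firm_age_years"], bins=bins, labels=labels)

# Collapse the five D&B ratings into a binary risk indicator:
# 1-2 -> low risk (0), 4-5 -> high risk (1); category 3 is dropped (see note 5).
df["high_risk"] = df["db_score"].map({1: 0, 2: 0, 4: 1, 5: 1})
df = df.dropna(subset=["high_risk"])
```

For sole proprietorships the paper uses the owner-experience category as business experience and the firm-age category otherwise (note 3); that switch would be one further conditional column.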
Results
Table II provides a breakdown of D&B scores and associated standard errors by region and SIC code. Two SIC groups have been eliminated from this summary: (1) SIC group 10-14 (mining), which had only 14 observations: one observation in each of the Mid-Atlantic, West North Central, and Pacific regions, three observations in the West South Central region, and eight observations in the Mountain region; and (2) SIC group 91-97 (public administration), which had no observations in any region. Although the number and frequency of observations vary somewhat by region, the data, at least in terms of average D&B scores, are quite homogeneous. In fact, there is no statistical difference in the average D&B scores between regions. In the interest of brevity, the results are not shown but are available upon request. Table III provides a breakdown of computer use by region. Here, we find that administrative support (68.9 per cent) and bookkeeping (68.0 per cent) dominate the uses in small businesses. E-mail (62.5 per cent) ranks third overall, with inventory management a distant fourth at 37 per cent. We commence our analysis by examining the relation between several variables and the D&B credit score. To this end, we employ equation (1):

DBScore = a0 + a1 Vari + e   (1)

where DBScore is the Dun & Bradstreet score; a0 is the intercept; Vari is one of the following independent variables: computer usage; total business assets; business profit; owner's age; Hispanic ownership; minority ownership; female ownership; family ownership; less than high school education; high school education; some college education; associate's degree; trade school education; and college and higher education; and e is an independently and identically distributed error term. The results from these regressions are shown in Table IV. A comment is needed at this point lest anyone should interpret the levels of significance of female-owned enterprise, minority-owned enterprise, and Hispanic-owned enterprise as being indicative of discriminatory credit policy or, more popularly, ''red-lining'' by rating agencies[4]. Our investigation indicates that each of these variables is significantly correlated (at greater than the 5 per cent level) with both owner experience and age of the firm. As such, these variables are simply proxies for credit ratings. We discuss this issue in greater detail later in this section. When dealing with datasets of the size of the SSBF and the number of categorical variables surveyed, it is useful to employ a repeated stepwise discriminant regression process. This is the technique we utilize here. In the interest of brevity, we omit a detailed description of the stages of the regression (available upon request) and address the final subset solution directly. The model used is:

D&B = β0 + β1 BODELINQ + β2 QRLOANAD + β3 BOSE + β4 CUSSEC + ε

where D&B[5] is a binary variable representing the credit score of the small business: for credit scores one and two, D&B = 0; for credit scores four and five, D&B = 1;
Table II. Summary of average Dun & Bradstreet scores (standard errors), by region and SIC code. Rows: SIC groups 15-17 (construction); 20-39 (manufacturing); 40-49 (transportation, communications and utilities); 50-51 (wholesale trade); 52-59 (retail trade); 60-67 (finance, insurance and real estate); 70-89 (services); and totals. Columns: frequency and average D&B score (standard error) for each of the nine census regions, New England through Pacific. [Cell-level figures are not reproduced here.]
Table III. Summary of computer uses by region. Rows: PC banking; e-mail; internet sales; on-line credit; inventory management; administrative support; bookkeeping; other; and non-users. Columns: frequency and percentage for each of the nine census regions (New England, Mid Atlantic, East North Central, West North Central, South Atlantic, East South Central, West South Central, Mountain, Pacific) and totals. Note: superscript-marked figures in the original are percentages of the regional total. [Cell-level figures are not reproduced here.]
Table IV. Summary of independent variables on D&B scores. For the intercept and each independent variable in equation (1) (computer usage; total assets; profit; owner's age; Hispanic, minority, female, and family ownership; and the education categories), the table reports the coefficient, standard error, t-statistic, p-value, and the regions in which the variable is significant in sub-regressions. N = 3,561; R2 = 4.56 per cent. Notes: NE, New England region; Mid, Mid Atlantic region; ENC, East North Central region; WNC, West North Central region; SAtl, South Atlantic region; ESC, East South Central region; WSC, West South Central region; Mtn, Mountain region; Pac, Pacific region. [Cell-level figures are not reproduced here.]
BODELINQ, a binary variable with a value of zero if there is no business obligation delinquency and one if there is some business obligation delinquency[6]; QRLOANAD, a binary variable with a value of one if a loan application is always approved and zero if sometimes approved or denied; BOSE, the business/owner's scope of experience; and CUSSEC, an indicator that computers are used for an e-mail connection. The choice of the binary credit variable, combined with missing and omitted data and the exclusion of sole proprietorship data due to difficulties associated with negative assets and similar recording issues, limits the total number of valid observations to 198 cases[7]. We present the initial classification statistics from these regression models in Table V. In Table VI, we present the classification scores for each classification function per case. In general, given these results, one can conclude the following:
• Firms with greater amounts of business delinquencies are more likely to be risky.
• Firms that have recently had a loan application denied are more likely to be risky.
• Firms with less business experience are more likely to be risky.
• Firms with lower computer usage rates for e-mail are more likely to be risky.
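A Fisher linear classification rule of the kind reported in Tables V and VI scores each case under both group functions and assigns it to the group with the larger score. The sketch below uses the coefficients as printed in Table VI; the mapping of the two columns of figures to the low- and high-risk functions, and the example firm, are assumptions for illustration:

```python
import numpy as np

# Fisher linear classification function coefficients as printed in Table VI
# (assumed order: constant, BODELINQ, QRLOANAD, BOSE, CUSSEC; first column
# assumed to be the low-risk function).
LOW_RISK = np.array([12.261, 1.624, 9.457, 0.219, 9.865])
HIGH_RISK = np.array([9.599, 2.983, 7.911, 0.183, 8.693])

def classify(bodelinq: int, qrloanad: int, bose: float, cussec: int) -> str:
    """Assign a firm to the group whose classification function scores higher."""
    x = np.array([1.0, bodelinq, qrloanad, bose, cussec])  # leading 1 for the constant
    low, high = LOW_RISK @ x, HIGH_RISK @ x
    return "low risk" if low > high else "high risk"

# Hypothetical firm: no delinquencies, loan applications always approved,
# business experience category 5, uses computers for e-mail.
print(classify(bodelinq=0, qrloanad=1, bose=5, cussec=1))  # prints "low risk"
```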
We now turn to an analysis of the classification results. We find that 162 of the 223 (72.6 per cent) low-risk firms are correctly classified, while 142 of the 238 (59.7 per cent) high-risk firms are correctly classified. Overall, our model correctly classifies 304 cases (65.9 per cent)[8]. As with any discriminant analysis, additional tests must be performed to verify the validity of the method. We begin by employing Box's test of the equality of the covariance matrices. We find that the low-risk group has a log determinant of 2.481, while the high-risk group has a log determinant of 0.658, suggesting that the two groups are not drawn from the same distribution. Since the Box's M statistic is highly significant, we reject the null hypothesis that the two groups do not differ, leading us to conclude that the covariance matrices differ between the groups formed by the dependent variable. This lack of homogeneity of the variances between the two groups may be attributed to the presence of outliers in one or both groups, violation of the normality assumption, and/or considerably different group sizes. As such, it is incumbent upon us to employ a logistic regression in this case.

Table V. Summary of classification statistics

                 Proportion of firms               Proportion of firms
Credit score     correctly classified by chance    classified by model
Low risk         0.5                               0.551 (N = 109)
High risk        0.5                               0.449 (N = 89)
Total            1.0                               1.000

Table VI. Classification function coefficients using Fisher linear discriminant functions

Variable     Low-risk D&B score    High-risk D&B score
Constant     12.261                9.599
BODELINQ     1.624                 2.983
QRLOANAD     9.457                 7.911
BOSE         0.219                 0.183
CUSSEC       9.865                 8.693

The results of the forward logistic regression are presented in Table VII. We interpret these stepwise results as suggesting that the usual variables for credit analysis, such as return on assets, the ratio of net working capital to total assets, growth, and the ratio of equity to debt, are independent of the credit score. That is, the odds ratios for these variables have no impact on the odds ratio for the dependent variable. However, we note that as business experience increases, the odds that the credit score of the firm equals one decrease by a factor of 0.964. Conversely, as a firm disdains computer usage (i.e. computer usage goes from 1 to 0), the odds that the firm is categorized as a high-risk firm increase by a factor of 3.126[9]. In addition, we performed a number of robustness checks. First, we performed both forward (addition of the most significant independent variable) and backward (removal of the least significant independent variable) simple linear regressions. Using an F-statistic of 2.71 or less to remove variables from, and an F-statistic of 3.84 or more to add variables to, the regression, we observed that the same four variables were statistically significant at the 95 per cent level in both procedures[10]. Second, we identified (Table VIII) ''close alternatives'', or variables that may exhibit correspondence with the significant predictor variables obtained from our stepwise analyses. One of the problems associated with the stepwise procedure is that the data analyst may incorrectly recognize the selected variables as the important independent variables that help explain or predict differences in the dependent variable.
That is, there is no guarantee that any stepwise procedure will select the ''best'' subset of variables, and the ordering of variables obtained from a stepwise regression is an artifact of the algorithm used and need not reflect relationships of substantive interest (see Weisberg, 1980). Following Hauck and Miike (1991), we encourage the analyst to think in terms of sets of variables that, together, represent factors associated with the outcome. The key to this proposal is the recognition of close alternatives, that is, those variables excluded from selection by the inclusion of another variable. In Table VIII, using the F-statistics indicated above for inclusion and deletion, we present the variables that were originally statistically significant but later non-significant after the selection of some other variable.
Table VII. Summary of variables in the equation

Variable     Coefficient    SE       Wald test    Significance    Odds ratio
Constant     1.198          0.486    6.065        0.014           3.314
BODELINQ     -1.317         0.383    11.800       0.001           0.268
QRLOANAD     1.590          0.519    9.381        0.002           4.901
BOSE         -0.037         0.015    6.098        0.014           0.964
CUSSEC       1.140          0.467    5.949        0.015           3.126

Table VIII. Summary of close alternatives in stepwise analyses

Step    Variable added    Significance level    Close alternatives
1       BODELINQ          0.000
2       QRLOANAD          0.001                 MOWN, ROA
3       BOSE              0.013
4       CUSSEC            0.013
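A logistic coefficient and its odds ratio are tied by OR = exp(β), which gives a quick consistency check on the figures in Table VII and makes the odds-ratio interpretation concrete. In the sketch below, the negative signs on BODELINQ and BOSE are inferred from their reported odds ratios (0.268 and 0.964, both below one):

```python
import math

# Coefficients and reported odds ratios from Table VII; the minus signs on
# BODELINQ and BOSE are inferred from their odds ratios being below one.
results = {
    "Constant": (1.198, 3.314),
    "BODELINQ": (-1.317, 0.268),
    "QRLOANAD": (1.590, 4.901),
    "BOSE":     (-0.037, 0.964),
    "CUSSEC":   (1.140, 3.126),
}

# Each reported odds ratio should equal exp(coefficient), up to rounding.
for name, (beta, reported_or) in results.items():
    assert abs(math.exp(beta) - reported_or) < 0.01, name

# Interpretation: one extra unit of business experience (BOSE) multiplies the
# odds of a high-risk classification by exp(-0.037), i.e. roughly a 3.6% drop.
print(round(math.exp(-0.037), 3))  # prints 0.964
```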
We note that neither the existence of a delinquency in meeting a business obligation (BODELINQ) nor the amount of business experience (BOSE) has a close alternative. This is not surprising, as both are reasonable determinants of a borrower's credit score. Similarly, it is not surprising to find that having a recent credit application accepted or denied should be a close alternative to the return on assets (ROA). Recall that minority ownership (MOWN) and BOSE are correlated at the 10 per cent level, and minority ownership and return on assets are correlated at the 13 per cent level. We are surprised to find that the use of computers for e-mail purposes has no close alternative.

Summary and conclusions
The purpose of this paper is to provide a descriptive analysis of the role of computer usage in determining the credit score for small business owners. Our analysis is based on the 1998 Federal Reserve Board survey on the state of small business in the USA. Our interest centered on whether increased use of computers leads to better credit ratings. Our results suggest that computer usage, with one puzzling exception, has no effect whatsoever on the determination of credit policy by financial service providers to these firms. Rather, it appears that standard credit analysis and risk measures dominate the decision-making process. Our reduced form model finds the use of e-mail as a significant variable in the credit determination. This is somewhat peculiar, as the variable does not appear to be a proxy for other computer-related variables collected in the survey. Further, our robustness checks find that other computer-usage-related variables do not appear to be close substitutes. Nonetheless, we find these results generally unsurprising for several reasons. First, computerized systems for business applications have been in existence for a considerable period of time. As a consequence, they are widely available and competitively priced.
As our results show, a minority of firms are non-users and these tend to be among the smallest in size and credit-worthiness. Second, over three-quarters of all firms use e-mail for communication and co-ordination purposes and a large number (between 32 and 41 per cent of firms in any given region) are engaged in internet sales. Hence, computerization is an operational necessity. It is reasonable to expect that credit worthiness is a function of the overall management characteristics of the firm and not the tools employed by the management team. Our results confirm this hypothesis.

Notes
1. The other computer-oriented variables collected in the survey include the use of computers for: (i) PC banking, (ii) internet sales, (iii) on-line credit applications, (iv) inventory management, (v) administration, (vi) accounting/bookkeeping, and (vii) other purposes.
2. An extensive description of the sample characteristics is provided by Bitler et al. (2001).
3. If the organization type is sole proprietorship, years of owner's experience is used as business experience; for all other types, age of firm is used.
4. In the 1987 survey, Cavalluzzo and Cavalluzzo (1998) find some evidence of discrimination in credit decisions against businesses owned by racial minorities, but only against firms of relatively marginal creditworthiness. Later, using the 1993 data, Cavalluzzo et al. (2002) find that lending discrimination against businesses owned by racial minorities is negatively related to the competitiveness of the local lending market.
5. To provide the best analysis possible, we omitted D&B category three to ensure discrimination between firms of high risk and low risk.
6. Some business delinquency includes late payment as well as non-payment.
7. Inclusion of the sole proprietorship data increases the number of valid cases to 271.
8. The total number of grouped cases (excluding sole-proprietorship data) processed is 2,135. The number of grouped cases used for grouping classification statistics is 461 (includes cases that are missing at least one discriminating variable). There are 198 cases with no missing data.
9. A similar type of analysis could be made for female-owned and minority-owned businesses in that as the firms became male- and non-minority-owned credit scores improved. However, a check of the correlation matrix for these variables shows that the correlation coefficient between female owned and business experience is 0.11 and between minority owned and business experience is 0.09.
10. Results not presented but available upon request.
References
Avery, R.B., Bostic, R.W. and Samolyk, K.A. (1998), ''The role of personal wealth in small business finance'', Journal of Banking and Finance, Vol. 22, pp. 1019-61.
Berger, A.N. and Udell, G.F. (1995), ''Relationship lending and lines of credit in small firm finance'', Journal of Business, Vol. 68 No. 3, pp. 351-81.
Berger, A.N. and Udell, G.F. (1998), ''The economics of small business finance: the roles of private equity and debt markets in the financial growth cycle'', Journal of Banking and Finance, Vol. 22, pp. 613-73.
Berger, A.N., Saunders, A., Scalise, J.M. and Udell, G.F. (1998), ''The effects of bank mergers and acquisitions on small business lending'', Journal of Financial Economics, Vol. 50 No. 2, pp. 187-229.
Bitler, M.P., Robb, A.M. and Wolken, J.D. (2001), ''Financial services used by small businesses: evidence from the 1998 survey of small business finances'', Federal Reserve Bulletin, Vol. 87, pp. 183-205.
Blackwell, D.W. and Winters, D.B. (1997), ''Banking relationships and the effect of monitoring on loan pricing'', Journal of Financial Research, Vol. 20 No. 2, pp. 275-89.
Cavalluzzo, K.S. and Cavalluzzo, L.C. (1998), ''Market structure: the case of small businesses'', Journal of Money, Credit and Banking, Vol. 30 No. 4, pp. 771-92.
Cavalluzzo, K.S., Cavalluzzo, L.C. and Wolken, J. (2002), ''Competition, small business financing and discrimination: evidence from a new survey'', Journal of Business, Vol. 75 No. 4, pp. 641-79.
DeYoung, R., Goldberg, L.G. and White, L.J. (1999), ''Youth, adolescence and maturity of banks: credit availability to small business in an era of banking consolidation'', Journal of Banking and Finance, Vol. 23, pp. 463-92.
Frame, W.S., Srinivasan, A. and Woosley, L. (2001), ''The effect of credit scoring on small-business lending decisions'', Journal of Real Estate Finance and Economics, Vol. 13 No. 1, pp. 57-70.
Hauck, W. and Miike, R. (1991), ''A proposal for examining and reporting stepwise regressions'', Statistics in Medicine, Vol. 10 No. 5, pp. 711-15.
Hunter, W.C. and Walker, M.B. (1996), ''The cultural affinity hypothesis and mortgage lending decisions'', Journal of Real Estate Finance and Economics, Vol. 13 No. 1, pp. 57-70.
Munnell, A.H., Tootell, G.M.B., Browne, L.E. and McEneaney, J. (1996), ''Mortgage lending in Boston: interpreting the HMDA data'', American Economic Review, Vol. 86 No. 1, pp. 25-53.
Petersen, M.A. and Rajan, R.G. (1994), ''The benefits of lending relationships: evidence from small business data'', Journal of Finance, Vol. 49 No. 1, pp. 3-38.
Petersen, M.A. and Rajan, R.G. (1995), ''The effect of credit market competition on lending relationships'', Quarterly Journal of Economics, Vol. 110, pp. 407-43.
Stein, J.C. (2002), ''Information production and capital allocation: decentralized versus hierarchical firms'', Journal of Finance, Vol. 57 No. 5, pp. 1891-921.
Weisberg, S. (1980), Applied Linear Regression, John Wiley & Sons, New York, NY.

Further reading
Blackwell, D.W. and Winters, D.B. (2000), ''Local lending markets: what a business owner/manager needs to know'', Quarterly Journal of Business and Economics, Vol. 39 No. 2, pp. 62-79.
Melnik, A. and Plaut, S. (1986), ''Loan commitments, terms of lending and credit allocation'', Journal of Finance, Vol. 41 No. 2, pp. 425-35.

Corresponding author
Mark D. Griffiths can be contacted at:
[email protected]
Internet auctions as a means of issuing financial securities: the case of the OpenIPO

Steven L. Jones
Kelley School of Business, Indiana University, Indianapolis, Indiana, USA, and

John C. Yeoman
School of Business and Government, North Georgia College & State University, Dahlonega, Georgia, USA

Abstract
Purpose - The purpose of this paper is to analyze the OpenIPO process, vis-à-vis traditional bookbuilding, and evaluate the suitability of the OpenIPO for various types of companies, market conditions, and assets.
Design/methodology/approach - This paper develops the pros and cons of the OpenIPO process, vis-à-vis the traditional bookbuilding method, in light of the recent academic literature on securities auctions and the results of the OpenIPOs Hambrecht has conducted, as of mid-2004.
Findings - The main advantage of the OpenIPO process is that it precludes many of the abuses recently observed in investment banking; however, it is not well suited for complex businesses that are either difficult to value or far removed from the public eye.
Research limitations/implications - Only nine OpenIPOs have been conducted by Hambrecht, or using the Hambrecht method, as of the completion of this paper in mid-2004.
Practical implications - The paper foresees the OpenIPO process of Hambrecht as supplementing, rather than supplanting, the traditional bookbuilding method. This could come about through the emergence of the OpenIPO as a more viable alternative to bookbuilding, or possibly through some hybrid type of offering in which individual investors play a larger role in price discovery, via the internet, and shares are allocated through both the internet auction and traditional bookbuilding.
Originality/value - Managers considering an initial public offering have a choice between the OpenIPO process of Hambrecht, used in the Google offering, and the traditional bookbuilding process. The choice of the OpenIPO has become more viable not only because of the Google offering, but also because of the severe criticism the traditional method has received in recent years for alleged abuses related to the pricing and allocation of shares. This paper assists managers in evaluating this choice of IPO offer type by rigorously evaluating the pros and cons of the OpenIPO process and its likely future role in the investment banking industry.
Keywords Internet, Auctions, Securities
Paper type Research paper
Managerial Finance Vol. 34 No. 2, 2008, pp. 116-130 © Emerald Group Publishing Limited 0307-4358 DOI 10.1108/03074350810841303
1. Introduction
The investment bank WR Hambrecht+CO (Hambrecht) takes companies public using its proprietary OpenIPO, an internet-based, online auction process that offers equal access to any investor with a Hambrecht brokerage account. The OpenIPO has been available since 1999 as the main alternative, in the USA, to the traditional bookbuilding process of issuing initial public offerings (IPOs). Proponents claim the OpenIPO has several advantages for both issuers and investors. Yet the OpenIPO has been responsible for only nine IPOs as of mid-2004. Clearly, this slow acceptance is due in part to the introduction of the OpenIPO near the end of a hot IPO market. But there may be additional reasons for its slow acceptance, including the lure of abusive side payments to corporate executives, made possible by the bookbuilding underwriter's discretion over share allocation; or possibly the OpenIPO has limitations its proponents have
ignored. Regardless, three significant developments are now in place that will provide a market test for both the OpenIPO and the bookbuilding process. These developments are: the recovery of the IPO market as the economy picks up; Google's decision to use the OpenIPO auction process to both price and allocate shares in its upcoming IPO; and the Securities and Exchange Commission's (SEC) increased scrutiny of share allocations in bookbuilding offers[1]. The purpose of this paper is to analyze the OpenIPO process, vis-à-vis traditional bookbuilding, and evaluate the suitability of the OpenIPO for various types of companies, market conditions, and assets. The traditional bookbuilding process has been severely criticized in recent years for at least three reasons relating to the pricing and allocation of shares in IPOs. (1) Offer prices have typically been underpriced by material amounts. The average return on the first day of trading, for those privileged to buy at the offer price, was 18 per cent between 1980 and 2001, and reached as high as 71 per cent, on average, in 1999[2]. (2) The underwriter has discretion over the share allocation, which always excludes the general population of individual investors but has occasionally included the abusive practices of spinning, laddering, or quid pro quo arrangements[3]. Spinning involves preferential share allocations to the personal accounts of corporate executives in return for their company's future investment banking business. Laddering assigns preferential allocations to institutional investors in return for their commitment to buy additional shares in the aftermarket, and quid pro quo arrangements are similar, except the institutional investors agree to pay excessive commissions in return for preferential allocations.
(3) The underwriting spreads in traditional bookbuilding appear insensitive to competition, and many, including Chen and Ritter (2000), have suggested this is due to collusion by the investment banking industry[4]. According to Clay Corbus, Senior Managing Director at Hambrecht, the OpenIPO can eliminate the above-mentioned abuses and shortcomings by providing investors with ‘‘equal access and fair allocation’’, resulting in better pricing, less dilution, and lower fees for issuing companies[5,6]. Hambrecht claims that a fair (i.e. impartial) share allocation, based on the results of an open public auction with internet access, is desirable for issuers because the auction will presumably be won by the bidders who place the highest value on the shares. These winning bidders are presumed to be longterm investors because quick profits cannot be expected in the aftermath of an open public auction; hence, a reduction in flipping and market volatility should follow[7]. Hambrecht also claims that the OpenIPO will produce little, if any, underpricing, on average. This is because the offering price is set at, or just below, the auction’s clearing price, which should reflect exactly what the entire market is willing to bear on the day of the offering. Thus, if the auction captures interest that the bookbuilding process might otherwise miss, the issuing company reaps the benefits of the higher offering price. In contrast to the OpenIPO’s auction process, bookbuilding determines the offering price through negotiations between the issuing company and the lead underwriter, where the latter’s estimate of share value reflects the solicited opinions of a limited number of institutional investors. Hambrecht argues that the issuing companies are often at a decided disadvantage in this negotiation because they lack the expertise
Internet auctions
117
MF 34,2
118
necessary for significant discussions regarding fair market value. In that case, the conflict of interest that exists between the issuing company and the bookbuilding underwriter usually resolves in favor of the latter. Better pricing reduces dilution, of course, and according to Hambrecht, the OpenIPO issuer’s position is further protected from dilution by relatively modest underwriting spreads. Hambrecht claims that its leveraging of technology allows an OpenIPO to be issued for less than a traditional IPO, making it less expensive for companies to go public. Thus, the proceeds raised in an OpenIPO offering are based more on the company’s financial needs than on the investment banks’ cost structure.
The benefits of the OpenIPO to investors partially mirror those to issuers and, according to Hambrecht, include equal access, fair allocation, greater flexibility, and equal treatment. Both institutions and individuals have equal access to bid on IPO shares through an internet auction. The allocation is fair because the shares are distributed in an impartial manner based on the auction process. Investors have more flexibility because they can bid for shares at the price that they believe represents the issuer’s fair market value, and the offering price is uniform, meaning the winning bidders all pay the same price.
The remainder of this paper analyzes the OpenIPO process and evaluates its suitability for various types of companies, market conditions, and assets. Section 2 describes the OpenIPO process in detail. Section 3 develops the pros and cons of the OpenIPO process, vis-à-vis the traditional bookbuilding method, in light of the recent academic literature on securities auctions and issuance. Summary statistics for the nine OpenIPOs Hambrecht has conducted, as of mid-2004, are presented and considered in section 4.
We conclude, in section 5, that the OpenIPO’s main advantage is that it precludes many of the abuses recently observed in investment banking; however, it is not well suited for complex businesses that are either difficult to value or far removed from the public eye. Thus, we foresee the OpenIPO as supplementing, rather than supplanting, the traditional bookbuilding method. This could come about through the emergence of the OpenIPO as a more viable alternative to bookbuilding, or possibly through some hybrid offering in which individuals play a role in price discovery, via the internet, and shares are allocated through both the internet auction and traditional bookbuilding.

2. The OpenIPO process
An OpenIPO offering differs from the bookbuilding method traditionally used for IPOs in the USA because the offering price and the allocation of shares are determined primarily by an internet auction. In fact, an OpenIPO auction is similar to the method used to sell US Treasury bills and bonds, except that non-competitive bids are not possible in OpenIPO auctions. Also, in contrast to a traditional IPO, any qualified individual or institutional investor may bid in an OpenIPO auction, and all bidders have an equal opportunity to receive an allocation of shares.
The procedure used for each OpenIPO auction is described in the offering’s preliminary prospectus. Genitope Corporation used an OpenIPO to go public in October 2003, and the following discussion is based on the information in Genitope’s final prospectus, the OpenIPO web site, the OpenIPO Participation Agreement, and a few other sources. An OpenIPO auction begins when the company files the offering’s registration statement, containing the preliminary prospectus, with the SEC. The auction ends three to five weeks later when the SEC declares the registration statement to be effective. While the auction is open,
Hambrecht markets the offering through roadshows involving one-on-one and group meetings with institutional investors, public roadshows presented over the internet, emails sent to affinity investors such as the company’s customers, and distribution of tombstone posters to retail investors and institutions. Also, when the company’s employees have substantial contact with the public, as was the case in the OpenIPOs of Peet’s Coffee and Tea and Briazz, the employees may wear clothing that promotes the offering. Hambrecht and the other underwriters also solicit bids from prospective investors through the internet and by telephone and facsimile.
To place a bid in the auction, one must have a brokerage account with Hambrecht or with one of the brokers in its OpenIPO network. An initial deposit of $2,000 is required to open an account with Hambrecht. Other than this minimum opening deposit, a bidder does not have to deposit any funds into the account until after the registration statement for the offering becomes effective. Confidential sealed bids are submitted online by stating the highest price one is willing to pay per share, in increments of at least 1/32 of a dollar, for a specified number of shares. The minimum bid size is 100 shares and there is no maximum bid size. However, Hambrecht deems a bid or bids from one account for more than one per cent of the offered shares to be a "large quantity bid", which may be reduced to achieve a wider public distribution of the offered shares. One may bid any price, regardless of whether it is below, within, or above the price range stated in the preliminary prospectus. A bid is conditional and involves no obligation or commitment of any kind before the auction closes. In fact, bids can be modified or revoked at any time before the auction closes. Unlike in a traditional bookbuilding IPO, all bidders have the same opportunity to purchase shares.
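The bidding rules just described, namely the 1/32-of-a-dollar price increment, the 100-share minimum, and the one per cent "large quantity bid" threshold, can be summarized in a short validation sketch. This is illustrative only; the function name and return convention are our own, not part of Hambrecht's actual system.

```python
from fractions import Fraction

# Illustrative sketch of the OpenIPO bidding rules described above.
MIN_BID_SHARES = 100
TICK = Fraction(1, 32)                  # minimum price increment: 1/32 of a dollar
LARGE_BID_FRACTION = Fraction(1, 100)   # one per cent of the offered shares

def check_bid(price, shares, shares_offered):
    """Return (is_valid, is_large_quantity) for a proposed bid.

    `price` should be an exact Fraction, since binary floats can
    misrepresent 1/32-dollar ticks.
    """
    on_tick = (Fraction(price) / TICK).denominator == 1  # multiple of 1/32
    is_valid = on_tick and shares >= MIN_BID_SHARES
    # Bids for more than one per cent of the offering may be cut back
    # to achieve a wider public distribution of the shares.
    is_large = Fraction(shares, shares_offered) > LARGE_BID_FRACTION
    return is_valid, is_large
```

For example, a bid of $16.125 (16 and 4/32 dollars) for 200 shares in a one-million-share offering is valid and not a large quantity bid, while a 50-share bid fails the minimum-size check.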
More than one bid may be placed, and more than one bid can be successful. For example, if one places four bids at different share prices and two bids are successful, the bidder receives the combined number of shares from both successful bids at the public offering price. A bidder’s maximum potential obligation in an OpenIPO auction is the total number of shares bid for, multiplied by the public offering price.
Approximately two business days before Hambrecht expects the registration statement to be declared effective, bidders receive email notification of the proposed effective date. Bidders are also notified immediately when the registration statement becomes effective, which usually occurs at about 10:00 am eastern time. They are reminded that bids made before the registration statement became effective must be confirmed before the auction closes. A bid placed after the registration statement is declared effective does not have to be confirmed. Bids are confirmed online through a web page that is available for about six hours after the registration statement becomes effective. An OpenIPO auction typically closes immediately after trading ends on the Nasdaq National Market (4:00 pm, eastern time) on the day the registration statement was declared effective.
When the auction closes, bidders must have sufficient funds in their accounts to cover their maximum potential obligation or their bids may be rejected. The underwriters also have the right to reject any bids that they deem manipulative or disruptive in order to facilitate the orderly completion of the offering. After the auction closes, the clearing price is determined by ranking the bids from highest to lowest price and then finding the highest price at which all of the shares offered can be sold. The shares that Hambrecht and the other members of the underwriting syndicate
may purchase, pursuant to an over-allotment option, are used to determine the clearing price, regardless of whether the underwriters intend to exercise the option.
The auction-clearing price is not necessarily used as the public offering price. The latter is determined through negotiations between Hambrecht and the company. Still, the auction-clearing price is the main factor Hambrecht and the company use in determining the public offering price of the shares. Additional factors, such as the company’s assets, earnings, book value, general market conditions, Hambrecht’s assessment of the company’s management, and the price of shares of comparable companies, may also be considered. The public offering price can be set lower, but never higher, than the auction-clearing price, and all winning bidders pay the same offering price. The offering will be either postponed or canceled if too few bids are received to clear the auction, if Hambrecht and the company do not consider the clearing price adequate, or if Hambrecht and the issuing company cannot agree on the public offering price. In that case, a post-effective amendment to the registration statement can be filed with the SEC to allow Hambrecht and the issuing company to conduct a new auction.
The public offering price determines the allocation of shares to investors. If the public offering price is less than the clearing price, all bids that are at or above the public offering price receive a pro rata portion of the shares offered. In fact, Hambrecht suggests that facilitating a wider distribution of shares is one reason to set the public offering price below the auction-clearing price. When this occurs, the allocations are rounded down to multiples of 100 or 1,000 shares, depending on the size of the bid. Both successful and unsuccessful bidders are notified upon completion of the auction and the determination of the public offering price.
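The pricing and allocation mechanics described above (rank the bids from highest to lowest, find the highest price at which all shares, including any over-allotment, can be sold, then allocate pro rata to bids at or above the offer price with round-lot rounding) can be sketched in a few lines of Python. This is an illustrative sketch only; the function names and interfaces are our own, not Hambrecht's actual system. Integer arithmetic is used to avoid floating-point rounding surprises.

```python
# Sketch of the OpenIPO clearing-price and pro rata allocation logic
# described above. Function names are our own invention.

def clearing_price(bids, shares_needed):
    """Highest price at which `shares_needed` shares (the offering plus
    any over-allotment) can all be sold. `bids` is a list of
    (price, shares) tuples."""
    cumulative = 0
    for price, shares in sorted(bids, key=lambda b: b[0], reverse=True):
        cumulative += shares
        if cumulative >= shares_needed:
            return price
    return None  # too few bids: the auction fails to clear

def allocate(bids, offer_price, shares_offered, round_lot=100):
    """Pro rata allocation (in bid order) to bids at or above the offer
    price, rounded down to `round_lot` multiples."""
    winners = [shares for price, shares in bids if price >= offer_price]
    demand = sum(winners)
    return [shares * shares_offered // demand // round_lot * round_lot
            for shares in winners]
```

With the ten bids of Table I and 1,150,000 shares to place, `clearing_price` returns $16; at a $16 offer price on the 1,000,000 shares offered, `allocate` reproduces the rounded allocations in the table (for example, 133,300 shares to bidder A).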
The following example demonstrates how the clearing price and share allocation are determined through the auction process. Assume that a company offers one million shares in its initial public offering through the OpenIPO auction process and that Hambrecht has an over-allotment option equal to 15 per cent of the shares to be sold, or 150,000 shares. Also assume that Hambrecht receives ten bids to purchase shares in the offering. Table I lists the bids, ranked from highest to lowest price, and the number of shares associated with each bid. The bids are also labeled alphabetically from highest to lowest price. Hambrecht needs to determine the price that will sell 1,150,000 shares, consisting of the one million shares offered by the issuer plus the 150,000 shares from the over-allotment option. Assuming that all of these bids are confirmed and none are withdrawn or modified before the auction closes, the auction-clearing price used to determine the public offering price would be the $16.00 per share posted by bidders D, E, and F. This is the highest price at which all 1,150,000 shares offered can be sold since the cumulative demand of bidders A, B, and C is only 650,000 shares. Note, however, that bids A through F total 1,500,000 shares and all bids at or above the auction-clearing price must be accepted. In addition, the offering price may be below $16.00 per share, based on negotiations between Hambrecht and the company. If Hambrecht and the issuing company agreed to set the offering price equal to the auction-clearing price of $16.00 per share, then all bids at or above $16.00 per share would be confirmed, and each of the six bidders, A through F, who bid $16.00 per share or more would receive a pro rata allocation equal to two-thirds of the shares for which they bid, rounded down to round lots. Rounding down, of course, means that some bidders will be allocated less than their pro rata number of shares. For example, bidder
Table I. OpenIPO auction example

| Bidder | Price per share | Number of shares | Cumulative number of shares | Allocation at $16 offer price | After rounding | Percentage of bid (%) | Allocation at $15 offer price | After rounding | Percentage of bid (%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| A | $18 | 200,000 | 200,000 | 133,333 | 133,300 | 66.7 | 108,108 | 108,100 | 54.1 |
| B | 17 | 150,000 | 350,000 | 100,000 | 100,000 | 66.7 | 81,081 | 81,000 | 54.1 |
| C | 17 | 300,000 | 650,000 | 200,000 | 200,000 | 66.7 | 162,162 | 162,100 | 54.1 |
| D | 16 | 400,000 | 1,050,000 | 266,667 | 266,600 | 66.7 | 216,216 | 216,200 | 54.1 |
| E | 16 | 200,000 | 1,250,000 | 133,333 | 133,300 | 66.7 | 108,108 | 108,100 | 54.1 |
| F | 16 | 250,000 | 1,500,000 | 166,667 | 166,600 | 66.7 | 135,135 | 135,100 | 54.1 |
| G | 15 | 100,000 | 1,600,000 | 0 | 0 | 0.0 | 54,054 | 54,000 | 54.1 |
| H | 15 | 250,000 | 1,850,000 | 0 | 0 | 0.0 | 135,135 | 135,100 | 54.1 |
| I | 14 | 150,000 | 2,000,000 | 0 | 0 | 0.0 | 0 | 0 | 0.0 |
| J | 13 | 300,000 | 2,300,000 | 0 | 0 | 0.0 | 0 | 0 | 0.0 |
| Total | | 2,300,000 | | 1,000,000 | 999,800 | | 1,000,000 | 999,700 | |
A’s pro rata allocation is 133,333 shares, but bidder A would be allocated only 133,300 shares. Those who bid below $16.00 per share would not receive an allocation of shares.
If Hambrecht persuaded the issuing company to accept a public offering price below the auction-clearing price, at $15.00 for example, then all bids made at or above $15.00 per share would be confirmed. The eight bidders, A through H, who bid $15.00 per share or more would receive a pro rata portion of the 1,000,000 shares offered, based on the 1,850,000 shares they requested, or 54.05 per cent of the shares each requested. No shares would be allocated to those who bid less than $15.00 per share.
Interestingly, Hambrecht and the financial media refer to the OpenIPO process as a "Dutch auction". However, Lucking-Reiley (2000) points out that the OpenIPO process is properly defined as a sealed-bid, multi-unit auction in which the winning bidders pay a uniform price equal to the lowest accepted bid. Most internet auctions are actually English ascending-price auctions, which require open bidding to move the price upward. If there are multiple units in an English auction, then the winning bidders all pay the lowest accepted bid price. An actual Dutch auction begins at some relatively high price that is decreased in increments until the first bid wins. For some reason, however, it is common practice to refer to multi-unit, uniform-price auctions as "Dutch auctions".

3. The pros and cons of the OpenIPO process
The OpenIPO process may appear the clear choice over the traditional bookbuilding method of issuing IPOs. The OpenIPO offers access to new potential investors, through the internet’s pervasive reach, and gives them the opportunity to participate equally in the offering. This would seem to ensure more participation and thus better pricing for the issuing company.
Additionally, such an open, public process could eliminate the abuses of the IPO process that have been of increasing concern in recent years. In this section, we develop the pros and cons of the OpenIPO process and evaluate its potential vis-à-vis traditional bookbuilding.

3.1 The potential impact of the internet on the IPO process
Gallaugher (2002) suggests that electronic commerce has impacted business, in the most general sense, by shortening the distribution channels between suppliers and buyers. Similarly, Tufano (2002) describes the OpenIPO as a financial innovation that seeks to lower the transaction costs between sellers and buyers of securities. This disintermediation, or elimination of organizations in the distribution channel, can create savings if the cost of some of the intermediaries can be avoided. Disintermediation, however, can also create what Gallaugher calls "value gaps" by removing critical sources of value from the distribution channel. If the cost savings of the new, disintermediated channel are less than the value previously added by the intermediaries of the old channel, then a value gap would exist.
In the case of the OpenIPO, the process is not disintermediated to the point that the investment bank, Hambrecht, is reduced to an auction house, otherwise an obvious "value gap" would arise[8]. Instead, Hambrecht continues to perform the necessary functions of due diligence, preparation of the registration statement for the SEC, overseeing auditors and legal advisors, and distributing the preliminary prospectus. In addition, Hambrecht may perform the traditional roadshows for selected institutional investors. The value added by the OpenIPO comes from greater access to potential investors, the reduced cost of communications, and the auction process, although non-internet auctions of IPOs are used regularly in other countries.
Wilhelm (1999) explains that investment banks have had to rely on a network of institutional relationships given the relatively primitive information technology of the past. For example, Benveniste and Wilhelm (1990) show that the discretion an investment bank has in the traditional bookbuilding method can be used to elicit demand-revealing orders from institutions participating in an IPO roadshow[9]. During the roadshow, each institution is presented with the offering’s proposed price range and asked to place a non-binding order for a quantity of shares at a price within this range. Benveniste and Wilhelm explain that an institution’s incentive to place an order at the high end of the price range, and thereby reveal favorable information, comes from the investment bank’s discretion to underprice the offering and provide larger allocations to the institutions that were forthcoming with their research and opinions.
Wilhelm suggests, however, that recent advances in information technology, particularly the development of the personal computer and the internet, may allow investment banks to be less dependent on their institutional networks for information production. The implication for IPOs, according to Wilhelm, is that bookbuilding may be unnecessary in the future given the timely access the internet offers to innumerable individual investors. He suggests that internet auctions could result in lower fees and less underpricing than observed with bookbuilding, while still generating adequate information production. In addition, less dependence on relationship-based networks in investment banking should reduce the opportunities for abuse. For example, spinning is not possible with an OpenIPO because share allocations are determined strictly by the auction results.
3.2 The OpenIPO’s potential vis-à-vis traditional bookbuilding
Does the emergence of the internet really mean that an internet auction process, such as the OpenIPO, will eventually supplant the traditional bookbuilding method? After all, Sherman (2003) reports that bookbuilding is either dominating or gaining popularity over non-internet IPO auctions almost everywhere in the world. In Japan, Kutsuna and Smith (2004) observe that since its introduction in 1999, bookbuilding has completely displaced the previously mandated auction method. They hypothesize that bookbuilding is preferred because its relationship-based model allows the underwriter to credibly reveal the issuer’s quality to potential investors. That is, the underwriter’s central role in the process solves the information underinvestment, or free rider, problem common in uniform-price, market-clearing auctions[10]. In that case, bookbuilding provides quality issuers with a higher price for their shares, but the cost of revealing this quality may be greater underpricing. Presumably, bookbuilding has driven out the auction process in Japan because any issuer who chooses the latter immediately signals a lack of quality that jeopardizes the issuance.
Similarly, Sherman (2003) claims that auctions may not generate adequate evaluation of an offering. As discussed above, the bookbuilding underwriter’s discretion over share allocation allows for control over both information production and underpricing. Thus, Sherman claims that bookbuilding reduces the risks to both investors and issuers, especially those with a strong preference for accurate pricing.
Is it possible that bookbuilding could dominate non-internet auctions while internet-based auctions of IPOs make gains at the expense of traditional bookbuilding?
Aggarwal and Dahiya (2000) argue that traditional bookbuilding will not be supplanted as a means of issuing IPOs because internet auctions will not result in better pricing and because issuers are not overly concerned about fees. Their
expectation of inferior pricing in internet auctions of IPOs is based on the so-called winner’s curse. That is, potential bidders recognize that the auction winner must be willing to pay the most, and this may be too much; thus, potential bidders are hesitant to participate, and the clearing price may understate demand. Biais and Faugeron-Crouzet (2002) explain that this hesitancy to participate is a form of tacit collusion that results from the indifference of potential bidders toward winning a uniform-price, market-clearing IPO auction. That is, why participate if no better deal is likely than what will be available tomorrow in the secondary market? The optimal strategy then is to wait until the very end of the auction and bid below one’s demand-revealing price in the hope of guessing correctly. In that case, the auction effectively degenerates into a sealed-bid, multi-unit auction, much like the OpenIPO, except the offer price of the OpenIPO is not necessarily set to the market-clearing price.
Biais and Faugeron-Crouzet show that a non-internet auction process called offre à prix minimum (OPM), used in France, is an efficient means of issuing IPOs because it solves the tacit collusion problem. The solution comes from setting the offer price below the market-clearing price; thus, a "good deal" is available to the winning bidders. They note that Derrien and Womack (2003) find that the OPM auction has generally been more effective than bookbuilding at controlling underpricing in French IPOs, especially during so-called hot IPO markets when underpricing is typically more severe. Likewise, Biais and Faugeron-Crouzet believe the high level of underpricing observed by Kandel et al. (1999) in auctions of Israeli IPOs is due to tacit collusion resulting from their uniform-price, market-clearing design. Biais and Faugeron-Crouzet also note that the underpricing observed in bookbuilding solves the tacit collusion problem.
This suggests that Hambrecht’s discretion to set an offer price below the auction-clearing price helps prevent tacit collusion in OpenIPOs, even if Hambrecht has other reasons for retaining this discretion[11]. Since tacit collusion leads to underpricing regardless of the means of issuance, one might ask whether the choice of issue method is really important. The answer is, of course, yes. Biais and Faugeron-Crouzet explain that the French OPM auction and the bookbuilding method are the most efficient because they deliberately use underpricing to solve the tacit collusion problem and thus control the less predictable underpricing that would otherwise result from it. They also point out that while discretion over share allocation provides bookbuilding with more control over information production and underpricing, this discretion opens the door to potential abuses if the underwriter does not act in the issuer’s best interest.
Thus, it seems clear that the OpenIPO process will not supplant the bookbuilding method as a result of market forces. The relevant question is to what extent the OpenIPO process will emerge from its niche to supplement the bookbuilding method. We defer any such predictions until after we have examined the results of the nine OpenIPOs Hambrecht has conducted to date.

4. An analysis of the OpenIPO results
Hambrecht has conducted nine IPOs using the OpenIPO auction process through mid-2004. The first three were done in 1999; only four more were issued from 2000 to 2002, and the most recent two were done in late 2003. On 29 April 2004, Google filed a registration statement for a $2.7 billion offering in which Hambrecht’s OpenIPO auction will be used to price and allocate the shares, although Morgan Stanley and Credit Suisse First Boston are the lead underwriters. Hambrecht also conducted an OpenIPO auction to determine the allocation for 5.6 million of the 32 million shares sold
in Instinet’s 2001 IPO, but the public offering price was determined through traditional means. Hambrecht was the lead underwriter on the nine completed OpenIPOs, and all of the companies were subsequently listed on NASDAQ. In addition, in each of the offerings, the underwriters had an over-allotment option equal to 15 per cent of the shares offered.
Not all OpenIPO auctions have been successful. Two companies, Aristotle International and Sightsound.com, filed registration statements to conduct offerings using the OpenIPO process in 2000, but they withdrew the offerings before completion. In 2001, Beacon Education Management filed and then withdrew a registration statement for an OpenIPO. More recently, in 2004, Sunset Financial Resources filed a registration statement for an OpenIPO but converted to a traditional IPO, and Alibris withdrew its OpenIPO due to dissatisfaction with the auction-clearing price.
Table II summarizes the results of the nine successful OpenIPO auctions. The gross proceeds from the successful offerings range from $10.5 million for Ravenswood Winery up to $72 million for Andover.net, with an average offering size of $32.9 million. Chen and Ritter (2000) define moderate-size IPOs as those between $20 million and $80 million in gross proceeds. Along with Ravenswood, the only other OpenIPO below this size threshold is Briazz, at $16 million in gross proceeds. Chen and Ritter report that over 90 per cent of all of the moderate-size IPOs of the late 1990s had gross spreads of exactly 7 per cent. Only the most recent OpenIPO, Genitope, had a spread as high as 7 per cent, although four OpenIPOs had spreads of 6 to 6.5 per cent. Three OpenIPOs, Ravenswood Winery, Overstock.com, and RedEnvelope, had spreads of only 4 per cent. Thus, it would appear that the lower spreads Hambrecht claims are at least a possibility with OpenIPOs.
Interestingly, Merrill Lynch announced its withdrawal from Google’s OpenIPO on 21 June 2004 out of concern that the fees generated by the offering would be inadequate.
Only two of the OpenIPO offerings included secondary shares offered by existing shareholders. In the Peet’s Coffee and Tea offering, 800,000 shares, or 24.2 per cent of the offering, were sold by existing shareholders. In the Overstock.com offering, 845,000 shares, or 28.2 per cent of the offering, were sold by existing shareholders. The average new share ratio, which is the number of new shares offered by the company divided by the number of shares outstanding after the offering, was 25.0 per cent. The lowest new share ratio was 21.0 per cent for Overstock.com and the highest was 34.4 per cent for Briazz. These percentages are typical for moderate-size IPOs.
The average offering price as a percentage of the mid-point of the original expected offering price range was 89.4 per cent. The high value was 133.3 per cent for the Andover.net offering and the low was 66.7 per cent for the Peet’s Coffee and Tea offering. Note that the expected price range for an OpenIPO offering is four dollars wide, while traditional IPOs typically have a two dollar price range. In three of the offerings, the expected price range was amended: in one offering, Andover.net, it was raised, and in two offerings, Nogatech and Peet’s Coffee and Tea, it was lowered. The average offering price as a percentage of the mid-point of the final expected offering price range was 89.0 per cent. The average offer price was $11.44 per share, with a high of $18.00 per share for Andover.net and a low of $8.00 per share for both Peet’s Coffee and Tea and Briazz, Inc. Hambrecht does not publish the auction-clearing price, so there is no way to compare it to these offering prices.
However, the OpenIPO website states that winning bids received 71.5 per cent of the indicated interest in the Nogatech offering and approximately 89 per cent in the Genitope offering. Sherman (2003) states that the
Table II. OpenIPO offerings results Gross spread (%)
Offer price
Final offering price range
3.9
11.1
RedEnvelope, Inc.
Hambrecht's OpenIPO offerings

| Issuer | Offering date | Gross proceeds | Spread (%) | Offer price | New shares offered | Secondary shares offered | Shares outstanding after offering | Share ratio (%) | Original offering price range | Final offering price range | Initial price | Initial market return^a (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Ravenswood Winery | 9 April 1999 | $10,500,000 | 4.0 | $10.50 | 1,000,000 | 0 | 4,550,852 | 22.0 | $10.50-$13.50 | $10.50-$13.50 | $10.88 | 3.6 |
| Salon.com | 22 June 1999 | $26,250,000 | 5.0 | $10.50 | 2,500,000 | 0 | 10,730,623 | 23.3 | $10.50-$13.50 | $10.50-$13.50 | $10.00 | -4.8 |
| Andover.net | 8 December 1999 | $72,000,000 | 6.5 | $18.00 | 4,000,000 | 0 | 15,000,000 | 26.7 | $12.00-$15.00 | $16.00-$18.00 | $63.38 | 252.1 |
| Nogatech | 17 May 2000 | $42,000,000 | 6.5 | $12.00 | 3,500,000 | 0 | 14,734,669 | 23.8 | $16.00-$16.00 | $12.00-$16.00 | $9.41 | -21.6 |
| Peet's Coffee and Tea | 25 January 2001 | $26,400,000 | 6.5 | $8.00 | 3,300,000 | 800,000 | 7,958,214 | 31.4 | $10.00-$14.00 | $8.00-$12.00 | $8.03 | 0.4 |
| Briazz, Inc. | 2 May 2001 | $16,000,000 | 6.0 | $8.00 | 2,000,000 | 0 | 5,820,966 | 34.4 | $8.00-$12.00 | $8.00-$12.00 | $9.38 | 17.3 |
| Overstock.com | 29 May 2002 | $39,000,000 | 4.0 | $13.00 | 3,000,000 | 845,000 | 14,292,404 | 15.1 | $12.00-$16.00 | $12.00-$16.00 | $13.03 | 0.2 |
| RedEnvelope | 25 September 2003 | $30,800,000 | 4.0 | $14.00 | 2,200,000 | 0 | 8,504,568 | 25.9 | $12.00-$16.00 | $12.00-$16.00 | $14.55 | 3.9 |
| Genitope | 30 October 2003 | $33,300,000 | 7.0 | $9.00 | 3,700,000 | 0 | 16,238,296 | 22.8 | $9.00-$13.00 | $9.00-$13.00 | $10.00 | 11.1 |
| Average | | $32,916,666 | 5.5 | $11.44 | 2,800,000 | | 10,870,065 | 25.0 | | | | 29.2 |

Line of business and current stock price or disposition:
- Ravenswood Winery: produces, markets, and sells premium California wines under its own brand name; acquired in July 2001 by Constellation Brands for $29.50/share.
- Salon.com: web-based media group; stock trades on the OTCBB after delisting from NASDAQ in June 2001, and continues to struggle on in financial peril.
- Andover.net: Linux/Open Source Internet destination; acquired by VA LINUX in February 2000, stock closed upon announcement at $136.875/share.
- Nogatech: manufactures, develops, and markets integrated circuits and consumer-oriented USB products; bought by Zoran in August 2000 for about $10/share.
- Peet's Coffee and Tea: specialty coffee roaster and marketer through multiple channels; stock closed on NASDAQ at $24.34/share on 25 June 2004.
- Briazz, Inc.: prepares and sells quality ''on-the-go'' foods, mainly to office workers, through its own cafes; filed for Chapter 11 reorganization in early June 2004.
- Overstock.com: online discount retailer of a wide variety of brand-name merchandise; stock closed on NASDAQ at $38.99/share on 25 June 2004.
- RedEnvelope: markets and delivers high-quality gifts and accessories in red boxes with ivory ribbons; stock closed on NASDAQ at $8.47/share on 25 June 2004.
- Genitope: researches treatments for cancer which use the body's immune system to fight tumors; stock closed on NASDAQ at $9.45/share on 25 June 2004.

Note: ^a The initial market return equals the closing market price on the first day of trading on the NASDAQ minus the offer price, divided by the offer price.
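The initial market returns discussed in the text can be recomputed directly from the offer prices and first-day closing prices in the offering summary; the sketch below uses those published figures, so small rounding differences from the paper's reported averages may remain.

```python
# Recompute each OpenIPO's initial market return as (close - offer) / offer,
# using the offer prices and first-day closing prices from the offering table.
from statistics import mean, median

offerings = {
    "Ravenswood Winery": (10.50, 10.88),
    "Salon.com":         (10.50, 10.00),
    "Andover.net":       (18.00, 63.38),
    "Nogatech":          (12.00, 9.41),
    "Peet's Coffee":     (8.00, 8.03),
    "Briazz":            (8.00, 9.38),
    "Overstock.com":     (13.00, 13.03),
    "RedEnvelope":       (14.00, 14.55),
    "Genitope":          (9.00, 10.00),
}

returns = {name: 100 * (close - offer) / offer
           for name, (offer, close) in offerings.items()}

avg = mean(returns.values())      # roughly 29.1 per cent
med = median(returns.values())    # roughly 3.6 per cent
# Excluding Andover.net's outlier return leaves effectively zero underpricing.
ex_andover = mean(r for n, r in returns.items() if n != "Andover.net")
```

Note how a single extreme observation (Andover.net) dominates the mean while leaving the median nearly unaffected, which is why the text reports both.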
MF 34,2
winning bidders received 70 per cent of their indicated interest in the Briazz offering and 60 per cent of their indicated interest in the Overstock offering. Sherman concludes from this high level of prorating that either there was ''an extraordinarily large'' number of bids at the auction-clearing price or the offer price was set below the auction-clearing price. The fact that all of the offer prices, except for the first two, are dollar integers would seem to corroborate this view. The average first-day initial return was 29.2 per cent, but excluding Andover's return of 252.1 per cent drops this average to only 1.4 per cent. Regarding the Andover offering, Hambrecht spokeswoman Sharon Smith was reported as stating, ''There was just an incredible amount of demand in the last couple of days''[12]. Officials at Hambrecht were also reported as saying that Andover could have gotten at least $24 per share, based on the auction results, but $18 was the top of the final offering price range, and Andover's management was hesitant to delay the offering any further by revising the offering price range upward. While no statistical significance can be attached given the small number of observations, excluding Andover does yield effectively zero underpricing. The median initial return was 3.6 per cent, and two of the offerings, Salon.com and Nogatech, experienced declines from their offer price on the first day of trading. Recall that Chen and Ritter report that the average level of underpricing in moderate-size IPOs was 18 per cent between 1980 and 2001, and four and three times this amount in 1999 and 2000, respectively. Regarding the longer-term, post-offering performance of the OpenIPOs, two of the companies, Ravenswood Winery and Andover, were bought out at significant premiums over their initial market price, while Nogatech was bought out for an amount roughly equal to its initial market price.
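The clearing-and-proration mechanics behind those filled-interest percentages can be sketched as follows. The bids below are hypothetical illustrations, not data from any actual offering.

```python
# Sketch of a uniform-price auction with proration at the clearing price,
# in the spirit of the OpenIPO. All bids here are hypothetical.

def clear_auction(bids, shares_offered):
    """bids: list of (price, quantity). Returns (clearing_price, allocations).
    Bids above the clearing price are filled in full; bids exactly at the
    clearing price are prorated to fill the remaining shares."""
    bids = sorted(bids, key=lambda b: b[0], reverse=True)
    filled = 0
    clearing_price = None
    for price, qty in bids:
        filled += qty
        clearing_price = price
        if filled >= shares_offered:
            break
    above = sum(q for p, q in bids if p > clearing_price)
    at = sum(q for p, q in bids if p == clearing_price)
    remaining = shares_offered - above
    prorate = min(1.0, remaining / at) if at else 0.0
    allocations = []
    for price, qty in bids:
        if price > clearing_price:
            allocations.append((price, qty))          # filled in full
        elif price == clearing_price:
            allocations.append((price, qty * prorate))  # prorated
        else:
            allocations.append((price, 0.0))          # below the clearing price
    return clearing_price, allocations

# 2,000,000 shares offered; heavy demand at $8 forces proration there.
price, allocs = clear_auction(
    [(10.0, 500_000), (9.0, 700_000), (8.0, 1_600_000), (7.0, 400_000)],
    shares_offered=2_000_000,
)
```

In this example the auction clears at $8 and the marginal bidders receive half of their indicated interest. In the actual OpenIPO, Hambrecht retains discretion to set the offer price at or below the clearing price, which can raise the fraction of indicated interest that winning bidders receive.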
The stock prices of both Peet's Coffee and Tea and Overstock.com have approximately tripled from their initial offer prices. Of the two most recent offerings, RedEnvelope has seen its price fall off considerably, while Genitope's stock price remains in its initial trading range. Only the offers of Briazz and Salon.com have proved to be failures as longer-term investments. Briazz just recently filed for Chapter 11 reorganization, while Salon.com has struggled to sell advertising and has been on the brink of bankruptcy ever since the internet bubble burst; its stock trades on the over-the-counter bulletin board (OTCBB) for less than $1 per share as of mid-2004. As one of the earliest and most prominent web-only media outlets, Salon.com may have had the highest profile of any of the OpenIPOs; however, all of these companies had high awareness levels with the public prior to issuance. Note that Ravenswood Winery, Peet's Coffee and Tea, Briazz, Overstock.com, and RedEnvelope are all consumer-oriented companies that market to a reasonably affluent segment of the population. Andover.net was well known in the Linux/Open Source community, Nogatech focused on the consumer market, and Genitope has received considerable popular media exposure for its new cancer research. It is impossible to say whether these companies would have fared better had they chosen to do their IPO using the traditional bookbuilding method. But we can at least observe that Hambrecht's past OpenIPOs have had relatively small fees (i.e. gross spreads) and, with the exception of Andover.net, low underpricing, and that the companies have been reasonably well known to the public prior to their OpenIPO. These results are generally consistent with Hambrecht's claims as well as the predictions of the academic literature from the previous section.
Internet auctions
5. Conclusions
Hambrecht's OpenIPO system uses an internet auction process to take companies public, rather than the traditional bookbuilding methodology. According to Hambrecht, the OpenIPO process benefits both investors and issuers. However, OpenIPO auctions have been slow to gain acceptance in the financial markets. Whether the OpenIPO process of securities underwriting represents a real threat to the traditional bookbuilding method remains to be seen. However, the number of potential investors with access to the internet is now enormous, and this has created the opportunity for vast innovations in financial markets and services. The current academic literature on bookbuilding and auction theory implies that OpenIPOs will not avoid underpricing, on average, although only one of the nine to date has incurred dramatic underpricing. In fact, controlling the less predictable underpricing associated with the tacit collusion problem may be the rationale for the offering price discretion Hambrecht retains in the OpenIPO process. Since bookbuilding's discretion over share allocation should provide greater control over information production and underpricing, we may expect that bookbuilding will remain the preferred method for issuing IPOs of complex businesses, especially those far removed from the eye of the general public. The OpenIPO would seem to be best suited for retail-oriented companies that are highly visible to the general public. Such companies would be more likely to attract the active interest of individual investors and thus exploit the internet's access to non-traditional bidders. This suggests that Google may have made the right choice. Additionally, during hot IPO markets, non-traditional bidders in OpenIPOs may be especially effective at reducing underpricing, just as Derrien and Womack (2003) found with French OPM auctions.
Similarly, Aggarwal and Dahiya (2000) foresee that most offerings through internet auctions will come in less information-intensive, commodity-type assets, such as debt. Interestingly, Hambrecht also offers internet auctions for corporate bond offerings through its OpenBook process and for seasoned equity offerings through its OpenFollowOn process. Thus, the OpenIPO appears to have definite potential under particular conditions, which suggests that it will survive to supplement, not supplant, the bookbuilding method. In addition, some hybrid process may evolve that combines the information production of bookbuilding with the public access of the OpenIPO. In fact, the Google offering may turn out to be the first step in such an evolution.

Notes
1. Although the lead underwriters in Google's $2.7 billion offering are Morgan Stanley and Credit Suisse First Boston, they have agreed to use Hambrecht's OpenIPO auction to price and allocate the shares. This development shows that internet auctions of IPOs are not constrained by the limited underwriting capacity of Hambrecht, and that the larger investment banks may be forced, by issuing clients, to at least offer the OpenIPO process, or some other form of internet auction, as an alternative to bookbuilding. Interestingly, Goldman Sachs Group apparently lost a chance to be the lead underwriter on the Google deal because it advised against the use of the OpenIPO process.
2. See Ritter and Welch (2002).
3. Possibly the best known example of such abuses is the allegation by New York State Attorney General Eliot Spitzer that former WorldCom CEO Bernie Ebbers made over $11 million from spinning IPO shares received in return for directing investment-banking business to Citigroup's Salomon Smith Barney, Inc.
4. Chen and Ritter (2000) note that of the 1,111 IPOs raising proceeds of $20 million to $80 million between 1995 and 1998, over 90 per cent had gross underwriting spreads of exactly 7 per cent.
5. See Corbus (2001) as well as Hambrecht's web site, www.openipo.com.
6. One abuse, alleged by New York Attorney General Eliot Spitzer, that the OpenIPO would not have prevented was the granting of favorable ratings by Wall Street analysts in return for the overrated companies' investment-banking business. Perhaps the most celebrated investigation of this involved Citigroup Salomon Smith Barney telecom analyst Jack Grubman, who agreed to pay a $15 million fine and was barred from the securities industry in 2003.
7. Wit Capital was founded in 1996 as ''the Internet investment bank''. Wit did not contribute to the IPO pricing process but, instead, used the internet to form an ''e-syndicate'' of individual investors to participate and bid in IPOs.
8. Gallaugher notes that companies often fail to realize that a value gap exists in a new distribution channel because it is easy to identify the savings that result from removing an intermediary, while the benefits of less obvious, yet critical, contributions may be ignored.
9. The so-called roadshow is a series of presentations in which the investment bank visits with potential buyers to present and discuss the upcoming offering. In the traditional bookbuilding method, the investment bank builds a book of non-binding orders collected mostly from institutional investors during the roadshow.
10. The free-rider problem arises in a uniform-price, market-clearing IPO auction because bidders have no incentive to prepare for such an auction by investing in information, since a bidder cannot expect to obtain a price from the auction that is any better than the price that can be obtained immediately afterwards in the secondary market.
11. Yeoman (2001) provides an additional reason why we should expect underpricing in IPOs, regardless of the efficiency of the issue method. He notes the tradeoff between the underwriting fee and the expectation of underpricing and shows that, as long as uncertainty over the value of an issue is material, the total cost of an issue (including fees and underpricing) is lowered by accepting some level of underpricing. That is, if an issuer insisted on terms such that there was an expectation of no underpricing, then the underwriter would need to charge a spread so high that the issuer's total net proceeds would be reduced.
12. See ''Andover strikes it rich,'' by Joanna Glasner, Wired News, at www.wired.com/news/business

References
Aggarwal, R. and Dahiya, S. (2000), ''Capital formation and the internet'', Journal of Applied Corporate Finance, Vol. 13, Spring, pp. 108-13.
Benveniste, L. and Wilhelm, W. (1990), ''A comparative analysis of IPO proceeds under alternative regulatory environments'', Journal of Financial Economics, Vol. 28, pp. 173-207.
Biais, B. and Faugeron-Crouzet, A. (2002), ''IPO auctions: English, Dutch, . . . French, and Internet'', Journal of Financial Intermediation, Vol. 11, pp. 9-36.
Chen, H. and Ritter, J. (2000), ''The seven percent solution'', Journal of Finance, Vol. 55 No. 3, pp. 1105-32.
Corbus, C. (2001), ''New auction technologies for capital raising'', presentation to The Wharton School, 6 April.
Derrien, F. and Womack, K. (2003), ''Auctions vs. bookbuilding and the control of underpricing in hot IPO markets'', Review of Financial Studies, Vol. 16, pp. 31-61.
Gallaugher, J. (2002), ‘‘E-Commerce and the undulating distribution channel’’, Communications of the Association for Computing Machinery, Vol. 45 No. 7, pp. 89-95. Kandel, S., Sarig, O. and Wohl, A. (1999), ‘‘The demand for stock: an analysis of IPO auctions’’, Review of Financial Studies, Vol. 12 No. 2, pp. 227-47. Kutsuna, K. and Smith, R. (2004), ‘‘Why does book building drive out auction methods of IPO issuance? Evidence from Japan’’, Review of Financial Studies, Vol. 17 No. 4, pp. 1129-66. Lucking-Reiley, D. (2000), ‘‘Auctions on the internet: what’s being auctioned and how?’’, Journal of Industrial Economics, Vol. 48 No. 3, pp. 227-52. Ritter, J. and Welch, I. (2002), ‘‘A review of IPO activity’’, Journal of Finance, Vol. 57 No. 4, pp. 1795-828. Sherman, A. (2003), ‘‘Global trends in IPO methods: book building vs. auctions’’, Journal of Financial Economics, Vol. 78 No. 3, pp. 615-49. Tufano, P. (2002), ‘‘Financial innovation’’, in Constantinides, G., Harris, M. and Stulz, R. (Eds), Handbook of the Economics of Finance (Volume 1a: Corporate Finance), Elsevier, Amsterdam, 2003, pp. 307-36. Wilhelm, W. (1999), ‘‘Internet investment banking: the impact of information technology on relationship banking’’, Journal of Applied Corporate Finance, Vol. 12 No. 1, Spring, pp. 21-7. Yeoman, J. (2001), ‘‘The optimal spread and offering price for underwritten securities’’, Journal of Financial Economics, Vol. 62, pp. 169-98. Corresponding author Steven L. Jones can be contacted at:
[email protected]
Using instant messenger in the finance course
Stuart Michelson Department of Finance, School of Business Administration, Stetson University, DeLand, Florida, USA, and
Stanley D. Smith
Department of Finance, University of Central Florida, Orlando, Florida, USA

Abstract
Purpose – New technologies have provided new tools that finance professors may use to communicate with students. Instant messaging (IM) has become a common communication tool in industry and among students. The purpose of this paper is to investigate the use of IM as a communication tool in finance courses.
Design/methodology/approach – After reviewing the advantages and disadvantages of IM, the students were surveyed to determine how they viewed IM in comparison to other communication techniques.
Findings – The paper finds that 50 per cent of students use IM at some time (not just for class). The majority of IM users use it several times a day and have used it for two to three years. Only about 15.7 per cent of the students have used IM for the authors' classes; the range of IM usage across the classes is 7-25 per cent. Students who have used IM for these courses have used it two to five times during the semester, and almost all of them found it useful. Students were asked to rate various methods of professor/student communication. They strongly like face-to-face communication, followed (in order of preference) by email, IM, and telephone. Students disagree with the statement that IM is a substitute for face-to-face interaction and agree that it is a supplement to face-to-face interaction.
Originality/value – The findings suggest ways to improve communications with students and others.
Keywords Teaching, Communication, Communication technologies, Finance
Paper type Research paper
Introduction
The internet has changed the way we, as finance instructors, conduct and teach our courses. Several examples have been published recently in the business education journals. In this paper, we investigate the benefits of using instant messaging (IM) in finance courses. We discuss the advantages and disadvantages of using IM and how to implement it, and we provide personal examples of using IM in our classes.

Literature
The literature on using the Internet in financial education is fairly recent, but rapidly expanding. Most of this literature has focused on the data sources and financial resources that are available on the Internet and may be used in education. Herbst (1996) presents an interesting discussion of the fundamentals of the Internet and describes the opportunities and challenges it poses for financial engineers. Ray (1996), in ''An introduction to finance on the internet'', presents a background on the history and evolution of the Internet, as well as a description of the financial resources available. In ''A guide to locating financial information on the Internet'', Pettijohn (1996) provides a guide and extensive list of financial resources available on the Internet. Grinder (1997) discusses recent developments on the Internet and provides an extensive list of financial service Internet sites. Smith (1996) reviews the use of
Managerial Finance Vol. 34 No. 2, 2008 pp. 131-138 # Emerald Group Publishing Limited 0307-4358 DOI 10.1108/03074350810841312
EDGAR as a resource for SEC documents and its applications for classroom use. Michelson and Stanley (1999) review applications of web page usage in the finance area. Recent articles have discussed the benefits of utilizing the web to provide academic content for students. The literature has also discussed several disadvantages or costs of using the web. Benbunan-Fich et al. (2001) indicate that they integrate technology in business courses in two ways: to transmit content or deliver instruction, and to support communication between professors and students. Lincoln (2001) finds that most faculty who developed web pages did so out of personal motivation; very few had direct institutional support. He asserts that faculty faced several obstacles to adoption: the time to learn, execute, and maintain faculty web sites. Young (2002) finds that student course evaluations of faculty effectiveness increased significantly over the period that technology was implemented in their program. Several authors (Kaynama and Keesling, 2000; Atwong and Hugstad, 1997; McBane, 1997) illustrate that internet activities can enhance the educator's role in a number of areas, including lecturing, providing instructional materials, facilitating student collaboration, evaluating student performance, and recruiting majors. Further, they state that web sites are growing in popularity for the posting of professional and classroom information, such as course syllabi, sample tests, and other course materials. In statistics, Leon and Parr (2000) provide information on areas in which their course home pages were not successful. These areas include mailing lists, providing detailed outlines or summaries of lectures, providing material keyed to a particular date, preparing summaries for every class right after class, and maintaining complicated web page structures.
They find that there are beneficial applications for web pages (WP): providing materials for large courses, soliciting feedback efficiently, providing links and resources, and serving as a permanent record of what happened in class (calendar, homework log, class summaries, and study questions). Web pages also allow the instructor to post copies of ''handouts'' (on a pull, not push, basis), old exams, and data files (such as Excel files). There are also articles that provide information about the use of the Internet in teaching and education in the accounting and economics areas (for example, see: Agarwal and Day, 1998; Debreceny et al., 1996; Manning, 1996). Instant messaging is becoming much more prevalent in the corporate world (Smith, 2003; Wessel, 2003). Smith reports that federal investigators are now requiring that firms archive the IMs exchanged between NYSE and NASD member firms and their customers, similar to the requirement for emails. He indicates that more than 400 million IM accounts will be created by 2004 and that almost one-half of them will be used to connect businesses and customers. Mike Osterman, an IM consultant, states that there are currently about 60 million IM users in the US workplace. He says that about 20 per cent of those who now use email also use IM, and he predicts that within four years virtually all email users will also use IM. Wessel cites benefits of using IM, including convenience, speed of response, lack of spam, the capability to respond quickly to customers, and the ability to IM someone's cell phone. He cites drawbacks such as the risk of being constantly interrupted (hence the away message), the need for companies to install secure internal IM systems, and the awkwardness of ending conversations. Several articles have recently appeared in The Wall Street Journal about IM in the corporate world (Weber, 2002, p. B1; Bulkeley, 2002, p. B1).
Weber indicates that IM is now at the stage where email was in 1990. The primary problem is that there is no standard by which IM systems can talk to one another. He indicates that ‘‘like email before it, IM technology has the potential to reshape how workers communicate and share knowledge.’’
Bulkeley explains how IM has solved many communication problems in corporations. Not only does IM allow users to chat with each other instantly, it also tells users which recipients are available to receive messages at that moment. IM creates ''presence'': a signal that the person is working and available for business. The downside is that it makes it harder to maintain privacy and avoid distractions; receiving messages every few minutes can be very disruptive. On the positive side, ''having someone on your contact list deepens your relationship.'' Even though two people may never have met in person, after exchanging IMs over time they feel as though they know each other well. In the corporate world, this can help business relationships.

Instant messaging
One of the latest internet-related tools is IM. Many teens use it to communicate with their friends. Although there are several IM programs available, most are similar. Users have a list of other people's IM handles (screen names), and they click on a name to initiate a conversation session. The two or more people then write text messages that are delivered almost instantaneously and reside in a window on one another's machines. One can chat on a one-to-one basis or invite others to participate in a chat room spontaneously created for that conversation; one can even have both going on simultaneously. For our purposes, we will assume that we are using AOL Instant Messenger (AIM); however, there are other free IM programs, such as Yahoo Messenger and MSN Messenger. Corporate (non-free) IM software, such as Lotus Software Group's Sametime and the Presence Platform from Bantu, Inc., includes features and functionality that are missing from the free versions, e.g. communications encryption, the ability for users to view what is on one another's PC, and written transcripts of all communiqués.

Advantages
Some of the advantages of using IM include:
- AIM and most IM programs are free and easy to install and use.
- If the student is using a computer connected to his one phone line, it is difficult to converse and use the computer at the same time; IM solves this problem.
- Even if the student has separate computer and phone connections, IM may reduce phone costs for both parties when they are in different calling zones, e.g. where local tolls or long-distance charges apply.
- The instructor is able to handle more than one student at a time; one can conduct four or five independent sessions or conversations at once.
- The parties can meet at any time that is feasible for them and are not restricted by location-related factors.
- IM can be useful if one person has a hearing impairment or another disability that makes it difficult to arrange a traditional meeting.
- One can spontaneously and easily arrange a chat room with numerous people to deal with any topic.
- Messaging sessions can be saved and used later.
- Computer links can be inserted into conversations so that everyone can follow a progression of web pages.
- Attachments can be sent through IM conversations so that everyone can review the same document.
Disadvantages
Some of the disadvantages of using IM include:
- There is no single IM standard, so all parties must be on the same software, e.g. AIM. However, this disadvantage is reduced if the messaging service is free; the service is free in the same way that some television or web content is free, in that users may be exposed to advertising that helps pay for it.
- Using IM can make it harder to maintain privacy and can be disruptive. The IM beep from someone requesting a chat can be as intrusive and distracting as a telephone that rings constantly at dinner time.
- AIM is a very rough imitation of the telephone: how good the audio is depends on the equipment on both ends and the way they are connected.
- AIM cannot offer rich content, because AIM is not a web browser and cannot render HTML.
Personal experience
One of this article's authors has used IM in his courses for about two years. He does not require its use, but offers it as another means of contacting the professor. He puts his screen name on his syllabus, along with his web site address and email address. He also puts the screen name on his web site, with links to the AOL IM registration page and the University of Massachusetts tutorial page (links provided below). He has found that about two-thirds of his students use IM for personal purposes. Interestingly, though, only about 10 per cent of his students use IM to contact him, whereas over 50 per cent use email for frequent contact. Most of his students begin using the professor's IM for specific course- and project-related questions, but after a while the contact evolves into other areas, such as career advice, curriculum advice, and more general ''hi, how are you doing?'' exchanges. Of course, the professor has to manage his time properly with respect to IM usage, because the beeping from the IM window can seem incessant. This professor has found that, properly managed, IM, like email, can be another valuable tool in working with students and faculty. As an added benefit, he finds that when his daughter sees he is online, she will ''drop in'' just to say hi and share the day's events.

Important links
For AOL Instant Messenger registration, go to http://www.aim.com/index.adp. You will be asked to choose a screen name, type a password, confirm your password, and enter your e-mail address and birth date. You will later be asked to confirm your registration by replying to a message sent to your e-mail address. For a good tutorial on using IM and AIM chat, see www.old.umassd.edu/tew/aimwelcome.html

Survey results
During the Fall 2002 and Spring terms, the authors surveyed their classes about the students' IM usage. Three classes were surveyed, totaling 129 students. Sixty-one of
the students were graduate students (MBA). The survey instrument is provided in the Appendix at the end of this paper. Table I provides the most relevant results of the survey. We found that 50 per cent of our students use IM at some time (not just for class). The majority of IM users use it several times a day and have used it for two to three years. Only about 15.7 per cent of our students have used IM for our classes; the range of IM usage across our classes is 7-25 per cent. Students who have used IM for our courses have used it two to five times during the semester, and almost all of them found it useful. We asked our students to rate various methods of professor/student communication. Our students strongly like face-to-face communication, followed by (in order of preference) email, IM, and telephone. Even though telephone was rated lowest, it was rated only slightly below average (2.8 on a 1-5 scale). We also asked whether our students feel IM is a substitute for, or a supplement to, face-to-face interaction. Students disagree with the statement that IM is a substitute (2.22 on a 1-5 scale) and agree that IM is a supplement (3.47) to face-to-face interaction. We received a variety of student comments in response to our questions about their likes/dislikes for the various forms of communication, as well as why they have or have not used IM. In general, our students prefer the immediate response and interaction available with face-to-face communication. They generally did not like telephone communication: comments suggested that they did not like playing phone tag with the office phone's voice mail, or that they felt it would be rude to call the professor at home. The students who use IM also like the immediacy and interaction that IM provides. The major drawback to IM is that many students are not at the computer throughout the day, so IM is not available for their use. Many of the MBA students also indicated that IM was not available at their workplace. As one would expect, many students frequently use email because they find it convenient and easy to use. Students remarked that they like the convenience of email, but its primary drawback is that they do not get an immediate response to their emails. Of course, the responsiveness of the professor will affect the usefulness of email from the student's perspective. One conclusion that can be inferred from the student comments is that even though many of them use IM for personal use, they are still not comfortable using it with their professors. This is partially due to IM not being encouraged or available in their other classes.
| Question number | Abbreviated question | Total surveys (129) |
|---|---|---|
| 1 | Do you use IM (at any time, not just for class)? (1 = yes) | 50.0% |
| 1A | If you use IM, how often do you use it? (1-5) | 3.573 |
| 1B | If you use IM, how long have you used it? (1-3) | 2.324 |
| 2 | Have you ever used IM for this class? (1 = yes) | 15.7% |
| 2A | How often have you used it for this class? (1-6) | 2.793 |
| 2B | If you have used IM for this class, have you found it useful? (2 = yes) | 1.940 |
| 3a | Rate method of communication with instructor (face-to-face) (1-5) | 4.239 |
| 3b | Rate method of communication with instructor (telephone) (1-5) | 2.745 |
| 3c | Rate method of communication with instructor (email) (1-5) | 4.105 |
| 3d | Rate method of communication with instructor (instant messaging) (1-5) | 2.925 |
| 4a | IM substitutes for face-to-face interaction (1-5) | 2.218 |
| 4b | IM is a supplement to face-to-face interaction (1-5) | 3.474 |

Table I. IM survey results
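The communication-method ratings reported in the survey imply a clear preference ordering; the short Python sketch below simply transcribes the four mean ratings from the survey results and sorts them.

```python
# Mean student ratings (1-5 scale) of methods of communicating with the
# instructor, transcribed from the survey results (Table I).
ratings = {
    "face-to-face": 4.239,
    "telephone": 2.745,
    "email": 4.105,
    "instant messaging": 2.925,
}

# Sort methods from most to least preferred.
preference_order = sorted(ratings, key=ratings.get, reverse=True)
# → ['face-to-face', 'email', 'instant messaging', 'telephone']
```

This reproduces the ordering reported in the text: face-to-face, then email, IM, and telephone.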
Conclusions
In our sample of 129 students, we find that about one-half of our students use IM at some time, although only about 15 per cent have used IM for our courses. Since about 47 per cent of our sample were MBA students, there is probably a bias towards those who do not use IM. Almost 100 per cent of the students who use IM for class feel it is useful. For IM usage to increase in academia, both students and professors must become more comfortable with its daily use. Writers indicate that IM is now at the stage where email was in 1990 and that by 2008 virtually all email users will also be utilizing IM. Through its increasing usage, IM has solved many communication problems in corporations. ''Like email before it, IM technology has the potential to reshape how workers communicate and share knowledge'' (Bulkeley, 2002).

References
Agarwal, R. and Day, A.E. (1998), ''The impact of the Internet on economic education'', Journal of Economic Education, Vol. 29 No. 2, Spring, pp. 99-110.
Atwong, C.T. and Hugstad, P. (1997), ''Internet technology and the future of marketing education'', Journal of Marketing Education, Vol. 19 No. 3, p. 44.
Benbunan-Fich, R., Lozada, H.R., Pirog, S., Priluck, R. and Wisenblit, J. (2001), ''Integrating information technology into the marketing curriculum'', Journal of Marketing Education, Vol. 23 No. 1, pp. 5-15.
Bulkeley, W.M. (2002), ''Instant message goes corporate; you can't hide'', Wall Street Journal, 4 September, p. B1.
Debreceny, R., Smith, G.S. and White, C.E. (1996), ''Internet methodologies and the accounting curriculum: a first look'', Accounting Perspectives, Vol. 2 No. 1, Spring, pp. 107-24.
Grinder, B. (1997), ''An overview of financial services resources on the Internet'', Financial Services Review, Vol. 6 No. 2, pp. 125-40.
Herbst, A.F. (1996), ''The ways in which the financial engineer can use the Internet'', Financial Practice and Education, Vol. 6 No. 2, Fall/Winter, pp. 111-21.
Kaynama, S.A. and Keesling, G.
(2000), ‘‘Development of a web-based Internet marketing course’’, Journal of Marketing Education, Vol. 22 No. 2, pp. 84-9. Leon, R.V. and Parr, W. (2000), Use of course home pages in teaching statistics, The American Statistician, Vol. 54 No. 1, pp. 44-8. Lincoln, D.J. (2001), ‘‘Marketing educator internet adoption in 1998 versus 2000’’, Journal of Marketing Education, Vol. 23 No. 2, pp. 103-16. Manning, L.M. (1996), ‘‘Economics on the internet: electronic mail in the classroom’’, Journal of Economic Education, Vol. 27 No. 3, Summer, pp. 201-4. McBane, D.A. (1997), ‘‘Marketing departments on the World Wide Web: state of the art and recommendations’’, Journal of Marketing Education, Vol. 19, Spring, pp. 14-25. Michelson, S. and Stanley, D.S. (1999), ‘‘Applications of WWW technology in teaching finance’’, Financial Services Review, Vol. 8 No. 4, pp. 319-28. Pettijohn, J. (1996), ‘‘A guide to locating financial information on the Internet’’, Financial Practice and Education, Vol. 6 No. 2, Fall/Winter, pp. 102-10. Ray, R. (1996), ‘‘An introduction to finance on the Internet’’, Financial Practice and Education, Vol. 6 No. 2, Fall/Winter, pp. 95-101. Smith, E.B. (2003), ‘‘Wall St. bloodhounds track instant messages for clues’’, USA Today, 18 September, p. 2B.
Smith, S.D. (1996), "Using EDGAR on the Internet to teach finance and business courses", Journal of Financial Education, Vol. 22 No. 2, Fall, pp. 76-8.

Weber, T.E. (2002), "Here's instant message for chat technology: time 4 u to grow up", Wall Street Journal, 25 February, p. B1.

Wessel, H. (2003), "IM", Orlando Sentinel, 25 June, p. G1.

Young, J.R. (2002), "Ever so slowly, colleges start to count work with technology in tenure decisions", The Chronicle of Higher Education, 22 February, p. A25.

Further reading

Enbysk, M. (2003), "Blame it on instant messaging", MSN web page, SmallTech, 7 February.

Komando, K. (2003), "Using IM: know the lingo", MSN web page, Tech Commands, 7 February.

Skinner, D. and Kim, M.K. (2001), "Student email projects: from casual conversation to professional communication", Journal of the Academy of Business Education, Fall, pp. 87-98.

Appendix. Instant messaging survey

1. Do you use instant messaging (at any time, not just for class)?
   ____ Yes (go to 1A)   ____ No (go to 2)

1A. If you use IM, how often do you use it?
   __ None   __ several times a month   __ several times a week   __ several times a day   __ all the time

1B. If you use IM, how long have you used it?
   __ 1 year or less   __ 1-3 years   __ greater than 3 years

2. Have you ever used IM for this class?
   ____ Yes (go to 2A)   ____ No (go to 3)

2A. How often have you used it for this class during the semester?
   __ None   __ once   __ 2-5   __ 6-10   __ 10-20   __ more than 20

2B. If you have used IM for this class, have you found it useful?
   ____ Yes   ____ No   ____ Indifferent

3. Rate the following methods of communication with your college instructors in general, using a scale of 1 = definitely do NOT like to 5 = definitely do like.
   Face-to-face __
   Telephone __
   E-mail __
   Instant messaging __

   Please provide comments, for example, likes and dislikes, to explain your ratings above.
   Face-to-face ________________
   Telephone ________________
   E-mail ________________
   Instant messaging ________________
4. How do you view instant messaging in comparison to face-to-face conversation, using a scale of 1 = strongly disagree to 5 = strongly agree?
   __ IM substitutes for face-to-face interaction.
   __ IM is a supplement to face-to-face interaction.

Please provide comments on any of the following:
1. If you have used IM, why do you like it or not like it?
   ________________

2. If you haven't used IM, why not?
   ________________

3. How can the use of IM for class be improved?
   ________________

Corresponding author

Stanley D. Smith can be contacted at: [email protected]