Advanced Topics in End User Computing Vol. 1
Mo Adam Mahmood, University of Texas at El Paso, USA
This book is a release of the Advanced Topics in End User Computing Series
Idea Group Publishing
Information Science Publishing
Hershey • London • Melbourne • Singapore • Beijing
Acquisition Editor: Mehdi Khosrowpour
Managing Editor: Jan Travers
Development Editor: Michele Rossi
Copy Editor: Amy Bingham
Typesetter: LeAnn Whitcomb
Cover Design: Tedi Wingard
Printed at: Integrated Book Technology
Published in the United States of America by
Idea Group Publishing
1331 E. Chocolate Avenue
Hershey PA 17033-1117
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.idea-group.com

and in the United Kingdom by
Idea Group Publishing
3 Henrietta Street, Covent Garden
London WC2E 8LU
Tel: 44 20 7240 0856
Fax: 44 20 7379 3313
Web site: http://www.eurospan.co.uk

Copyright © 2002 by Idea Group Publishing. All rights reserved. No part of this book may be reproduced in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher.

Library of Congress Cataloguing in Publication Data
ISBN 1-930708-42-4
eISBN 1-59140-028-7

Advanced Topics in End User Computing is the inaugural book of the Idea Group Inc. series Advanced Topics in End User Computing Series, ISSN 1537-9310.

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.
Advanced Topics in End User Computing

Table of Contents
Introduction
Mo Adam Mahmood, University of Texas at El Paso, USA

Section I: Medical Informatics

Chapter I. Introducing Computer-Based Telemedicine in Three Rural Missouri Counties
Kimberly D. Harris, Duquesne University, USA
Joseph F. Donaldson, University of Missouri at Columbia, USA
James D. Campbell, University of Missouri at Columbia, USA

Chapter II. Case Study of a Patient Data Management System: A Complex Implementation in an Intensive Care Unit
Nathalie Mitev, The London School of Economics, UK
Sharon Kerkham, Salford University, UK

Chapter III. Experiences from Health Information System Implementation Projects Reported in Canada Between 1991 and 1997
Francis Lau, University of Alberta, Canada
Marilynne Hebert, University of British Columbia, Canada

Chapter IV. Nursing Staff Requirements for Telemedicine in the Neonatal Intensive Care Unit
Tara Qavi, Flow Interactive Ltd., UK
Lisa Corley, University of Salford, UK
Steve Kay, University of Salford, UK

Chapter V. End-User Directed Requirements–A Case in Medication Ordering
Stephen L. Chan, Hong Kong Baptist University, Hong Kong

Chapter VI. VDT Health Hazards: A Guide for End Users and Managers
Carol Clark, Middle Tennessee State University, USA

Section II: The Role of End User Interface, Training and Attitudes toward Information Systems Success

Chapter VII. The Role of Training in Preparing End Users to Learn Related Software Packages
Conrad Shayo, California State University, USA
Lorne Olfman, Claremont Graduate University, USA

Chapter VIII. The Human Side of Information Systems Development: A Case of an Intervention at a British Visitor Attraction
Brian Lehaney, The University of Luton Business School, UK
Steve Clarke, The University of Luton Business School, UK
Sarah Spencer-Matthews, Swinburne University of Technology, Australia
Vikki Kimberlee, The University of Luton Business School, UK

Chapter IX. Exploring the Relationship between Institutional Context, User Participation, and Organizational Change in a European Telecommunications Company
Tom Butler, University College Cork, Ireland
Brian Fitzgerald, University College Cork, Ireland

Chapter X. Studying the Translations of NHSnet
Edgar A. Whitley, London School of Economics and Political Science, UK
Athanasia Pouloudi, Brunel University, UK

Chapter XI. Predicting End User Performance
I. M. Jawahar, Illinois State University, USA
B. Elango, Illinois State University, USA

Chapter XII. The Role of User Ownership and Positive User Attitudes in the Successful Adoption of Information Systems within NHS Community Trusts
Crispin R. Coombs, Loughborough University, UK
Neil F. Doherty, Loughborough University, UK
John Loan-Clarke, Loughborough University, UK

Chapter XIII. The Effect of Individual Differences on Computer Attitudes
Claudia Orr, Northern Michigan University, USA
David Allen, Northern Michigan University, USA
Sandra Poindexter, Northern Michigan University, USA

Chapter XIV. Computer Viruses: Winnowing Fact from Fiction
Stu Westin, University of Rhode Island, USA

Section III: Decision Support Systems and Artificial Neural Networks

Chapter XV. Success Surrogates in Representational Decision Support Systems
Roger McHaney, Kansas State University, USA
Timothy Paul Cronan, University of Arkansas, USA

Chapter XVI. A Decision Support System for Prescriptive Academic Advising
Louis A. Le Blanc, Berry College, USA
Conway T. Rucks, University of Arkansas at Little Rock, USA
W. Scott Murray, University of Arkansas at Little Rock, USA

Chapter XVII. Geographic Information Systems: How Cognitive Style Impacts Decision-Making Effectiveness
Martin D. Crossland, Oklahoma State University, USA
Richard T. Herschel, St. Joseph's University, USA
William C. Perkins, Indiana University, USA
Joseph N. Scudder, Northern Illinois University, USA

Chapter XVIII. Are Remote and Non-Remote Workers Different? Exploring the Impact of Trust, Work Experience and Connectivity on Performance Outcomes
D. Sandy Staples, Queen's University, Canada

Chapter XIX. Measuring the Impact of Information Systems on Organizational Behavior
R. Wayne Headrick, New Mexico State University, USA
George W. Morgan, Southwest Texas State University, USA

Chapter XX. A New Approach to Evaluating Business Ethics: An Artificial Neural Networks Application
Mo Adam Mahmood, University of Texas at El Paso, USA
Gary L. Sullivan, University of Texas at El Paso, USA
Ray-Lin Tung, Taiwan

Comprehensive Bibliography
About the Authors
Index
Preface

The end user computing area has grown tremendously over the last two decades. The popularity of the Journal of End User Computing (JEUC) testifies to that effect. According to the latest survey of information systems (IS) journals, JEUC was ranked 37th among the top 50 IS journals worldwide (Communications of the ACM, 44(9), September 2001). The present scholarly book is a collection of some of the best manuscripts published in JEUC over the last two years. The book is divided into three parts.

Part I of the book deals with a new and interesting area, medical informatics. It includes six manuscripts. The manuscript by Harris, Donaldson, and Campbell starts the section by investigating predictors of utilization of computer-based telemedicine in three rural Missouri counties. The findings of the study revealed that for e-mail, behavioral intentions/attitude, age, organizational support, and time were the most significant predictors, while for the World Wide Web, only behavioral intentions and attitude predicted utilization.

The second paper in the section is a case study by Mitev and Kerkham. It details the events surrounding the introduction of a patient data management system (PDMS) into an intensive care unit in a UK hospital. The research showed that PDMS implementation is complex and involves organizational issues related to the cost of healthcare, legal and purchasing requirements, systems integration, training and staff expertise, and relationships with suppliers. It also demonstrated that a PDMS has a significant impact on working practices, and it illustrated how external policies and procedures can interfere with the management of projects and with system implementation.

Lau and Hebert, in the third paper, review a broad range of health information systems projects in Canada, following up on projects that have been previously reported in the literature. This retrospective analysis suggests the need for organizational commitment; resource support and training; management of the project, the change process, and communication; organizational and user involvement and a team approach; system capability; information quality; and demonstrable positive consequences from computerization.

The following research, by Qavi, Corley, and Kay, looked at nursing staff acceptance of a videoconferencing system within a neonatal intensive care unit and identified a set of recommendations to be integrated into system design to maximize usability of the system by nursing end users. Interestingly, the study showed that nurses limit their reliance on technology and draw heavily upon their own senses and intuition to construct a holistic view of the patient.

The fifth paper, by Chan, describes the implementation of a physician order entry system for medication that uses scanning and image processing. The end-user context is presented first, leading to the specification of design and operational requirements, followed by the presentation of the scanning and image processing system (SIPS). SIPS uses specially designed order forms on which doctors write orders that are then scanned into a computer that performs recognition and image processing, allowing the administrative processes to be automated. The paper provides a useful illustration of the need to be sensitive to the profile of the end user group.
The sixth and final paper in this section, by Clark, outlines major health issues associated with VDT use. It provides guidelines for both end users and managers to help eliminate, or at least reduce, the potential negative health effects of VDT use.

Part II of the book discusses a fairly well-known topic in the end-user computing area: the role of end-user interface, training, and attitudes in information systems success. The first manuscript in this section, by Shayo and Olfman, examines the role of training in preparing end users to learn related software packages. The objective is to determine what types of formal training methods can provide appropriate "mapping via training" of a new but related software application, given that "mapping via analogy" is also taking place. The results indicate that both task context and the number of software packages learned influence trainees' mental models of the software, their self-efficacy expectations, and their perceptions about the usefulness of the training.

The second paper in this section, by Lehaney, Clarke, Spencer-Matthews, and Kimberlee, investigated the success and failure of information systems within the British tourism industry. Overconcentration on technical rather than human issues during the system development process is found to be the main cause of system failure. The need for a more human-centered approach to IS development is supported, and an example of such an approach is provided.

The following paper, by Butler and Fitzgerald, explores the relationship between institutional context, user participation, and organizational change in the development of information systems. With some notable exceptions, researchers have chosen to adopt variance- rather than process-based approaches to the study of these phenomena and have, therefore, failed to capture the complex interrelationships that exist between them. This study addresses these deficiencies and makes several important contributions to the literature.

Whitley and Pouloudi, in the fourth paper of this section, present a framework for understanding the socio-political context in which information systems projects are usually positioned and illustrate its relevance using a national information system, NHSnet. Their research shows that the representatives of stakeholders play an important role in system development and usage. They go so far as to say that the IS development team has a responsibility to listen to the stakeholders, and the stakeholders have a responsibility to make sure that they are heard.

Citing inconsistent results in previously published studies in the area, Jawahar and Elango, in the fifth paper of this section, investigate the effect of attitudes, goal setting, and self-efficacy on end user performance. Their results show that all three variables significantly affect end user performance. This prompted the authors to suggest that end user performance can perhaps be enhanced by helping shape end user attitudes toward working with computers, teaching end users to set specific and challenging goals, and reinforcing end users' beliefs in their ability to learn and use information technology effectively.

Coombs, Doherty, and Loan-Clarke, in the sixth paper, investigate the role of user ownership and positive attitudes in the successful adoption of information systems. Despite the existence of a "best practice" literature, many projects still fail.
The authors suggest that two additional factors, user ownership and user positive attitudes, deserve further development and investigation. A multiple case-study approach was used to investigate these factors. Their
analysis indicates that both user ownership and positive user attitudes are crucial to the success of an information system. In addition, they identify that best practice variables can also facilitate the development of user ownership and positive user attitudes.

Orr, Allen, and Poindexter, in the seventh paper in this section, point to the need for computer competence on the part of citizens not only to function efficiently on a personal level in society but also to develop, advance, and succeed in their professional lives. In spite of that need, the literature reports high levels of anxiety and negative attitudes toward using computers. The authors, in this research, investigate the relationship between computer attitudes and factors that are likely to affect them (e.g., computer experience and selected demographic, educational, and personality variables). Their objective is to develop an understanding of these variables.

It would be difficult to find even one computer end user who has not been affected in one way or another by a computer virus. In the final paper in this section, Stu Westin, who has served JEUC with distinction for a number of years as an associate editor, considers the past and current status of computer viruses, so-called "defensive computing," and the degree to which the situation has been clouded by hype, misinformation, and misunderstanding.

Part III of the book deals with two well-known tools in the end-user computing area: decision support systems and artificial neural networks. McHaney and Cronan, in the first paper of this section, discuss the organizational impact of discrete event computer simulation, classified as a representational decision support system. They focus on the external validity of two instruments that can be used to measure the organizational impact of this type of decision support system: the Davis Measure of User Acceptance of Information Technology and the Doll and Torkzadeh Measure of End-User Computing Satisfaction. They find that the Doll and Torkzadeh instrument retains its psychometric properties better than the Davis instrument when applied to users of discrete event computer simulation.

The second paper, by Le Blanc, Rucks, and Murray, describes a decision support system designed for prescriptive academic advising. Using this system, a student with a minimum of computer knowledge can obtain an optimized course listing in less than five minutes without the assistance of a human advisor. This allows advisors to spend time on more substantive or developmental advising issues, such as choice of electives, career options, and life career goals.

In the third paper, Crossland, Herschel, Perkins, and Scudder describe the impact of task and cognitive style on decision-making effectiveness in the context of a geographic information system. They investigate how two individual cognitive style factors, field dependence and need for cognition, relate to decision-making performance on a spatial task. They find significant relationships between the individual cognitive style factors and two dependent performance variables, solution time and percent error.

The fourth paper, by Sandy Staples, tests a number of relationships suggested in the literature as being relevant in a remote work environment. As the practice of employees working remotely from their managers and colleagues grows, especially in view of the attacks on the World
Trade Center on September 11, 2001, so does the importance of making these remote end users of technology effective members of their organizations. The author found more frequent communication between manager and employee to be uniquely associated with higher levels of interpersonal trust for remote workers. Cognition-based trust was also found to be more important than affect-based trust in a remote work environment.

In the fifth paper, Headrick and Morgan develop and test a methodology that can measure the impact an information system may have on the behavioral climate of an organization. The authors argue that the traditional concentration on short-term, readily quantifiable functional factors in designing information systems has resulted in the development of systems that can produce the required output but often fail to promote the general behavioral climate objectives of the organization. The authors claim that their methodology, utilizing pre- and post-implementation assessments of an organization's behavioral climate, allows systems developers to identify specific potential design criteria that will increase the degree to which the organization's behavioral goals and objectives are met.

In the sixth and final paper, Mahmood and Sullivan use a powerful yet underutilized tool, the artificial neural network, to evaluate business ethics. So far, empirical studies have used traditional quantitative tools, such as regression or multiple discriminant analysis (MDA), in ethics research. The authors argue that more advanced tools are needed to understand ethical decision making. In this exploratory research, they present a new approach to classifying, categorizing, and analyzing ethical decision situations. A comparative performance analysis of artificial neural networks, MDA, and chance shows that artificial neural networks predict better in both training and testing phases.
Section I: Medical Informatics
Chapter I
Introducing Computer-Based Telemedicine in Three Rural Missouri Counties

Kimberly D. Harris, Duquesne University, USA
Joseph F. Donaldson and James D. Campbell, University of Missouri–Columbia, USA
This study investigated predictors of utilization of computer-based telemedicine in three rural Missouri counties. Participating health care agencies were given computers and access to an Internet-based workstation that provided e-mail and World Wide Web (WWW) services. Utilization data for e-mail messages sent and WWW pages accessed were collected through proxy servers. A survey addressing perceptions of the Internet-based RTEP workstation was distributed to those employees who were enrolled in the Rural Telemedicine Evaluation Project (RTEP). The survey results were analyzed to see how perceptions and demographic variables predicted actual utilization. The findings of the study revealed that for e-mail, behavioral intentions/attitude, age, organizational support, and time were the most significant predictors. For the WWW, only the behavioral intentions/attitude subscale predicted utilization. The majority of respondents did not utilize the e-mail technology. Strategies need to be developed, through training interventions and organizational policies, to address non-utilization.

Copyright © 2002, Idea Group Publishing.
INTRODUCTION

Technology is perhaps one of the greatest tools under the control of mankind. It can be used for positive or negative purposes and has proven a powerful force for change (Surry, 1997). In fact, some would argue that technology is a key governing force in society and that technological change drives social change (Smith, 1996). The Internet is one technology that has contributed to societal change and has provided opportunities to revolutionize health care. The Internet has afforded the medical community a mechanism for providing access to information in a timely manner. This is particularly important in today's society due to the continually expanding body of medical knowledge and changes in health care delivery that require practitioners to make more important and complex decisions in less time (Lundberg, 1998). The Internet has the potential to improve the care provided to patients and to enhance biomedical research by connecting practitioners to the most up-to-date information available (Gallagher & McFarland, 1996). However, an important consideration in the diffusion of information technologies was put nicely by Enrico Coiera (1995): "Medical informatics is as much about computers as cardiology is about stethoscopes. … Any attempt to use information technology will fail dramatically when the motivation is the application of technology for its own sake rather than the solution of clinical problems."

Rural health communities face unique issues in providing care to their populations. Not only has it been difficult to recruit health care professionals, but also to retain them. Isolation, lack of communication, difficult access to updated medical information, little contact with colleagues, and lack of continuing medical education opportunities have been identified as factors contributing to low retention rates and a shortage of rural health care providers, particularly physicians (Conte et al., 1992; Harned, 1993; Mackesy, 1993; Forti et al., 1995; Anderson et al., 1994; Rogers, 1995; Davis, 1989). As a consequence, rural health care suffers. One of the main goals of telemedicine technology is to improve rural health care, including nursing care, administrative efficiency, and communication among all health care providers.

Research on the acceptability of telemedicine as a particular form of technology is limited. Thus, this study was informed by literature and research related to the adoption of innovations, medical educational technology, and rural health care to explore factors associated with acceptance of telemedicine by rural health care providers. It does so by addressing issues of acceptability of a specific aspect of the University of Missouri's telemedicine technologies by health care providers in three rural Missouri counties.
BACKGROUND OF THE TELEMEDICINE EVALUATION PROJECT

The University of Missouri–Columbia's Telemedicine Demonstration Project was developed in December 1995 to address rural health care provider needs such as isolation, lack of communication, rapid access to updated medical information, contact with colleagues, and continuing medical education opportunities. The Missouri Telemedicine Network (MTN) includes two components: video conferencing and computer infrastructure. Funding for the project came, in part, from the National Library of Medicine (NLM), with the stipulation that the University would provide a thorough evaluation of the computer component. Thus, the evaluation portion of the telemedicine program is referred to as the Rural Telemedicine Evaluation Project (RTEP). This study focused on the utilization of a portion of the computer component, known as the RTEP workstation, in three rural Missouri counties.

The RTEP workstation is an Internet-based program housed on servers in three rural Missouri communities. Elements of the RTEP workstation include access to e-mail, the World Wide Web (WWW), community information, and an address book, all designed to address the unique challenges of rural health care, including access to electronic clinical records. While on the surface the RTEP workstation appears able to address some of the issues of rural health care identified earlier, the individuals enrolled in the demonstration project may not be willing to utilize the new technology in a way that allows the RTEP workstation to address those issues. Examining the utilization of this technology gives insight into the various issues surrounding its acceptance by the participants.
Theoretical Foundations

Diffusion of innovation research examines how various factors interact to facilitate or impede the adoption of a specific new product or practice among members of a particular group. The four major factors that Rogers (1995) identified as influencing the diffusion process are the innovation itself, how information about the innovation is communicated, time, and the nature of the social system where the innovation is being implemented. Since Rogers' original work in 1962, a large body of diffusion research has been conducted, resulting in various predictive models intended to accelerate the adoption of innovations. One such model is the Technology Acceptance Model (TAM).
Fred Davis (1989) originally developed the TAM, under contract with IBM Canada, Ltd., in the mid-1980s for the purpose of evaluating the market potential for a variety of then-emerging PC-based applications in the areas of multimedia, image processing, and pen-based computing. TAM was used to guide investments in the early development of these various technologies (Davis, 1995). The TAM purports to provide a basis for tracing the impact of external factors on internal beliefs, attitudes, and behavioral intentions. The TAM suggests that behavioral intentions (BI) are jointly determined by the person's attitude (A) and perception of the technology's usefulness (U), with relative weights estimated by the regression BI = A + U (Davis, 1989). In the TAM, the behavioral intention (BI) to actually utilize the technology is determined by the user's attitude toward using it. That attitude is in turn influenced by two specific beliefs: the user's perception of how easy the technology is to use, or "perceived ease of use" (EOU), and the perception of how useful the technology will actually be in the user's job, or "perceived usefulness" (U). In other words, EOU has a direct effect on U, and together they affect A; A then determines the user's BI, which results in actual utilization (Davis, 1989, 1996).
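Read as a set of regressions, the TAM's causal chain is straightforward to estimate. The sketch below is illustrative only: the synthetic data and variable names are assumptions, not the study's data. It simply fits the EOU → U, (EOU, U) → A, and (A, U) → BI paths with ordinary least squares.

```python
# Illustrative sketch of the TAM path structure (synthetic data, not study data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
eou = rng.normal(4, 1, n)                      # perceived ease of use
u = 0.6 * eou + rng.normal(0, 1, n)            # usefulness, partly driven by EOU
a = 0.5 * eou + 0.4 * u + rng.normal(0, 1, n)  # attitude toward using
bi = 0.5 * a + 0.4 * u + rng.normal(0, 1, n)   # behavioral intention: BI = A + U

# Estimate each TAM path with ordinary least squares.
paths = [("EOU -> U", u, eou),
         ("EOU, U -> A", a, np.column_stack([eou, u])),
         ("A, U -> BI", bi, np.column_stack([a, u]))]
for name, y, x in paths:
    fit = sm.OLS(y, sm.add_constant(x)).fit()
    print(f"{name}: R^2 = {fit.rsquared:.3f}")
```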
Research Questions

Research Question 1: Is utilization of the e-mail and WWW portions of the RTEP workstation, as measured by the automated tracking of use on the proxy servers, predicted by the perceptions, behavioral intentions, and attitudes represented in the TAM model?

Figure 1: The Technology Acceptance Model. External variables influence perceived usefulness and perceived ease of use; these shape attitude, which determines behavioral intention and, in turn, actual utilization.
Research Question 2: After adjusting for the TAM variables, are the demographic variables (age, gender, educational level, and occupation) associated with the utilization of e-mail and the WWW, as measured by the automated tracking of use on the proxy servers?

Research Question 3: After adjusting for the TAM variables and the demographic variables, do the five factors identified from previous interviews with participants (time, access, apprehension, technological problems, and ownership), along with "organizational support," individually significantly predict utilization of the e-mail and World Wide Web portions of the RTEP workstation?

Research Question 3a: If a factor from Research Question 3 is found to individually significantly predict utilization after adjusting for the TAM variables and the demographic variables, what is the relationship between that factor and the TAM variables?
Method This study employed a survey method. This method was chosen because the study is examining perceptions of the participants regarding the technology. One way to find out what those perceptions are, and then to quantify them, is through the survey method (Babbie, 1990).
Subjects

The subjects in this study were employees of health care organizations who volunteered to participate in a series of evaluation studies as part of the RTEP project. In order to receive access to the RTEP workstation, the participants completed an enrollment package, including a consent form that allowed data about their utilization of the RTEP to be collected and employed in a variety of evaluation studies. While all employees were allowed to participate, this study was limited to physicians, nursing personnel (excluding certified nurse aides), and administrative personnel. There were a total of 276 potential participants: 69 from Cooper County, 107 from Linn County, and 100 from Macon County. The entire population (not a sample) of physicians, nurses, and administrative personnel was delivered a survey.

Variables

The dependent variables were utilization of the World Wide Web (WWW) through the RTEP workstation and e-mail communications sent through the RTEP. These were collected unobtrusively and automatically from the proxy servers located in each county.
6 Harris, Donaldson & Campbell
The independent variables based on the TAM were attitude, behavioral intentions, perceived ease of use, and perceived usefulness. Other independent variables included age, gender, educational level, occupation, and organizational support. The independent variables based on the preliminary results of a prior qualitative telemedicine evaluation study were time, access, technological problems, apprehension, and ownership.

Instrumentation

The survey instrument was intended to gather as much information as possible about why people did or did not utilize the RTEP workstation. The first section collected demographic and self-report data, the second section addressed the barriers previously identified through interviews with participants in a prior telemedicine evaluation study, and the last section dealt with the variables from the TAM. A page was also included at the end for respondents to provide any additional comments. The instrument was pilot tested for face validity, item clarity, directions, and readability.
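Because utilization was logged on the proxy servers rather than self-reported, the dependent variables reduce to simple per-participant counts over the study window. A hypothetical sketch of that aggregation step follows; the log file, its columns, and the service labels are assumptions for illustration, not the RTEP project's actual schema.

```python
# Hypothetical aggregation of proxy-server logs into per-user utilization counts.
# The log layout (user_id, timestamp, service) is assumed for illustration.
import pandas as pd

logs = pd.read_csv("proxy_log.csv", parse_dates=["timestamp"])  # hypothetical file

# Keep only the three-month analysis window (October-December 1998).
window = logs[(logs["timestamp"] >= "1998-10-01") & (logs["timestamp"] < "1999-01-01")]

# One row per participant: e-mail messages sent and WWW pages accessed.
usage = (window.pivot_table(index="user_id", columns="service",
                            values="timestamp", aggfunc="count", fill_value=0)
               .rename(columns={"smtp": "email_sent", "http": "www_hits"}))
print(usage.head())
```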
Results

Three months of utilization data were collected: October, November, and December of 1998. These months were chosen because training of all participants on the use of the RTEP workstation had a targeted completion date of September 1998. The overall response rate was 78%. Physicians had a 50% response rate, while administrative personnel (85%), nurse practitioners (100%), and nurses (79%) responded at higher rates. The mean age of respondents was 42.49 years. The majority of the respondents were female (184, or 85.6%); only 30 (14%) were male. This was expected, given the distribution of gender in the population, and reflects the female domination of the nursing profession in these counties and elsewhere.

The Technology Acceptance Model

Davis (1989) reported that the instrument he developed from the TAM was shown to be reliable through psychometric testing. Adams et al. (1992) found further support for this instrument through psychometric testing, finding it to be reliable, to have construct validity, and to be composed of two distinct factors, U and EOU. Morris and Dillon (1997) used the four original factors in Davis' TAM, i.e., EOU, U, A, and BI, claiming that "TAM offers a valid means of predicting system acceptability, as measured by system use" (p. 64). The results of this survey were subjected to factor analysis of the original four variables to further test the instrument's validity and to address the first research question.
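The factor-analytic step described in the next paragraph (four factors, Varimax rotation, loadings cut off at .35) might be sketched as follows. Note the hedges: `tam_items.csv` and the factor labels are hypothetical, and the `factor_analyzer` package offers minres rather than alpha factoring, so minres stands in for the extraction method actually used.

```python
# Sketch of the factor analysis step: four factors, Varimax rotation, .35 cutoff.
# Assumes `items` is a DataFrame of Likert-item responses; minres extraction is a
# stand-in for alpha factoring, which factor_analyzer does not provide.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("tam_items.csv")  # hypothetical item-level survey data

fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["U", "BI/A", "EOU_email", "EOU_www"])
print(loadings.where(loadings.abs() >= 0.35))  # suppress loadings below the cutoff
```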
An alpha factor analysis was completed on all of the TAM subscales (for both e-mail and WWW), with a cutoff point of .35. After a Varimax rotation, four factors were identified: perceived usefulness (U), behavioral intentions/attitude (BI/A), perceived ease of use (EOU) for e-mail, and perceived ease of use for the WWW. Reliability tests were conducted on both the e-mail and WWW TAM subscales. Davis (1989, 1993) originally designed the model, and Adams et al. (1992), Szajna (1994), and Morris and Dillon (1997) have replicated it. To replicate it further, a series of linear regressions was computed on the three TAM e-mail subscales (EOU, U, and BI/A) in order to examine the relationships between the subscales for e-mail utilization. Table 1 presents the linear regression statistics.

Table 1: Linear regression of TAM for e-mail

TAM subscales    B        SE of B    t        p         R²      Result
EOU → U          .751     .050       14.98    .000**    .548    Supported
U → BI/A         .734     .047       15.63    .000**    .570    Supported
EOU → BI/A       .614     .056       10.96    .000**    .385    Supported
BI/A → usage     5.121    1.223      4.19     .000**    .080    Supported

Note: The p value for "BI/A → usage" may not be strictly accurate due to departures from normality in the dependent variable. *p < .05. **p < .001.

The e-mail TAM subscales worked together as purported by Davis (1989). Only 8% of the variance was accounted for in the linear regression when actual utilization was used. However, when the total self-report of e-mail utilization was used in this same regression, 13.5% of the variance was accounted for. This is similar to the R² reported in previous studies that used self-reported utilization data. The same procedure was conducted on the WWW TAM subscales (Table 2); again, these subscales worked together as purported by Davis (1989).
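The contrast between logged and self-reported usage described above amounts to fitting the BI/A → usage regression twice with different outcome columns. A minimal sketch, assuming a hypothetical DataFrame of subscale scores and both usage measures:

```python
# Sketch: variance in e-mail use explained by BI/A, for logged vs. self-reported use.
# `survey_scores.csv` and its columns are hypothetical stand-ins for the study data.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_scores.csv")
for outcome in ["email_actual", "email_selfreport"]:
    fit = sm.OLS(df[outcome], sm.add_constant(df["bi_a"])).fit()
    print(f"BI/A -> {outcome}: B={fit.params['bi_a']:.3f}, "
          f"SE={fit.bse['bi_a']:.3f}, t={fit.tvalues['bi_a']:.2f}, "
          f"R^2={fit.rsquared:.3f}")
```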
The variance accounted for in the WWW TAM was higher (11%) than in the e-mail TAM when using actual utilization data in the regression. However, when the self-reported data were used in the regression, the R² decreased to .034; that is, only 3.4% of the variance was accounted for by the behavioral intentions/attitude variable.

Table 2: Linear regression on WWW TAM subscales

TAM subscales    B          SE of B    t        p         R²      Result
EOU → U          .650       .056       11.62    .000**    .422    Supported
U → BI/A         .808       .043       18.71    .000**    .655    Supported
EOU → BI/A       .643       .055       11.67    .000**    .417    Supported
BI/A → usage     233.989    47.082     4.97     .000**    .110    Supported

Note: The p value for "BI/A → usage" may not be strictly accurate due to departures from normality in the dependent variable. *p < .05. **p < .001.

E-mail Utilization

E-mail utilization of the RTEP workstation, as measured by the proxy servers in each county, was not as high as expected: 141 respondents (66%) never utilized it during the three-month period of analysis. Due to the magnitude of non-utilization, which resulted in clear violations of normality, logistic regressions were computed rather than multiple regressions. The dependent variable of e-mail utilization was dichotomized: either respondents used it, or they did not (yes/no). A logistic regression computed on the TAM variables alone identified behavioral intentions/attitude as the single predictor of e-mail utilization. When the demographic data were added into the logistic regression model to answer the second research question, age was also a significant predictor. The full logistic regression model, computed to address the third research question and including all independent variables, revealed the following significant predictors of e-mail utilization of the RTEP workstation: behavioral intentions/attitude, age, time, and organizational support (Table 3).
Table 3: Full logistic regression on e-mail utilization

Variable        B          S.E.      Wald       df    Sig
BI/A            -2.0895    .5796     12.9980    1     .0003**
EOU             .1813      .3879     .2185      1     .6402
U               -.0507     .4408     .0132      1     .9084
Age             -.0511     .0230     4.9553     1     .0260*
Edu level                            .8998      4     .9246
  Edu(1)        .0721      1.4884    .0023      1     .9613
  Edu(2)        .5638      1.2195    .2137      1     .6438
  Edu(3)        .3501      1.1633    .0906      1     .7634
  Edu(4)        .8078      1.1638    .4817      1     .4876
Occupation                           4.2630     3     .2344
  Occ(1)        2.6606     1.8328    2.1074     1     .1466
  Occ(2)        .8679      1.8416    .2221      1     .6374
  Occ(3)        -.0518     1.8679    .0008      1     .9779
Gender          -.1144     .7722     .0219      1     .8823
Time            -.5836     .2930     3.9691     1     .0463*
Apprehension    .2756      .2689     1.0504     1     .3054
Org Support     .5971      .3052     3.8277     1     .0504*
Ownership       .4153      .2613     2.5265     1     .1120
Access          -.1145     .3194     .1284      1     .7201
(Constant)      2.0560     2.4264    .7180      1     .3968

*p < .05. **p < .001.
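A minimal sketch of how a full logistic model like the one behind Table 3 could be fit, with the outcome dichotomized as used/did not use and the categorical predictors dummy-coded. All column names are hypothetical:

```python
# Sketch of the full logistic regression on dichotomized e-mail use (used vs. not).
# `survey_scores.csv` and its columns are hypothetical stand-ins for the study data.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_scores.csv")
df["email_any"] = (df["email_sent"] > 0).astype(int)  # dichotomize: yes/no

predictors = ["bi_a", "eou", "u", "age", "edu_level", "occupation", "gender",
              "time", "apprehension", "org_support", "ownership", "access"]
X = pd.get_dummies(df[predictors],
                   columns=["edu_level", "occupation", "gender"], drop_first=True)
X = sm.add_constant(X.astype(float))

fit = sm.Logit(df["email_any"], X).fit()
wald = (fit.params / fit.bse) ** 2  # per-coefficient Wald statistic, as in Table 3
print(pd.DataFrame({"B": fit.params, "S.E.": fit.bse,
                    "Wald": wald, "Sig": fit.pvalues}).round(4))
```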
WWW Utilization

More respondents utilized the World Wide Web (WWW) component of the RTEP workstation, as measured by the proxy servers in each county, than the e-mail technology. Nevertheless, 75 respondents (35%) never utilized the WWW during the designated period of analysis. Again, due to the magnitude of non-utilization and violations of normality, logistic regressions were computed rather than multiple regressions, and the dependent variable of WWW utilization was dichotomized (yes/no). A logistic regression was computed on the TAM variables alone to address the first research question, with behavioral intentions/attitude as the single predictor of WWW utilization. With regard to Research Questions 2 and 3, the only significant predictor of WWW utilization when all of the independent variables were included was the behavioral intentions/attitude variable from the TAM. The full logistic regression is presented in Table 4.

Research Question 3a asked: if a factor from Research Question 3 individually and significantly predicted utilization after adjusting for the TAM and demographic variables, what was the relationship between that factor and the TAM variables? If a factor added strong predictive power, one would expect a weak correlation between that factor and the variables in the TAM model; conversely, if a factor added only weak predictive power, there may be a strong correlation between that factor and one of the TAM variables. A correlation matrix was computed on all the independent variables.

The variables BI/A, age, and time were the most significant predictors of the probability that a respondent would utilize the e-mail component of the RTEP workstation. BI/A was the strongest predictor (p=.0003), followed by age (p=.0269) and then time (p=.0494). Organizational support was a weak predictor (p=.0504). Age was a strong predictor in the logistic regression model for e-mail utilization; therefore, one would expect a weak correlation with the TAM variables. In fact, age did not significantly correlate with two of the variables within the TAM, EOU and U. Age was statistically significant at the p=.05 level with BI/A, with a correlation coefficient of -.162 (p=.022); however, this is not meaningful in that it accounts for only 3% of the variance (R²=.026). Organizational support and time were fairly weak predictors; therefore, stronger correlations with the BI/A factor would be expected. In fact, there was a stronger correlation for organizational support, with a correlation coefficient of .470 (p=.000), accounting for 22% of the variance (R²=.22). Time was not as meaningful a correlation as organizational support, with a correlation coefficient of .299 (p=.000), accounting for only 9% of the variance (R²=.089).
Table 4: Full logistic regression on WWW utilization

Variable        B          S.E.      Wald      df    Sig
BI/A            -.8457     .3916     4.6650    1     .0308*
U               -.0516     .4035     .0164     1     .8982
EOU             .1055      .3388     .0969     1     .7556
Age             -.0291     .0209     1.9391    1     .1638
Gender          -.0221     .7654     .0008     1     .9770
Occupation                           3.7516    3     .2896
  Occ(1)        6.1190     20.845    .0862     1     .7691
  Occ(2)        6.5388     20.867    .0982     1     .7540
  Occ(3)        4.1908     20.890    .0402     1     .8410
Edu level                            2.2704    4     .6862
  Edu(1)        -.9146     1.7045    .2879     1     .5915
  Edu(2)        -.8781     1.4492    .3671     1     .5446
  Edu(3)        -.5366     1.3979    .1474     1     .7011
  Edu(4)        .2356      1.4445    .0266     1     .8704
Time            -.5042     .2691     3.5119    1     .0609
Apprehension    -.0415     .2574     .0260     1     .8720
Org Support     .2981      .2765     1.1618    1     .2811
Ownership       .0370      .2517     .0217     1     .8830
Access          -.2744     .2844     .9310     1     .3346
(Constant)      -5.362     20.904    .0658     1     .7975

*p < .05.
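The correlation checks used to answer Research Question 3a, relating each additional predictor to the TAM variables and squaring r to get the share of variance, reduce to a sub-block of the correlation matrix. A sketch, again with hypothetical column names:

```python
# Sketch: correlation of each additional predictor with the TAM variables,
# reporting r and r^2 (the share of variance, as in the text's R^2 figures).
# `survey_scores.csv` and its columns are hypothetical stand-ins for the study data.
import pandas as pd

df = pd.read_csv("survey_scores.csv")
tam = ["eou", "u", "bi_a"]
extras = ["age", "time", "org_support"]

r = df[extras + tam].corr().loc[extras, tam]
print("r:\n", r.round(3))
print("r^2 (variance shared):\n", (r ** 2).round(3))
```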
Limitations of the Study

The large amount of non-utilization was a limitation for purposes of analysis. There were 141 respondents (66%) who did not utilize e-mail and 75 (35%) who did not utilize the WWW during the three-month period for which utilization data were collected. As a result, it was important to use logistic regressions rather than linear or multiple regressions: a linear regression assumes a normal distribution, and the utilization data were so skewed by non-utilization that logistic regressions were necessary to analyze the data appropriately. However, the very fact that there were so many non-users was significant in and of itself.

Another limitation of this study was the overwhelming number of female versus male respondents. This was expected given the population from which the respondents came; the nursing population is female dominated in these three counties, as well as at the national level (Cervero, 1988). The unequal group sizes across occupations were also a limitation. With only four nurse practitioners, it was difficult to draw any conclusions with respect to utilization of the telemedicine technologies for that group.

Training was supposed to have been completed by the end of September 1998. However, a few respondents claimed they had never received any kind of training on the technology. One respondent commented that she did not have a password yet. Another respondent simply said, "not trained." Eight respondents requested more training.

The utilization data were limited to the activity of the respondents through the RTEP workstation. Respondents may utilize e-mail and WWW technology through another Internet provider for health care purposes. Conclusions from this study can only be drawn with respect to the respondents and their utilization of the RTEP workstation.
DISCUSSION

The RTEP was designed to meet the physicians' needs first and foremost. Each physician was given a computer for his or her office. Nurses and administrators may have had to share a computer, but the physicians were given their own. This was done in order to address the "access to updated information" need identified by rural physicians. Therefore, this study was particularly interested in the physicians' responses and utilization. While the physicians who responded to this study scored favorably on the TAM e-mail subscales with regard to perceived usefulness (mean=4.60), perceived ease of use (mean=4.73), and behavioral intentions/attitude (mean=5.67), only one
physician utilized the e-mail technology. This inconsistency between responses and utilization may be due to "social desirability," or unconsciously trying to give socially desirable answers (Streiner and Norman, 1995).

There are studies indicating that physicians do not perceive the Internet as useful. Three of the ten physicians did not utilize the WWW component of the RTEP workstation during the three-month period of analysis. Two physicians had over a thousand hits, one physician had slightly over one hundred hits, two physicians had over 50 hits, and the remaining two physicians had only approximately 20 hits. According to Brown (1998), physicians in the United States have had a lukewarm relationship with the Internet, due to a lack of perceived usefulness. Most physicians need to be convinced that the Internet offers a "critical mass" of high-quality information for both them and their patients. Brown claimed that only 25% of online physicians surveyed in 1997 liked to use the Internet and commercial online services for patient education, and only 44% liked to use them for Continuing Medical Education (CME).

With respect to the other respondents in the study, some may not perceive the technology as useful in their work but use it for other purposes. In other words, non-utilization does not necessarily mean that the respondents do not accept the technology. Some respondents did not utilize the technology at work but used the Internet from home. One respondent commented, "Most of this information does not relate to me and my job at CCMH. I have very little access to the computer & resulting e-mail & WWW. I do my thing on my own personal computer at home."

The second research question addressed whether or not the demographic data would add any predictive value to the TAM. Age added predictive value to the TAM. When divided into age groups, respondents between the ages of 30 and 49 had the highest means for both e-mail and WWW utilization. Respondents over the age of 60 had the lowest mean for e-mail utilization (1.23 messages sent), and respondents between the ages of 50 and 59 had the lowest mean for WWW utilization (315.20 hits). This is inconsistent with Miller's (1996) study, which found that younger and older users of the Internet focus on communicating, while middle-aged users focus more on seeking information, such as health and medical information. For this set of respondents, however, it was the middle-aged who had the highest means for utilization of both e-mail and the WWW. This difference in findings may be due to several reasons. First, this study was limited to health care professionals. Second, and perhaps more importantly, it was limited to those professionals working in a rural area. Miller's study was an overall study of people already using the Internet.
Segars and Grover (1993) suggested that the relationships within the TAM may be more complex than previously thought. The present authors agreed and developed additional factors that may have more to do with the fact that the technology was implemented specifically in health care settings in rural areas. Therefore, based on the previous qualitative telemedicine research mentioned earlier, the third research question addressed additional factors that might add predictive power to the existing TAM: the subscales time, apprehension, access, organizational support, and ownership. The two subscales that added predictive power in the logistic regression model were time and organizational support.

According to Treister (1998), "physicians often fail to embrace a complex information system, may not see its relevance to their practices [perceived usefulness], and are characteristically reluctant to invest the time (italics added) and energy to be trained in its use" (p. 20). Physicians had an overall mean of 3.92 for the time subscale, which means that, overall, they reported neutral feelings regarding having the time to learn and utilize the RTEP workstation. Only one physician had sent e-mail messages during the three-month period of analysis; however, all but three physicians utilized the WWW technology. Further investigation into physician utilization of e-mail revealed that they received messages but did not send messages (with the exception of one physician). If a physician is opening his or her e-mail and printing the messages, it could be said that he or she is, in fact, utilizing the technology. If the physician is requiring someone else to check his or her e-mail account, it could be said that he or she is using the technology via a proxy user. This is similar to findings from the 1997 American Interactive Healthcare Professional Survey, which revealed that 43% of the physicians surveyed used the Internet for professional purposes; however, 26% of those physicians had someone else go online on their behalf (Brown, 1998), using the Internet via a proxy user.

Time was mentioned by several of the respondents in the comments section on the last page of the survey instrument. One respondent said, "I would like to use it, but I just don't have the time," and another said, "I don't use the RTEP workstation–I'm usually busy with patient care and as soon as I would start a light would ring and I'd have to stop. Need time with no interruptions." One respondent returned a survey but did not fill out the questionnaire; instead, she wrote "No Time" across Section B (which contained the additional subscales) and "No Time To Use" across Section C (which contained the TAM and ownership subscales) of the survey instrument.
Administrators had the highest overall mean for the time subscale (4.2712); however, they would not have to take time away from patient care to utilize the technology, as the physicians and nurses would. Organizational support was the other significant predictor when all of the variables were added to the entire logistic regression model. The overall mean for each occupation for organizational support was over 5, which means that the respondents agreed that their organization was supportive of their use of both the e-mail and WWW components of the RTEP workstation. Yet, given the magnitude of non-utilization, it is clear that respondents have not accepted the e-mail component of the RTEP workstation. One respondent, who had reported "strongly agree" on both organizational support questions on the survey, commented on the last page of the survey instrument, "I would prefer NOT to use RTEP's email, but [I] have been told that [it] is the only one I will have access to unless I didn't [sic] use [the] University."

Research Question 3a addressed the relationships between the additional significant predictors of utilization and the TAM variables. For the WWW, there were no additional predictors; only behavioral intentions/attitude predicted the probability that a respondent would utilize the WWW component of the RTEP workstation. For e-mail, however, the additional predictors were age, time, and organizational support.

Age had a statistically significant correlation with BI/A for the e-mail component of the RTEP workstation. However, this correlation was not particularly meaningful, producing an R² of .026, or 3% of the variance for e-mail utilization. Therefore, because age was a strong predictor in the e-mail logistic regression model and had weak correlations with the TAM variables, age is a separate consideration that should be included in future studies. For this set of respondents, the TAM by itself could not predict e-mail utilization as well as it could when age was included in the logistic regression model. The time subscale did not correlate with age; however, it did correlate with every other subscale and all the other demographic variables. The TAM alone could not predict e-mail utilization as well as when age and time were included in the e-mail logistic regression model for this set of respondents. Therefore, the time variable needs to be included in future studies of acceptance of information technologies (Treister, 1998; Aydin & Forsythe, 1998).

The last additional predictor, organizational support, did not correlate with age, gender, or occupation. However, it did significantly correlate with educational level and with every other subscale in the model. Organizational support was only significant when all of the other variables were in the logistic
regression model. It appeared that organizational support was necessary, but not sufficient in and of itself. The strongest correlations were with the TAM subscales; therefore, the TAM appears to account for much of the organizational support subscale. Because organizational support was a statistically significant predictor in the e-mail logistic regression model, it is important to consider the structure of these organizations, particularly the role of the physicians within the rural setting, given their authority and leadership roles within the organization. The physicians in this study clearly have not accepted the e-mail technology, and most are only moderately utilizing the WWW. This indicates a need for further studies examining the effect of physician leadership and organizational support on the acceptance of information technologies. Examining the role physicians should take within health care organizations, Heydt (1999) claimed that physicians should take the lead in changes within the health care system, including technological change. As a physician himself, Heydt noted that these are “disconcerting” times for physicians because of the many changes, driven largely by constraints on financial resources. However, physicians must recognize the need to adapt and should take more control over these changes: “We must then lead change and at the same time govern its pace” (p. 43). Implementers of telemedicine technologies must recognize the leadership role physicians have in rural health care organizations and allow them to lead the technological changes.
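For readers who want to see the mechanics of the nested comparison described above, the following sketch shows how such a logistic regression might be run in Python with statsmodels. The data file and the variable names (used_email, bi_attitude, age, time_subscale, org_support) are hypothetical illustrations, not the study's actual data or code.

# Illustrative sketch only: the data file and column names are
# hypothetical, not the study's actual variables. It mirrors the
# analysis described above: a base TAM model versus one extended with
# age, time, and organizational support, predicting e-mail use (0/1).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("rtep_survey.csv")  # hypothetical survey + usage-log data

# Base model: behavioral intentions/attitude (the TAM predictor) only.
base = smf.logit("used_email ~ bi_attitude", data=df).fit()

# Extended model: TAM predictor plus the additional significant predictors.
full = smf.logit(
    "used_email ~ bi_attitude + age + time_subscale + org_support",
    data=df,
).fit()

# McFadden's pseudo R-squared for each model; a clear increase suggests
# the added variables improve prediction beyond the TAM alone.
print(f"TAM only:            pseudo R2 = {base.prsquared:.3f}")
print(f"TAM + age/time/supp: pseudo R2 = {full.prsquared:.3f}")
print(full.summary())  # coefficients and p-values for each predictor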
Additional Findings Additional findings from this study deserve discussion as well. Examination of self-reported versus actual utilization revealed that nearly 30% of the respondents who reported using their RTEP e-mail account actually had no activity during the three-month period of analysis. In addition, nearly 30% of the respondents reported higher utilization of their RTEP e-mail than their actual utilization revealed. Only 25% of the respondents accurately reported non-utilization of the RTEP workstation’s WWW. There may be several reasons for these discrepancies. Respondents may base frequency estimates on a general rate of behavioral episodes without actually recalling any specific incidents. Discrepancies may also be due to respondents’ memory errors, which are viewed as the greatest detriment to accurate reporting (Blair & Burton, 1987). Respondents may also have answered the self-report question with a social desirability bias (Streiner & Norman, 1995). Future studies are needed that examine actual utilization as opposed to self-reported utilization.
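A cross-tabulation of self-reported against logged utilization makes discrepancies of this kind easy to quantify. The sketch below illustrates the idea; the data and column names are invented, not taken from the study.

# Illustrative sketch: cross-tabulate self-reported e-mail use against
# utilization recorded in system logs. Data and column names are
# hypothetical, not the study's.
import pandas as pd

df = pd.DataFrame({
    "self_report": ["used", "used", "not_used", "used", "not_used"],
    "logged":      ["none", "some", "none", "none", "some"],
})

# The cross-tabulation exposes over-reporting (reported use, no logged
# activity) and under-reporting (reported non-use, logged activity).
print(pd.crosstab(df["self_report"], df["logged"], margins=True))

# Share of respondents who reported use but had no logged activity,
# analogous to the ~30% over-reporting pattern described above.
over = ((df["self_report"] == "used") & (df["logged"] == "none")).mean()
print(f"Reported use with no logged activity: {over:.0%}")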
CONCLUSIONS Three main conclusions may be drawn from this investigation. First, there are distinct differences between the use of e-mail technologies and the WWW: one serves communication purposes, the other information gathering. More people were willing to use the WWW component of the RTEP workstation than the e-mail component. Second, if acceptance of the technology is measured by its utilization, then this technology has not been widely accepted: two-thirds of the respondents (66%) did not utilize e-mail at all during the three-month period, and 35% of the respondents did not utilize the WWW. However, utilization is not mandatory within these rural health organizations, and adoption may therefore take longer. Third, while the TAM was able to predict the probability that a respondent would use the technology, there were other significant predictors of e-mail utilization: age, organizational support, and time. E-mail communication appears to be more complex for this set of respondents than the TAM can capture. Although time and organizational support correlated strongly with the TAM variables, age correlated only weakly with them. Therefore, to obtain a comprehensive predictive model of the utilization of computer-based telemedicine, these other variables should be included. For the WWW component of the RTEP workstation, the TAM was sufficient in and of itself to predict the probability of utilization.
Implications for Future Research More studies need to be conducted to refine this survey instrument so that it may be adapted to various rural health organizations to aid the implementation of new information technologies. Adaptations need to address issues of rural versus urban settings and of communication needs versus information-seeking needs. The nursing and physician occupations may need additional time scheduled for training sessions to address the specific learning needs identified by the participants. A refined survey could then be readministered to measure changes in perceptions of the technology and tested against current or future utilization. More research is needed on the impact e-mail is having on rural health care, including communication studies examining how e-mail between nurses and physicians can be made time- and cost-effective.
Such research is particularly needed in nursing homes and hospitals, where communication must occur between professionals who are not in the same geographic location. More research is also needed on the impact the WWW is having on patient education. Patient education was listed as the most important use of the RTEP workstation. How does this affect the patient’s care? Do patients go on to take better care of their conditions? Does the education aid communication between the patient and the physician? What about between the physician and the nurse? There are still many questions to be addressed in researching the impact of Internet technologies on rural health care. This study examined predictors of utilization of the technology; many respondents were not utilizing it. Respondents indicated that they did not perceive e-mail to be useful, and respondents who did not use the WWW indicated that they did not intend to use it, nor did they have the time to do so. Future research should examine why utilization was so low. Overall, further research needs to be conducted to investigate the impact of computer-based telemedicine technologies on physician isolation and retention, patient care, communication, and medical education. There have been few, if any, wide-ranging studies indicating whether physician access to Internet technologies has improved patient care (Southwick, 1997).
Chapter II
Case Study of a Patient Data Management System: A Complex Implementation in an Intensive Care Unit

Nathalie Mitev
The London School of Economics, UK
Aarhus University School of Business, Denmark

Sharon Kerkham
Salford University, UK
Since the National Health Service reforms were introduced, the NHS has moved towards a greater emphasis on the accountability and efficiency of healthcare. These changes rely on the swift delivery of IT systems, which are being implemented in the NHS because of the urgency of collecting data to support these measures. This case study details the events surrounding the introduction of a patient data management system (PDMS) into an intensive care unit in a UK hospital. It shows that the implementation was complex and involved organizational issues related to the costing of healthcare, legal and purchasing requirements, systems integration, training and staff expertise, and relationships with suppliers. It is suggested that the NHS is providing an R&D environment from which others are benefiting: the NHS is supporting software development activities that go unrecognized, and the true costs of this work are difficult to estimate. It is also argued that introducing a PDMS crystallizes many different expectations, making the project unmanageably complex. This could also be due to PDMS being a higher-order innovation that attempts to integrate information systems products and services with the core business.
INTRODUCTION The National Health Service (NHS) costs the UK approximately £38 billion a year (James, 1995), of which £220 million is spent on IT (Lock, 1996). New IT applications not only support administrative functions and medical diagnosis, but are also increasingly used to support resource management and medical audit (Metnitz and Lenz, 1995; Sheaff and Peel, 1995). One such application is the patient data management system (PDMS) used in intensive care units, where nurses’ main task of planning and implementing patient care requires awareness of a set of physiological parameters that provide an overview of the patient’s general condition (Ireland et al., 1997). The collection of patient data is also a legal requirement of the NHS Executive. The implementation of these new technologies is not proving easy for the NHS. Healthcare professionals involved with IT projects often lack experience in IT development, and risks are higher in clinical applications, which require strong user involvement. These technologies are also being implemented in the NHS at a fast rate because of the urgency of collecting data to support accountability measures. The NHS has changed quite dramatically over recent years, not least with the introduction of “competitive market forces” (Peel, 1996; Protti et al., 1996). The current healthcare reforms come from various government White Papers, moving the philosophy of the NHS towards business themes and client choice, and they rely on the “swift” delivery of IT systems (Willcocks, 1991). All chief executives of health authorities and NHS trusts are now “accountable officers,” responsible for the efficient use of resources and personally responsible for performance (Warden, 1996). Sotheran (1996) argues that using IT in the NHS entails new work structures and changes in the activities performed, and that a redistribution of control and power will occur as a result. Bloomfield et al. (1992) found a diversity of interpretations among those involved: the intended focus of the systems varied across management responsibility, medical speciality, doctor, and patient group levels, and views from one peer group could be imposed upon another. Lock (1996) advocates that “the impact of computer systems on patient care as well as on the business objectives of hospitals should be considered.” The “benefits realization” approach (Treharne, 1995) is recommended to quantify and document benefits, and Donaldson (1996) claims that this process can help justify the investments. However, it seems that the “benefits realization”
methods are not being implemented or are failing for the following reasons (Treharne, 1995): an overemphasis on IT relative to other critical issues; a lack of focus; a shortage of skills; ineffective business/IT partnership; absence of benefit management process. Generally, the rapid movement of information technologies into health care organizations has raised managerial concern regarding the capability of today’s institutions to satisfactorily manage their introduction. Indeed, several health care institutions have consumed “huge amounts of money and frustrated countless people in wasted information systems implementation efforts” and there are “no easy answers as to why so many health informatics projects are not more successful” (Pare, Elam and Ward, 1997). In this light, the aim of this study is to provide a deeper understanding of how clinical information systems are being implemented, using a case study methodology.
OBJECTIVES AND METHODS From a theoretical standpoint, it is suggested that the adoption and diffusion of information systems (IS) depend on the type of IS innovation concerned. Swanson (1994) and McMaster et al. (1997) suggest there are three IS innovation types:
• Process innovations which are confined to the IS core.
• Application of IS products and services to support the administrative core of the business.
• Integration of IS products and services with core business technology.
PDMS are innovative computer systems that attempt to integrate administrative functions and clinical decision making. Introducing this third type of innovation tends to have far broader ramifications across the overall business domain. Our research objective is to illustrate the resulting complexity of the relationship between this type of technology and organizational change by investigating as many facets as possible of the implementation of a PDMS in an intensive care unit (ICU). The case study explores these implementation issues and is based on an in-depth examination of the introduction of a PDMS in an ICU, in order to offer insights to those who have responsibility for managing complex and risky information system implementation projects. Intensive fieldwork was carried out with members of a PDMS project in an ICU in a Northwest hospital over a period of one year (July 1996 to July 1997). This corresponded to the introduction of a commercial PDMS and its early adaptation to this particular context, an interesting opportunity since PDMS were still rare in the UK in 1996. An online PDMS was being
introduced to help with the enormous amount of data produced by advanced monitoring equipment. The case study approach was chosen because it allows the researcher to ask penetrating questions and to capture the richness of organizational behavior. A case study approach is also generally recommended for gaining insight into emerging and previously unresearched topics and when it is difficult to control behavioral events or variables (Benbasat, Goldstein and Mead, 1987; Kaplan and Maxwell, 1994). This qualitative approach seemed particularly appropriate since incorporating computers into all aspects of daily ICU operations is a “formidable task,” both technically and logistically, which requires “close cooperation between physicians, nurses, basic scientists, computer specialists, hospital administrators and equipment manufacturers” (Nenov, Read and Mock, 1994). Given the research has a descriptive and exploratory focus, a combination of data collection techniques was utilized, as recommended by Marshall and Rossman (1989): observation of everyday practices, attendance at meetings and training sessions, informal participation, and in-depth interviews with all members of the PDMS project (software suppliers, hospital information systems staff, medical physicists, nurses, medical consultants, and hospital administrators). Of particular importance at the time were the legal, purchasing, and administrative constraints specific to the NHS that were placed on the ICU. These were also researched using secondary internal sources, both to gain an understanding of the broader organizational setup and because they affected how the software was purchased, modified, and implemented. The commercial PDMS had to be dramatically modified to suit its users, and this transformation was still continuing at the time of writing. This combination of qualitative techniques has been used in other IS studies in healthcare (Kaplan and Maxwell, 1994); such techniques enable the elicitation of organizational members’ views and experiences, in their own terms, about sensitive matters and issues of their own choice, instead of collecting data that are simply a choice among preestablished response categories. Additionally, research of this kind is appropriate for unravelling the complexities of organizational change and for providing rich insights, generating an understanding of the reality of a particular situation, and providing a good basis for discussion. On the other hand, relying on organizational members’ qualitative interpretations and on complex associations between events, facts, and a range of organizational issues makes it more difficult to separate “data” from findings. The evolution of information systems in healthcare and their introduction in intensive care is first briefly described. The case study events are then
presented, covering the history of the project, the initial specifications, the choice of software, the hardware requirements and difficulties, the programming changes performed, the training carried out, the practical problems experienced, the continuing issue of software upgrades, user satisfaction, organizational practices, and the role of suppliers. The main findings about implementation and organizational issues are identified as time and cost constraints, underestimation of labor effort, the perception of IS implementation as a one-off event, the power of suppliers, the lack of project management, the difficulties in managing expectations, the issues of IT expertise, and internal conflicts. Discussion points centre on the vision of IS as a technical fix, the difficulty of transferring technical solutions to different contexts, the problem of estimating benefits, and institutional barriers and politics. Finally, it is concluded that these implementation difficulties are symptomatic of a complex IS innovation which attempts to integrate technology with core business processes.
INFORMATION SYSTEMS IN HEALTHCARE The introduction of PDMS in intensive care units is taking place in the broad context of computer use in the NHS. Hospital information support systems (HISS) are integrated systems supporting all hospital operations. The activities and data relating to every patient (tests requested, results reported, etc.) are fed into the financial and information systems of the hospital, to enable hospitals to meet the requirements of the contracting environment (Thorp, 1995a). Guidelines were published as a result of studies of HISS implementation (Thorp, 1995b), recommending, for instance, data interchange standards and the need for “benefits realization.” The development of the electronic patient record (EPR) further supports clinicians in the recording of medical records (McKenna, 1996). The EPR describes the record of the periodic care provided by one institution. One aim is for EPRs to develop into “comprehensive” information systems covering the whole of a hospital and beyond. The UK EPR program has five EPR and Integrated Clinical Workstation demonstrator sites (Peel et al., 1997; Urquhart and Currell, 1999). Implementing an EPR is a very complex operation that involves major organizational and technological changes (Atkinson and Peel, 1997). By holding all patient data electronically and interfacing with the various administrative and clinical systems, the aim is to extract information for all levels of the organization. For example, as part of its EPR project, the Wirral Hospital NHS Trust has implemented electronic prescribing, whereby patient data is assembled and the prescription can be issued and printed without the need to
access manual records. Wirral Hospital has the largest computerised prescribing system in the UK and is developing a rule-based decision support system to trigger pharmacy interventions (Moore, 1995). Decision support systems enhance medical diagnosis, and there are broadly two kinds (Modell et al., 1995). First, there are medical diagnostic DSS; these systems give alternative or supportive diagnostic information based on input from the user and are implemented in specific areas of medicine. Broader systems are being developed to make use of EPRs (Miller, 1994; Pitty and Reeves, 1995). Second, there are databases that support the collection of clinical data, presenting and analyzing the information for medical decision support (Wyatt, 1991). An example can be found in ICUs, where monitoring equipment collects data which feed into a patient data management system. Intensive care costs can account for up to 20% of a hospital’s total expenditure (Metnitz and Lenz, 1995), and there is increasing demand from management to cut these costs. The rapid development of monitoring devices has increased the data available from ICUs ten-fold. The aim of a PDMS is to collect data from monitoring devices at the patient’s bedside for medical and statistical management reporting. The ability to fully analyze these data has not previously been available, due to the large amounts of data that have to be processed and the slow arrival of outputs. There are only a few systems able to process PDMS data for quality management and cost accounting (Metnitz et al., 1996). Nonetheless, ‘basic’ PDMS are being introduced to help support these functions in the future. For instance, the University Clinics of Vienna have developed a system called ICDEV (Intensive Care Data Evaluation System), a scientific database tool for analyzing complex intensive care data (Metnitz et al., 1995). It is built to interface with two commercially available PDMS: Care Vue 9000 (Hewlett Packard, Andover, USA) and PICIS Chart+ (PICIS, Paris, France). ICDEV enables the PDMS to be used for cost accounting, quality control, and auditing. ICDEV was first used at the Medical ICU of the Vienna General Hospital in June 1994 and at its Neonatal ICU in December 1994 with Care Vue 9000, and in April 1995 with PICIS Chart+ at its Cardiothoracic ICU. Metnitz et al. (1995) report problems of integration with existing local networks and databases, which have required the expertise of engineers. Metnitz and Lenz (1995) have found that commercial PDMS can help optimize bed occupancy and facilitate analysis for scientific and quality control purposes. On the other hand, they are expensive, require specialized maintenance, and may not be faster than manual techniques. Metnitz and Lenz (1995)
conclude that commercial PDMS still have some way to go before they are truly useful for both clinical and management analysis purposes. They state that those implementing PDMS must plan sufficiently for reconfiguration before installation and implementation, as most PDMS interfaces are presently neither practical nor reliable, and that co-operation between the system developer and the purchaser is mandatory. Urschitz et al. (1998) report on local adjustments and enhancements of Care Vue 9000, such as knowledge-based systems for calculating the parenteral nutrition of newborn infants or for managing mechanical ventilation in two neonatal ICUs. They state that PDMS have to be constantly adapted to users’ needs and to the changing clinical environment, and that there are as yet unsolved problems of data evaluation and export. In terms of implementation issues, Langenberg (1996) argues that PDMS require good organization: specifications need to be defined before the process is started; a system should include data acquisition, database management, and archiving of data; and coupling with a hospital information system and the possibility of data exchange are mandatory. However, Butler and Bender (1999) claim that the current economic climate makes the cost of ICU computer systems prohibitive for many institutions; that the literature describing ICU computer system benefits is often difficult to interpret; and that each implementation has many unique variables which make study comparison and replication potentially impossible. They suggest changes and issues can only be evaluated uniquely in each study unit or institution. Pierpont and Thilgen (1995) measured the effects of computerised charting on nursing in intensive care; they found that the total time spent manipulating data (entering or reviewing it) post-installation was unchanged; that time spent in patients’ rooms did not alter, although nurses had more time available for monitoring at the central station; and that computerised charting will not necessarily provide ICU nurses with a net excess of time for tasks unrelated to manipulating data.
CASE STUDY IN AN INTENSIVE CARE UNIT History of the Project The ICU at a UK Northwest hospital had long felt the need for a PDMS. The consultants first raised the idea for a computerised system that would collate and generate information from bedside monitors in the early 1980’s. At the time the technology was not available. The “management team” for the
ICU consists of medical consultants, a Directorate manager, representatives of the nursing staff, medical physicists, and maintenance staff. The medical physicists build medical applications and equipment for the hospital. The management team determines organizational and purchasing issues for the ICU. In 1994 the management team realized that the ICU would have to update the existing patient monitoring system to function effectively. Investigations started, and a request to purchase a monitoring system and possibly a PDMS was submitted to the Regional Purchasing Office. At the beginning of December 1994 the request for capital equipment funds was agreed. The system, monitors, and PDMS had to be on site by the end of the financial year (31 March 1995). All NHS purchases have to go out to European Open Tender; this further reduced the time available to the management team to choose a system, leaving no more than six weeks for the appraisal of possible systems.
PDMS Specifications Nevertheless, the management team developed the following criteria to which the system had to adhere:
• the hardware was to have a life span of 7-10 years;
• the PDMS was to be combined with the monitoring system;
• the cheapest system had to be chosen, unless the case was strong enough to convince the Regional Office;
• the PDMS could be adapted to “fit” around users;
• charts produced had to be the same as the existing paper charts.
The paper charts used by the staff are an agreed standard within the unit, which has taken a very long time to develop.
Choice of Software Due to the time constraints, the scale of evaluation had to be considerably reduced, and investigations were limited to the UK. The only PDMS in working practice that the team was able to review was at Great Ormond Street Hospital in London. This PDMS did not fulfil all of the team’s criteria and was considered too difficult to use by the nursing staff representative. Once the purchase had been put out to European Open Tender, the ICU was obliged to choose the cheapest system. This was the PICIS Marquette system which, at a cost of approximately £600,000, was also the team’s preferred choice due to its adaptability. The monitoring system was introduced in March 1995. However, the PDMS was not fully implemented, due to problems with the reporting/charts facility, and was still not fully implemented two years after purchase (summer 1997).
Hardware and Laboratory Connections The PDMS software program was to collate blood gas levels and observations directly from the monitoring equipment at the bedside of the patient. Some examples of measures are tracheal suction, heart rate, blood temperature, sedation score, peak pressure, ventilation mode, pain score, and pulmonary mean. However, data from laboratory results, ventilators, drug infusions, and bedside observations, which were intended to be available automatically through laboratory connections, were still entered manually during the period of the study.
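As a rough illustration of the kind of record such a system collates, the sketch below defines a minimal bedside observation structure. The field names and the monitor-versus-manual distinction are illustrative assumptions, not the schema of the PICIS Marquette system described here.

# Minimal sketch of a bedside observation record for a PDMS. The field
# names and the "source" distinction are illustrative assumptions, not
# the schema of the commercial system described in the text.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Observation:
    patient_id: str
    parameter: str        # e.g., "heart_rate", "blood_temperature"
    value: float
    unit: str
    recorded_at: datetime
    source: str           # "monitor" for values captured automatically at
                          # the bedside; "manual" for values (e.g.,
                          # laboratory results) still entered by hand

# One automatically captured reading and one manually entered result.
readings = [
    Observation("ICU-07", "heart_rate", 82.0, "beats/min",
                datetime(1996, 11, 5, 14, 30), source="monitor"),
    Observation("ICU-07", "blood_gas_pH", 7.38, "pH",
                datetime(1996, 11, 5, 14, 45), source="manual"),
]
print(readings[0])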
Programming Changes A systems manager from the medical physicists department was appointed to adapt the system and has had to make considerable changes to it. PICIS Marquette has had to divulge information about the software that would not normally be given to the client. The systems manager has become an expert in the adaptation of this product and is consulted by the supplier for her expertise. There is no formal contractual agreement between the two parties. The relationship is very much based on trust. The systems supplier has reflected that “the software was never meant to be adapted as much as it has been” (Interview, 1997). On the other hand, a medical consultant commented that “the software was chosen because it could be adapted. The software would not have been used [as it was], even with training” (1996). The systems manager was originally employed to spend 1-2 days a week adapting the software. However, she worked full-time for the period 1995-1997 and the following modifications have been made: medical charts from the monitors and manual inputs have been reformulated to be the same as the written medical reports; new icons have been designed to ease user interaction; the drug list was extended; and 10 screens have had to be altered to fit with nurses’ practices, which has meant changing the original programming.
Training Training for the monitoring equipment was relatively smooth, since staff were already familiar with this technology. Generally, nursing staff within the unit were not used to computers, and the Directorate manager, who had considerable input into choosing the system, is not computer literate. Nursing staff received training for the monitoring equipment from clinical trainers from PICIS Marquette before it was introduced. The PDMS software has been in constant development, so the systems manager has run the PDMS
training, since she is adapting the system. A charge nurse was designated to work with the systems manager and to give user feedback about the modifications. Five nurses identified as “super” trainers are trained by the systems manager, and they then train the other staff.
Practical Equipment Problems The ICU staff did not want the PCs that operate the PDMS to be on tables at the bedside. Firstly, this would violate health and safety regulations; secondly, it would not be practical in an already busy and hectic environment. The PDMS installations the team saw before selection were desk-based. A “cart” was therefore designed to house the PCs, at extra cost to the ICU. The first cart to arrive was like a giant washing machine: too big, and it obscured the view of the patient. It is vital in an ICU environment for the nurses always to be able to see the patients. Once the PCs were housed in the carts, it was found that the monitors were overheating and blowing up. Fans were fitted to the carts to cool the monitors; however, dust particles were then blown over the patient, carrying the obvious danger of spreading germs. Such practical problems have generally been sorted out on-site by the Directorate manager. However, because of the charting and reporting problems, a paper system was still running alongside the computer system two years after its introduction.
Continuing Software Upgrades Partly based on their experiences at this hospital, PICIS Marquette decided to improve reporting and was at the time producing an upgrade of the system to make it act as a database. New facilities were to include a drug prescription facility and laboratory connections. Moreover, the PDMS was to enable data collation for different statistical purposes (Therapeutic Intervention Scoring System, Intensive Care National Audit & Research Centre, Contract Minimum Data Set, and Hospital Episode Statistics). All of these areas overlap, and the NHS Executive was still in discussions to decide whether such duplication of information is required (Interview, hospital administrator, 1997). Whereas in the first implementation “time has been the big problem” (Interview, medical physicist, 1996), the systems manager envisaged the upgrade implementation being a smoother operation, as the first version was already partly in use and there would be an overlap period. A spare PC was to be used to make changes and test the new software before installing it for the nursing staff. A further difficulty was that upgraded software needs to incorporate all the changes that were made to the original software. This will be the
case for all future upgrades. The systems manager is expected to carry out this time-consuming activity along with any maintenance that the system requires.
User Satisfaction Overall, nursing staff felt that they adapted well to the new monitoring equipment, after a few teething problems. The management team expected the ICU to be totally “paperless” by June 1996; however, this did not happen. The Directorate manager felt that this caused the nursing staff to become disenchanted with the system. Major adaptations to the system caused considerable delays. The Directorate manager commented that the implementation has “taken longer than anticipated, probably because we incorporated more as we have gone along” (Interview, 1996).
Matching Working Practices Using the software as it stood would have meant totally changing the work procedures of the staff, and this is not possible in a working ICU. The management team decided to change the software package, not the working practices, and the software was chosen because it could be adapted. However, the Directorate manager reflected that at the time of purchase the medical consultants “thought that the system was going to do exactly what they wanted. We didn’t realize there would be so many problems” (Interview, 1996). PICIS Marquette was able to convince the medical consultants of the adaptability of the system, and the concerns of the systems manager were overlooked. The organizational hierarchy and the power of the medical consultants obviously played an important part in the decision-making process (Knights and Murray, 1994).
Role of Suppliers Throughout the interviews it became very apparent that the PDMS was still very much in the R&D stage. Little was known by the users about how much the software would have to be modified. The software was chosen because of its adaptability, but this was based upon the suppliers’ views of their own product. Suppliers played a great part in the introduction of this system. However, without informed professionals within the NHS, the IT systems purchased may not meet internal organizational needs easily. In this situation, it is hardly surprising that “the most difficult aspect of the implementation has been to convince staff that the system will save time when it is fully implemented, but at the moment this is not the case” (Interview, hospital administrator, 1997).
ANALYSIS OF FINDINGS Time and Cost Constraints The PDMS had to be purchased quickly and on a strict budget, which is fairly typical in the NHS. For instance, the failed introduction of a computer-aided dispatching system at the London Ambulance Service also suffered from arbitrary time and cost constraints. Purchasers were obliged to take the lowest tender unless there were “good and sufficient reasons to the contrary” (Flowers, 1996). However, “tight time-scales and inaccurate, inflexible funding have often occurred ... due to government and departmental political exigencies, policies and pressures. ... These factors need to be counterbalanced, even if this slows up decision making and implementation, if effective systems are to be delivered” (Willcocks, 1991).
Underestimation of Labor Effort Knowledge-based systems such as diagnostic tools are likely to require more, rather than less, labor (Drucker, 1996). However, this extra cost was not accounted for when the system was purchased. The lifetime of the system was assumed to be 7-10 years rather than the 4-5 years recognized as more appropriate (Sotheran, 1996). There is a danger of staff responsible for procurement failing to recognize that they are dealing with something they do not understand. It was found that the systems manager had to work on the project full time. With little training, she has been able to keep the documentation up to date, ensuring that modifications are recorded. The commitment of those involved has resulted in the software being modified and developed at relatively low cost. However, the institution has not benefited from the experience and knowledge gained from this project. It is not seen as a long-term project and, as a consequence, detailed information is not available and the true costs are very difficult to judge. Furthermore, the hospital cannot secure a method of benefiting from profits produced by sales of software it has helped to develop. Equipment (such as the modified cart) or software (such as better charting/reporting tools) produced by the Medical Physicists Department is not patented.
One-Off Purchase vs Long-Term Investment Despite the existence of a set of guidelines governing the procurement of NHS computer systems, called POISE (Procurement of Information Systems Effectively), there was little evidence that these guidelines were employed at
the ICU. POISE seems to be regarded by ICU staff as useful only for large systems, such as HISS. Funding for the project did not reflect the fact that the PDMS was an infrastructure investment requiring long-term commitment (Willcocks and Fitzgerald, 1993). It is recommended that off-the-shelf systems be purchased by the NHS, with some later modification (Bates, 1995), thereby giving more power to suppliers. When computers are used to support patient care in the NHS, budgets are often funded year to year, and systems are seen as one-off software projects with some modification. The extent of that modification can be very ambiguous, as this case study has shown. The introduction of computers in areas other than administration brings new challenges for healthcare professionals. The medical consultants felt that the modifications they required could be achieved, based on the advice of the supplier. Internal staff can have a far better understanding of the application. Yet giving responsibility for IT applications in critical environments to non-specialists can bring an enormous amount of risk (Heathfield et al., 1997).
Power of Suppliers Due to the problems encountered during implementation, PICIS Marquette has probably made a considerable loss. On the other hand, the supplier has been able to lock in the customer by providing monitoring equipment that is only compatible with its own PDMS. The vendor planned to use the system at the ICU as a launch pad for further sales, as it was their only reference site in the UK; it can be used to show other NHS clients how the system can be adapted. The systems manager has built a trust-based relationship with the supplier and imparts her knowledge to the supplier, and this knowledge was made available on the PICIS Marquette Website (PICIS, 1997). Adaptability is a strong selling point, especially since off-the-shelf systems with some modification are the recommended purchase for the NHS. PICIS Marquette supplies free copies of the upgraded software to the ICU (Interview, systems manager, 1997), whilst it benefits from the development work being carried out at the NHS’s expense. At present, this situation may suit the ICU. However, there are no formal contracts that could help resolve problems if relations deteriorate. Moreover, the true cost of the system and its development is hidden, in terms not only of upgrade purchasing but of labor costs. The main developer of the system, the systems manager, is employed as a medical physicist and not at the ICU, so the cost of this labor has not been added to the system cost. Also, should PICIS Marquette decide that it will no longer supply free software, the ICU will face an extra cost that it has not planned for. PICIS Marquette made the coding of areas of the software more accessible to the ICU, which
will mask the true effort required for other modifications. As a result, other NHS departments could start to follow the same route, unaware of the hidden costs. The case study has shown that suppliers can bring useful expertise; but they are not entirely without their own interests. The suppliers need to gain command of the business they are applying their IT products to; and conversely, the purchasers must become more knowledgeable about their own IT requirements (Peel, 1994).
Project Management The ICU project has not benefited from any project management methodology such as PRINCE, as recommended by the NHS. The systems manager had no training in methodologies or formal software development methods. This has meant that she has had to “find her way around” (Interview, systems manager, 1997), leading to “a lack of appreciation of complexity of the project” (Flowers, 1996). Negotiating constantly changing requirements has highlighted the difficulty of agreeing on aims. Without experience and knowledge of project planning, it is generally acknowledged that difficulties will arise. The case study has shown that medical systems are still evolving. This continual enhancement requires management and resources, just as the project did at its birth.
Managing Expectations Initially, the users of the PDMS were very excited about the implementation, with medical consultants pushing for its installation (Interviews, medical consultants and nurses, 1996-1997). However, over the implementation period enthusiasm dwindled (Interview, hospital administrator, 1997). This has probably occurred due to expectations being raised too high by “unrealistic claims of immediate advantages and benefits” (Thorp, 1995a). User involvement has gone far beyond working with a requirements analysis team. The users have been actively involved with producing their own specifications, even though they had no experience or training (Interview, nurse manager 1996).
IT Expertise and Internal Conflicts The medical physicists department in which the systems manager works is not part of the IT department. The IT department deals with administrative hardware and software applications. The Medical Physicists department is responsible for clinical equipment and applications. Medical physicists are only responsible for the clinical software they are asked to deal with.
Departments often have their own arrangements for clinical IT. This makes the dissemination of information particularly difficult (Interview, medical physicist, 1997). The laboratory connections were not implemented due to internal conflicts between the laboratory and the ICU as to their areas of responsibilities. The laboratory may have felt that by giving information it may have become redundant, or that the ICU and the department of medical physicists were treading on its territory. These fragmented relationships between departments reflect the complex mix of expertise required in medical informatics.
DISCUSSION Technical Fix? There is a “growing awareness amongst those involved in the development and implementation of clinical systems, that social and organizational issues are at least of equal importance as technical issues in ensuring the success of a system” (Protti and Haskell, 1996). However, there is still a tendency to see technology as devoid of values (Bloomfield, 1995) and, perhaps paradoxically, to expect it to solve clinical, financial, management, and quality problems without recognizing the organizational and technical complexities, human resource implications, and associated costs. As Atkinson (1992) claims, information is now perceived as the life blood of the NHS to “enable all operations, clinical, nursing, financial, estates, human resources.” But Coleman et al. (1993) argue that the clinical computing system is complex, and that as we press it further to work in the complete care context it tends to become unmanageably so. Hagland (1998) also argues that automating intensive patient care areas requires a different level of IT product, design, and development. Medical consultants’ clinical expectations of the PDMS were high. As Hoffman (1997) has found, persuading US doctors to use IT goes beyond monetary incentives. However, the technology could not deliver, perhaps because it was intended to fulfil both medical and management requirements.
Transferring Technical Solutions to Different Contexts The view of technology as a neutral solution can be seen in the unexpectedly large number of software modifications, which were due to the commercial package not fitting in with ICU nursing practices. An important factor was that the package is European, and the care planning embedded in
the software reflects more hierarchical, prescription-oriented care planning practices that differ from UK practices, where responsibility is spread more equally across staff.
Benefits Difficult to Estimate The drivers for change have been accountability, demands for high-quality services, and cost-effectiveness, but introducing IT may not be as beneficial as expected. With respect to hospital information systems, Bloomfield (1995) comments that “it is not evident that the efficiency gains secured through IT will outweigh the costs of constructing and implementing the systems involved.” Friesdorf et al. (1994) claim that the flexibility of PDMS falls far short of expectations and that maintenance requires continuous effort which cannot be afforded. East (1992) states that few conclusive studies prove that ICU systems have a favorable cost-to-benefit ratio.
Institutional Barriers and Politics Because of the NHS internal market, NHS trusts purchase their own off-the-shelf IT systems through tendering. They are therefore forgoing the economies of scale previously possible in a unified NHS. NHS trusts may save money through the tendering process and benefit from a freedom of choice (as long as it is the lowest tender), but overall at an extra cost: indirect human and organizational costs are considered to be up to four times as high as the technology and equipment costs, so the saving seems small in comparison (Willcocks, 1991). Implementation of IT will not automatically guarantee communication between departments, as witnessed by the failure to set up the laboratory connections. The way technology is introduced and used is a political process, involving people with different occupational cultures (Knights and Murray, 1994), whose values influence its use.
CONCLUSION Healthcare is now a service that can be bought and sold, and whose effectiveness and efficiency can be measured. IT provides the means of collating this information, not only for administrative functions, but also within patient care and clinical decision making. It is being used for clinical diagnosis, along with online data collection from monitoring equipment. Research into the introduction of PDMS in an ICU shows that there is still some way to go before their usefulness can be realized, partly because the
demands on the technology are complex and the technology itself has yet to be fully assessed. Realistically, the project investigated in our case study is still at the development stage, even if this is not recognized. It would appear that the introduction of IS in the NHS is still perceived as an innovation of the second type (Howcroft and Mitev, 2000), i.e., one which only supports administrative processes, as opposed to the third type, which integrates IS with core business processes. This would explain the one-off purchasing approach; the difficulties in sustaining enthusiasm and user involvement; the underestimation of continuing labor costs and the dependence on a particular individual; and, generally, the lack of awareness of the complex organizational implications of such integration. For instance, software had to be extensively modified to adapt to complex working practices in the ICU, which led to undue reliance on suppliers. IT skills were poor and also needed to be complemented with medical physics expertise, and this was not supported organizationally. Recommendations from the supplier about the possible adaptability of their product were considered the most informed, even though the hospital systems manager eventually carried out many of the modifications. There has been a lack of understanding, by both purchaser and supplier, of the complexities surrounding development. The cost and time scales for the project were completely arbitrary, laid down by managers outside of the implementation. Whilst suppliers are having to put a great deal of work into getting these new technologies into NHS sites, in the long run the supplier will benefit most from the development that is carried out at the NHS’s expense. Healthcare professionals are performing tasks outside their experience, purely out of the necessity to get the project implemented. They were unable to perform to the best of their abilities or to understand the complex minefield they were entering. An understanding of PDMS as innovations of the third type would see them as long-term investments with important organizational ramifications. It might ensure that cost and time constraints are more realistic; that project management is better applied; that adequate labor resources are allocated; that collaboration between medical physics and IT skills is taken into account; that expectations are better managed; and that institutional barriers are removed. This mismatch in perception needs to be addressed to avoid future difficulties. It is also argued that introducing PDMS crystallizes too many different expectations, making them unmanageably complex, particularly in the current economic climate; that, more generally, technology is perceived as a blank screen onto which many expectations are projected; and that it takes on the often conflicting values of its promoters, developers, and users.
Chapter III
Experiences from Health Information System Implementation Projects Reported in Canada between 1991 and 1997

Francis Lau
University of Alberta, Canada

Marilynne Hebert
University of British Columbia, Canada

Canada’s Health Informatics Association has been hosting annual conferences since the 1970’s as a way of bringing information systems professionals, health practitioners, policy makers, researchers, and industry together to share their ideas and experiences in the use of information systems in the health sector. This paper describes our findings on the outcome of information systems implementation projects reported at these conferences in the 1990’s. Fifty implementation projects published in the conference proceedings were reviewed, and the authors or designates of 24 of these projects were interviewed. The overall experiences, which are consistent with the existing implementation literature, suggest the need for organizational commitment; resource support and training; managing the project, change process, and communication; organizational/user involvement and a team approach; system capability; information quality; and demonstrable positive consequences from computerization.
INTRODUCTION Canada’s Health Informatics Association, known historically as COACH (Canadian Organization for the Advancement of Computers in Health), has been hosting annual conferences since the 1970’s as a way of bringing information systems (IS) professionals, health practitioners, policy makers, researchers, and industry together to share their ideas and experiences in the use of information systems in the health sector. These conferences usually consist of keynote speakers describing the latest IS trends; presentations of new ideas, key issues, and implementation projects; special interest group meetings; and IS vendor exhibits. One area of ongoing interest for conference participants is the implementation projects reported at the COACH conferences. Considering the high cost involved in planning, implementing, managing, and evaluating health information systems, any successes, failures, and lessons learned from these projects can provide valuable information for future projects. While one can certainly gain insights from the individual implementation projects reported, there has been no systematic effort to examine the cumulative experience of these projects, such as the common issues, enablers, and barriers that influenced the implementation process and its success. Over the years, numerous articles have also appeared in the health informatics literature on systems implementation. Thus far, it is recognized that people and organizational issues are at least as important as the technology itself when implementing IS (Lorenzi et al., 1997). Reasons cited for failures include ineffective communication, hostile culture, underestimation of complexity, scope creep, inadequate technology, lack of training, and failed leadership (Lorenzi and Riley, 2000). Anderson (1997) has stressed that IS affect the distribution of resources and power as well as interdepartmental relations. As such, successful implementation requires active user involvement, attention to workflow and professional relations, and anticipating and managing behavioral and organizational changes. To date there has been little research on the Canadian experience in health information systems implementation. This paper reports the findings of our study on the outcomes of IS implementation projects reported at the COACH conferences in the 1990’s. First, we outline the study approach used. We then describe the results in terms of expectations being met, key implementation issues, system usage and changes over time, and lessons learned. Based on our findings, we conclude with a summary of the experiences from these implementation projects and how they compare with the health informatics literature on implementation.
APPROACH Study Scope The scope of this study included only articles published in the annual COACH Conference proceedings from 1991, 1992, 1993, 1994, and 1997. Proceedings from 1995 and 1996 were not available. An article was included in the study only if it described the past, present, or future implementation of a particular health information system.
Research Question The overall research question in this study was “What is the outcome of IS implementation projects reported at the COACH conferences from 1991 to 1997?” More specific questions included:
• Have these systems met expectations?
• What were the key implementation issues, and how were they addressed?
• Are these systems still being used? Why or why not?
• Have these systems been changed since they were first reported? If so, why and how?
• What are the lessons learned from these projects?
Also included after our initial study was the question of how these findings compare with what is reported in the health informatics literature.
Study Phases The four phases in this study, which took place from January to July 1998, consisted of (a) selecting articles describing system implementation from the COACH proceedings and summarizing them according to predefined criteria; (b) establishing a contact list of original authors and conducting telephone interviews with these authors or their designates; (c) analyzing the article summaries and interview results; and (d) writing up the findings as a final report for the COACH organization. The interviews allowed us to determine whether the authors’ views had changed since publishing their articles.
Data Collection and Analysis The two researchers reviewed the proceedings independently to select articles that were considered implementation projects. The two lists were then compared and merged into a common list of 50 articles. The research assistants summarized each article according to technology used, implementation experiences reported, and project evaluation conducted. For reliability,
10 of the articles were reviewed by at least two of the assistants. Discrepancies noted were discussed and resolved before proceeding with the remaining articles. The assistants located the original authors (or, if not available, individuals who were familiar with the system), prepared the contact list, arranged interviews with these authors or designates, conducted the interviews, and transcribed the interview responses. The interviews addressed all five specific research questions. The researchers analyzed the results independently, compared the findings for consistency, and produced a summary report for the COACH organization. There are several limitations to this study: (a) only the authors/designates were contacted regarding the projects, not the organizations or users involved, whose views may differ from the original authors’; (b) not all authors took part in the study, which further reduced the sample size and the validity of the findings; and (c) many implementation projects that took place in Canada during the study period were not reported through COACH, so the findings may not be representative of all IS implementation projects in the Canadian health setting.
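For the doubly coded articles, simple percent agreement between the two coders can be computed directly, as the sketch below shows. The category labels are invented; the study resolved discrepancies by discussion and did not report an agreement statistic.

# Illustrative sketch: percent agreement between two coders' category
# assignments for the doubly reviewed articles. The labels are invented;
# the study resolved discrepancies by discussion.
coder_a = ["patient care", "departmental", "HIS", "decision support", "HIS"]
coder_b = ["patient care", "departmental", "HIS", "departmental", "HIS"]

agree = sum(a == b for a, b in zip(coder_a, coder_b))
print(f"Agreement: {agree}/{len(coder_a)} = {agree / len(coder_a):.0%}")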
RESULTS
This section summarizes findings from the articles reviewed and interviews conducted. It includes a profile of the articles and contacts, the technology described in the articles, whether expectations were met, key implementation issues identified, continued system use and changes reported, and the lessons learned.
Profile of Articles and Contacts
The number of articles related to IS implementation projects that were published between 1991 and 1997 ranged from 8 to 13 each year (Table 1a). The articles averaged from two and a half to three pages between 1991 and 1994, but increased to close to eight pages in 1997. The overall proceedings averaged 100 pages, although the total number of articles dropped from a previous average of 30 per year to 13 in 1997. Twenty-four interviews were conducted. Almost half the authors and their designates were unavailable or chose not to participate in an interview (Table 1b).
The Technology Used
The technology described in the articles included the types of computer systems, software applications, databases, and technical support that were planned or implemented.
Table 1a: Number of articles and interviews by year

Year  Implementation articles/   Length of articles/     Average length   Interviews/
      total articles (%)         length of proceedings   of articles      articles (%)
1991  11/26 (42%)                30/97 pages             2.7 pages        2/11 (18%)
1992  13/27 (48%)                31/104 pages            2.4 pages        7/13 (54%)
1993  9/31 (29%)                 29/106 pages            3.2 pages        5/9 (55%)
1994  8/31 (26%)                 20/108 pages            2.5 pages        3/8 (38%)
1997  9/13 (70%)                 70/94 pages             7.8 pages        7/9 (78%)

Table 1b: Summary of numbers of articles and contacts

                     Contacted   Interviewed   Declined/No contact
Original author      33          19*           13
Designates           13          6             7
Total contacts made  46          25*           20
No contact made      4           n/a           4
Total                50          24**          24

*1 interview conducted for two articles; **1 interview incomplete and excluded
Over half of the 50 articles mentioned the type of computer systems used in the project, which included four main types:
• mixed platforms such as mainframes with Unix and/or PC-based workstations;
• standalone or networked PC systems;
• high-end Unix workstations and/or minicomputers; and
• special devices such as smart cards and pen-based computers.
No one type of system listed above was found to be predominant.
Most of the articles also mentioned the type of software application used in the project. These were categorized using an adaptation of the 1996 Resource Guide from Healthcare Computing & Communications Canada (COACH, 1996). The five most common types of applications included:
• core patient information systems;
• comprehensive Hospital Information Systems (we added this term as some projects included applications across several categories);
• departmental systems (we added this term to represent several categories in the guide that relate to specific departments);
• patient care systems; and
• decision support systems.
In some articles, the actual type of application used was not clear and had to be inferred by the researchers. Only some of the articles mentioned the type of database used in the project. Of these articles, relational databases such as
Informix and Oracle were the most common. The other database types were flat files, spreadsheets, hypertext and proprietary databases. The types of support resources needed to plan or carry out the implementation project were described in only some of the articles. While most of these articles recognized the need for support from different parts of the organization, many emphasized strong and ongoing support from the IS staff. A few projects were collaborative in nature, involving multiple partners such as vendors, government, and national health organizations.
Were Expectations Met?
The implementation experiences described in the 50 articles were examined for their objectives and/or expectations, strengths, and weaknesses (or costs and benefits). These are summarized in Table 2. Ten of the 24 interviewees confirmed their expectations had been met; for 10 they were partly met; for 2 they were not met; and for 2 the systems exceeded expectations. The most common expectations from interviewees were similar to the objectives, strengths, and benefits from the articles summarized in Table 2. These were:
• improved information availability, data collection, or standards;
• increased efficiency or service provision;
• user acceptance, involvement, or stress reduction; and
• cost reduction, funding availability, and affordability.
The interviewees gave four categories of reasons why expectations were not met, which are similar to the weaknesses and costs listed in Table 2:
• user apprehension, staff turnover, lack of buy-in or involvement;
• systems lacking functionality, high maintenance, or technology limitations;
• loss in efficiency; and
• insufficient information from the system.
Key Implementation Issues
Six categories of key implementation issues were also identified in the interviews. These issues related to:
• training;
• systems capability or information accuracy;
• process changes or information flow;
• user involvement and expectations;
• support resources or champions; and
• management support or project planning.
The interviews also identified four of the most common methods of addressing these issues:
• process management, planning, or communication;
• support resources;
• training resources; and
• management support.
Table 2: Implementation expectations, benefits and costs

Objectives/expectations (most articles):
• provide accessible, accurate, useful information
• improve service, efficiency, utilization, and productivity
• provide cost-effective, high-quality care
• interface/integrate different information systems
• have user-friendly systems
• standardize care and methods, evaluate outcome
• achieve cost-savings from automation

Strengths and benefits (majority of articles):
• improved accuracy and accessibility of information
• better planning, control, and decision making
• improved efficiency and utilization of services
• reduced cost or increased cost effectiveness
• integrated, accessible systems

Weaknesses and costs (less than half the articles):
• protection of individual's privacy
• limitations in high-quality information
• implementation process lengthy, delayed, and frustrating
• limitations in functionality, storage capacity, performance
• user interface inefficient, no time savings
• difficulty in getting user acceptance
• long period required to monitor for trends
• variation in terms and definitions
• comparison of systems difficult
• costly to implement and maintain
• cost less than expected or cost figures provided
The key implementation issues identified from the 24 interviews were also sorted by year to determine if they differed over the years. As seen in Table 3, training was among the top two issues in every year except 1994. Change in process and information flow was the second top issue in 1992 and 1994, and tied for second in 1991 and 1997. User involvement and expectations was the top issue in 1997. System capability and information accuracy was also among the top two issues in 1993 and 1994, and tied for second in 1991 and 1997. It would appear that the one implementation issue consistently identified over the years was training, with the other top issues being change in process and information flow, system capability and information accuracy, and, more recently, user involvement and expectations. These findings make sense in that the implementation issues were mostly concerned with deploying systems, reorganizing work practices, ensuring systems function as required, and providing the information needed.
Table 3: Top two implementation issues from interviews (* tied for second)

1991: Training; Change in process, information flow*; System capability, information accuracy*; Management support, project planning*
1992: Training; Change in process, information flow
1993: Training; System capability, information accuracy
1994: System capability, information accuracy; Change in process, information flow
1997: User involvement, expectations; Training*; System capability, information accuracy*; Change in process, information flow*
Continued System Use and Change
Many of the systems described in the 24 projects were still being used at the time of the interview; five were partly used and two were not used at all. The interviewees identified reasons for continued use, reasons for systems not being used, and reasons for changes in systems, as well as the types of changes made and the effects of change (summarized in Table 4).
Information on evaluation includes whether evaluation was part of implementation, the evaluation method used and its results, and project status at the time of conference publication. Over half of the 50 projects were already implemented when the articles were published; some were being implemented in varying stages; and for a few others implementation was being planned at the time. Only 20% reported some form of evaluation conducted as part of implementation. In a few articles, the authors noted that evaluation was being planned. Five evaluation methods were reported in 10 projects:
• pilot study or field test;
• pre/post implementation surveys or interviews;
• cross-sectional interviews or focus groups;
• trend analysis or comparison with the stated objectives; and
• literature review as part of the evaluation.
Evaluation results were reported in the following categories:
• increased knowledge, efficiency, or functionality;
• needs met, but no further elaboration;
• improved information, documentation, or decision making;
• differences noted between planned and actual events, with further work needed;
• no difference in staff attitude and level of computer knowledge; and
• results to be reported later.
Table 4: Summary of continued system use and change

Reasons for continued use:
• system features, integration over time
• adequate resources
• user commitment or satisfaction
• cost savings or too expensive to replace
• improved efficiency, care, or work practices

Reasons for systems not used:
• technology not available, lack of integration
• users not satisfied
• cost
• regionalization
• pilot project only

Reasons for change (almost all systems):
• system changes or refinement
• improved services or practices
• user initiated changes
• improved information accuracy, availability

Types of change:
• new functionality or expansion to other users and departments
• technology upgrade
• application refinement

Effects of change:
• improved performance, service, efficiency, or functionality
• user resistance, unrealistic expectations, to no effect
• organizational or user satisfaction
• improved information accuracy, accessibility, or decision making
Lessons Learned
A total of 74 types of lessons were mentioned in 26 of the 50 articles reviewed. Similarly, 64 lessons were identified in the 24 interviews. There were no substantial differences between the lessons noted in the articles and in the interviews. These lessons were merged according to similarities and categorized under six themes in Table 5. The key themes for the lessons learned are the need for:
• organizational commitment, training, and resource support;
• managing the project, communication, and change process;
• organization and user involvement using a teams approach;
• system capability, including flexibility, functionality, user-friendliness, and integration;
• information quality; and
• demonstrated positive consequences of computerization.
The lessons learned from the interviews were also sorted by year to determine if they differed over the years. Table 6 shows that organizational commitment and training/resource support was the most frequently mentioned lesson. Managing projects, including communication and change process, was the other most frequently mentioned lesson in 1991, 1992, 1994, and 1997. For 1993 and 1994, the other top lesson was organization and user involvement with a teams approach. System capability tied for second place in 1991 and 1992 but dropped below the top two lessons thereafter.
Table 5: Summary of lessons learned (74 lessons from articles; 64 lessons from interviews)

Theme: Organizational commitment, training, resource support
From articles:
• training, dedicated resources, ongoing support
• commitment, management support, policy changes
From interviews:
• dedicated, ongoing resources with expertise, transferable skills
• organizational commitment, steering committee, influential sponsors
• different types and frequency of training needed
• adequate funding

Theme: Managing project, communication and change process
From articles:
• accommodate change, adapt systems, review practices/systems
• communication, cooperation
• project management, infrastructure
• fit with practice, difficult to formalize, agreeing on requirements
From interviews:
• project planning, communication, staged implementation, management
• managing/understanding change in process, organization, culture

Theme: Organization and user involvement, teams approach
From articles:
• knowledge of organization, services
• different implementation methods
• involve stakeholders at all stages, esp. physicians, champions, users
• partnership, teamwork
From interviews:
• user participation, IS input, addressing user concerns
• teamwork

Theme: System capability
From articles:
• flexibility, functionality of system, user-friendliness
• need for complete order entry
From interviews:
• system flexibility, ease of use, integration, performance
• understanding technical need, avoid customization, ensure affordability

Theme: Information quality
From articles:
• data collection is a substantive task, ensure quality of data
• access to information, security, confidentiality
• patient access/control of information as empowerment
From interviews:
• understanding information need, collecting only information needed
• information management and sharing are complex, difficult to do

Theme: Consequences of computerization
From articles:
• costing framework, link to premiums, realizing benefits
• impacts of lengthy process
From interviews:
• realizing benefits, cost savings, improvement in services, efficiency
Table 6: Top two lessons from interviews sorted by year (* tied for second)

1991: Managing projects, communication and change process; Organizational commitment, training, resource support*; System capability*
1992: Organizational commitment, training, resource support; Managing projects, communication and change process*; System capability*
1993: Organizational commitment, training, resource support; Organization, user involvement, teams approach
1994: Organizational commitment, training, resource support; Managing projects, communication and change process*; Organization, user involvement, teams approach*
1997: Organizational commitment, training, resource support; Managing projects, communication and change process
DISCUSSION
Summary of Experiences
The implementation outcomes from the articles and interviews are summarized in Table 7 according to the original research questions: (a) whether expectations were met; (b) key implementation issues encountered; (c) reasons for continued system use; (d) system changes over time; and (e) lessons learned. The authors and interviewees noted a number of lessons learned over the years from these implementation projects: the need for organizational commitment, resource support, and training; managing the project, change process, and communication; organizational and user involvement and a teams approach; system capability; information quality; and demonstrable positive consequences from computerization. These findings are consistent with recent literature on the organizational, people, and technology aspects of medical informatics: system implementation is as much a process of social change as it is a technology deployment endeavor within the organization (e.g., Detmer and Friedman, 1994; Lorenzi et al., 1997; Massoleni et al., 1996; Weir et al., 1995).
Table 7: Summary of implementation outcomes

Expectations:
• expectations met or partly met in 22 out of 24 interviews
• improved service, savings, efficiency
• accessible, accurate information
• cost-effective, quality care
• interface and/or integrate systems
• user-friendly
• standardized care, methods, outcome evaluation

Implementation issues:
• training
• system capability, information accuracy, quality
• change in process, information flow
• user involvement, expectations
• resource support, champions
• management support, project planning
Issues addressed by:
• planning, process management, communication
• training, support resources
• management support

Continued system use:
• 22/24 interviews mentioned systems still used
• user commitment, satisfaction
• cost savings, too expensive to replace
• improved efficiency, care, work practices
• adequate resources
• improved information accuracy, availability

System changes:
• all 24 interviews mentioned systems changed since first reported
• system functionality, integration evolved over time
• new functionality, expansion to other users and departments
• improved services, work practices
• user initiated changes

Lessons learned:
• 74 lessons from 26 articles
• 64 lessons from 24 interviews
• 6 themes for lessons: managing project, change process and communication; organizational support, commitment, training resources; organization and user involvement; system capability; information quality; consequences of computerization
Specifically, organizational commitment is needed to provide the leadership, resources, and support necessary to implement the systems. Equally important is the ability to manage the project, change process, communication, and expectations, since the changes involved can bring about apprehension, stress, and anxiety among staff if not addressed effectively. To be successful, the stakeholders, especially the users, must be involved at all stages of the process and be provided with appropriate training based on individual ability, task requirements, and project timing (e.g., do not train too far in advance of the actual implementation, as users will forget what they have learned). Having champions or the "right" persons on the project team who can persuade, promote, and influence their colleagues is important, since there is always a tendency for individuals to cling to existing work practices out of familiarity and comfort.
Also paramount are the capability of the system and the quality of the information being automated, which should lead to some demonstrable benefits for the users and the organization as a whole. Nothing is more frustrating for users than working with a computer system that crashes, produces incomplete or inaccurate information, or requires more effort to complete tasks, all with questionable benefits. The project team also needs to promptly identify and resolve implementation issues that emerge when deploying the system. The key issues identified from our interview findings are the same as the lessons learned. To be successful, the team must be willing to devote the time, effort, resources, and compassion needed to resolve these issues in a timely manner.
As can be seen from the findings, many of the systems continued to evolve over time after they were implemented. In some cases, the work practices were also adapted to take advantage of the functionality of these systems. These findings are consistent with the implementation literature in suggesting that systems can be implemented and adapted over time, along with work practice changes, to emerge as unique systems in distinct settings (Anderson, 1997; Kaplan, 1994). As such, the project team and its organization must be prepared to dedicate adequate resources to manage the project, system, change, training, and support on an ongoing basis to ensure the system can continue to meet the evolving needs of the organization and its users.
The two most frequent lessons learned were cited almost every year: the need to have organizational commitment, training, and resource support, and the need to manage the project, change process, and communication well. Similarly, the top implementation issues mentioned year after year were consistently around training, change process, user involvement, and managing expectations. These should be areas of attention for organizations and project teams about to embark on new implementation projects. It is also
interesting to note that, while the level of technological sophistication may have improved over the years, the implementation issues and lessons learned have remained essentially the same.
Need for Evaluation
A major shortcoming noted in this study is the lack of evaluation of the implementation projects described. Such evaluation is important in justifying the IT investment made by the organization, and it is also a valuable resource for others wishing to implement similar systems. Even a simple review of the relevant literature, which was mentioned only once in the 50 projects we examined, might have provided some insight prior to system implementation. The adoption of some type of evaluation framework, such as the IS success model of DeLone and McLean (1992), can provide a consistent approach to assessing these systems. Such a framework typically covers a wide range of measures, including the effectiveness of the implementation process, the quality of the system, the quality of the information, the usefulness and use of the system and its information, and the overall impacts on the individual, the group, and the organization.
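To make the shape of such a framework concrete, the following minimal sketch organizes the measure categories of the DeLone and McLean model into a simple checklist structure in Python. The individual measures listed under each category are hypothetical placeholders for illustration only, not items from any instrument actually used in the reviewed projects.

    # A minimal sketch of an evaluation checklist built around the six
    # measure categories of the DeLone and McLean (1992) IS success model.
    # The example measures under each category are hypothetical placeholders.
    IS_SUCCESS_MEASURES = {
        "system quality": ["response time", "availability", "ease of use"],
        "information quality": ["accuracy", "completeness", "timeliness"],
        "use": ["frequency of use", "proportion of staff using the system"],
        "user satisfaction": ["overall satisfaction rating"],
        "individual impact": ["task time saved", "decision quality"],
        "organizational impact": ["cost savings", "service utilization"],
    }

    def evaluation_template():
        """Return an empty scoring sheet with one slot per measure."""
        return {category: {measure: None for measure in measures}
                for category, measures in IS_SUCCESS_MEASURES.items()}

    for category, measures in IS_SUCCESS_MEASURES.items():
        print(category + ": " + ", ".join(measures))

A structure of this kind simply makes explicit, before implementation begins, which measures a project intends to collect, which is the consistency the framework is meant to provide.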
CONCLUDING REMARKS
This study was conducted to determine the outcome of implementation projects presented at the COACH conferences between 1991 and 1997. Our intent was to identify the key implementation issues involved and the lessons learned, to gain a better understanding of systems that have been implemented over the years in the field setting. Also of interest was how such experiences relate to the health informatics literature on implementation. We believe sharing such cumulative experiences and providing suggestions for improvement are important ways to advance the practice of health informatics in Canada.
ACKNOWLEDGEMENTS
We acknowledge the help of our research assistants Janny Shum, Les Wold, and Tina Strack in collecting and organizing the data for this project. We thank the individuals who participated in the interviews to share their insights. Most importantly, we extend our gratitude to COACH for the funding and support that made this study a reality.
APPENDIX
Table 1: The articles from the 1991 proceedings that were implementation projects. *denotes articles that were included in the interviews.

Bardeau-Milner, D., Homer, L., Copping, A. Computer support for food production at Rockyview General Hospital
Choat, N. Computers and quality assurance: the experience of a 300-bed community hospital
Dornan, J., Garling, A., Powell, D. G. Benefits of clinician's HIS utilization: expectations and potentials
Gatchell, S., Waisman, M. S., Gadd, D. Workload measurement for clinical nutrition
Genge, P., Wojdylo, S.* Strategies towards a successful implementation of a totally computerized nursing system
Germain, D. Executive information systems, executive patient costing systems
Greenburg, M., Manninen Richmond, Osolen, H. Design of a medical information system to support research in pediatric hematology/oncology
Merrifield, K.* Implementing a dietary computer system
Persaud, D., Dawe, U. A winning combination: hospital risk management and the personal computer
Strudwick, W., Terry, J. Working with an on-line human resource system–or life in the fast lane
Carnvale, F., Gottesman, R. D., Malowany, R. K., Yien, C., Lam, A. Development of an automated bedside nursing workstation
Table 2: The articles from the 1992 proceedings that were implementation projects. *denotes articles that were included in the interviews.

Adaskin et al.* The impact of computerization on nursing
Assang, W., Clement, H., Goodman, J.* Implementation of a clinical support system
Depalme, M. J., Shewchuk, M.* Operating suite automation for the 21st century
Evans, J., Parboosingh, J.* Apple and IBM–they can exist together!
Goldbloom, A. L. Selection and implementation of a medical information system in a major teaching hospital: Process over product
Jefferson, S. Victoria hospital's "tower of Babel": Supporting a multivendor systems environment
Lake, P.* Implementing a connectivity strategy in a health care facility
Laporte, J. Presentation of the Sidoci project
Malanowich, N., Volk, T. The B.C. Cardiovascular Surgery System: Clinical & Management Support System
Powell, G., Douglas, N., Westlake, P.* Physician participation in the hospital corporation information system: "Why bother?"
Ross, S., Semel, M., Fitzpatrick, C. Nursing management data set: Maximum utility, maximum benefit
Thrasher, P. Network systems in the Ottawa Civic Hospital
Zilm, D., Harms, R.* Forms overlay laser printers–a unique opportunity for cost savings
Table 3: The articles from the 1993 proceedings that were implementation projects. *denotes articles that were included in the interviews.

Blunt, D. W., Nichols, D. K.* Ambulatory patient cases: DC+PC+MC=RM
Glazebrook, J. Case study: Successful implementation of open systems in a community hospital
Hurley, D., Lawson, B.* Automated medication system–benefits for nursing
Laukkanen, E. Evaluation of pen based computing in an outpatient cancer center
Nusbaum, M. H.* Extending LAN boundaries to the enterprise and beyond
Percy, J., Fan, D.* Data for the user
Simpson, J., Simpson, J., Blair-Clare, B. Post-implementation of an order entry/results reporting system: Curing "performance dip" and interdepartmental stress
Slusar, G., Enright, C. Northwestern health services: The integrated client record
Tracy, S., Parsons, D., Wesson, P., Lane, P.* The Ontario Trauma Registry
Table 4: The articles from the 1994 proceedings that were implementation projects. *denotes articles that were included in the interviews.

DeBeyer, T.* The long-term care client information system: A tool for the management and administration of client placement information
Geiger, G., Gordon, D., Vachon, B., Mitra, A., Kunov, H. A Disability Resource Information System
Genge, P.* Recognizing the full benefits of computerization
Magnusson, C.* The hand-held computer: More than a game boy
McMaster, G., Allison, A. Using technology to improve client care and interagency communications
Podolak, I. Database development in long-term care, chronic care, and rehabilitation
Simpson, J. The effects of a computerized care planning module on nurses' planning activities
Smith, D. Optical disk technology–Community hospital applications, a case presentation
Table 5: The articles from the 1997 proceedings that were implementation projects. *denotes articles that were included in the interviews.

Andru, P. J.* Hospital-community linkages: Community information needs, data warehousing, and pilot projects
Eagar, E. A., Eagar, F.* The University Park Medical Clinic success story: A case study in continuous computerization towards 2000
Gordon, D., Marafioti, S., Chapman, R.* Management decision support and data warehousing
Johnstone, R., Wattling, D., Buckley, J., Ho, D., Bestilny, S.* Alberta mental health information system (AMHIS): A provincial strategy and implementation
Monaghan, B. J., Vimr, M. A., Jefferson, S. Data driven decision making: The cardiac care network of Ontario experience
Niebergal, D.* Benefit realization of computer-based patient records in medical group practices
Rivera, K. The Ontario smart health initiative
Sisson, J. H., Park, G. S., Pike, L.* Integrating critical care patient data into an enterprisewide information system
Warnock-Matheron, A., Sorge, M., Depalme, M.* New opportunities and challenges during restructuring
Chapter IV
Nursing Staff Requirements for Telemedicine in the Neonatal Intensive Care Unit

Tara Qavi, Flow Interactive Ltd., UK
Lisa Corley and Steve Kay, University of Salford, UK
This research gauged nursing staff acceptance of a videoconferencing system within a Neonatal Intensive Care Unit (NICU) and identified a set of recommendations to be integrated into system design to maximize usability of the system by nursing end users. Both qualitative and quantitative data were collected through interview and questionnaire methods designed to elicit system requirements from the nursing staff perspective. It is argued that videoconferencing should not substitute for the physical tradition in which neonatal infants are monitored, nor be seen as a replacement for face-to-face communication. However, videoconferencing may provide a workable alternative when face-to-face communication is not possible. In particular, clinical and medical staff should maintain control over the operation of video links at all times.
INTRODUCTION
Research Aims
The Neonatal Intensive Care Unit (NICU) at Salford Royal Hospitals NHS Trust, UK, along with the IT Institute, University of Salford, is currently developing a videoconferencing system for use within the NICU under the "Medilink Project." The system is primarily intended to provide parental access to NICU infants regardless of location, as well as creating a facility for remote teaching and clinical observation. The system will provide a real-time video link between the infant and the parents and staff, relaying both sound and visual information.
Rector et al. (1992) point to the poor record of system success in healthcare environments. It is argued that this lack of fruition may be attributed to the failure of the systems to meet clinical requirements for usability. The degree of user involvement during the development process has been consistently identified as a key determinant of system success (Newman & Noble; Wastell & Sewards, 1995; Kontogiannis & Embrey, 1997). This research highlights the need to elicit user requirements at the onset of the development process and to allow these to drive the design of the system and interface and determine the functionality. In acknowledgment of such issues, user-centered design becomes a vital methodology in working towards usability, particularly in cases where potential stakeholder conflicts may arise. Gathering user requirements from all stakeholders involved may be argued to increase usability by enabling a sense of system ownership by end users, through mediating functionality which is both relevant and actually required, and by accounting for factors which may affect usability within the wider context of system use.
The initial proposal put forward by the Medilink Project identifies parents and infants as the key stakeholders in the videoconferencing system, whilst failing to account for the significant implications that such a system may bear upon working practices within the NICU itself. In response to this, our research employed the principles of user-centered design (Norman & Draper, 1986) and focused on nursing staff within the NICU as the main end users of the proposed videoconferencing system. This may be substantiated by their role as both the primary carers for the infants on the ward and the main point of contact for parents. As the Medilink Project incorporates several stakeholder interests, it is imperative to involve professional end users in the development process. Rector et al. (1992) argue that this form of user involvement may avoid serious
design errors, which may in turn impair the usability of the system. This rationale justifies the focus of this research and the scope of this paper. Literature searches revealed the scarcity of similar studies in this area, establishing the Medilink Project as a pioneer in its field in Europe. The "Baby CareLink Project," based at the Beth Israel Deaconess Hospital in Boston, USA, has already attempted to meet similar objectives in establishing a videoconferencing system for use in NICU environments, providing ample material for a case study. This research gauged nursing staff1 acceptance of a videoconferencing system within the NICU and identified a set of recommendations to be integrated into system design to maximize usability of the system by this group of end users.
Introducing Telemedicine
The use of information and communication technologies in healthcare environments has seen a significant increase in recent years. Applications may be as diverse as communicating via e-mail, consultation by videoconference, or remote telesurgery, and this broad spectrum of functions may be encapsulated under the term "telemedicine." Wooton (1996a) uses a working definition of telemedicine as "nothing more than medicine at a distance" but notes that this includes the activities of diagnosis, education, and treatment.
Healthcare Reform
The phenomenon of telemedicine is gaining increasing political utility in healthcare reform (Blanton & Balch, 1995; Guterl, 1994), where it derives strength from rapidly improving technology and falling costs to counteract the mounting pressure upon healthcare resources (e.g., Otake et al., 1998; Elgraby, 1998; Mitka, 1998). Both the US and British governments have launched initiatives to utilize information technology effectively in support of improved patient care and population health (NHS Executive, 1998; Guterl, 1994). In America, it is estimated that widespread use of telemedicine could cut healthcare costs by $36 billion (Blanton & Balch, 1995; Guterl, 1994), whilst the British government is investing £1 billion in its "Information for Health" strategy (NHS Executive, 1998). Videoconferencing in particular has been highlighted as bearing the greatest potential cost savings of the wealth of possible technologies, due to its functionality in reducing unnecessary travel and waiting time for both healthcare professionals and patients (Guterl, 1994).
The Prevalence of Videoconferencing in Telemedicine
Videoconferencing relies on using a video channel to bring remote participants together, so that both audio and visual information are transmitted in real time. The motivation behind videoconferencing lies in its potential to simulate the visual cues present in face-to-face interaction, thus increasing the productivity of the communication process by reinstating the factors of communication otherwise absent when using audio-only channels (Sellen, 1995). The benefits that videoconferencing can theoretically provide in medical applications are therefore significant: care can be brought to the patient irrespective of physical distance, with a consequential easing of pressure upon healthcare resources.
Despite the obvious cost, time, and access benefits that telemedicine entails, its reception by the medical profession has been somewhat hesitant to date (Grigsby, 1995). The American Telemedicine Association estimated 25 active telemedicine projects in the US in 1995 (Blanton & Balch, 1995), whilst the figure in Europe was deemed to be only marginally higher, with 40-50 systems in various stages of development (Sund & Rinde, 1995). Such scepticism towards the advantages of telemedicine may be attributed to the lack of thorough research into its feasibility (Wooton, 1996b; US Dept. of Health & Human Services, 1997; Stahlman, 1998), particularly in light of the highly sensitive nature and internal diversity of the field in which use is propagated.
Videoconferencing technology appears to have failed to meet the commodity status predicted by the Gartner Group in 1991 (Grantham, 1994). In particular, doubts have been raised about the medium's capacity to appropriately communicate the non-verbal visual cues that underlie the main benefits of utilizing a visual link (e.g., Grantham, 1994; Scott, 1996; Melymuka, 1998). Indeed, Sellen (1995) cites the lack of evidence to support the claim that access to visual information via video bears any significant improvement upon the quality of communication in comparison to audio-only information whatsoever. This lack of research may in turn underlie the factors of concern which, it may be argued, have contributed towards the reluctance of both the medical community and patients to embrace the opportunities offered by telemedicine applications.
The prevailing attitude towards the adoption of telemedicine surely warrants the need for a user-centric approach to the integration of technology into healthcare environments, thus retaining focus upon care and enabling ownership of the functionality of applications by actual end users (i.e., medical and clinical staff and their patients). Only then may system success be assured.
Technology in the NICU
NICUs are not unfamiliar with innovative uses of technology. Technological advances which facilitated the miniaturization of evaluation and treatment methods for very small subjects in the 1960s revolutionized intensive care (Stahlman, 1989) and have been primarily responsible for reductions in both perinatal and neonatal mortality (Paneth, 1990). Technology has also enabled an increase in the number of beds on NICU wards, and in profits, through the reduction in the duties of medical personnel. However, Stahlman (1989) argues that this depersonalization of care has also resulted in conflict, whereby the increasing complexity of technology requires specialist personnel, thus fragmenting care of the infant. This may entail detrimental effects upon both care and the information provided to parents. Open communication and cooperation between parents and NICU personnel are highlighted as key facets of the success of family-centered neonatal care (Harrison, 1993), yet it could be argued that technology to date has been an impediment to this (Stahlman, 1989).
The use of videoconferencing within healthcare environments has been shown to produce mixed outcomes in terms of the medium's utility in supporting healthcare without being detrimental to its quality (e.g., Harrison et al., 1996; Sund & Rinde, 1995; Otake et al., 1998). Success of videoconferencing as a telemedicine application may be determined to a large extent by the technology's suitability to its intended purpose and its integration into the existing service structure (Wooton, 1996b; US Dept. of Health & Human Services, 1997). Most importantly, it has been argued that technology should not be adopted for the sake of the technology itself, but for the benefits it may provide in supplementing and supporting care.
The design of a videoconferencing system for implementation in a NICU would have to be thoroughly validated in light of the inherent technological impediments to communication (Grantham, 1994; Scott, 1996; Melymuka, 1998). Moreover, conflicts between stakeholders (physicians, nurses, parents, and infants; Lee et al., 1991) must ultimately be resolved in the design of the system to justify its use. If the system fails to holistically represent the requirements of all involved in the care-giving process, then it seems likely that the technology will only act to further aggravate existing animosity and negatively impact upon patient health. Stahlman (1989) argues that technological failures within NICU environments have occurred primarily where the technology has been implemented before appropriate and definitive research has established its inherent benefits or potential "harm."
METHODOLOGY
A combination of interview and questionnaire methods was used in order to elicit nursing staff requirements for the videoconferencing system. These methods provided both qualitative and quantitative data, which allowed a greater depth of analysis and a contextualization of the environment into which the system was to be implemented.
The interviews were semi-structured in nature, following a basic framework of key questions so as to create a level of consistency across the interviews. However, these questions were presented in an informal manner in order to allow the interviewee to expand on relevant issues. The researchers also posed follow-up questions where appropriate to encourage elaboration. Eight subjects were interviewed individually and were selected primarily on the basis of who was available to take time out of their shifts during the interview data collection phase.
The interviewees were presented with background information regarding the Medilink Project, which was followed by a short vignette. As the interviewees were unfamiliar with the aims of the Medilink Project, an introduction to the research was deemed necessary. Due to both the innovative nature of the Medilink Project itself and an acknowledgment of the general inexperience with videoconferencing technology, it was imperative that an explicit introduction to both the project and the technology was given. A vignette was designed to provide a hypothetical scenario of how the videoconferencing system might eventually affect the role of nurses within the NICU. The vignette focused on potential uses of videoconferencing by nursing staff, specifically as a medium for conducting observational studies of infants. The vignettes were fundamental to ensuring both empirical validity and consistency in providing all subjects, regardless of professional status and experience, with a standard stimulus upon which responses could be based.
Interviewees were also presented with hard color copies of the main Web pages available to families taking part in the Baby CareLink Project, Boston. Given the general unfamiliarity of nurses with videoconferencing systems, these provided subjects with a concrete example of how such a system might be used within the NICU context, particularly in the provision of services to parents and in facilitating communication between nurses and parents. Interviewees were asked to identify features of the Baby CareLink system which would be of both benefit and detriment to their roles, as well as features of neonatal care which the system had failed to consider.
The key issues raised in response to the proposed videoconferencing system during the interview phase of data collection were
developed into a questionnaire. The primary purpose of the questionnaires was to statistically substantiate the issues raised in interviews and to assess the prevalence of the views expressed among the wider sample population. The questionnaires were distributed to all 49 members of nursing staff on the NICU at Hope Hospital. Twenty questionnaires were returned, 18 of which had been completed correctly for the purposes of the analysis. Confidentiality of responses was assured in the instructions for completing the questionnaire, and the questionnaire contained no key to enable the researchers to identify the respondents, so all responses were anonymous.
The questionnaires included an introductory letter, again establishing the aims of the Medilink Project and the objectives underlying this research. A hypothetical scenario of how the videoconferencing system might affect the role of the nursing staff on the NICU was described in a detailed vignette, built upon the previous vignette used in the interviews. This covered the potential of the videoconferencing system to be used as a medium for observational study of the child; updating medical records; an e-mail-based message facility allowing written communication between nursing staff and parents; and videoconferencing with parents and GPs. Again, this vignette was intended to enable the respondents to envisage the impact that the system might have upon their working practices. The questions in the questionnaire were designed to indirectly reference the activities and potential uses of the system as described in the vignette.
The questionnaire consisted of 10 questions using two 5-point Likert-type scales2 (Chang, 1997) in order to elicit responses, and a section asking the respondent to provide personal information about job title and length of time spent working in the NICU, with space for any additional comments. The instructions also encouraged respondents to annotate questions where they felt elaboration was appropriate. These annotations enabled further qualitative substantiation of the statistical data derived from the questionnaire survey.
The vignettes3 used as stimuli for both the interviews and the questionnaires described the potential for the proposed videoconferencing system to be used by nursing staff as a means of observing infants in their care, as opposed to being physically present by the bedside. This use of videoconferencing would have obvious implications for the manner in which nurses use their senses when observing the baby. In acknowledgment of this, three questions on the questionnaire were designed to comparatively assess the importance of the five senses in clinical observation and gauge the impact that the use of videoconferencing would have on this. The questionnaire also asked respondents to rate the potential videoconferencing system, as described in the
vignette, in terms of its importance in facilitating staff hand-overs and the need for staff to maintain control of the video links. Respondents were also asked to rate the effectiveness of videoconferencing as a medium for communication with parents, in comparison to other more conventional modes of communication.
RESULTS AND DISCUSSION
The qualitative feedback from the interviews and the quantitative data from the questionnaires were analyzed in conjunction to enable a fuller analysis of nursing staff requirements for the proposed videoconferencing system. This mode of analysis was felt to be more potent given the depth of information gained from the interviews, whilst acknowledging that these were limited in number. This allowed the issues raised during interview to be statistically substantiated through the questionnaire survey, whilst concurrently creating a qualitative context within which the results of the questionnaires could also be interpreted. The results and the discussion of their analysis are consequently combined to allow the development of a cohesive argument.
Videoconferencing As a Medium for Observation
The analysis showed significantly higher ratings of importance for the senses of touch, sight, and sound in comparison to smell and taste in the routine monitoring of infants. Although nurses on the NICU promote minimal handling of infants, as excess handling may cause stress to infants (Anon, questionnaire), the use of touch does form an important part of the observation and care process. In particular, nurses will check whether the baby is well perfused (Children's Nurse, interview), or feel the baby's chest to check respiration (Staff Nurse 1, interview). More importantly, unstable babies in particular may require closer physical monitoring:
Quite a bit's physical, especially if we've got an unstable baby, you know, you need to be there at the cotside with them. You know, a baby that's prone to having, like apneas, or a drop in heart rate. A lot of the time, you need to stimulate them physically to actually bring them back up out of it. (Neonatal nurse, interview)
In turn, respondents were asked to rate the effectiveness of videoconferencing technology in monitoring conditions involving the use of the five senses.
The mean effectiveness of videoconferencing in monitoring conditions involving the senses of smell and taste was, in both cases, the lowest value on the Likert scale. The mean effectiveness for touch was also low (µ = 1.4375), whilst the mean ratings for sight and sound were found to be significantly higher in comparison to all the other senses. Given the importance of touch cited by nurses in the care of unstable babies, this would have significant implications for the feasibility of videoconferencing as an observational tool, given that the system would be incapable of mediating touch whatsoever. No difference was found between the mean ratings for sight and sound themselves (t = 0.324, p = 0.750), indicating that nurses rated the effectiveness of videoconferencing as equal for these two senses. Although sight and sound are the only two senses that can be conveyed via the videoconferencing medium, the individual mean effectiveness ratings for both were still only around the mid-point of the 5-point scale. These results indicate that there was still some hesitancy regarding the capacity of the technology itself to transmit visual and auditory information. The quality of the image itself would have to be extremely high in order to provide a viable medium for remote observation.
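The paired comparison reported above can be reproduced with a few lines of analysis code. The following is a minimal sketch in Python using scipy; the rating vectors are hypothetical placeholders standing in for the study's unpublished raw responses, so only the form of the analysis, not the numbers, should be taken from it.

    # Comparing paired Likert ratings of sight versus sound, as above.
    # The rating vectors below are illustrative placeholders, not the
    # study's actual data (which reported t = 0.324, p = 0.750).
    from scipy import stats

    # Hypothetical 5-point ratings (1 = not effective, 5 = very effective)
    # from the same respondents for each sense.
    sight = [3, 4, 2, 3, 3, 4, 3, 2, 3, 4, 3, 3, 2, 4, 3, 3]
    sound = [3, 3, 3, 2, 4, 3, 3, 3, 2, 4, 3, 3, 3, 3, 3, 3]

    print("mean sight =", sum(sight) / len(sight))
    print("mean sound =", sum(sound) / len(sound))

    # Paired (related-samples) t-test across respondents.
    t_stat, p_value = stats.ttest_rel(sight, sound)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")

A paired rather than independent-samples test is the appropriate choice here because each respondent rated both senses, so the two samples are correlated.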
During interview, nurses cited observation as their most critical responsibility in the care of infants on the NICU. In particular, the color of the infant is recorded on an hourly basis, along with other observations such as the baby's position, movements, and whether they are active or asleep (Staff Nurse 1, interview). These all rely on visual observation. Several nurses, both during interviews and on their completed questionnaires, raised doubts over whether image quality could ever meet observational requirements, particularly with regard to subtle color changes in the infant's pallor. One nurse commented that it can often be difficult to observe color changes even when standing close to the cotside, whilst the consultant elaborated on the exact requirements of the system with regard to visual quality:
If the quality of the pictures was high enough to be able to see exactly what the baby was doing, reds and blues and pinks are very important to these, so technical quality is very important if you're going to make a judgement on a clinical sign from a VDU. (Consultant, interview)
Sound also conveys vital information about the baby's condition, such as respiratory changes (Anon, questionnaire). The ability of a videoconferencing system to filter out background noise to allow an accurate assessment of the individual infant was questioned. As with smell, the importance of sound was related specifically to the infant in question, whereby nurses could ascertain whether an infant was stressed through monitoring sound. This is particularly the case with the babies of drug dependent mothers, where sound becomes a vital part of the observation process (Anon, questionnaire). In addition, the use of videoconferencing was consistently perceived as a threat to confidentiality, whereby the sound from neighboring cots during a video transmission between an infant and its parents could entail a serious breach of the confidentiality of other infants on the ward.
Several comments, particularly alongside the scales for the five senses when eliciting the importance of these in the monitoring of infants, highlighted the failure of these traditional senses to account for intuition, or a "sixth sense":
Nurses' intuition cannot be overstated, and this uses all senses and more. (Anon, questionnaire)
Given the nature of care, both questionnaire respondents and interviewees argued that much of a nurse's skill was derived from experience and familiarity with the infants in their care, which enabled an innate sense of perception regarding the condition of infants:
A lot of neonatal nurses pick up on the subtle behavioral cues and pick up on, I suppose there may not be changes in an infant, but because she's looked after that infant for a period of time, she will know whether things are about to change. They can anticipate, that's experienced nurses, will be able to anticipate. (Sister 2, interview)
Annotations and comments on the questionnaires indicated that nurses felt that the use of a videoconferencing system for observational purposes would disable their ability to "sense" that there may be something wrong with the infant, independent of what the vital signs may otherwise say. This would have obvious implications for the monitoring of infants over a video link, whereby nurses may feel less able to provide adequate care over this medium, and therefore be less accepting of the system as a consequence. Indeed, it may be argued that the use of videoconferencing for this purpose may be of direct detriment to the care that the infants themselves receive. The majority of nurses who were interviewed could not envisage videoconferencing technology replacing current modes of observation. This was despite the
admission that many of the actual readings taken as part of observation came from monitoring equipment as opposed to direct contact with the baby itself (Staff Nurse 1, interview). However, nurses maintained the importance of physical proximity during observation to allow for extra cues to be picked up on and for quick responses to the baby's condition. In particular, both interviewees and respondents indicated that they would personally be unable to trust the use of a videoconferencing system for monitoring purposes:
No matter how clear the picture and sound, I would be very unhappy to monitor a baby from a distance. (Anon, questionnaire)
It could therefore be surmised that staff acceptance of a videoconferencing system used for observational purposes would be low, and the use of the medium itself in such a scenario would be inappropriate and of possible detriment to the infants on the ward. In particular, the use of videoconferencing in observation challenges the tradition by which nurses are trained to deliver care. This questions whether, in this instance, the use of such technology could impede the quality of care by tangibly altering its nature.
Videoconferencing As an Aid to Staff Hand-Overs
Respondents were also asked to rate how important the videoconferencing system would be in facilitating staff hand-overs if it were used to store medical records and video archives of the infant. The mean importance rating of responses was "3," the mid-point on the Likert scale, whilst a chi-square test indicated that there was a significant concentration of responses, χ2 = 10.889, p = 0.028. It can therefore be inferred that a significant proportion of nurses recognized the importance of medical records and video archives in facilitating hand-overs.
At present, the hand-over process is mainly conducted verbally, which, as one interviewee described, can be problematic when trying to recall individual details of infants on the ward, especially during busy periods. This nurse believed that the system would facilitate hand-overs, noting that staff would still have to keep a written record, as is presently done, which would serve as a backup to the existing computer system. The advantage of verbal communication may be seen in the ability of nurses to ask each other specific questions, which may be hindered by the system. However, the same
nurse argued that the system would still be beneficial as a prompt and a swift retrieval facility:
If they've got it there on the screen in a way to remind them of things, then I think it would be useful. (Children's nurse, interview)
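The clustering test reported above is a chi-square goodness-of-fit comparison of the observed responses against an even spread over the five scale points. The sketch below is a minimal illustration in Python using scipy; since the paper does not publish the per-category counts, the observed frequencies are hypothetical values chosen only to reproduce the reported statistic.

    # Chi-square goodness-of-fit for the hand-over question: do the 18
    # responses cluster, or spread evenly over the 5 scale points?
    # The observed counts are hypothetical placeholders chosen to
    # reproduce the reported result (chi2 = 10.889, p = 0.028).
    from scipy.stats import chisquare

    observed = [1, 3, 9, 3, 2]      # hypothetical counts for scale points 1-5
    expected = [18 / 5] * 5         # uniform expectation: 3.6 per point

    chi2, p = chisquare(observed, f_exp=expected)
    print(f"chi2 = {chi2:.3f}, p = {p:.3f}")   # chi2 = 10.889, p = 0.028

The same form of test underlies the result reported for the control-of-links question in the next subsection, with the same 18 responses distributed over the scale.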
Videoconferencing System for Storage of Video Archives and Medical Records
If the system is to be used for the storage of medical records, ideally it should incorporate all other existing information systems. Both the interviews and questionnaires raised concerns regarding duplication of work. The process of documenting everything on the existing system can already be time consuming:
When you're on intensive care you have to put everything on it because it shows the graphs, you know like blood pressure and everything, so whenever you do anything to the baby, you have to go into the computer and type in what time you did this, and what time you did that. That's a bit time consuming, you know when you're busy. (Children's nurse, interview)
Therefore, with nurses already working under time pressures, inputting more data into another system would not be appreciated, and many nurses expressed concerns over whether this new system would increase their workload:
I question the use of it for updating medical records–if this is the case, do we stop using the existing care plans and the computer, or do we have all three which wastes nurses' time? (Anon, questionnaire)
It can be argued that the issue of time in the context of the design of the system is highly pertinent. The amount of time it takes to input and update medical files is clearly a worry for the nurses, as this time may be better spent actually providing hands-on care. An increase in workload and the extra time the videoconferencing system may demand were general concerns. The time required to organize and conduct the video links would further exacerbate existing time constraints, a point which was echoed in both interviews and questionnaires:
Will this not just add more pressure on a nurse's time, inputting and setting up appointments, etc.? (Anon, questionnaire)
Control of Videoconferencing
In continuation of these issues, respondents were asked to rate how important it would be for medical staff to be able to control when the video links were operating. This has underlying connotations regarding the timing of the video links and therefore incorporates both the time it takes for staff to utilize the system and more general issues surrounding staff control of links and their scheduling. The mean importance rating was "4.5," and the Chi-Square test confirmed that the distribution was significantly clustered, χ² = 22, p < 0.001. Thirteen responses from a total of eighteen rated the importance of staff control over the system as "very important," the highest value on the scale. The clustering around the fifth point on the scale indicates that a significant proportion of nurses felt that it was very important for them to be able to control when video links were operating. These results are confirmed in the interviews and questionnaire comments, which portray a general concern regarding this issue. Control of the system was questioned specifically. The main concern raised in the interviews and questionnaire comments relates to the level of control with regard to confidentiality. Nurses were worried that the system would serve as a method of surveillance and that they would be unaware of being watched. This worry was repeated several times over, for example: Some people, I suppose may be conscious of the care they give knowing that you're being videoed. I think that would be the biggest issue. (Neonatal nurse 1, interview) Many interviewees held the belief that the video link might be constant, and this perturbed the nurses. In addition, there was a general feeling of unease regarding whether parents could simply link up to the ward without the staff knowing and therefore watch procedures from which they would otherwise be excluded. At present, if an infant's condition deteriorates and requires a particular procedure, then parents are often asked to leave during this time. However, concern was expressed with regard to the issue of who has control when using the videoconferencing system, with specific reference to such procedures: I wouldn't like to think that it was on all the time. I think you'd find it quite stressful, because there are some procedures that would happen that you just wouldn't want anybody to actually sort of come in on. By accident or, you know if the parents came in for an extra session and sort of clicked in, or whatever you have to do, and found something going on that they really couldn't cope with. I would hate
to think that somebody was observing us resuscitating that baby. It isn't that we're doing anything wrong, it's just that it's not a nice procedure, and it looks really bad at the time. (Staff Nurse 1, interview) Anxieties over control of the system focussed on being able to "switch off" the video link if any complications with the infant occurred. Many nurses were worried that parents would have the power to view any situation that they pleased, and about the conflicts which might occur due to this perceived power of the parents as omnipresent voyeurs. Nurses also argued for support for the parents when using the video link, since parents may not understand all that is happening and may need some of the clinical jargon that nurses use explained to them. Having another nurse situated at the other end of the link with the parent was deemed necessary, particularly in cases where the infant required treatment. One sister highlighted many of the "control" issues in such cases as when the parents "tune in" and see something that may be upsetting: Whereas it might be difficult if they're [parents] just looking at a screen and there isn't anybody there with them to explain what's actually going on. It might actually frighten them. And if they try to access midwives who may not have an appreciation of what neonatal care is about, then they might have problems with communication, I think. (Sister 2, interview) Therefore the control of the system also impacts on the nurses' workload, because ideally, as many nurses noted, there should be support at both ends of the video link, with experienced staff who have knowledge in this speciality, preferably another nurse from the NICU. Consequently, there were many worries surrounding the control of this technology, ranging from nurses feeling they were being watched, through the confidentiality of information and access to it, to whether parents would have the power to control the links, and to the possibility of videoconferencing being used as legal evidence against nurses.
Effectiveness of Videoconferencing As a Communicative Medium
The importance of being able to perceive parental reaction to information about their baby's condition was also highlighted during the interviews as a key impediment to communication via video links, despite the transmission of sound and visual images. This again not only calls into question the
quality of visual images to adequately relay information, but also establishes the perceived difference between the nature of communication via face-to-face methods in comparison to videoconferencing. Such issues were explored in the questionnaire, which asked respondents to rate how effectively they felt that they would be able to communicate with parents using five common media. This enabled a comparison of the perceived effectiveness in communication of telephones, e-mail, videoconferencing, face-to-face contact, and letters. The mean effectiveness rating for face-to-face contact (m = 4.7778) was found to be significantly higher than all other modes of communication, indicating that nurses consistently felt that this was the most effective way in which to communicate with parents. In turn, videoconference, e-mail, and letter were found to be the least effective means of communication, and the effectiveness of the telephone was found to be significantly higher than these. One interviewee substantiated these results by arguing: Oh, the optimum always is face to face isn't it? But speaking on the phone is the next best thing … On the phone you can say "Now, do you understand what I'm telling you?" Whereas on there [e-mail facility, Baby CareLink study] you can't, can you? (Neonatal nurse 1, interview) Letter received the lowest mean rating of effectiveness (m = 2.2778), and implications regarding the acceptability of the use of videoconferencing to communicate with parents may be ascertained from the fact that no significant difference was found between the mean effectiveness of letter and videoconference (t = 1.781, p = 0.095). However, the mean effectiveness rating of videoconference was found to be significantly higher than that of e-mail (t = -2.582, p = 0.022). This may be seen to relate to the previous criticisms made regarding the "message center" in the Baby CareLink study. During interviews, nurses faulted the effectiveness of written forms of communication such as letters and e-mail due to the increased risk of the message being interpreted ambiguously, which could otherwise be accounted for in face-to-face scenarios: With regards to written documentation … it's subject to misinterpretation … I think there might be areas of confusion regarding the baby, some upsetting comments really. They might be interpreted by mums and dads, not necessarily, but sometimes written things are not always interpreted the same way, are they, as when they're spoken? (Sister 2, interview)
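The chapter does not publish the per-respondent ratings behind these comparisons. As an illustration only, the following sketch shows how such a pairwise comparison of two media could be computed, assuming related samples (each nurse rated every medium); the ratings below are hypothetical.

```python
from scipy.stats import ttest_rel

# Hypothetical 1-5 effectiveness ratings from the same 18 respondents
# for two media; the study's raw data are not reproduced here.
letter = [2, 3, 2, 1, 3, 2, 2, 3, 2, 2, 3, 2, 2, 3, 2, 2, 2, 3]
video  = [3, 3, 2, 2, 3, 3, 2, 3, 3, 2, 3, 3, 2, 3, 3, 2, 3, 3]

# Related-samples t-test, appropriate when the same people rate both media.
t, p = ttest_rel(letter, video)
print(f"t = {t:.3f}, p = {p:.3f}")
```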
However, despite the low mean effectiveness rating, the use of videoconferencing may provide cues, such as intonation, which could remove the ambiguity of information. Although nurses acknowledged the capacity of videoconferencing to convey both sound and visual information in real time, thus enabling the 'reactions' of parents to be perceived, a distinction between videoconferencing and face-to-face contact was made on the basis of the personal touch inherent in face-to-face contact: I suppose it's just being there in person to reassure somebody. There's only so much you can do on a video link, isn't there? If you're [referring to the interviewer] upset now, I can try and reassure you … (Sister 1, interview) In addition, videoconferencing was also seen to detract from the intimacy of communication, whereby nurses anticipated that they would feel self-conscious when communicating with parents via the system and would feel less comfortable providing more personal feedback regarding the baby: [re: videoconferencing] … people feel self-conscious, is the true you coming over, do you know what I mean? And I'll be more inclined to say just what I have to say … You know, "He's been really good blah blah and likes having his tummy rubbed," and you'd probably say that face to face, but think "Oh God, no, I sound like a prat …" (Sister 1, interview) This highlights the view that this technology should not replace face-to-face contact, but if such contact is not available, then it is better than no contact whatsoever: I think it's an excellent idea. It must enhance bonding … It's better than not seeing it at all. It's better than just walking up with a Polaroid snap and saying "Look. This is your baby." That's not good either is it? Whereas they can see it on the screen, they can see it moving, they can see it breathing. And I think a lot of mums … imagine a lot worse scenario, by just thinking about it. And they actually come in and see the baby and think, "God, it's not as bad as that" … So if they could see that before, they wouldn't be frightened either, if they could see the equipment. Whereas they walk in and they're daunted. You know, they just walk into this noisy, hot room and they're like, "Oh God, what have I come to here!" (Neonatal nurse 1, interview)
Other Implications of Videoconferencing
Other issues raised during the interviews do not fit into any particular category in relation to the questionnaire or interview questions, but they are nonetheless pertinent to the analysis of nurses' opinions of the videoconferencing system. Whilst many concerns and disadvantages have been discussed, these comments were in praise of such a system and the advantages that it would bring to the NICU. The remarks focused on the system functioning as a teaching aid, and also on its relevance to recruitment and the public image of the NICU.
CONCLUSION
The aim of the research was to elicit staff requirements for the proposed videoconferencing system at the Neonatal Intensive Care Unit at Salford Royal Hospitals NHS Trust. Analysis of the results from both the questionnaires and interviews provides a comprehensive insight into the perceived advantages and disadvantages of a videoconferencing system from the perspective of NICU nursing staff. Nursing staff comprise one of the key stakeholders in such a system, and in order to assure system usability, their requirements, as elicited in this study, must be incorporated into system design. Staff rejected the feasibility of videoconferencing as a medium over which infants might be observed. This was due not only to the inability of the system to convey sensory information such as touch, and the need to carry out physical procedures in the observation and care of some infants, but also to the perceived unreliability of the system in its capacity to transmit sound and visual images of quality high enough to satisfy observational requirements. In particular, the substitution of physical bedside monitoring of infants by videoconferencing would disregard the experience of nursing staff which enables them to exercise a "sixth sense," making the system unacceptable from this perspective. These findings bear significant implications for the feasibility of the proposed videoconferencing system when considering the unsuitability of the system for clinical purposes. The inclusion of all five senses in the scales used on the questionnaire enabled a comparative analysis of the utility of each within the care of neonatal infants. Smell and taste were deliberately included to allow an internal control of the distribution of responses for each of the five senses. In addition, the use of all five senses allowed the identification of the further "sixth sense," which surpassed the limits defined within the traditional five senses.
Face-to-face communication was highlighted as the most effective means of communicating with parents. In turn, videoconferencing was not perceived as particularly effective, due to the absence of a physical context which would enable staff to respond to non-verbal cues from parents and to make the appropriate reassurances. However, when no face-to-face contact between parents and their infants was possible, due to distance or in cases of emergency, videoconferencing was seen as a viable means of facilitating the bonding process between parent and child. In turn, the proposed system itself is not intended to replace humans or indeed to replace any existing method of parent-infant contact. Despite this, the controversial nature of the proposed system, particularly when acknowledging its lack of utility in the healthcare environment itself, may still impact upon usability for other stakeholder groups, such as parents. However, it may be argued that the lack of any experience with videoconferencing systems may have influenced the low effectiveness ratings, whereas traditional methods of communication, such as telephone and face-to-face contact, received higher ratings because their benefits were more tangible. Nursing staff felt that medical records and video archives, stored on a videoconferencing system, could improve the comprehensiveness of hand-overs, by providing easily accessible information and material to support the verbal briefing. However, this was based on the premise that existing systems which record patient data would be incorporated into the videoconferencing system, so as to avoid the duplication of tasks and any strain upon staff time. The necessity for such integration is particularly critical when referring back to the motivations underlying healthcare reform policies, whereby failure to incorporate new technologies into existing systems will only increase pressure upon the NICU and its resources. Nursing staff vehemently argued for ultimate control over when video links with parents were operating. The need for confidentiality, and the ethical and legal implications of running a system to relay such private and sensitive information, was repeatedly stressed. Therefore, if the system is developed with such issues in mind and they are accounted for in the design process, this should allay the fears of the nurses and mean that such a videoconferencing system would be well received. These findings are summarized in the following list of recommendations which may be integrated into system specifications:
• Videoconferencing should not substitute the physical tradition in which neonatal infants are monitored;
• If used for storing medical records, a videoconferencing system would have to be integrated into preexisting information systems on the NICU;
• Medical records and video archives stored on a videoconferencing system could support verbal briefings during staff hand-overs;
• Medical staff should have control over the operation of video links at all times; and
• Videoconferencing may not be seen as a replacement for face-to-face communication; however, it may provide a workable substitute when face-to-face communication is not possible.
This research has validated the methodology of combining qualitative and quantitative forms of data collection in producing both empirically viable and contextually significant material from which end-user requirements may be extrapolated. The recommendations outlined above should be incorporated into the design of the videoconferencing system by the Medilink Project. In particular, both the success of this methodology and the significance of these findings should be taken into account when analyzing the needs of other stakeholder groups. The primary constraint of this study was the inability to gain access to the other stakeholder groups, in particular the parents of infants in the NICU. This was due to delays in securing Ethical Committee approval and the need to receive police clearance before access to the patients and parents would have been permitted, which was not possible within the time frame of this research. Nevertheless, the success of this study in eliciting tangible requirements from the medical and clinical staff stakeholder group substantiates the utility of adopting a user-centric approach from the outset of system design. This research highlighted factors that were key to the ability of nurses to provide care to infants, which might otherwise have been overlooked had a traditional software development methodology been adopted. The slow adoption of telemedicine technologies (Grigsby, 1995), teamed with the drive to modernize and streamline the provision of healthcare with the use of ICTs (NHS Executive, 1998) under the "Information for Health" strategy, may be effectively addressed with a user-centric approach to the research and integration of telemedicine. Acknowledging the requirements of key stakeholders, reflecting these in the design and functionality of the system, and thus enabling end-user ownership has been shown to be a critical factor in system success (Newman & Noble, 1990; Wastell & Sewards, 1995; Kontogiannis & Embrey, 1997). This will enable the true potential that telemedicine offers patients and medical and clinical staff in revolutionizing healthcare to be realized.
ENDNOTES
1. Medical and clinical staff were the only stakeholder group who were accessible to the researchers in this study, due to the need for Ethical Committee approval and police clearance before access to patients and parents could be granted.
2. Two scales were used, the first to measure "importance" and the second to measure "effectiveness." For both scales, "1" represented the lowest rating whilst "5" represented the highest rating.
3. It is important to note that vignettes were presented to subjects, as opposed to any proposed solution developed by the Medilink Project team, so as not to confound the validity of the actual requirements which were to be elicited. As a requirements-gathering exercise, it would have been erroneous to present a predefined solution to nurses; hence the use of vignettes, which enabled a contextual forum for data collection, allowing nurses to understand and explore the implications of the system without influencing their actual requirements.
Chapter V
End-User Directed Requirements–A Case in Medication Ordering

Stephen L. Chan
Hong Kong Baptist University, Hong Kong
INTRODUCTION
This paper presents a physician order entry system for the ward (for medication prescriptions) based on scanning and image processing. Important design and operational issues that need to be considered by developers of similar end-user computer systems are presented. Then the scanning and image processing system (SIPS) is described. SIPS was developed for the Hong Kong Baptist Hospital (HKBH), Kowloon, Hong Kong, and has been in successful operation for over three years in the hospital. The development of SIPS was based on end-user directed requirements. SIPS makes use of and integrates different information technologies, including scanning, bar code and other marks recognition, intelligent image capturing, server database access and retrieval, and network communication and printing. We observe that the end-user context directed the design and development of the system. On the other hand, the use of SIPS led to the implementation of new operational procedures, resulting in improved quality of healthcare delivery in the ward and increased productivity of the medical personnel.
End-User Context
The end-user context of an end-user computer system is important. A recent study established the role of the end user per se in strategic information systems planning (Hackney, Kawalek and Dhillon,
1999). There are also studies establishing the importance of the end-user context in identifying requirements in end-user systems development (Gammack, 1999) and in measuring end-user computing success (Shayo, Guthrie and Igbaria, 1999). In Komito's (1998) study of the use of a system of electronic files to replace paper files, end-user considerations were identified as the difficulty in the transition: paper documents are perceived to be "information rich," providing control of information for occupational status and position. As a result, there is a perceived need for the user to defend "occupational boundaries," and this discourages the use of electronic information. Indeed, in our effort to computerize ward procedures, we found that the end-user context was crucial in determining the technical options available to us. More specifically, in developing a medication ordering system at HKBH, we faced the following real-life scenario. HKBH is a 700-bed private general hospital where the ordering of medications by doctors is dominated by traditional paper-and-pen operations. For several reasons, it was considered impossible to replace this traditional way of working and introduce a direct physician order entry (POE) method in which doctors enter medication orders directly into the computer. Firstly, there is a large number (1,000+) of visiting doctors. These doctors have very different backgrounds, and their ages span a range of over 40 years. Furthermore, some of the doctors visit the hospital only occasionally, when their patients are admitted; it is therefore not practical to hold training classes for them, and even if they were trained, they might not remember how to use a POE system on their occasional visits. Secondly, the doctors are specialists in their own medical fields, and many are not proficient in the use of the computer; for some individuals, even their typing skills are in doubt. (Typing skills were recognized as the biggest stumbling block in one hospital computerization effort; Blignaut and McDonald, 1999.) Their aim is the practice of their medical specialties, and they would not see the need to learn to use the computer. Thirdly, much of the computer work is viewed as administrative and is considered the responsibility of the hospital. Some doctors would be resistant to spending time learning and performing tasks that are perceived as administrative and the hospital's responsibility. Furthermore, there are also pragmatic considerations. Doctors visit their patients in the hospital outside the office hours of their clinics. They do not normally spend a lot of time at the hospital, and when they are there, their main concern is their patients. They prefer to use their most proficient (and efficient) way of placing medication orders, which is the paper-and-pen method.
For hospitals under such situational necessity and with such pragmatic considerations, it is therefore necessary to take the paper-and-pen method as given. Nevertheless, such hospitals must still look for effective and efficient ways of handling the remaining business processes of filling doctors' medication orders.
Medication Ordering Overview
We break down the operational process of medication ordering into four sub-processes: order capturing, order sending, order receiving, and order processing. We summarize in Figure 1 our overview of several methods. Traditionally, the ordering method has been manual. Orders originate from physicians, who capture them by paper-and-pen operations: simply, they write down their orders on paper, which may be specially designed forms in multiple copies. A member of the hospital staff then hand-carries these paper orders to the pharmacy, and the pharmacy staff receive the paper orders and dispense medication accordingly. With facsimile machines available, some hospitals use an "improved" manual method. With this method, the order capturing process continues to be manual, using paper and pen; however, the hospital staff send the orders, and the pharmacy staff receive them, by facsimile machine. The computer-based Direct Physician Order Entry (POE) system has been put forth for over 20 years (Sittig and Stead, 1994).

Figure 1: Methods of order entry–overview
• Traditional Manual. Order capturing: paper-and-pen operations by physicians. Order sending: manual delivery by hospital staff. Order receiving: manual receipt by pharmacy staff. Order processing: medication dispensing by pharmacy staff.
• Improved Manual. Order capturing: paper-and-pen operations by physicians. Order sending: delivery by hospital staff via fax. Order receiving: fax printing at pharmacy fax machine. Order processing: medication dispensing by pharmacy staff.
• Direct Physician Order Entry. Order capturing: computer entry by physicians into database. Order sending: computer processing of database information for network printing. Order receiving: order printing at pharmacy printer. Order processing: medication dispensing by pharmacy staff with database information.
• Order Scanning and Imaging System. Order capturing: paper-and-pen operations by physicians. Order sending: computer scanning of orders at ward into database and information processed for network printing. Order receiving: order printing at pharmacy printer. Order processing: medication dispensing by pharmacy staff with database information.

Based on the above
breakdown of operational sub-processes, the direct POE system requires physicians to enter their orders into the computer database themselves. The orders are then transmitted via the network for the pharmacy staff to receive; one way is for each order to be printed at the pharmacy printer with the relevant patient and order information. Sittig and Stead (1994) provided a good review of the state of the art, reported on POE implementation cases, and elaborated on the rationale for, the sociologic barriers to, and the logistical challenges involved in successful implementation of a direct POE system; system design issues were also raised. An evaluation of a POE implementation in one hospital can be found in Lee et al. (1996). A recent study recommended the implementation of provider order entry systems (especially for computerized prescribing) to reduce the frequency of errors in medicine using information technology (Bates et al., 2001). Another recent study reported that POE requires substantially more time (2.2 minutes more per patient overall) than traditional paper-based ordering methods; it was argued, however, that after consideration of duplicative and administrative tasks, the difference was reduced to 0.43 minute (Overhage et al., 2001). We have developed a fourth method: the Scanning and Image Processing System (SIPS). This is the result of an innovative effort to meet end-user requirements with information technology.
THE SCANNING AND IMAGE PROCESSING SYSTEM (SIPS) FOR MEDICATION ORDERING
The operation of SIPS makes use of specially designed medication order forms on which doctors write their orders. The forms are scanned by ward staff (clerks or nurses). The computer then performs recognition of the barcode and marks. Handwritten orders in the scanned forms are captured as images. Orders are generated that include these images as well as patient information (from the form and from the Hospital Information System [HIS] databases) and are transmitted to the pharmacy printer via the network. Additionally, the order images are kept in the SIPS database for ready reference by means of unique order numbers generated by the system. The system attempts to minimize the introduction of drastically new operational procedures. As far as the doctors are concerned, the operation of this system is very similar to their usual practice, i.e., they continue to provide handwritten orders. The system reduces human effort by eliminating the sending of orders to their destination by hand. The system also has built-in features to detect and avoid possible scanning and processing errors.
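To make this flow concrete, the following is a minimal, illustrative sketch of the ordering flow just described. The type and function names are assumptions made for exposition; they do not come from the actual SIPS code.

```python
from dataclasses import dataclass

# Illustrative model of one scanned order; the field names are drawn
# from the chapter's description, not from the real SIPS schema.
@dataclass
class ScannedOrder:
    order_no: int        # unique number generated per successful scan
    admission_no: str    # patient identifier recognized from the barcode
    urgent: bool         # taken from a recognized mark box on the form
    order_image: bytes   # the handwritten order, kept as an image

def route_order(order: ScannedOrder, current_admissions: set,
                archive: dict, pharmacy_queue: list) -> None:
    # Validate: the admission must belong to a not-yet-discharged patient.
    if order.admission_no not in current_admissions:
        raise ValueError("admission number failed validation; rescan the form")
    archive[order.order_no] = order      # keep the image for later queries
    pharmacy_queue.append(order)         # forward to the pharmacy printer

archive, queue = {}, []
route_order(ScannedOrder(1, "A123456", urgent=True, order_image=b"..."),
            {"A123456"}, archive, queue)
print(len(queue), "order(s) queued for the pharmacy printer")
```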
SIPS was developed to tolerate faults, for smooth operation in the ward, where staff are not normally highly proficient in handling the computer. For example, SIPS checks for the readiness of the scanner and printer. It also detects and corrects skewing, which becomes even more important with hardware aging and fatigue. The handwriting of the doctors may be too light, and the images captured may then not be legible; SIPS therefore sets a high scanning intensity level so that such handwriting can be captured and legibility ensured. Barcode recognition, on the other hand, requires SIPS to set a lower scanning intensity level. Finally, in order to avoid small dirty spots being recognized as marks, SIPS allows a user-specified black-value threshold for these marks.
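As an illustration of the last point, a mark box can be treated as ticked only when the proportion of sufficiently dark pixels inside it exceeds a threshold, so that isolated specks of dirt are ignored. The sketch below is an assumption about how such a check might look; it is not the actual SIPS routine, and the cutoff values are invented.

```python
def box_is_marked(pixels, dark_cutoff=128, black_threshold=0.15):
    """Decide whether a scanned mark box is ticked.

    pixels: 8-bit grayscale values for the box (0 = black, 255 = white)
    dark_cutoff: grayscale value below which a pixel counts as ink
    black_threshold: minimum fraction of dark pixels; a few specks of
        dirt fall below this fraction and are therefore ignored
    """
    pixels = list(pixels)
    dark = sum(1 for p in pixels if p < dark_cutoff)
    return dark / len(pixels) >= black_threshold

# A few stray dark pixels (dirt) do not register as a mark ...
print(box_is_marked([255] * 95 + [0] * 5))   # False
# ... but a genuinely filled box does.
print(box_is_marked([255] * 60 + [0] * 40))  # True
```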
SIPS Design and Operational Requirements
In developing an order entry system using scanning and imaging, we observe that there are design issues and operational requirements that must be met in order for the system to serve as an effective supporting tool in real-life operations. The following list presents these design and operational requirements (which may reflect our lessons learned):
• Paper-and-pen operations: There should be minimal impact on the physicians. In other words, it was considered a requirement that physicians should not have to be trained to use the system, nor should they be required to change their ways of doing things.
• Automation: It was considered important that as much automation as possible should be used for efficient and accurate operations, as long as high reliability is maintained.
• User friendly: The system must be simple and easy to use. Supporting staff (nurses and ward clerks) have varied degrees of computer proficiency. Since this is at times a 24-hour operation, the system has to be used at night, when other support is scarce; thus, computer proficiency should not be required. Furthermore, there is always staff turnover, so it is important that the system can be learned easily by new staff, with minimal on-the-job training.
• Assured quality: Quality must be assured. A goal was set that the system be 100% reliable. By this, we mean that when the system declares a job completed, it has been completed totally correctly. In other words, we need a high-quality system that can detect errors and, if possible, correct them. If an error cannot be corrected, the users should be notified and helpful information provided for the users to make the corrections. The information handled by the system must be accurate and complete. Our effort in reaching this goal was evolutionary. All anticipated errors were programmed with error capture, exception handling, and graceful degradation.
Then, subsequent unforeseen errors were debugged and appropriate measures were built into the system. We observed, after the first six months of operations, that we were able to account for all errors and failures. For example, due to scanner usage fatigue, on very rare occasions the scanner returned a scanning success while failing to capture the image; the result was a successful scan, but the recipient was sent a blank order. To handle this, we added a test requiring the size of the scanned image to be larger than a certain minimum. If the size is below the minimum, even though the scanner reports a successful scan, the program declares a failure and prompts the user to scan again (see the sketch after this list). Furthermore, it seems obvious that automatic recognition of handwritten orders would be disastrous: the error rate would be operationally unacceptable. One recent study reported 87%-93% accuracy for two handwriting recognizers of letters under a specially constrained environment (MacKenzie and Chang, 1999). Even with such a level of accuracy, errors would be disastrous in a hospital environment, and any system relying on handwriting recognition would fail operationally.
• Fault tolerance: The system should have a high degree of fault tolerance. It should cater to as many exceptions as possible. Appropriate automated exception-handling procedures should be built into the system, and user intervention should be the last resort.
• Contingency: Graceful degradation should be achieved in case of system failure. The contingency plan should aim for "business as usual" backup procedures.
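A minimal sketch of the blank-order guard described under assured quality: if the scanner reports success but the captured image is implausibly small, the scan is rejected and the user is asked to rescan. The size floor and names are illustrative assumptions, not values from SIPS.

```python
MIN_IMAGE_BYTES = 4096  # assumed floor; a real scanned form is far larger

class ScanError(Exception):
    """Raised when a scan must be repeated by the user."""

def checked_scan(scan_once):
    """Run one scan, rejecting 'successful' scans that captured no image."""
    image = scan_once()
    # Guard against the rare fault where the scanner reports success
    # but the captured image is effectively blank.
    if len(image) < MIN_IMAGE_BYTES:
        raise ScanError("scan reported success but the image is too small; "
                        "please scan the form again")
    return image
```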
SIPS System Architecture
The overall system architecture of SIPS is shown in Figure 2.

Figure 2: System architecture of SIPS. [Diagram: a nurse-station PC with an attached scanner and local printer communicates over the network with the SIPS server (backed by the SIPS database) and the HIS server (backed by the HIS database); the SIPS server redirects order images to the pharmacy printer.]

At each nurse station, there is a personal computer (PC), a scanner, and a printer installed. The scanner is used to scan medication order forms to be sent to the pharmacy. The printer is used for local printing of scan reports. The PC communicates with the SIPS server and accesses data through a network. It also accesses the hospital's information system database via the Hospital Information System (HIS) server by calling SIPS/HIS interface routines. The SIPS server stores and retrieves image data of patients (from the SIPS database) and redirects output images to remote pharmacy printers for printing. These printers, physically located in the pharmacy, are connected to the system via the Novell network. At the same time, order information is
stored in Oracle tables in the SIPS database, which resides on the SIPS server's hard disks. The PC at the nurse station also serves as a client to the HIS server (an IBM RISC 6000 machine running AIX). In operating this system, data have to be uploaded and downloaded between the HIS and SIPS databases; this is accomplished by various HIS/SIPS interface routines and procedures.
SIPS Functional Features
The following functional features of SIPS are identified as necessary system capabilities for an operational ordering system using scanning and image processing.
• Recognition and image processing: SIPS performs bar-code recognition to determine the patient-owner of the form. There are specially designed marks to be detected for skew detection and for relative location identification. There are boxes to be recognized (to determine if they are marked) for specific actions, such as whether the order is an urgent order and/or a discharge order. There are three image sections on the form. The first section contains patient information; this is always captured and included in the printed order at the pharmacy printer. The second section is for the doctor to write clinical/diagnostic information. For this section, a mark box is provided for the user to indicate whether this image section should be used in the printed order and saved for future use; otherwise, the stored image is used instead. Finally, the third section is for the doctor to write the medication order. Since the forms are used for multiple visits and orders, the relevant order image has to be identified by the user before each scan. A line-used column is available for the user to mark the portion of the page that is not relevant to the current scan and the beginning and the ending of the relevant image. Based on these markings, SIPS intelligently captures the image that is relevant to the current order.
• Information identification and validation: The form owner (patient) is represented in the database by a unique number for the current admission. This admission number is bar-code printed on the order form. After SIPS recognizes the value of the number, validation is required. The recognized admission number is used to search the hospital database; it must represent an admission with a patient who has not yet been discharged. However, this is not sufficient, because a slight error in recognizing the admission number may lead to a different, but also not yet discharged, patient. Another validation process was therefore implemented, using two check digits. Each digit of the admission number is weighted by a prime number, and the check digit is the remainder, for a certain modulus, of the weighted sum. Since a juxtaposition of two digits, with a slight change in their values, may result in the same check digit, another set of prime numbers is used for a second check digit (see the sketch after this list).
• Fault detection and correction: Forms are either placed on the flatbed of the scanner or fed into the scanner by an automatic document feeder. The primary fault is form skewing, including translation and rotation. Skewing is detected and the degree of skewing determined in SIPS, and position and angle corrections are performed. For large skewing, which may induce significant errors, the form is rejected outright and the users are advised to rescan it. Another possible fault may be due to hardware aging and failure; various equipment fault detection and corrective actions are built into the system.
• Database and image storage: SIPS is integrated with the hospital's pharmacy system via a unique order number generated each time an order is successfully scanned. A record is created in the database so that subsequent queries can retrieve the captured images stored in the database.
• Security: Security for program access, database access, and network access is maintained.
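The two-check-digit validation just described can be sketched as follows. The chapter does not publish the actual prime weights or modulus used by SIPS, so the values here (two sets of prime weights, modulus 11) are assumptions chosen only to illustrate the scheme.

```python
# Assumed parameters: one prime weight per admission-number digit, and a
# second, different prime set for the second check digit. Modulus 11 is
# also an assumption; check values of 10 are written as "X", ISBN-style.
WEIGHTS_1 = [3, 7, 11, 13, 17, 19]
WEIGHTS_2 = [23, 29, 31, 37, 41, 43]

def check_char(digits, weights, modulus=11):
    # Weighted sum of the digits, reduced modulo a fixed value.
    value = sum(d * w for d, w in zip(digits, weights)) % modulus
    return "X" if value == 10 else str(value)

def with_check_digits(admission_number: str) -> str:
    digits = [int(c) for c in admission_number]
    c1 = check_char(digits, WEIGHTS_1)
    c2 = check_char(digits, WEIGHTS_2)
    return admission_number + c1 + c2

# The barcode payload carries the admission number plus both checks; a
# misrecognized digit or a transposition is very unlikely to satisfy both.
print(with_check_digits("305214"))  # -> "30521475"
```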
SIPS Modules
SIPS is composed of two modules: the Ordering Module and the Order System Maintenance Module. The Ordering Module provides all the facilities needed to send requests for medications for the patients from the wards to the pharmacy. This module encompasses all the activities related to the ordering of medication. The basic process consists of the preparation of specially designed forms, the completion of these forms, the scanning of these forms, the printouts at the receiving end, and the management of these orders subsequent to scanning. The Order System Maintenance Module provides the facilities needed for maintaining SIPS.
COMMENTS
SIPS has been successfully implemented and has been in operation for over three years. The following observations can be made:
• Improved operations: There are, on average, 500 scans per day, performed primarily in the morning. SIPS has been running very well and improving the operations. There is no change to the doctors' paper-and-pen operations. Since each form can be marked in a matter of seconds and scanned in less than a minute, orders can be sent quickly to the pharmacy, and follow-up actions (if needed), such as the substitution of a drug in short supply, can be initiated quickly. Furthermore, there is no need for making copies or sending the original order to the pharmacy. SIPS has been operated mainly by ward clerks, and thus valued medical personnel (doctors and nurses) are freed from performing the ordering task. We observed that SIPS has indeed improved the operations with much reduced effort. We observe user satisfaction in the continued usage of SIPS without complaints or requests for an alternative procedure; as a matter of fact, it was asked how the hospital could "live without" SIPS.
• System integration of information technologies: SIPS is a system based on the integration of different information technologies, including scanning, bar code and other marks recognition, intelligent image capturing, server database access and retrieval, and network communication and printing. Furthermore, we also had to integrate the printing and image processing of the Chinese characters used in the forms and the order images.
• Business process reengineering (BPR): The process of medication ordering was reengineered. We used the "clean-slate" approach (Hammer & Champy, 1993) to develop SIPS. We first developed the operations and procedures for the system before defining the system specifications. The operations are concerned with the "what's" that have to be performed; the procedures specify the "how's" of accomplishing the operations. These have to be specified in detail in order for the system to work in live operations, and many details are defined in the system manuals (Chan, 1995). As to the "clean-slate" approach, we deviated from it in the implementation of the redesigned process: as discussed earlier, it was deemed necessary to take the paper-and-pen method as given. On this basis, we looked for effective operations and efficient procedures for handling the business processes of filling doctors' medication orders, with the aim of better meeting patients' healthcare needs. We achieved a significant "process simplification," resulting in "substantial improvement of patient safety" (as suggested in Bates et al., 2001).
• Referral letters: There is an important complementary component to SIPS for it to work effectively in live operations in a hospital. Many patients to be admitted to the hospital carry with them a referral letter with medication and investigation orders for immediate action upon admission. In order that these medication orders can be placed from the ward using SIPS, the medication orders in a referral letter must be "transcribed" onto the order forms. Instead of transcribing by hand, we have a system that scans and captures these referral letters as images and prints them on the order section of the order form. Then, upon admission to the ward, SIPS can be used to place these orders immediately. The basic features of this system are the capturing of a referral letter as an image, the appropriate reduction of the image size, and the printing of the image to fit the order form.
• Extension: SIPS has been designed to include scanning and image printing to various supporting units for a total of fifteen different investigation order types, such as operating theatre (for booking), laboratory, X-ray, ultrasound, and magnetic resonance imaging (MRI) orders. There are additional designs, such as a section for clinical and diagnosis information and mark boxes for different destinations. Many more detailed operations and procedures have been specified.
CONCLUSION
In this paper, we presented SIPS, a system that integrates different information technologies in a way that meets the needs of a particular end-user group. We presented the design and operational requirements of SIPS that could be considered in the design and development of a similar system; the functional features of SIPS presented should also be of relevance. SIPS meets the needs of a particular type of end user who requires paper-and-pen operations. With the improvement of the computer competency of the medical profession, we foresee this need diminishing (perhaps slowly). And with the improvement of information technology, we conjecture the concept of "bringing the computer to the bed": there will no longer be a need to enter orders and place them as separate steps. Physician orders will be entered directly into the computer and communicated immediately to the service organizations (such as the pharmacy). In the future, it may be pen computing, using a pen tablet or a Palm; it may be voice input; and these may all run over a wireless network.
Chapter VI
VDT Health Hazards: A Guide for End Users and Managers

Carol Clark
Middle Tennessee State University, USA
INTRODUCTION
Managers must strive for a healthy and productive working environment for end users. Eliminating or reducing lost work days and improving worker productivity in turn relates to the organization's profitability. VDT related health issues are important to end users, managers, and the organization as a whole. End user computing is becoming commonplace in most businesses. In 1993 more than forty-five percent of the US population used computers at work according to the US Department of Labor (1997). The proliferation of end user computing creates a host of management issues. One such issue involves the potential health hazards associated with video display terminal (VDT) use. Both managers and end users must address this issue if a healthy and productive work environment is to exist. Government bodies have been addressing the VDT ergonomics issue. Legislation has been created or is in the process of being created to protect VDT end users. VDT-related ergonomics should be proactively approached by management. In fact, some companies have ergonomics plans in place that specifically include the use of VDTs. One such company is Sun in Mountain
View, California. Sun has reduced its average repetitive strain injury (RSI) related disability claim from a range of $45,000 to $55,000 to an average of $3,500. It addresses not only equipment issues but also behavioral changes, like taking frequent breaks (Garner, 1997). The good news is that many recommendations, such as these, are relatively simple to implement in any organization. This article outlines the major health issues associated with VDT use (e.g., vision problems, musculoskeletal disorders, and radiation effects on pregnancy), as evidenced by the literature and medical research. It provides practical suggestions for both end users and managers to help eliminate or reduce the potential negative health effects of VDT use.
MAJOR VDT-RELATED HEALTH ISSUES

Vision Problems
Vision problems related to VDT use have raised concerns for both end users and managers for some time. How extensive is the problem? "A survey of optometrists indicated that 10 million eye examinations are given annually in this country, primarily because of vision problems related to VDT use" (Anshel, 1997, p. 17). In addition, seventy-five to ninety percent of all VDT users have visual symptoms, according to a number of investigators (Anshel, 1997). The term computer vision syndrome (CVS) has emerged. CVS is defined by the American Optometric Association "as that 'complex of eye and vision problems related to near work which are experienced during or related to computer use'" (Anshel, 1997, p. 17). The symptoms included in CVS are "eyestrain, headaches, blurred vision (distance, near, or both) dry and irritated eyes, slowed refocusing, neck ache, backache, sensitivity to light, and double vision" (Anshel, 1997, p. 17). Most of these symptoms have been a cause for concern for some time, but the identification of a specific ailment, i.e., CVS, has solidified the concern. What causes CVS? Contributing factors include an individual's visual problems, poor workplace conditions, and incorrect work habits (Anshel, 1997). An individual's visual problems, like astigmatism, are clearly medical concerns beyond the present scope. However, workplace conditions and work habits are directly germane to VDT use in the office environment and are appropriate to this discussion. The problem of glare produced by traditional office lighting on VDT screens is well known. This lighting is suited to white paper work and not
computer screens. The point of view has changed for the user: instead of looking down (at the desk surface), one looks directly ahead at the screen (Bachner, 1997). This change in work environment must be addressed and modifications made to accommodate the end user. One study addressed ocular surface area (OSA) as a contributing factor in vision distress. The subjects performed word processing as a task for ten minutes, and the front of the eye was videotaped with a small TV camera. VDT work involves a high gaze angle, which induces a large OSA, because VDT users look at both the screen and the keyboard; the study examined this screen and keyboard distance and the related OSA (Sotoyama et al., 1996). "A large OSA induces eye irritation and eye fatigue because the eye surface is highly sensitive to various stimuli" (Sotoyama et al., 1996, p. 877). Another study addressed the relationship between visual discomfort (asthenopia) and the type of VDT work activities performed by over ten thousand VDT operators. The types of VDT work were data entry, data checking, dialogue, word processing, enquiry, and various services; the number of hours of application was also included. The findings indicated that the type of VDT work was not a significant factor in visual discomfort; the results suggested that the main factor is the amount of time spent at the VDT (Rechichi, DeMoja and Scullica, 1996). This further indicates that visual discomfort can be present among several groups of workers with different VDT-related responsibilities. Managers will need to address the possibility of vision-related ailments across various end-user areas.
Musculoskeletal Disorders
Musculoskeletal discomfort and related problems are another concern for VDT end users. These problems include carpal tunnel syndrome, which affects the hand and wrist. Working with VDTs usually requires small, frequent movements of the eyes, head, arms, and fingers while the worker sits still for some time. Muscle fatigue may result after maintaining a fixed posture for extended periods of time, and muscle pain and injury can follow. Musculoskeletal disorders are caused or made worse by work-related risk factors. These disorders include injuries to muscles, joints, tendons, or nerves. Pain and swelling, numbness and tingling, loss of strength, and reduced range of motion are early symptoms of musculoskeletal disorders (US Department of Labor, 1997). A review of musculoskeletal problems in VDT work was done by Carter and Banister (1994). They conclude that "musculoskeletal discomfort associated with VDT work is attributable to static muscular loading of the system, biomechanical stress, and repetitive work" (Carter and Banister, 1994, p.
1644). However, they indicated that a conclusion cannot be made about whether the frequency of occurrence of musculoskeletal problems is higher for VDT workers than for non-VDT workers (Carter and Banister, 1994). Carpal tunnel syndrome is one of the top workplace injuries resulting in lost work time (Quintanilla, 1997). The Bureau of Labor Statistics recorded almost 32,000 cases of carpal tunnel syndrome in 1995 (Quintanilla, 1997). The National Council on Compensation Insurance reported in 1996 that "carpal tunnel syndrome cases are a growing cause of workers' compensation claims" (Gjertsen, 1997, p. 8). In addition, more work days are missed as a result of CTS than from other injuries (Gjertsen, 1997). Missed work days mean reduced productivity, which is, of course, of major concern to managers. Reaching across the desk to use a mouse, squeezing a mouse hard, and button-pressing for long periods can affect the musculoskeletal system. Mouse-related disorders ranged from 0 in 1986 to 216 in 1993. This is a relatively small number, but it represented six and one-tenth percent of repetitive stress injuries (RSIs) from computer use (Lord, 1997). The potential for more cases is great given the proliferation of mouse devices among VDT users today. A report from the Office Ergonomics Research Committee in Manchester Center, Vermont, says that many factors contribute to RSIs, including a lack of training on the prevention of injuries, physical differences among people, and medical histories, as well as psychological stress that may lead to physical pain in some (Gjertsen, 1997). RSIs account for one third of the annual $60 billion in workers' compensation costs, according to Federal OSHA (the Occupational Safety and Health Administration) (Rankin, 1997). The potential impact on end users in terms of health and productivity is immense; from an organizational perspective, employee morale and productivity are at stake, as well as profits. Bureau of Labor Statistics figures for repetitive motion injuries (RMIs) related to repetitive typing or keying that resulted in missed work days show a drop of six percent from 1995 to 1996, and a total drop of seventeen percent since 1993. It is also reported that repetitive typing or keying accounts for only six-tenths of a percent of the total reported cases of RMIs. P.J. Edington, Executive Director of the Center for Office Technology, notes that "clearly, most repetitive motion injuries are not occurring in the offices of America" (Center for Office Technology Page, 1998a). Even
though there is a decline in RMIs, some people are still being affected, and the issue must continue to be addressed by end users and managers. The U.S. Department of Labor estimates that about fifty percent of the US workforce will have some kind of cumulative trauma disorder by the year 2000 (Garner, 1997). So, no matter what the cause, this issue needs to be addressed. Musculoskeletal disorders should not be ignored by either end users or managers. It is evident that some of the potential factors can and should be addressed at work to help prevent the occurrence of, or reduce, the negative health effects associated with VDT work.
Radiation Effects on Pregnancy
The issue of radiation effects on pregnancy has caused concern among VDT users and has prompted much research in the area. A debate has existed as to whether the use of a VDT during pregnancy results in adverse health risks, such as increased spontaneous abortions. In 1991, the New England Journal of Medicine published the results of one such study. The final study group consisted of directory assistance and general telephone operators. There were 730 subjects with 882 eligible (as defined by the study) pregnancies during the 1983-1986 time period; some of the subjects had more than one pregnancy, and there were also sixteen pregnancies with twins. The directory assistance operators used VDTs while the general telephone operators did not; this represented the primary difference between the two groups. The study concluded that VDT use (and the associated electromagnetic fields) was not related to an increased risk of spontaneous abortion (Schnorr et al., 1991). A second part of the study focused on the association between VDT use and the risks of reduced birth weight and preterm birth; the results were that these risks did not increase with occupational VDT use (Grajewski et al., 1997). The authors of another study likewise concluded that there is no increased risk of spontaneous abortion with VDTs. That study group consisted of 508 women who had a spontaneous abortion and 1,148 women who gave birth to healthy infants from 1992-1995. However, in the area of VDT use for word-processing activity and spontaneous abortion, a borderline statistically significant risk was found (after considering other potential confounding effects) (Grasso et al., 1997).
ADDRESSING VDT HEALTH ISSUES: PROGRAMS AND LEGISLATION

Ergonomics Programs
Ergonomics programs can address the issue of musculoskeletal disorders. A report entitled "Worker Protection: Private Sector Ergonomics Programs Yield Positive Results" was produced by the US General Accounting Office in 1997 (Fletcher, 1997). It looked at five private companies to identify key components that are critical to ergonomics programs. The core elements to protect workers are "management commitment, employee involvement, identification of problem jobs, development of solutions for problem jobs, training and education for employees, and appropriate medical management" (Fletcher, 1997, p. 71). The report suggests allowing flexibility in designing a program depending upon the worksite; this suggestion was aimed at the continuing efforts by OSHA to develop ergonomics standards (Fletcher, 1997). It is possible to reduce or eliminate the potential health hazards of VDT use, and managers should initiate a new ergonomics program or revamp an old one with these suggestions in mind.
Current VDT Legislation
A current legislative action that addresses ergonomics issues was finalized in 1997 in the state of California. In July of 1993, the governor of California signed a law requiring the development of statewide ergonomics requirements by the end of 1994. This mandate resulted in the proposal of the Occupational Noise and Ergonomics Hazards rule (Williams, 1997). The rule exempts employers with fewer than 10 workers (Ceniceros, 1997) and applies to workers who use computers for more than 4 hours a day or 20 hours a week (Williams, 1997). The rule addresses six areas: chairs, workstations, workstation accessories, lighting, screens, and work breaks (Williams, 1997). The California ergonomic standard on repetitive motion injuries was passed in 1997 (Guarascio-Howard, 1997). While it is enforceable, a series of legal challenges is currently aimed at the standard. Managers and end users will want to keep a close watch on the outcome of the legal process and the ultimate enforceability of the standard. The introduction of this ergonomics standard will potentially play a role in future standard development. Managers may want to take a voluntary approach to ergonomics to protect end users.
RECOMMENDATIONS FOR END USERS AND MANAGERS
Recommendations: Vision Problems
Care should be taken to reduce glare on the screen. Monitors that tilt are the norm today. However, office lighting should also be modified to fit an office environment that goes beyond traditional white-paper work and includes the use of computers. Dennis Ankrum, a certified ergonomist, says that the previously suggested computer monitor distance of 20-26 inches is too close. He suggests distances of 45 inches when looking straight ahead and 35 inches when looking downward (at an angle, as when reading a book), because the eye muscles are most relaxed at these distances; rather than moving closer to the monitor, he suggests increasing the typeface size (Verespej, 1997). In some cases, corrective lenses are needed to avoid the eye strain and headaches associated with VDT work. Taking a rest break every hour or so can reduce eyestrain. Also, employees can simply change their focus (for example, by looking across the room) to give the eye muscles a chance to relax (US Department of Labor, 1997). Eye irritation and fatigue are induced by a large ocular surface area (OSA). Two recommendations are made: adjust the desk height to suit the user's height, and place the monitor closer to the keyboard; the latter reduces the OSA (Sotoyama et al., 1996).
Recommendations: Musculoskeletal
Chairs
Work equipment should accommodate the human body and not vice versa (Fisher, 1996; Kamienska-Zyla and Prync-Skotniczny, 1996). For example, the purchase of adjustable chairs is important because chairs are not one size fits all (Rundquist, 1997). Researching chair quality and adjustability requires time and effort, and ergonomically designed chairs can be costly. However, one company considers the purchase of a $500 chair a minimal expense when compared to a disability claim, especially when the cost is amortized over five years (Garner, 1997). If the result is happier, healthier, and more productive employees, it is time and money well spent.
The actual implementation of this process could be facilitated by an ergonomics task force made up of a cross-section of employees (Rundquist, 1997). This approach would incorporate various end-user areas and perspectives, perhaps resulting in a more thorough evaluation.
Keyboards
A recent trial against a keyboard manufacturer (Digital Equipment Corporation) found in favor of the defendant, and similar judgments have been reached in many other cases. It was determined that "keyboards do not cause musculoskeletal disorders," part of a trend recognizing that injuries are not caused by keyboards themselves (Center for Office Technology Page, 1998b). However, position and level of force while keyboarding may contribute to RSI (repetitive stress injury) symptoms, and these are factors that the end user can control. Users should maintain a straight, natural posture: the wrist should not bend up or down or twist left or right. The keyboard level should also be adjusted so that it is positioned near or below elbow level (Figura, 1996). Should ergonomic keyboards be purchased? A study by NIOSH (National Institute for Occupational Safety and Health) reported no difference among subjects using standard and ergonomic (split-design) keyboards. The study took place over a two-day period; however, the report suggests further research over a longer period and with subjects who have symptoms of fatigue or discomfort (Gjertsen, 1997; Swanson et al., 1997). Other new keyboard designs include fully adjustable tent-shaped models and those with rearranged keys to reduce finger reaching, but it is not known whether these new designs reduce RSIs (Figura, 1996).
Mouse
Many end users use a mouse at the computer. Place the mouse so that the wrist floats, i.e., position the arm at a 90-degree angle. The mouse should be within easy reach of the user; do not reach across the desk. The forearm may become stressed after long stretches of button pushing. One suggestion is to switch between a trackball and a touch pad to vary the movements required (Lord, 1997).
Recommendations: Radiation Effects on Pregnancy
Even though VDT emission levels are low, the effects of radiofrequency and extremely low-frequency electromagnetic fields are still being debated, and the relationship between VDT use and pregnancy continues to be studied. Some workplace designs are increasing the distance between the operator and the workstation, and between workstations, to reduce the potential electromagnetic field exposure of VDT users (US Department of Labor, 1997).
Recommendations: General
One prevalent theme among ergonomic recommendations is for users to take brief breaks and exercise (for example, stretch). Software is available to remind workers to take a break; two such programs are ScreenPlay and PowerPause, which pop up on the screen at scheduled intervals (Lord, 1997). Such software may either monitor a specified period of computer use or count keystrokes before prompting the user to take a break. Other programs go beyond this and provide stretching-exercise instruction or feedback on the intensity of effort. More advanced software prepares images that indicate postural risk to body parts, which would help health and safety managers and ergonomists assist users with preventive measures (Guarascio-Howard, 1997). Other recommendations, in addition to breaks, include stretching, exercising, and improving posture (Garner, 1997). Tips for reducing discomfort also include training and good workstation and job design (Carter & Banister, 1994). Most of these are common-sense approaches that are not costly, and workers should be encouraged to build them into their work habits.
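To make the reminder mechanism concrete, the following minimal sketch (in Python) shows the timer-based variety of program described above. It is an illustration only, not the actual behavior of ScreenPlay or PowerPause, and the interval constants are assumptions chosen to match the roughly hourly break advice given earlier.

    import time

    WORK_MINUTES = 55    # assumed work interval, per the hourly-break advice
    BREAK_SECONDS = 180  # assumed length of the prompted micro-break

    def break_reminder():
        """Prompt the user to rest at fixed intervals of computer use."""
        while True:
            time.sleep(WORK_MINUTES * 60)  # wait out one work period
            print("Break time: stand, stretch, and refocus your eyes "
                  "on a distant object for a few minutes.")
            time.sleep(BREAK_SECONDS)      # pause before the next cycle

A keystroke-counting variant would instead increment a counter in a keyboard hook and prompt once a threshold is crossed.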
Recommendations: Laptops
There are additional recommendations for laptop computer users. Try to lighten the load: do not carry extra equipment (e.g., an extra battery) unless necessary, shift the load from one side of the body to the other, and use a padded shoulder strap (LaBar, 1997; Center for Office Technology, 1997). When using a laptop at the office, a full-size monitor and keyboard may give the user more comfort from both a visual and a musculoskeletal perspective (Center for Office Technology, 1997). The mobile end user should follow the general guidelines as well as the laptop-specific ones. Managers should not assume that end users will be aware of these recommendations; they should explicitly address the issues with laptop users to minimize the potential risks.
CONCLUSIONS
End users should stay abreast of the ergonomic recommendations made regarding VDT use in the work environment. Many such recommendations were given here, and continued research will provide ongoing information pertinent to this issue. In addition, end users should take an active role in the development and implementation of office ergonomics programs. This will make end users aware of the potential VDT-related problems and will involve them in the development of solutions to those problems. Managers should be proactive in addressing the end-user health issues described. The report on good ergonomics programs provides a basis for developing an ergonomics program in any organization. Managers should include current and future VDT ergonomic recommendations in their organization's ergonomics plan. Many of the current recommendations are not complex and can be incorporated easily; others may require more time and effort on the part of both managers and end users.
Section II The Role of End-User Interface, Training, and Attitudes toward Information Systems Success
Chapter VII
The Role of Training in Preparing End Users to Learn Related Software Packages

Conrad Shayo
California State University, USA

Lorne Olfman
Claremont Graduate University, USA
The aim of this chapter is to determine what types of formal training methods can provide appropriate "mapping via training" of a new but related software application (in this case, a database management system [DBMS]), given that "mapping via analogy" is also taking place. To this end, trainees' existing mental models were measured, and then the trainees were exposed to one of three training methods. Training methods were varied in an experimental setting across two dimensions: the training task context (generic versus relevant), and the number of DBMSs demonstrated (one versus two). Outcomes were measured in terms of learning performance and perceptions of ability to transfer skills to a new but related DBMS. The results indicate that both task context and the number of software packages learned are important training variables that influence trainees' mental models of the software, their transfer self-efficacy expectations, and their perceptions about the usefulness of the training. Copyright © 2002, Idea Group Publishing.
INTRODUCTION
Organizations continually acquire new or related versions of end-user software packages, but little is known about how the existing mental models (knowledge) that users acquired through previous training prepare them to learn a related package. For example, an organization that previously used a DOS-based database management system (DBMS) package like dBase IV might switch to a Windows 95 environment using Microsoft Access. The two DBMSs perform similar functions, but the interface and specific features are different. To what extent can previous knowledge or mental models of the dBase IV software package help or hinder users in learning the Microsoft Access software package? Moreover, what kinds of training methods can best build these mental models? Answers to these questions will help organizations determine their long-term software training strategy.

An end user's mental model of a software package is defined as the existing structure of knowledge (declarative and procedural) that is activated into working memory at any one time when the end user thinks about using or learning a target package to accomplish some task (cf. Rumelhart, 1980; Wilson and Rutherford, 1989). It is a mental representation of existing knowledge about the software package. Mental models are acquired and reinforced through a set of processes called "mapping." Mapping is accomplished by formal training, usage, or calling upon other mental models. This latter process, "calling upon other mental models," is termed "mapping via analogy" (Sein et al., 1987, 1993). For example, computer directory structures can be thought of as analogous to filing cabinets, or word processing as analogous to typing.

The aim of this paper is to determine what types of formal training methods can provide appropriate "mapping via training" of a new but related database management system given that "mapping via analogy" is also taking place. To this end, trainees' existing mental models were measured, and then the trainees were exposed to one of three training methods. Training methods were varied in an experimental setting across two dimensions: the training task context (generic versus relevant), and the number of DBMSs demonstrated (one versus two). Outcomes were measured in terms of learning performance and perceptions of ability to transfer skills to a new but related DBMS. The next section outlines the background to the study, relevant previous literature, and the research hypotheses. This is followed by a description of the research methods, a presentation of results, and a discussion and conclusion on the value of these results.
BACKGROUND
With rapid changes in hardware and software technology, organizations are constantly upgrading and changing their end-user software suites (Lassila and Brancheau, 1999; Benamati and Lederer, 2001). For example, the introduction of Windows 95, Windows 98, and Windows 2000 created a new set of applications that were packaged with new computers (e.g., Microsoft Office software). Users cannot expect to operate only one version of a particular application over their working lives, but instead must confront multiple packages that are similar but not the same. As such, organizations need to consider the best way to train users so that the conversion process is most effective.

Successful formal software training should lead to an accurate mental model of the software (Sein et al., 1987; 1993). The mental model construct is defined in detail in Appendix A. An accurate mental model is expected to increase the trainees': (a) learning performance, (b) attitude toward using the software, and (c) attitude toward using the mental model of the target software to learn new but related software packages. This last outcome has not received attention in software training research. The outcome is important because the mental model of the package formed as a result of the training will most likely influence future learning of similar packages. According to Gick and Holyoak (1987): "any salient similarity of two situations [software packages] will influence their [trainees'] overall perceived similarity, which will in turn affect retrieval of the representation of the training during the transfer task—the greater the perceived similarity of the two situations, the more likely it is that transfer [will be] attempted ... the direction of transfer [will be] determined by the similarity of the two situations with respect to the features causally relevant to the goal or required response in the transfer task" (p. 16). It is reasonable, therefore, to expect trainers to strive to influence the trainees' beliefs about their ability to apply current concepts to future learning of related packages.

This study explores how and to what extent mental models formed by end users in one training program influence their current learning performance and their attitude toward learning a related package. Learning performance is measured in terms of perceived ease of learning (also called self-efficacy1 of learning), perceived usefulness of learning, satisfaction from learning, and behavioral intention to learn. This study extends previous research by Bostrom, Olfman, and Sein (1990) on software training by adding attitude toward learning a related software package as one of the important outcomes of transfer of the knowledge and skills acquired from software training.
RELEVANT PREVIOUS LITERATURE Previous research in human factors and software training indicates that any study that investigates transfer of knowledge from one software package to another should consider both the training task context and the number of software packages used as critical variables.
Training Task Context
Two studies identified in the literature, Olfman and Bostrom (1991) and Lehrer and Littlefield (1993), manipulated the training task context while maintaining the content of the training. Olfman and Bostrom's study was based on subjects who learned to use the Lotus 123 software package; Lehrer and Littlefield's study was based on teaching students the Logo programming language.

Olfman and Bostrom (1991) conducted a field experiment that compared construct-based versus applications-based training. The task context for the construct-based trained subjects consisted of generic problem-solving exercises using the language of a reference manual; such a context was considered less relevant to the trainees. Applications-based trainees were provided with a non-generic training task context (one that was more personally relevant) by asking them to focus on what the software could do for them. The authors reasoned that providing a meaningful or relevant training task context helps adult learners not only to link the knowledge to the familiar, but also to make the familiar one instance of a more general case (Gick and Holyoak, 1987). This also increases their motivation and their perceived confidence (self-efficacy) to perform similar tasks (Carroll, 1990). Their study found only a limited direct effect of applications-based training on later usage of Lotus 123.

In the Lehrer and Littlefield study, one group of subjects was provided with only generic examples and ideas found in Logo-related activities (microcontext-based instruction), while the other group was provided not only with examples and ideas found in Logo, but also with relationships between Logo and non-Logo problems. The authors hypothesized that a more meaningful training task context provides "the likelihood for 'mindful abstraction' from Logo [the target software package] to other related contexts" (Lehrer and Littlefield, 1993, p. 319). Their results showed that effective exercises are the ones that provide a context for trainees to establish specific links between their conceptual and operational knowledge when faced with a problem situation.
Number of Software Packages
A review of the literature on transfer of learning in human factors shows that a significant number of researchers have investigated transfer of learning by studying how trainees' knowledge of one "device" or "simulator" facilitates the learning of another new (related or unrelated) one. Dixon and Gabrys (1991) found significant transfer effects for devices that were procedurally similar but insignificant effects for devices that were conceptually similar. Lintern et al. (1990) found that trainees who received both simulator and conceptual training performed better than those who received conceptual training only.2

Studies in the area of software training that varied the software packages followed the human factors tradition. Three research studies used more than one software package in a laboratory experiment (Gist et al., 1989; Satzinger and Olfman, 1997; Singley and Anderson, 1989). Gist et al. selected software packages that were similar in training content but different in training method (behavioral modeling versus a computer-aided instruction [CAI] tutorial). They found that behavioral-modeling trainees yielded better performance than CAI trainees. Satzinger and Olfman (1997) conducted a study that manipulated two software packages specifically designed for the experiment. The study was a 2 x 2 factorial design with two independent variables (command language and screen appearance), each of which had two levels (consistent versus inconsistent command language; consistent versus inconsistent screen appearance). They found that although there were no differences in user knowledge about the applications, there were differences between inconsistent command languages across the packages. They also found that when both command language and screen appearance were inconsistent, the trainees' perceptions of their ability to transfer their knowledge were lower.

Singley and Anderson (1989) used a set of three commercially available text editors: UNIX "ED," VMS "EDT," and UNIX "EMACS." The first two were line editors and the latter was a screen editor, and the command structure differed markedly between the line editors and the screen editor. All subjects were novices. Initially, subjects were taught a minimum core set of commands for each editor. The experiment was a 2 x 2 factorial design with two control groups. The first factor was the number of line editors the subjects learned to use (one or two), and the second was the initial line editor used (ED or EDT). The control groups learned to use only the screen editor UNIX "EMACS" and did not learn to use the line editors: one control group went directly to the screen editor, while the other practiced typing at the terminal before editing with "EMACS." The authors found near total transfer of skills among line editors, moderate transfer between the line editors and the screen editor, and slight transfer from typing to the screen editor.
HYPOTHESES
Since relevant training contexts and exposure to more than one software package have been shown to enhance training outcomes, it is expected that:3

H1a: Given their pre-existing mental models of specific and general database management systems, subjects who receive training on both Paradox and dBase IV using a relevant training task context will have more adequate database management system mental models than those who receive training on (a) Paradox only using a relevant training task context or (b) Paradox only using a generic training task context.

H1b: Given their pre-existing mental models of specific and general database management systems, subjects who receive training on Paradox only using a relevant training task context will have more adequate database management system mental models than those who receive training on Paradox only using a generic training task context.

H2: Given their pre-existing self-efficacy with respect to database management systems operation, subjects who receive training on both Paradox and dBase IV using a relevant training task context will have higher post-training self-efficacy than those who receive training on (a) Paradox only using a relevant training task context or (b) Paradox only using a generic training task context.

H3: Given their pre-existing expectation of transfer to any other database management system, subjects who receive training on both Paradox and dBase IV using a relevant training task context will have a higher post-training expectation of transfer than those who receive training on (a) Paradox only using a relevant training task context or (b) Paradox only using a generic training task context.

H4: Subjects who receive training on both Paradox and dBase IV using a relevant training task context will have a higher post-training expectation of transfer to Microsoft Access than those who receive training on (a) Paradox only using a relevant training task context or (b) Paradox only using a generic training task context.

H5: Subjects who receive training on both Paradox and dBase IV using a relevant training task context will have a higher perceived usefulness of the training than those who receive training on (a) Paradox only using a relevant training task context or (b) Paradox only using a generic training task context.
RESEARCH METHOD
Research Design
The study was conducted as a laboratory experiment. Originally the experiment was designed as a 2 (generic or relevant task context) x 2 (one or two packages) factorial. However, due to an administrative error, a full data set was not collected, and the data were therefore analyzed as a one-way design with three treatment groups: generic task context with Paradox, relevant task context with Paradox, and relevant task context with Paradox and dBase IV. There was no expectation that subjects receiving a generic task context with two packages would provide different results than those receiving a relevant task context and one package. While two packages should enhance outcomes over one, a relevant task context should enhance outcomes over a generic one; either combination would be more effective than a generic context with one package, but less effective than a relevant context with two packages.4 So a 1 x 3 design using either of these two intermediate combinations could capture the expected interaction effects of the two "mediocre" methods.
Subjects
Undergraduate seniors enrolled in a required introductory Management Information Systems course in the business school of a southwestern state university participated in the experiment. To enroll in this class, students had to complete a prerequisite course in fundamentals of data processing (or its equivalent) and pass a competency exam that evaluated their ability to use word processing, spreadsheet, and relational DBMS packages. All had some previous exposure to Paradox for DOS. Students completed the training described in the experimental procedures below as part of their course work and received 10% credit toward their final grade for participating in the study.
VARIABLES AND MEASURES
Control Variables and Measures
Background Variables. Students completed a "background" questionnaire prior to the experiment. Items measured included past training in, study of, and use of DBMSs; past experience with instructor/video training and instruction; previous computer experience; age; employment; and GPA.

Pre-existing Mental Model. Prior to the experiment, subjects completed two quizzes that were used to assess their mental models of DBMSs. The first quiz was a "teach-back" quiz, a method used by Sein and Bostrom (1989). It is a meaningful way to determine the accessibility and general accuracy of a mental model: subjects are asked to explain (teach) the target software to someone not knowledgeable about it. Here, subjects were asked to explain the basics of DBMSs as well as those of Paradox. The answers provided by the students were coded with respect to four categories,5 each on a scale of 1 to 7. Thus a subject's mental model could be rated as low as 4 and as high as 28. The first author and an independent coder both scored the teach-back quizzes. An 88% agreement was reached in the independent round of scoring, and a 95% agreement was reached in the second round after discussions and negotiations. Where agreement could not be reached, scores were averaged.

The second quiz was a true-false objective test aimed at measuring conceptual and procedural knowledge as per the definition of mental model of Rumelhart (1980) and others; Satzinger and Olfman (1997) used this method. It consisted of 30 questions covering both general knowledge about DBMSs and specific knowledge about Paradox. For each question the student had to give an answer (true or false) and then rate his or her confidence in the answer on a scale of 1 to 5. The answer was weighted by the confidence rating, and students could score between -120 and +150 on this quiz.

The scores on the teach-back quiz and the objective quiz were then recoded on a seven-point scale (see Table 1). The seven-point coding scale was devised to group the possible scores on the quizzes into a specific number of equivalent categories so that the two quizzes could be equally weighted. Once the quiz scores were recoded, the codes were summed to form a total score that represented the quality of the student's mental model as measured by level of understanding, knowledge of DBMSs generally, and knowledge of Paradox in particular. The possible range of mental model scores was 2 through 14.

Self-Efficacy. Relational DBMS (RDBMS) self-efficacy is defined as the belief in one's capability to perform a specific task using an RDBMS package.
Table 1: Ranges for coding quiz scores

            Range of Scores Applicable
Category    Teach-Back Quiz    Objective Quiz
1           4 to 7             -120 to -81
2           8 to 11            -80 to -41
3           12 to 15           -40 to 0
4           16 to 19           1 to 40
5           20 to 22           41 to 80
6           23 to 25           81 to 120
7           26 to 28           121 to 150
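A short sketch (in Python) may make this coding procedure concrete. The bin boundaries below are taken directly from Table 1. The confidence-weighting rule for the objective quiz is not spelled out in the chapter; the rule used here (a correct answer earns +confidence, an incorrect one loses confidence minus one) is an assumption chosen because it reproduces the stated -120 to +150 range over 30 questions.

    # Hedged reconstruction of the quiz coding described above.
    TEACHBACK_BINS = [(4, 7), (8, 11), (12, 15), (16, 19),
                      (20, 22), (23, 25), (26, 28)]
    OBJECTIVE_BINS = [(-120, -81), (-80, -41), (-40, 0), (1, 40),
                      (41, 80), (81, 120), (121, 150)]

    def objective_score(answers, key, confidence):
        """Confidence-weighted true-false score (assumed weighting rule)."""
        return sum(c if a == k else -(c - 1)
                   for a, k, c in zip(answers, key, confidence))

    def to_category(score, bins):
        """Map a raw quiz score to its 1-7 category per Table 1."""
        for category, (low, high) in enumerate(bins, start=1):
            if low <= score <= high:
                return category
        raise ValueError("score outside the coded range")

    def mental_model_score(teachback_raw, objective_raw):
        """Sum of the two recoded categories: the 2-14 mental model score."""
        return (to_category(teachback_raw, TEACHBACK_BINS)
                + to_category(objective_raw, OBJECTIVE_BINS))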
The questions are aimed at the general concept, RDBMSs, rather than at a particular RDBMS. The goal was to learn how well a student believed he or she could transfer existing skills to using a similar package. Self-efficacy assessment is not based on the ease or difficulty of tasks, but on general beliefs about one's ability to use the package. Self-efficacy is one of the key constructs of the social learning theory developed by Bandura (1991). The self-efficacy instrument used in this study was based on Compeau (1992; Compeau and Higgins, 1995). The instrument asked about the subject's ability to use an unfamiliar RDBMS. Subjects were told to imagine that they were given a new RDBMS to do some aspect of their work, and that it did not matter what the particular system was, only that it was intended to make the job easier. Ten scenarios were then provided, and subjects rated their self-efficacy for each scenario on a scale from 0 to 10. The reliability (Cronbach's alpha) coefficient for pre-test and post-test RDBMS self-efficacy is indicated below.

Perceived Ease of Transfer to Any RDBMS.6 Perceived ease of transfer is defined as the degree to which the trainee expects transfer of existing knowledge of RDBMSs to be free of effort. This is an adaptation of the concept of perceived ease of use as developed by Davis (1989). The Davis instrument, as well as the instrument developed by Moore and Benbasat (1991) to assess adoption of technology, was used to devise a measure of "ease of transfer to a related package" rather than "ease of use of a specific package." Because this measure was built from existing, formally validated instruments, further validation was not undertaken. The reliability (Cronbach's alpha) coefficient for pre-test and post-test perceived ease of transfer to any RDBMS is indicated below.
Dependent Variables and Measures
Mental Model. The same teach-back and true-false objective quizzes were administered after the training, and the same procedure as used for the pre-existing mental model scoring was used to calculate the score for the post-training mental model. As with the measurement of the pre-existing mental model, the researcher and another person scored the subjective answers given by the students using a 7-point Likert scale, and the scores were taken as a surrogate indication of the adequacy of the mental model formed by the student. A 71% agreement was reached in the first round, and a 95% agreement was reached in the second round after discussions and negotiations. Where agreement could not be reached, scores were averaged.

Self-Efficacy. The same instrument as defined above was used to measure post-training self-efficacy with respect to using an RDBMS. The reliability (Cronbach's alpha) coefficient for pre-test and post-test RDBMS self-efficacy was 0.81.

Perceived Ease of Transfer to Any RDBMS. The same instrument used to measure pre-test perceived ease of transfer to any RDBMS was used to measure the post-training perception. The reliability (Cronbach's alpha) coefficient for pre-test and post-test ease of transfer to any RDBMS was 0.77.

Perceived Ease of Transfer to Microsoft Access. A more specific instrument than the one used for assessing perceived ease of transfer to any RDBMS was constructed to assess this perception with respect to Microsoft Access.

Perceptions of Usefulness of Training. The Davis (1989) and Moore and Benbasat (1991) instruments were used to devise a measure of perceived usefulness of the training. Instead of measuring the perceived usefulness of a target software package on the job, this instrument measured the perceived usefulness of the training in facilitating the learning of related packages.

Learning Performance. Learning performance was operationalized in terms of changes in the mental models of the students before and after the experiment. The reliability (Cronbach's alpha) coefficient for pre-test and post-test learning performance was 0.70.
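The Cronbach's alpha reliabilities reported above can be computed from raw item responses in a few lines; the following is a generic sketch (in Python) of the standard formula, not the SPSS procedure the authors used.

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for a respondents-by-items score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]                          # number of scale items
        item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
        total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
        return (k / (k - 1)) * (1 - item_var / total_var)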
Independent Variable
Training Method. The training methods differed in terms of the nature of the instructor demonstrations and the training content, as well as the number of software packages demonstrated. The two software packages were Paradox (demonstrated in all cases) and dBase IV (demonstrated in one case). The Paradox video demonstration used a fish classification table, a customer table, and an order table; the dBase IV video demonstration used a stock table and a plant classification table. Experience in teaching students was the basis for believing that these examples would not be considered relevant by the students, and post-experiment interviews indicated that this was in fact the case.
Generic Training Task Context with Paradox. The instructor demonstration showed how to query an inventory database that comes with the Paradox software package. The students had been asked to review this database prior to the experiment and were expected to carry out similar queries in their post-training exercises. The inventory database was not considered relevant by the students; again, this was confirmed by post-experiment interviews.

Relevant Training Task Context with Paradox. The instructor showed how to query a dating clients' database. This was considered relevant by the students, who confirmed it in post-experiment interviews. The students were expected to carry out similar queries in their post-training exercises.

Relevant Training with Paradox and dBase IV. The instructor's demonstration also showed how to query the dating clients' database.
Procedure
Three experimental sessions were held, one for each training method (treatment). The first author conducted all sessions. Students completed the background questionnaire and a review of Paradox prior to arriving at the experimental session. During each session they were shown a video of the target software package and were thereafter provided with a demonstration of that package. For the session where two packages were presented, the Paradox video and demonstration were followed by the dBase IV video and demonstration. Both the videos and the demonstrations used the specific examples described in the "Independent Variable" section above. After the session, students completed the dependent measures. Two weeks after the training, six subjects were selected at random from each treatment group for a post-experiment debriefing; the sampling was stratified by the subjects' computer experience.
Data Analysis
An analysis was conducted to check the consistency of the "average" backgrounds of the various student groups. Analyses of mental model, self-efficacy, and expectations of transfer with respect to any RDBMS were carried out using one-way (1 x 3) analysis of covariance, with the pre-test as the covariate and the training method as the factor. Analyses of expectations of transfer to Microsoft Access and perceptions of the usefulness of the training were carried out using one-way (1 x 3) analysis of variance, with the training method as the factor. The critical value for significance (alpha) was set at 5%. All statistical analyses were done using SPSS for Windows (Norusis, 1993).
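As a point of reference, the one-way ANCOVA described here can be reproduced with open-source tools. The sketch below uses Python's statsmodels rather than the SPSS procedure actually employed; the file and column names (training_scores.csv, post_mm, pre_mm, group) are hypothetical.

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("training_scores.csv")  # hypothetical data layout

    # Post-training mental model regressed on the pre-training covariate
    # and the three-level training-method factor.
    model = smf.ols("post_mm ~ pre_mm + C(group)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))  # F and p for covariate and factor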
RESULTS
Description of Subjects
Originally 59 subjects participated in the experiment. Due to administrative errors and lack of cooperation, between 36 and 44 subjects completed all relevant dependent and control measures, depending on the set of measures. An examination of demographic and background data for equivalency of the experimental groups, using two-tailed tests (for interval-level data) and chi-square tests (for nominal-level data) at α = 0.05, revealed no significant differences.
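A sketch of these equivalence checks, and of the Levene variance check reported in the next subsection, again in Python's scipy in place of SPSS. The data values are hypothetical, and the use of a t-test for the interval-level comparisons is an assumption, since the chapter says only "two-tailed tests."

    from scipy import stats

    # Hypothetical interval-level background measure (e.g., age), by group:
    group1_age = [22, 23, 25, 24, 22]
    group2_age = [23, 24, 22, 26, 25]
    t, p = stats.ttest_ind(group1_age, group2_age)  # two-tailed by default

    # Hypothetical nominal-level measure as a groups-by-category table:
    counts = [[6, 4], [7, 3], [9, 7]]
    chi2, p, dof, expected = stats.chi2_contingency(counts)

    # Levene's test for equality of variance across the three groups:
    w, p = stats.levene([7, 9, 8, 6], [8, 10, 9, 11], [11, 12, 10, 13])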
Results of Statistical Tests
Six hypotheses were proposed for testing. Since equality of variance is an assumption of ANOVA, Levene's test for equality of variance was performed to check whether the assumption was met. Although some of the experimental groups initially displayed a tendency toward larger variance, Levene's test was not significant for any of the three experimental groups. Therefore, the assumption of equality of variance was supported.

Performance Outcome
Cell data for the pre-training mental model (the covariate) are shown in Table 2, and cell data for the dependent variable measure of post-training mental model are shown in Table 3. The means, standard deviations, and cell sizes are shown. Recall that the possible range for the mental model scores was 2 through 14. Analysis of covariance resulted in a highly significant effect of training (F(2,35) = 29.27, p < 0.001; Table 4). The covariate, pre-training mental model, was significant (p < 0.001).

Table 2: Descriptive statistics for pre-training mental model (pre-MM)

          Group X1   Group X2   Group X3   Total
Mean      7.5        7.1        7.3        7.3
S.D.      2.0        1.7        1.5        1.7
Number    10         10         16         36

(See endnote 7 for the definitions of Groups X1, X2, and X3.)
Table 3: Descriptive statistics for post-training mental model (post-MM)

                Group X1   Group X2   Group X3   Total
Mean            7.5        9.2        11.2       9.3
Adjusted Mean   7.4        9.3        11.2       9.7
S.D.            1.6        2.0        1.1        1.6
Number          10         10         16         36
A Scheffé paired comparison of Group X1 versus Group X2 was significant (p < 0.002), and another Scheffé paired comparison of Group X2 versus Group X3 was significant (p < 0.001). Therefore, hypothesis H1a (that trainees receiving a relevant task context and two software packages will have more adequate post-training mental models than those receiving one software package, regardless of task context) is supported. H1b (that trainees receiving one software package in a relevant training task context will have more adequate post-training mental models than those in a generic task context) is also supported.

Perception Outcomes
Hypothesis H2 examines the influence of a relevant training task context and two packages (Paradox and dBase IV) on the post-training self-efficacy of the trainees, given their pre-training self-efficacy with respect to database management systems operation. Cell data for pre-training and post-training self-efficacy are shown in Tables 5 and 6, respectively.8 Analysis of covariance revealed no significant effect of training method (F(2,43) = 0.14, p = 0.874; Table 7). The covariate, pre-training self-efficacy, was significant (p < 0.001). H2 is therefore not supported.

Hypothesis H3 examines the influence of a relevant training task context and two software packages (Paradox and dBase IV) on post-training expectations of transfer to any other RDBMS, given pre-training expectations of transfer. Cell data for pre-training and post-training expectations of transfer are shown in Tables 8 and 9, respectively.
Table 4: ANCOVA of post-MM by training method (TM) with pre-MM as a covariate

Source of Variation    SS       DF   MS      F       p
TM                     92.62    2    46.31   29.27   <0.001
Pre-MM                 25.06    1    25.06   15.84   <0.001
Total                  168.31   35   4.81
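The Scheffé paired comparisons reported above can be reconstructed from the group means, cell sizes, and error mean square. The helper below is a generic sketch (in Python) of the standard Scheffé criterion with illustrative inputs: the adjusted means come from Table 3, while the error mean square and its degrees of freedom are assumed values, since the chapter does not report them directly.

    from scipy import stats

    def scheffe_pairwise(means, ns, mse, df_error, alpha=0.05):
        """Scheffé test for all pairwise contrasts among k group means."""
        k = len(means)
        f_crit = stats.f.ppf(1 - alpha, k - 1, df_error)
        results = {}
        for i in range(k):
            for j in range(i + 1, k):
                diff = means[i] - means[j]
                se2 = mse * (1 / ns[i] + 1 / ns[j])  # contrast variance
                f_stat = diff ** 2 / se2 / (k - 1)   # Scheffé F statistic
                results[(i, j)] = (diff, f_stat, f_stat > f_crit)
        return results

    # Adjusted means from Table 3; mse and df_error are assumed values.
    print(scheffe_pairwise(means=[7.4, 9.3, 11.2], ns=[10, 10, 16],
                           mse=2.5, df_error=32))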
Table 5: Descriptive statistics for pre-training self-efficacy expectation (pre-SE)

          Group X1   Group X2   Group X3   Total
Mean      51.5       59.2       65.3       60.0
S.D.      21.8       20.7       18.5       1.7
Number    10         16         18         44
Table 6: Descriptive statistics for post-training self-efficacy expectation (post-SE)

                Group X1   Group X2   Group X3   Total
Mean            54.6       60.9       65.9       61.5
Adjusted Mean   58.9       60.6       62.0       60.5
S.D.            16.7       22.8       16.0       19.0
Number          10         16         18         44

(Adjusted means were calculated by the SPSS analysis of covariance procedure; see endnote 8.)
Table 7: ANCOVA of post-SE by TM with pre-SE as a covariate

Source of Variation    SS         DF   MS       F       p
TM                     59.34      2    29.67    0.14    0.874
Pre-SE                 6598       1    6598     29.99   <0.001
Total                  15458.98   43   359.51
Table 8: Descriptive statistics for pre-training expectations of transfer to any RDBMS (pre-any)

          Group X1   Group X2   Group X3   Total
Mean      28.0       31.2       33.4       31.4
S.D.      7.2        10.0       9.4        9.2
Number    10         16         18         44
Table 9: Descriptive statistics for post-training expectations of transfer to any RDBMS (post-any)

                Group X1   Group X2   Group X3   Total
Mean            33.4       33.4       37.0       34.9
Adjusted Mean   34.6       33.2       37.0       34.9
S.D.            6.4        7.1        7.3        7.1
Number          10         16         18         44
Analysis of covariance revealed no significant effect of training method (F(2,43) = 0.89, p = 0.420; Table 10). The covariate, pre-training expectations of transfer, was significant (p < 0.001). H3 is therefore not supported.

Hypothesis H4 examines the influence of a relevant training task context and two software packages (Paradox and dBase IV) on post-training expectations of transfer of skills to Microsoft Access, a DBMS that none of the students had used. Cell data for post-training expectations of transfer to Microsoft Access are shown in Table 11.
Table 10: ANCOVA of post-any by TM with pre-any as a covariate

Source of Variation    SS        DF   MS      F       p
TM                     61.04     2    30.52   0.89    0.420
Pre-Any                722.1     1    722.1   20.96   <0.001
Total                  2161.18   43   50.26
Table 11: Descriptive statistics for post-training expectations of transfer from Paradox or dBase IV to MS Access (post-access)

                Group X1   Group X2   Group X3   Total
Mean            31.3       33.0       37.3       34.4
Adjusted Mean   31.8       33.0       37.0       33.7
S.D.            4.6        4.3        5.2        5.3
Number          10         16         18         44
Analysis of covariance resulted in a significant effect of training method (F(2,51) = 7.95, p = 0.017; Table 12). The covariate, pre-training self-efficacy expectation, was also significant (p = 0.012). H4 is therefore supported.

Hypothesis H5 examines the influence of a relevant training task context and two software packages (Paradox and dBase IV) on the trainees' perceived usefulness of the training in helping them transfer their knowledge to learning other similar packages. Cell data for perceived usefulness of the training are shown in Table 13. Analysis of variance revealed no significant effect of training method (F(2,43) = 0.12, p = 0.887; Table 14). H5 is therefore not supported.
DISCUSSION, LIMITATIONS, AND CONCLUSIONS
Discussion
The mean scores of subjects who viewed two software packages and received a relevant training task context were higher on all dependent measures than those of subjects who did not receive that treatment, irrespective of the adequacy of the mental models they brought to the training. The conclusion is that trainees shown a demonstration of the same task being performed with two related but different software packages will most likely capture the general structural rules common to both.
Table 12: ANCOVA of post-access by TM with pre-SE as a covariate

Source of Variation    SS        DF   MS      F      p
Pre-SE                 149.9     1    149.9   6.86   0.012
TM                     196.34    2    98.17   4.5    0.017
Total                  1586.92   43   31.17
Table 13: Descriptive statistics for perceived usefulness of the training (PUT)

          Group X1   Group X2   Group X3   Total
Mean      33.6       32.7       33.2       33.1
S.D.      1.7        3.3        6.5        4.6
Number    10         16         18         44
Table 14: ANOVA of PUT by TM

Source of Variation    Sum of Squares   DF   Mean Square   F      Sig. of F
Training Method        5.3              2    2.65          0.12   0.887
Total                  905.64           43   21.96
A relevant task context will most likely help learners increase their self-confidence to perform similar tasks, as well as their motivation to transfer their knowledge when faced with performing such tasks using a related package. Hence, the treatment group was more likely to have higher self-efficacy expectations, higher perceptions of transfer of their knowledge to other software packages in the same family, and a more accurate post-training mental model. Where some of these expected differences were non-significant, the means were in the expected direction.

Trainees who viewed two software packages and received a relevant training task context scored significantly higher on the teach-back and objective quizzes, thus exhibiting a more accurate post-training mental model, than those who did not, irrespective of their pre-training mental models. These subjects seem to have used existing cues to make more sense of the training. In the debriefing interviews, trainees who performed poorly on the pre-training teach-back and objective quizzes blamed their scores on forgetfulness and the lapse of time since they last learned about DBMSs and RDBMSs.
Trainees in the relevant training task context and two software packages treatment exhibited significantly higher expectations of transfer of their knowledge from Paradox or dBase IV to MS Access. This result is noteworthy since, as noted by Ajzen (1991), there is a distinction between a person's attitude toward an object (Ao) and the person's attitude toward behavior involving that object (AB). In this study, perceived ease of transfer to any RDBMS was included to represent Ao, i.e., the subject's ability to use RDBMSs in general, while perceived ease of transfer to MS Access was included to represent AB, i.e., the subject's evaluation of actual behavior in using a specific RDBMS. As postulated by Ajzen, AB > Ao, which holds in this instance.

The main effects of training method on post-training self-efficacy and perceived usefulness of the training were non-significant, though the means were in the expected direction. Follow-up debriefing interviews found that some students were more interested in figuring out how to complete their class assignment than in learning how to use RDBMSs in general. According to Carroll (1990), subjects come to software training with their own goals and expectations and learn to fulfill them. Such personal agendas and concerns could have structured the trainees' self-efficacy expectations and their perceived usefulness of the training, as well as their use of the training materials. Carroll also found that trainees who have incomplete knowledge about a particular domain "reason on the basis of poor information" and thus tend to overestimate the amount of effort required to perform in that domain. It seems that this is what took place here. Therefore, the results reported in this study should be taken with caution.

The trainees do not seem to appreciate the overviews, reviews, and previews. They want to do their own work. They come to the training with their personal agenda of goals and concerns that can structure their use of the training materials… (Carroll, 1990, p. 26).
Limitations
Statistical power was a problem for this study because of the small cell sample sizes (Stevens, 1990). The statistical power of the tests of the unsupported hypotheses, i.e., H2, H3, and H5, was relatively low, as were the effect sizes. Although the threats of maturation, instrumentation, mortality, and compensatory rivalry by respondents receiving less desirable treatments can be ruled out, the other internal validity threats of history, testing, statistical regression, selection, and diffusion or imitation of treatments could not be ruled out completely.
The construct validity threats of experimenter expectancies, confounding constructs with levels of constructs, and interaction of different treatments could also not be ruled out completely. Only generic and relevant training task contexts were investigated, and no attempt was made to operationalize a range of tasks along the generic-to-relevant task continuum. In addition, trainees did not have hands-on experience, which is typical of many training settings. The external validity threat of interaction of selection and treatment may be ruled out because most students qualified as end users, since they had regular jobs that required them in part to use computers. The external validity threat of the interaction of setting and treatment cannot be ruled out because the context of the experiment was a college course. Also, since learning related packages is an ongoing process for most end users, the external validity threat of the interaction of history and treatment could not be ruled out.
CONCLUSION
The thrust of this research was to develop a better understanding of the influence of mental models on learning performance and motivation to learn related software, specifically RDBMSs. The general question was: how can the effectiveness of formal software training be enhanced for learners of packages that belong to the same family? The findings indicate that the accuracy of the mental model end users bring to formal software training has an impact on learning performance and motivation toward learning a related package. Moreover, providing trainees with a demonstration of more than one package does reinforce their transfer expectations.

The practical implication of this research is that providing trainees with a demonstration of more than one package and a more relevant task context produces higher learning outcomes and increases the trainees' perceptions of ease of transfer and their self-efficacy expectations. The theoretical implication is that the findings are generalizable to mental model theories, such as schema theory, and behavioral theories, such as the theory of planned behavior.

Three directions for future research are suggested by this study. First, there is a need to replicate the study across different settings and populations in order to establish the generalizability of its findings. Second, a longitudinal study should be carried out to establish whether the mental model and behavioral theories used here do in fact enhance the effectiveness of formal software training. Third, an extension of this study should include a 2 (generic vs. relevant task context) x 2 (goal-directed vs. non-goal-directed error recovery) factorial study. Such a study would examine whether a goal-directed error recovery strategy (Sein and Santhanam, 1999) complemented with a relevant task context enhances the effectiveness of formal software training more than other formal software training strategies.
APPENDIX A: MENTAL MODELS
Successful formal software training should lead to an accurate mental model of the software (Sein et al., 1987, 1993). One objective of a formal software training program is to embed knowledge and skills (declarative and procedural) of the target software package in the form of accurate mental models (Borgman, 1986; Bostrom et al., 1990; Norman, 1983; Staggers & Norcio, 1993) or schemata (Kieras & Polson, 1985; Rumelhart, 1980). According to schema theory (Rumelhart, 1980), schemata are used to interpret incoming sensory data, retrieve information from long-term memory into working memory, and organize actions and strategies. Each schema is nested in a network of sub-schemata that form the individual's knowledge and skills about the specific software domain. A schema is activated only when the incoming information maps to the existing declarative and procedural knowledge embedded in the schema. Once activated, the schema network (sub-schemata) is able to make inferences that explain additional information by invoking other associated sub-schemata that map the external information. A mental model of an external situation is the set of all sub-schemata activated at any one time (Rumelhart, 1980; Rumelhart & Norman, 1981) to address that specific situation. It is a dynamic expression of the culmination of all processes taking place in the schemata (Sein, 1988).

End users' use of their mental models as reasoning aids in using or learning a target software package is referred to as transfer of training9 (Allwood, 1990; Waern, 1985; Brooks & Dansereau, 1987). When end users attend formal software training, they bring with them their existing mental models, which can enhance or hinder the learning of another software package. Trainees' current mental models can have a positive, negative, or zero impact on new learning (Ellis, 1965). Software trainers should design software training programs that facilitate positive transfer. Positive transfer can be facilitated by providing the trainees with relevant and accurate conceptual10 models (Bostrom et al., 1990). Conceptual models are the "instructors' ... invented model[s] of the system created for design or instruction purposes ..." (Staggers & Norcio, 1993, p. 590). A number of studies have found that trainees perform better when they are provided a conceptual model before using a system (Mayer, 1979; Moran, 1981).
APPENDIX B: PERCEPTION MEASURES
B.1 Perceived Ease of Transfer from Paradox or dBase IV to Any RDBMS
Use the following statements to evaluate your perceived ease of transfer of your knowledge from Paradox (or dBase IV) to any other RDBMS you are unfamiliar with. For each statement, place an X in the box that most closely describes the likelihood of that statement being true in the future…
Scale: Agree (Extremely, Quite, Slightly, Neutral, Slightly, Quite, Extremely) Disagree
1. I believe that transferring my current knowledge of Paradox (dBase IV) to the learning of any new RDBMS will not be cumbersome.
2. It will be easy for me to use my knowledge of Paradox (dBase IV) to figure out how to perform tasks using any new RDBMS.
3. Using a new RDBMS to accomplish a task I can accomplish using Paradox (dBase IV) will be frustrating.
4. Given my knowledge of Paradox (dBase IV), my interaction with any new RDBMS will be clear and understandable.
5. I believe that it is easy to use my knowledge of Paradox (dBase IV) to get any RDBMS to do what I want it to do.
6. Overall, I believe that my knowledge of Paradox (dBase IV) makes any RDBMS easy to use.
7. Given my knowledge of Paradox (dBase IV), learning to operate any RDBMS will be easy for me.
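For illustration, one possible scoring of this seven-item scale follows (in Python). The mapping of the seven response boxes to 1-7 and the reverse-coding of the negatively worded item 3 are assumptions; the chapter reports only summed scale scores.

    REVERSED = {3}  # item 3 ("will be frustrating") assumed negatively worded

    def score_transfer_scale(responses):
        """Sum item responses (1 = Extremely Disagree ... 7 = Extremely Agree),
        reverse-coding the negatively worded item."""
        return sum(8 - value if item in REVERSED else value
                   for item, value in responses.items())

    # Example: a respondent who broadly agrees with the positive items.
    print(score_transfer_scale({1: 6, 2: 7, 3: 2, 4: 6, 5: 5, 6: 6, 7: 7}))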
B.2 Perceived Ease of Transfer from Paradox to MS Access
Use the following statements to evaluate your perceived ease of transfer of your knowledge from Paradox (or dBase IV) to MS Access. For each statement, place an X in the box that most closely describes the likelihood of that statement being true in the future…
1. I believe that transferring my current knowledge of Paradox to the learning of MS Access will not be cumbersome.
2. It will be easy for me to use my knowledge of Paradox (dBase IV) to figure out how to perform tasks using MS Access.
3. Using MS Access to accomplish a task I can accomplish using Paradox (dBase IV) will be frustrating.
4. Given my knowledge of Paradox (dBase IV), my interaction with MS Access will be clear and understandable.
5. I believe that it is easy to use my knowledge to get MS Access to do what I want it to do.
6. Overall, I believe that my knowledge of Paradox (dBase IV) will make MS Access easy to use.
7. Given my knowledge of Paradox (dBase IV), I believe learning to operate MS Access will be easy for me.
B.3 Perceived Usefulness of the Training
Use the following statements to evaluate the perceived usefulness of your experience in this training. For each statement, place an X in the box that most closely describes the likelihood of that statement being true in the future…
1. Using this training approach would enable me to transfer knowledge across different RDBMSs more quickly in the future.
2. Using this training approach would enable me to have more meaningful learning of RDBMSs like MS Access.
3. Using this training approach would reduce the time needed to learn new RDBMSs like MS Access.
4. Using this training approach would enhance my effectiveness as a learner of new RDBMSs like MS Access.
5. Using this training approach would make it easier to learn new RDBMSs like MS Access.
6. I would find this training approach useful in my future learning of MS Access.
ENDNOTES
1. Self-efficacy refers to "beliefs in one's capabilities to mobilize the motivation, cognitive resources, and courses of action needed to meet given situational demands" (Gist & Mitchell, 1992, p. 184). The construct is based on the premise that if "people believe they can take action to solve a problem instrumentally, they become more inclined to do so and feel more committed to this decision" (Schwarzer, 1992, p. xi). Instrumentality refers to expected performance for a given amount of effort.
2. For more examples of studies in human factors that used more than one device, see Simon and Roscoe (1984).
3. Hypotheses are stated in directional rather than null form. Descriptions of each of the independent and dependent variables and measures are provided in the "Research Method" section.
4. We used generic task context with Paradox, relevant task context with Paradox, and relevant task context with Paradox and dBase IV.
5. The four categories were: 1) level of understanding of the definition of DBMSs, 2) level of understanding of the definition of RDBMSs, 3) clarity and comprehensiveness in describing a specific RDBMS to a naive colleague who does not understand computers, and 4) clarity and comprehensiveness in describing a specific RDBMS to a colleague who knows how to use spreadsheet and word processing systems.
6. This instrument and all other instruments measuring trainees' perceptions are included in Appendix B.
7. X1 = Generic Training Task Context, Paradox; X2 = Relevant Training Task Context, Paradox; and X3 = Relevant Training Task Context, Paradox and dBase IV.
8. Adjusted means were calculated by the SPSS Analysis of Covariance procedure.
9. According to Fleishman (1987), the terms "transfer of training" and "transfer of learning" are used interchangeably.
10. Herein, the term "conceptual model" implies both declarative (content-independent) and procedural (content-dependent) information about the software package.
Chapter VIII
The Human Side of Information Systems Development: A Case of an Intervention at a British Visitor Attraction

Brian Lehaney
Coventry University, UK

Steve Clarke
The University of Luton Business School, UK

Sarah Spencer-Matthews
Swinburne University of Technology, Australia

Vikki Kimberlee
The University of Luton Business School, UK

Information systems (IS) are growing in importance within the tourism industry, where one key application is database marketing. Evidence from the IS domain suggests systems failure may be due, at least in part, to concentration on technical rather than human issues in the development process. Through an empirical study of visitor attractions in the United Kingdom, the need for a more human-centered approach to IS development is supported, and an example of such an approach is outlined. Both in-depth focus group analysis and a broader questionnaire survey are used and lend weight to the human-centered arguments. Copyright © 2002, Idea Group Publishing.
From the analysis of a failed tourism database marketing information system, and from evidence of similar successful systems, the value of technology-enabled database marketing within the sector is demonstrated, but its success is seen to rest on participative, human-centered approaches to development.
INTRODUCTION
This paper investigates the use and success of information systems in the tourism industry. As a fairly young growth industry, tourism may be able to learn from some of the pitfalls already experienced in other sectors, where "analysis has been driven by what is technically possible rather than by what is organizationally desirable. The consequences of this include a number of failed investments in information systems, the disenfranchisement of management, and an accepted use of developmental methods that are insensitive to the social and political contexts within which the information systems are to be used" (Lewis, 1994, p. 2). This paper suggests that information systems within tourism may reduce the possibility of failure through the use of participative and holistic approaches to development which address end-user issues through the so-called "soft," or human-centered, methods. The advantages to be gained can be judged from a brief analysis of the value of tourism to the UK economy. In 1991, 16.6 million foreign visitors are estimated to have spent £7,168 million within the UK, whilst domestic tourism accounted for 94.4 million trips and an additional £10,470 million of expenditure (British Tourist Authority, 1994, p. 1). In 1995 tourism produced 5% of Gross Domestic Product (1% up on the previous year's figure), providing £25 billion to the economy (The Times, 1996). The World Travel and Tourism Council (WTTC) state that the industry is a "key economic driver," and by the end of 1997, they expected it to be generating 11.6% of Britain's Gross Domestic Product (The Times, 1996). In the same article the WTTC said, "Travel and Tourism is a key to future economic growth" (The Times, 1996, p. 12). By any standards, travel and tourism are now major components of the UK economy, and information systems play an important role in their future success. The paper begins with a general review of information systems within tourism, taken from the relevant theoretical and empirical literature. The incidences and causes of information systems failure are then outlined and are seen to indicate human-centered methods as relevant within this domain. An analysis of human-centered approaches to information systems design,
development, and implementation leads to the chosen methodologies for the study, which are strongly participant-focused. The specific area addressed by this paper is visitor attractions—a significant growth segment within the UK tourism industry. The empirical study is composed of two components. Firstly, focus group sessions were used at a major visitor attraction which had been the subject of a failed database marketing system. Secondly, a self-administered questionnaire was sent to the top twenty visitor attractions in the UK, achieving a 65% response rate, from which a broader perspective on the use of database marketing within the visitor attraction sector was elicited.
INFORMATION SYSTEMS AND THE TOURISM INDUSTRY
Electronic media are emerging as channels with potential for an important dialogue between buyer and seller, technology permitting information to flow in both directions between the customer and the company. They help create the feedback loop that integrates the customer into the company, allows the company to own a market, permits customization, creates a dialogue, and turns a product into a service and a service into a product. Database marketing is widely used within the tourist industry. It may be defined as "the ability to use the vast potential of today's computer and telecommunication technology in driving customer-orientated programs in a personalized, articulate and cost-effective way" (Fletcher, 1995, p. 301). With the technology available today, organizations can hold electronic information concerning potential customers and use it for marketing purposes. Kotler (1991, p. 54) defines a marketing database as "an organized collection of data about individual customers, prospects, or suspects that is accessible and actionable for marketing purposes." Database marketing provides an edge in finding out more about customers and gives the opportunity to build customer loyalty: "A growing number of marketers are investing heavily in creating databases that enable them to determine who their customers are, to record details of their preferences and behaviors, and to serve them in ways that may create long-term loyalty" (Berry, 1996, p. 422). Database marketing uses statistical and modelling techniques both to support the development of cost-effective marketing programs that communicate directly with targeted customers, and to track and evaluate the results of specific promotional efforts. The aim is not simply to sell, but to build up a relationship with existing and potential consumers (Fletcher, 1995).
Companies therefore need to collect quite detailed information about clients on such a database system. Implicit in this is the need to know customers in a personalized way: each record on the database contains information about the customer's name and address, but also contains information about the customer's needs, characteristics, and previous buying behavior. According to Fletcher (1995), the features of database marketing are that
• advertising and selling are combined;
• the results are measurable and therefore effectiveness can be tested;
• it is selective, assuming a suitable list or customer database is available; and
• it is flexible, in both timing and objectives, and therefore controllable.
The marketing database is complementary to other elements of the promotional and marketing mix, allowing a planned and integrated campaign, and, as such, it must be part of an information system. An up-to-date database allows for more informed strategic marketing decisions, allowing organizations to be more outward looking, and enabling future marketing strategies to be based on an exact population, reducing the need for ad hoc market research. Selective database marketing has, for example, transformed airline companies: having discovered that 80% of their business was coming from 20% of their customers (the "frequent flyers"), airlines developed databases to capture information on individual travellers, offered "frequent flyer" rewards, and thereby developed brand loyalty. It is evident, therefore, that the tourism industry could benefit from the effective and efficient use of database marketing, which must be part of a reliable information system. However, before developing an approach for the design and implementation of such an information system for application to visitor attractions, the next section first considers information systems in general, and investigates reasons for failure, thereby providing a sound theoretical and practical basis for the study.
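To make the customer-record structure just described concrete, the following is a minimal sketch of such a marketing database. It is our own illustration, not taken from the study: all table and field names are hypothetical, and Python's standard sqlite3 module simply stands in for whatever RDBMS an attraction might use.

    import sqlite3

    # An in-memory database for illustration; a real marketing database
    # would persist to disk within the attraction's information system.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE customer (
            customer_id INTEGER PRIMARY KEY,
            name        TEXT,
            address     TEXT,
            preferences TEXT   -- stated needs and characteristics
        );
        CREATE TABLE visit (
            customer_id INTEGER REFERENCES customer(customer_id),
            visit_date  TEXT,
            spend       REAL   -- previous buying behavior
        );
    """)

    # Hypothetical sample records.
    conn.executemany("INSERT INTO customer VALUES (?, ?, ?, ?)",
                     [(1, "A. Smith", "Luton", "family tickets"),
                      (2, "B. Jones", "Coventry", "season pass")])
    conn.executemany("INSERT INTO visit VALUES (?, ?, ?)",
                     [(1, "1998-05-01", 42.0),
                      (2, "1998-05-02", 18.5),
                      (2, "1998-06-11", 55.0)])

    # The airlines' "frequent flyer" idea in query form: rank customers
    # by visits and total spend so promotions can target the top band.
    for name, visits, total in conn.execute("""
            SELECT c.name, COUNT(*) AS visits, SUM(v.spend) AS total
            FROM customer c JOIN visit v ON v.customer_id = c.customer_id
            GROUP BY c.customer_id
            ORDER BY total DESC"""):
        print(name, visits, total)

Selecting the top spenders from such a query is the visitor-attraction analogue of the 80/20 airline segmentation described above.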
INFORMATION SYSTEMS FAILURE
An information system can be defined as: "a system to collect, process, store, transmit and display information" (Avison, 1990, p. 3); or "any system which assembles, stores, processes and delivers information relevant to an organization (or to society), in such a way that information is accessible and useful to those who wish to use it, including managers, staff, clients, and citizens. An information system is a human activity (social) system which may or may not involve the use of computers" (Buckingham, 1987, quoted in Avison, 1990, p. 5).
Information technology is concerned with technical perspectives and includes telecommunications, computers, and automation technologies. Peppard (1993, p. 5) defines it as "the enabling mechanism which facilitates the processing and flow of this information, as well as the technologies used in the physical processing to produce a product or provide a service." This is an important distinction: information systems may depend upon information technology, but from a business perspective the technology should be considered only in so far as it better enables the information system to function. According to a study carried out in the UK by OASIG (1996) (a special interest group of the Operational Research Society in the UK concerned with the organizational aspects of IT), up to 90% of information technology projects fail to meet their goals, 80% are late and over budget, and 40% are abandoned. This view is supported by Griffiths (1995): "Major IT projects offer the potential for huge returns, yet very few seem to come to successful conclusions." Further evidence comes from Hayes (1996, p. 1): "Many IS functions are trapped in a downward spiral characterized by deteriorating relationships with users. These relationships consume valuable time and energy in resolving conflict, leaving fewer resources to work on actual services. The result is a negative state of hostility, conflictual interactions, and lower productivity." Sauer (1994) highlights an example of information systems failure within the tourism industry. Swissair wanted a personnel and payroll system, and the system developers were required to liaise with two departments: personnel and finance. But the two departments had different objectives, which led to conflicting requirements of the system. After a review of the situation by the company, the initial budget of 5.3 million Swiss francs needed to be supplemented with a further 3.6 million Swiss francs, reducing the return on investment from 3% to zero. Finally, the whole project was terminated. Lyytinen and Hirschheim (1987, p. 299), from a comprehensive survey of the empirical literature concerning information systems failures, conclude that "many reports show that somewhere between one-third to half of all systems fail, and some researchers have reported even higher failure rates." The reasons behind these IS failures vary enormously, but Lyytinen and Hirschheim (1987) argue that they can all be seen ultimately as failures of expectation: that is, the final system did not meet the expectations of the stakeholders. The inference to be drawn from this and other studies (see Clarke and Lehaney, 1999, for a summary) is that human-centered methods have much to offer in such situations.
In any human activity system, it is argued, the interest of the participants has a dominant influence in determining the success or otherwise of the system. Ownership of the system conveys a willingness to support the system but does not ensure its success. This paper argues that a prime reason for "failure" is that IS development methodologies do not place enough emphasis on human interests. This view is supported by the OASIG study, which suggests that "the performance gap is rarely caused by the technology itself. The heart of the problem is the lack of attention given to the crucial role played by human and organizational factors in shaping the outcomes of information technology developments" (OASIG, 1996, p. 12).
HUMAN-CENTERED APPROACHES
Soft or human-centered systems thinking emphasizes the "system" as a device for thinking about part of the world. This contrasts with so-called "hard" or technology-based thinking, which sees its objects of study as a "true" description of part of the "real" world. Human-centered methods are intended to avoid confrontation by facilitating discussion and learning and by incorporating values and beliefs within an enquiry, and are therefore an approach to learning rather than problem solving, which "declines to accept the notion of 'the problem'" (Checkland, 1993, p. 279). They work with the notion of a situation in which various actors may perceive various aspects to be problematic and try to provide help in getting from a position of finding out about the situation to a position of taking action in the situation: "emphasis is thus not on any external reality but on people's perceptions of reality, on their mental processes rather than on the objects of these processes" (Checkland, 1993, p. 279). This represents a major shift in analysis, arguing that, when studying organizations as human activity or social systems, a fundamentally different approach is required, since human beings have consciousness and free will, and place meanings and interpretations on their world. Therefore, in a study of human organizations, the meanings created by those involved, and the perceptions that arise from them, cannot be excluded from analysis. Within the domain of information systems, human-centered methods have been applied extensively, particularly during the last twenty years. The focus of such methods has been to find out, by a process of debate, the needs of participants in the IS development process (for a deeper discussion of these issues, see Clarke and Lehaney, 1998). Approaches to this include the use of evaluation methods such as focus groups, lateral thinking, and brainstorming (de Bono, 1977), as well as fully developed methodologies such as ETHICS (Mumford, 1985) and Multiview (Wood-Harper et al., 1985).
The value of a human-centered perspective is clear. How such a perspective was used in developing a better understanding of the failure of a marketing information system in a visitor attraction is investigated below.
VISITOR ATTRACTIONS
Visitor attractions "are provided by a diverse group of organizations, both in terms of ownership and size, and offer a variety of experiences to the visitor that includes sheer hedonistic pleasure, entertainment, and education" (British Tourist Authority, 1994, p. 2). There is no one accepted definition of a visitor attraction, but the following two definitions convey the notion well: "A designed permanent resource which is controlled and managed for the enjoyment, amusement, entertainment, and education of the visiting public" (Middleton, 1988, p. 7) and "a permanently established excursion destination, a primary purpose of which is to allow public access for entertainment, interest or education, rather than being principally a retail outlet or a venue for sporting, theatrical, or film performances. It must be open to the public without prior booking, for published periods each year, and should be capable of attracting tourists and day visitors as well as local residents" (Scottish Tourist Board, 1991, p. 12). Swarbrooke (1995, p. viii) identifies a number of key themes that are central to the understanding of managed visitor attractions:
• all attractions exist within a rapidly changing business environment which requires them to be constantly vigilant so that they can anticipate and respond to these changes;
• attractions exist within a competitive market, even if it is not always easy to identify the competitors; and
• marketing is at the core of their success.
One of the values of information technology to such attractions lies in its ability to enable database marketing that better segments and targets the relevant markets; consequently, database marketing within UK visitor attractions is the focus of the empirical work undertaken in support of this study.
METHODOLOGY FOR THE STUDY
The practical research for this paper was undertaken in two stages: a self-administered postal questionnaire to determine the usage of database marketing in the visitor attraction sector, and focus groups, which were used to help surface end-user needs for a marketing database of a "typical" visitor
attraction. The focus groups were also used to examine the case of a failed information system (a database system) in the chosen visitor attraction. The organization is representative of the sector because it contains the features of a visitor attraction outlined above, and (from the questionnaire results) it is similar to other visitor attractions in its database needs. The organization wished to remain anonymous and is therefore not mentioned by name in this paper. The visitor attraction concerned has an unused database which was expensive to introduce: it has collected a wealth of information, but this has yet to be entered on to databases which would enable direct marketing to take place. The staff within the organization do not know how to make effective use of either the system or the information, and it is proposed that a new system be developed. The focus groups were used to gain an understanding as to why the users felt the system had failed and to examine the user requirements for a proposed database system. The aims of the focus groups were to investigate
• why the previous database system had failed;
• user requirements of the proposed new database system; and
• organizational issues related to the old and the proposed new systems.
The focus groups involved three key staff members from sales, media relations, and marketing. The focus group sessions were conducted in the offices of the visitor attraction around a meeting table, and the discussions were taped on audio cassette. Each session lasted approximately one hour, according to time limitations imposed by the management of the visitor attraction. Moderator involvement for the focus group sessions was low, but a guide list of questions was used to help channel the discussions. A self-administered questionnaire was chosen to complement the focus group sessions, and this was used to obtain additional information from a number of different sources. Reply-paid envelopes were included with the questionnaires, and the respondents were offered a copy of the results. The questionnaire comprised two sides of a single A4 sheet. A convenience sample was used, consisting of the top twenty visitor attractions charging admission in the United Kingdom. Without a complete sampling frame of all the visitor attractions in the UK, there was no way of specifying the probability of each unit's inclusion in the sample (the only such list requires voluntary enrolment). The questionnaire was designed to examine
• the extent of usage of database marketing in the top twenty visitor attractions in the United Kingdom;
• the business motives for developing a database system; and
• whether end-user requirements had been taken into account during the development of the database systems.
All respondents were assured that they would not be identified in any publications.
FINDINGS
Analysis of Questionnaire
The questionnaire was distributed to the twenty largest visitor attractions in the United Kingdom and elicited a return of thirteen: a response rate of 65%. The respondents also provided further material about their organizations that made the data received from them richer than at first anticipated. Of the thirteen responses, twelve are from organizations with over 101 employees and the remaining organization has 71-80 employees; neither figure alters in peak season. Some results of the questionnaire are presented in Table 1. Of the thirteen responses, eleven of the organizations hold a database that contains customer information, and the other two stated that they did not hold a customer database. Of the eleven respondents who have databases, all use them for marketing purposes: six find their database essential for marketing campaigns, four find it helpful, and only one organization finds their system to be not very helpful.
Table 1: Summarized questionnaire results

Use of Database for Marketing
  Used for marketing: 11
  Did not use for marketing: 2
  Total respondents: 13

  Find database essential: 6 (55%)
  Find database helpful: 4 (36%)
  Find database not very helpful: 1 (9%)

Staff Consultation
  Organisations finding database essential: 6, of which 4 (67%) consulted staff*
  Organisations finding database helpful: 4, of which 3 (75%) consulted staff
  Organisations finding database not very helpful: 1
  Total respondents with databases: 11

* The other two organisations felt that the database would have been more effective if staff had been consulted.
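As a quick check on Table 1, the percentages follow directly from the raw counts reported above. A minimal sketch (the numbers are the survey's; the code itself is ours and purely illustrative):

    # Ratings given by the eleven organizations holding a database.
    ratings = {"essential": 6, "helpful": 4, "not very helpful": 1}
    with_database = sum(ratings.values())          # 11 of the 13 respondents

    for label, n in ratings.items():
        print(f"{label}: {n} ({n / with_database:.0%})")   # 55%, 36%, 9%

    # Staff consultation within the two largest rating groups.
    consulted = {"essential": (4, 6), "helpful": (3, 4)}
    for label, (c, total) in consulted.items():
        print(f"{label}: {c} of {total} consulted staff ({c / total:.0%})")  # 67%, 75%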
Of the six who find their system to be essential, four had directly consulted the staff who actually use the database (the database operators or the staff who use the database for information). The remaining two organizations had not consulted staff before development of the database, although they considered it essential for marketing campaigns; however, both did think that the database would have been more effective had the staff been consulted before development of the database system. Of those four organizations that found their database system to be "helpful" for their marketing campaigns, three had consulted their staff prior to the development, and the remaining one did not have access to this information. The questionnaire elicited valuable information regarding the extent of usage of database systems in visitor attractions within the tourism industry. In summary:
• the majority of respondents held a database;
• the majority of respondents used their database for marketing applications;
• only one organization found their system to be not very helpful;
• of the six who find their system to be essential, four had directly consulted the staff who actually use the database, and the other two believed that the database would have been more effective had the staff been consulted before development; and
• three of the four organizations that found their database system to be "helpful" for their marketing campaigns had consulted their staff prior to the development.
The results of the questionnaire can be extended to include the visitor attraction used for the focus groups. Although not figuring in the top twenty visitor attractions, it displays similar characteristics to the attractions that answered the questionnaire. The staff at the focus-group site had not been consulted before database development.
ANALYSIS OF FOCUS GROUPS
The focus groups were conducted within a visitor attraction in which the management had attempted to implement a marketing database but had failed. The group discussion began by addressing the problems of this previous information system, which was developed by head office and did not provide marketing information specifically collected for the visitor attraction, but rather generic information intended for all visitor attractions within the group. This did not lend itself to being productive for a specific visitor attraction: "Perhaps the main reason why the system is not utilized is that it was
introduced at head office with little input from us, and little attention given to our requirements. It certainly did not match the requirements or the knowledge that we have." Generally, IT decisions were controlled by head office, although each attraction within the group had different target markets and essentially a different product to market. The marketing manager of the visitor attraction was prepared to tackle the information technology planning process: "I also want us to consider how the new system will fit into our IT strategy as a whole." This was an important point that emerged from the discussion: the desire to identify opportunities where IT and IS could contribute effectively to achieving business objectives in the organization. Another important discussion that developed within the focus groups was that of end-user involvement and the knowledge base that this implies. One of the members of the focus group stated: "I am not sure I know what functions I want it to perform, do you all?" The feeling amongst the staff was not to allow the development of a new system to be based purely on a technical perspective: their input would be necessary to ensure that the system served the users and the organization this time. What was also apparent in these discussions was that each member of the group wanted and expected different things from the proposed system. If these differences are not accounted for during systems development, it is reasonable to anticipate that the system will not meet the expectations of the users. It also suggests that an element of change needs to be built into the proposed system and reaffirms the notion of adopting a human-centered approach to systems development. Most members of the focus groups agreed that a database would be useful for their marketing campaigns. However, the view was not unanimous, and one employee in particular acknowledged her fear of technology: "I also guess part of my problem is not feeling that confident with computers. I am not sure it's going to be an asset to me or the organization, if you understand what I mean." This is something that the management would need to explore in more depth. Money would need to be invested not only in the system, but also in staff involvement, motivation, and training to ensure that the system received the full support of the staff. Their attitude towards the system would have a significant impact on the success of the system. In summary:
• the failed information system was developed to a head office specification, outside the users' control, and with little reference to the environment in which the system would be operating;
• the system failed to meet user expectations. This proved true even with a marketing manager who was actively looking for ways in which it could be used to provide benefits to the visitor attraction;
• each member of the group wanted and expected different things from the proposed system; and
• user issues had not been adequately considered or training offered.
CONCLUSIONS
The purpose of this paper has been to consider the application of information systems to the tourism industry in general, to review issues from wider information systems theory and practice which might inform this process, and to conduct empirical research to test these findings. It is suggested that the tourism industry has much to gain from the use of IS and that one application, database marketing, is of particular importance. An analysis of the failure of IS strongly supports the view that, both in a generic sense and within the tourism industry, the risk of failure is increased by over-concentration on technology at the expense of human-centered issues. In summary, the reasons for failure are that
• many investments in IT are technology led; and
• inadequate attention is given to the human and organizational factors that are vital in determining the ultimate effectiveness of new systems.
A human-centered approach is seen to rest on a different view of the domain, whereby the 'problem' to be addressed resides in the views and opinions of participants, leading to a need to investigate the problem context via participant views, rather than through a search for a technical solution. Evidence from both the questionnaire survey and the focus groups held to discuss the failure of a marketing database at a major UK visitor attraction has demonstrated the improved perceptions to be gained by human-centered analysis. The questionnaire examined the extent and use of database marketing in the visitor attraction field. The response rate to the questionnaire was good (65%), and it highlighted some previously unrecognized issues. The questionnaire indicates a strong correlation between user involvement and the success of database marketing systems, whilst the focus groups go some way toward identifying the cause of failure as an over-reliance on technical analysis. Findings from the focus groups indicate specific issues seen to be raised as a result of the participatory analysis. In particular, the different needs and expectations of participants, stifled within the earlier (failed) development, surfaced in the participative study and pointed to issues of motivation, training, and functionality not previously identified.
Finally, in summary:
• tourism organizations need to accept the political nature of information systems and act accordingly;
• the industry needs to be guided by prior analysis of the information systems process and not develop IS and IT in a vacuum;
• it is important for organizations to learn from their experience. One way to do this is to conduct post-project reviews, as in the case study for this research; and
• all aspects of an organization need to be involved when developing an IS, particularly human activity. In essence, a soft approach to systems development needs to be adopted.
The clear message is that the tourism industry has much to gain from the use of marketing databases but must take great care to consider adequately the perceptions, needs, and expectations of users in any such development. Failure to do so risks failure of the information system and threatens the success of the organization.
Chapter IX
Exploring the Relationship between Institutional Context, User Participation, and Organizational Change in a European Telecommunications Company

Tom Butler and Brian Fitzgerald, University College Cork, Ireland

While much is known about the general process of user participation in information systems development, its effect on organizational change surrounding systems implementation has not been the subject of systematic, empirical investigation. With some notable exceptions, researchers have chosen to adopt variance- rather than process-based approaches to the study of these phenomena and have, therefore, failed to capture the complex interrelationships that exist between them. This study addresses these deficiencies and makes several important contributions to the literature. First, it describes the results of a process-based case study that illustrates how one large organization's institutional context shaped and influenced the content and process of user participation and associated management of change
around the development and implementation of two operational support systems. Second, it presents a theoretical model that captures the institutional and development-related contexts which shape and influence the processes of user participation and management of change. Third, this study’s findings indicate that institutionally mediated factors exert a major influence on the level of user acceptance of systems, especially in relation to (a) the expected change wrought by the new system; (b) user influence and power relationships; and (c) user commitment to development-related change. Finally, the model, framework, and findings provide a useful point of departure for future research in the area.
INTRODUCTION
Information systems development is a multi-dimensional change process that presents itself simultaneously within several related social environments—as a reality, it is socially constructed (Visala, 1991; Butler, 1998). The conventional wisdom within the information systems community argues that user participation is a core ingredient in this change process and is vital for successful outcomes in terms of both the development process and its product (see Ives and Olson, 1984). However, two comprehensive reviews of research on the phenomenon of user participation reveal that the relationship between user participation and successful systems development is neither grounded in theory nor substantiated by research data (see Ives and Olson, 1984; and Cavaye, 1995). A recent meta-analytic study by Hwang and Thorn (1999) argues that system success is positively correlated with user participation. Nevertheless, Lin and Shao (2000) maintain that the positive relationship between user participation and system success should not be taken for granted, as the contextual environment must be considered. Accordingly, Jiang et al. (2000) suggest that factors which influence users' power, social status, and job satisfaction should receive attention as they impact on user resistance to systems implementation. It is evident, then, that the controversy and doubt surrounding the outcomes associated with user participation continues. To address this dilemma, Cavaye (1995) argues for qualitative and process-based studies in order to deepen the field's understanding of the phenomenon. In a process-based interpretive study of user participation, Butler and Fitzgerald (1997) illustrate that the relationship between system success and user participation is moderated by factors not captured in variance-based studies, the most notable of which are those surrounding the management of change. This paper therefore argues that insufficient attention has been paid to the
relationship that exists between user participation and the management of organizational change surrounding the development and implementation of information systems. Consequently, that relationship remains ill-defined and little understood. In order to address this deficiency, this study maintains that it is only by conceptualizing information systems development as a change process (Boland, 1978; Lyytinen, 1987), and by adopting an institutional perspective that acknowledges both user participation and management of change as being instrumental in determining the ultimate success of developed systems, that the relationship between these two concepts and their consequences can be evaluated, explained, and understood. Support for this position comes from recent studies by Butler and Murphy (1999) and Avgerou (2000), who adopt institutional perspectives when examining the relationship between IT implementation and organizational change. It is suggested that studies of the organizational consequences of information technology should not rely on simple causal models. Rather, theoretical models and research approaches that capture the social and technical complexity of IS-related phenomena are advocated (Markus and Robey, 1988; Robey and Azevedo, 1994). A conceptual model that incorporates the institutional context or framework within which the IS development sub-processes of user participation and management of change are effected, and which attempts to capture the interrelationships between factors that are posited to constitute these sub-processes, is presented herein. The model's central perspective is drawn from (a) new institutional theory, notably that of North (1990) and DiMaggio and Powell (1991); (b) Cavaye's (1995) analytic framework on the phenomenon of user participation in information systems development, which is extended and elaborated on by Butler and Fitzgerald (1997); and (c) seminal contributions of previous research on the phenomena of user participation and systems implementation (see Boland, 1978; Mumford, 1979; Ives and Olson, 1984; and Orlikowski, 1993). The model helps generate appropriate research questions to guide and direct the case description, report its findings, and derive significant conclusions. As Orlikowski's (1993, p. 310) ground-breaking investigation of the relationship between the introduction of CASE and organizational change reveals, a process-based, interpretive research approach, incorporating grounded theory, "allows a focus on contextual and processual elements as well as the actions of key players associated with organizational change." The constructivist paradigm not only incorporates a grounded theory perspective, it also offers researchers added rigor by providing an ontological, epistemological, and methodological framework from which to conduct qualitative
research (Erlandson et al., 1993; Guba and Lincoln, 1994). Mature disciplines within the social sciences accept the need for a philosophical as well as a methodological rationale to underpin research. Hence, a constructivist approach to research is adopted in order to apprehend the socially constructed reality that is information systems development in organizational contexts (Visala, 1991; Butler, 1998; 2000). The primary objective of this study, then, is to identify the critical factors that shape and influence the relationship between user participation in the development and implementation of organizational information systems and the process of change management surrounding the successful implementation of such systems. The research data presented herein is drawn from a case study of the development and implementation of two operational support systems in a large telecommunications company. Before presenting this study’s analysis of the case data, the model and its analytical perspectives are first discussed.
A MODEL OF USER PARTICIPATION AND MANAGEMENT OF CHANGE IN THE INFORMATION SYSTEMS DEVELOPMENT PROCESS
IS researchers have adopted several theories from the new institutional economics to inform their studies of IS-related phenomena; the most notable of these theories are bounded rationality, transaction cost theory, and, more recently, the resource-based view of the firm. However, institutionalist perspectives in sociology have not exercised much influence on the IS research agenda, especially in relation to organizational phenomena. Examples of studies that have gravitated toward the latter school of thought include contributions from Kling and Iacono (1989), King et al. (1994), and, more recently, Avgerou (2000, 2001). Of particular relevance to the present research endeavor is an interpretive study by Orlikowski (1993), which provides a graphic example of the role of institutional contexts—environmental, organizational, and IS—and their impact on organizational change associated with the introduction of a CASE application. Also significant in this regard is a recent study by Butler and Murphy (1999) which illustrates the benefits of marrying institutional perspectives in economics and sociology when examining the process of IT-enabled organizational change. In light of the clear benefits associated with such an approach, this study adopts a similar research strategy.
The role played by institutional frameworks in shaping organizational behavior has been the subject of study and debate by economists and organizational theorists for some time now (see Rowlinson, 1997). Paul DiMaggio and Walter Powell are organizational theorists who maintain that organizations operate within an “organizational field.” According to DiMaggio and Powell (1991; p. 64), an “organizational field” is comprised of: “Those organizations that, in the aggregate, constitute a recognized area of institutional life: [it consists of] key suppliers, resource and product consumers, regulatory agencies, and other organizations that produce similar services or products.” DiMaggio and Powell argue that “coercive,” “mimetic,” and “normative” forces govern the interactions among participants in an ‘organizational field’ and thereby act to shape it. By “coercive” force is meant the exogenous formal or informal influences exerted by legislative or regulatory agencies that shape the structure, process, and products or services of an organization. “Mimetic” influences refer to the changes wrought in structure, processes, and products/services of an organization when it imitates firms in its “organizational field” who possess a competitive advantage. “Normative” mechanisms that shape structure, process, and products/services arise through the interactions of social actors—managers, professionals, occupational groupings, and so on—who share a “community of practice” with formal and informal cognitive and regulatory norms. It is clear from DiMaggio and Powell that external agencies which constitute an “organizational field” exert a significant influence over an organization’s structures, policies, practices, and procedures—that is, its endogenous institutional context. Hence, DiMaggio and Powell (1991) argue that the combined influence of coercive, mimetic, and normative influences from the wider institutional environment shape what North (1990) refers to as the formal and informal “rules of the game” by which organizations play. North, who is an institutional economist, points out, however, that the way organizations “play” is constrained, but by no means wholly determined, by such rules. It may therefore be concluded that exogenously influenced directives, policies, and practices mold the immediate organizational context within which an organization’s business routines are formulated and executed. On the other hand, it must be noted also that organizations do exert some influence on the construction of their “organizational field” and are not entirely at the mercy of exogenously determined factors. Drawing on insights provided by North (1990), it is argued that an organization’s policies on systems development, particularly those related to issues of change management and employee participation, constitute a set of
formal and informal criteria, the so-called "rules of the game," that help to mold human interaction in the pursuance of organizational goals and objectives in relation to the development and implementation of IS. Accordingly, Aaen (1986) has underlined the importance of managerial policy making in shaping the trajectory of the development process and its outcomes (see also Ives et al., 1980). Given the foregoing observations, this study argues that an organization's policies on user participation in systems development and implementation are shaped by its institutional framework or context and exert a formative influence on (a) organizational culture and climate (Robey and Azevedo, 1994); (b) the type of participation (Mumford, 1979); (c) the degree of participation (Ives and Olson, 1984); (d) the content and extent of participation (Hirschheim, 1983); and, finally, (e) the formality and influence of participation (Mumford, 1979) in systems development. In regard to the "rules of the game" in place within an organization's institutional context, the manner in which systems development proceeds is argued to be directed and effected by
1. project-related factors;
2. process-related factors; and
3. user-related factors.
Hence, the content and conduct of project-, process-, and user-related activities is guided by the institutional context, framework, or field in which the organization operates (see, for example, Orlikowski, 1993). The dimensioned set of factors that appear in the model illustrated in Figure 1 are based on Cavaye's (1995) analysis, which has been extended and elaborated by Butler and Fitzgerald (1997). The interaction of institutional contexts, the "rules of the game" that are in operation, and project-, process-, and user-related factors is argued to determine system success in terms of user perceptions of system quality and user acceptance of the developed system. User acceptance of the implemented system is argued to be particularly dependent on the manner in which change is managed. The model will, therefore, help illuminate the relationship between an organization's institutional framework; project-, process-, and user-related factors; and development outcomes such as product quality and acceptance. Boland's (1978) seminal work illustrates that a change approach (a bottom-up, participative strategy as opposed to the top-down traditional approach) to systems development attempts to have developers and users participate in joint problem solving with the objective of arriving at a systems solution through consensus.
Figure 1: A model of user participation and management of change in the systems development process
[The figure shows the following components and their interactions:]
Institutional context or framework: institutional influences from the organizational field, including government agencies, regulative bodies, labour unions, and others; organizational policy on development-related change and user participation in systems development; organizational culture/climate; type of participation; degree of participation; content and extent; formality and influence.
Systems development process (user participation and management of change sub-processes):
• Project-related factors: initiator of the project; top management commitment; type of system; project complexity; complexity of task structure; time for development; financial resources available; expected change wrought by the new system.
• Process-related factors: user/analyst relationships; influence and power relationships; communication.
• User-related factors: participation vs. involvement; user perception of organizational climate; willingness to participate; ability to participate; user characteristics and attitudes; user commitment to development-related change.
System success: product quality; product acceptance.
Resistance to change surrounding the implementation of the new system is thereby negated (Zmud and Cox, 1979). However, with few exceptions (see, for example, Ginzberg, 1981; Tait and Vessey, 1988; Krovi, 1993), the relationship between user participation in systems development and issues of organizational change does not appear to have been the subject of explicit, systematic investigation. Hence, the relationships that exist between user participation, the management of change, and successful systems development and implementation are often puzzling and difficult to interpret. This paucity in extant research begs to be addressed. The primary objective of this study is to identify the critical factors and processes that shape and influence the relationship between user participation in the development and implementation of organizational information systems and the process of change management surrounding the successful introduction and use of such systems.
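As an aid to reading Figure 1, the sketch below encodes the model's dimensions as plain data types. The structure is the authors' model; the encoding is ours and purely illustrative, with field names taken from the figure's own labels.

    from dataclasses import dataclass, field

    @dataclass
    class InstitutionalContext:
        """The 'rules of the game' (top of Figure 1)."""
        organizational_field: list        # e.g., government agencies, regulators, labour unions
        participation_policy: str         # organizational policy on change and participation
        culture_climate: str
        participation_type: str           # consultative, representative, or consensual
        participation_degree: str         # from no participation at all to strong control

    @dataclass
    class DevelopmentSubProcesses:
        """User participation and management-of-change sub-processes."""
        project_factors: dict = field(default_factory=dict)   # initiator, commitment, complexity, ...
        process_factors: dict = field(default_factory=dict)   # relationships, power, communication
        user_factors: dict = field(default_factory=dict)      # willingness, ability, commitment, ...

    @dataclass
    class SystemSuccess:
        """Development outcomes (right of Figure 1)."""
        product_quality: str
        product_acceptance: str

The nesting mirrors the figure: an institutional context constrains the three sets of sub-process factors, which in turn bear on the two success measures.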
RESEARCH QUESTIONS
The first two of the four research questions that help achieve the stated research objective are now articulated with reference to the model illustrated in Figure 1:
RQ1: What impact, if any, does an organization's institutional framework have on the type, degree, content, extent, formality, and influence of user participation in systems development?
RQ2: What role does the organization's institutional framework play in the resolution of change management difficulties?
Successful systems development is a nebulous term; hence, it eludes direct evaluation. Accordingly, IS researchers employ surrogate measures to assess the success of development outcomes. Ives and Olson (1984), for example, propose system quality and system acceptance as appropriate "outcome variables." Nonetheless, user satisfaction with developed systems has been widely employed by researchers as a surrogate for system success (Gatian, 1994; Cavaye, 1995). The dominant focus on user satisfaction and system success seems to capture implicitly, rather than explicitly, the fact that significant change has often taken place once a system has been implemented. Unfortunately, change management problems that often arise due to user resistance are generally ignored. Nevertheless, it is difficult to ignore the obvious, and several studies have underpinned Ives and Olson's contention that user participation may lead to increased user acceptance, a conceptual analogue for users' attitudes to the degree of change wrought by the introduction of new systems, by
• allowing users to develop realistic expectations about system capabilities (see Lawrence and Low, 1993);
• providing an arena for bargaining and conflict resolution about design issues (see Euchner et al., 1993);
• leading to system ownership by users (see Kozar and Mahlum, 1987);
• decreasing user resistance to change (see Kozar and Mahlum, 1987; Hirschheim and Newman, 1988; Krovi, 1993); and
• committing users to the system (see McKeen et al., 1994; Barki and Hartwick, 1994).
The foregoing points underline Regan and O'Connor's (1994) assertion that user resistance (or acceptance) centers more on the social change surrounding the introduction of systems than on technical (quality-based) factors. The third and fourth research questions emerge from this discussion:
RQ3: What critical project-, process-, and user-related factors act to shape and influence (a) product quality and (b) product acceptance?
RQ4: Which of these factors relate to user participation, management of change, or both?
These, then, are the research questions which help guide this study.
Table 1: Research strategy

Site Selection. Telecom Éireann was purposefully selected for study because it provided a typical example of a European company that had institutionalized the practice of employee participation and involvement in decision making, particularly in the area of the introduction of new technology and in information systems development. Hyman and Mason (1995) provide a thorough analysis of current thought and practice on this topic, and their work is supportive of this paper's contention that Telecom fits the profile of a typical European company operating in the private as well as the public sector. (US and non-European readers may not be familiar with the level of employee participation and involvement in European companies. The high level of employee involvement results from the degree of influence exercised by European institutions such as the European Community and the governments of individual countries through legislation that governs employee relationships.) However, Telecom is unique in that it is one of several major European telecommunications utilities that was then undergoing a significant degree of IT-enabled organizational change centered on meeting the demands of market deregulation and increasing competitive pressures. As a practitioner, one of the authors is intimate with the telecommunications industry in both Europe and the US and recognized that Telecom offered a uniquely accessible site where the phenomena of interest could be observed and investigated fully. The two systems development projects studied (the embedded units) were also purposively selected because of the type and degree of user participation and involvement in evidence, and because both encountered significant change management problems. It was hoped that because of this they would provide fertile examples of the phenomena being investigated.

Data Sources. Research into the selected case and its embedded units was conducted through the use of individual interviews and documentary sources. Because the development teams on both projects were relatively small, it was possible to select for interview all team members—developers and user representatives. The model presented previously indicated the contexts and processes of interest to this study; hence, in order to fully investigate these dimensions, a total of twenty-one interviews took place with key social actors from: (a) both development projects (e.g., development project managers, systems analysts, and programmers); (b) the development environment (e.g., as described by the IS executive and his senior management team); and (c) the organizational environment (e.g., user representatives and user project managers who were considered to be representative of "world views" in the relevant user constituencies). This focus on multiple environments helped capture the institutional contexts and attributes indicated in the model. Each interview was tape-recorded and lasted up to two hours. The initial study was conducted over a period of one month in the company's head offices. The subsequent follow-up study was more general in nature and involved two formal interviews with senior business managers, multiple informal conversations with a cross-section of managers, union officers, and users, and comprehensive documentary analysis.

Data Analysis. The qualitative data analysis techniques of content analysis and constant comparative analysis provided the necessary mechanisms for the required structural analysis (Miles and Huberman, 1994; Patton, 1990). To provide structure for the comparative analysis, a set of initial (general) seed categories was first drawn from the model's component dimensions and the preceding content analysis. These seed categories enabled a relatively holistic description of the phenomenon to emerge. However, as the analysis progressed, these were refined into tighter categories as the thoughts, ideas, and statements of individual actors were compared, and conflicts and difficulties that arose in the categorization exercise were addressed. Triangulation techniques were also extensively employed to provide insights into events, relationships, etc., between primary data sets (Erlandson et al., 1993; Patton, 1990). Several within-case analysis strategies described by Miles and Huberman (1994)—e.g., checklist matrices and network analyses—were used to identify saturated categories and, hence, complete the structural analysis. Descriptive matrices adapted from the model were used also to present and analyze categories in a condensed format; extended narratives were employed to provide additional detail and context. Being a member of the organization chosen for study, one of the authors was what Bødker and Pedersen (1991) have termed a "cultural insider." Hence, as a member of the general business/user constituency, the company's largest labor union, and presiding officer of one of the company's participative forums, he was intimate with several of the sub-universes of reality that comprised the overall institutional reality (Berger and Luckmann, 1966). This provided the researchers with valuable insights into the organization's culture and climate and greatly aided in the interpretation of the case.
RESEARCH PHILOSOPHY AND STRATEGY
In keeping with prescriptions of the constructivist paradigm and its hermeneutic method, a qualitative, interpretive, case-based research strategy was adopted for the study (see Lincoln and Guba, 1985; Guba and Lincoln, 1994; and Butler, 1998). This strategy involved an exploratory, single instrumental case study (Stake, 1994) with two embedded units of analysis (Yin, 1989)—that is, two systems development projects. Purposeful sampling was employed throughout (Patton, 1990; Marshall and Rossman, 1989). The initial study was conducted in 1996, with a follow-up study in 1998 and a review in 2000, to evaluate the outcomes of this organization's approach to the development and implementation of corporate IS. The case design utilized has been described by Yin (1989) as both "post-hoc longitudinal"—in respect of the original site visits—and longitudinal—in respect of the overall study. Table 1 describes the study's research strategy in detail.
CASE DESCRIPTION AND RESEARCH RESULTS
At the time of this study, Telecom Éireann was the Republic of Ireland's major telecommunications utility. As such, it provided a universal telecommunication service and enjoyed a monopoly in many areas of its business. Being a state-owned company, Telecom Éireann's majority shareholder was the Irish Government, which retained a 50.1% majority stake in the organization when it was part-privatized. The other shareholders at this time included Telecom's employees, who obtained a 14.9% stake as part of Telecom's overall change management strategy, and two European telecommunications operators, jointly known as Comsource—KPN (PTT Telecom BV) of Holland and Telia (AB) of Sweden—who had acquired 35% of the company. Telecom Éireann entered into a strategic alliance with these companies in January of 1997. Telecom's labor unions favored this alliance because both KPN and Telia possessed institutional frameworks based on the tenets of industrial democracy, as did Telecom Éireann. Based as they were in Holland and Sweden, respectively, both organizations operated in similar "organizational fields," or institutional frameworks, as these countries were renowned for their policies on social inclusion. Telecom's own institutional context underwent radical change in 1998 when employees acquired a 14.9% share in the ownership of the company in exchange for increased levels of participation and agreement on all change-related issues dealing with the introduction of new information systems and the future transformation of the company. As of 1998, there were ten companies in the Telecom Group, the majority of which were wholly owned subsidiaries. At that time, the company em-
Table 2: Telecom Éireann's Institutional Context for Systems Development

Institutional Context / Case Findings

Organizational field: Union power grew from 1984, when the company changed status from being one-half of a government department to a state-sponsored body under the direct control of a team of professional managers and a formal board of directors, two of whom were union representatives. Also, in the late 1980s the Irish Government was keen to rein in the country's finances and to avoid consequential industrial unrest; accordingly, it instituted a national participative forum that included all of the country's labor unions and employer groupings. This had a significant impact on Telecom Éireann's labor unions and on the approach of its management team, who had to report to the board of directors appointed by the Minister for Communications. All this, coupled with the cold winds of deregulation being driven by directives from the European Commission, led the company's management to take a company-wide participatory approach to organizational decision making. This approach lasted well into the new millennium, despite the total privatization of the company and continued downsizing by voluntary means. All of this was effected on a participative basis, with, by 2001, the employees set to own 30% of the company, by then called eircom, which, after just two years as a publicly traded company, was to be acquired by a private consortium.

Organizational policy on change: Since its inception in 1984, the organization has maintained a participative approach to decision making and change. In terms of both structure and process, various participative fora exist to give this approach effect. The Joint Technology Committee (JTC) and the Computer Liaison Committee (CLC) are joint fora that oversee the introduction of new IT systems. In regard to systems development, user representatives are seconded to project teams, while user groups and individual users participate in JAD sessions.

Organizational policy on user participation: Telecom Éireann's policy on user participation reflected its commitment to participative decision making, and Telecom's implementation of this policy lay in the manner in which it structured its development teams into user and IT project managers, developers, user representatives, and user groups for JAD sessions.

Type of user participation: All aspects of Mumford's (1979) typification of user participation (e.g., consultative, representative, and consensual) were in evidence in both projects. Individual users were by and large consulted on development issues; user representatives and user groups played a more active role that was both representative and consensual; overall, consensus on development outcomes was obtained at the level of the CLC.

Degree of user participation: Ives and Olson (1984) argue that there are several degrees of participation, ranging from no participation at all, through symbolic participation, participation by weak control, and participation by doing, to participation by strong control. Both business managers (as users) and employees (through the CLC) exercised strong control; user representatives actively participated in both GAS and GIS. Weak control was exercised by user groups, while the remainder of users did not participate and were, according to Barki and Hartwick's (1989) definition, involved users.

Content and extent of participation: User representatives were present throughout the systems development. Individual users and user groups participated at key points in the development process, e.g., analysis and design, testing and implementation.

Formality and influence of participation: User representatives were co-opted into the development project team. Development steering groups were formed from management users, and user groups were formed from the general constituencies to participate in requirements analysis, design verification, and testing. Significant user influence was exerted, especially through the labor unions and the joint management/union fora.

Organizational culture and climate: The organization's culture emanates from its status as a state-sponsored organization. This culture changed as competitive pressures, market deregulation, and the move to privatization as a quoted company all acted to change process and structure. The climate is reflected in the partnership approach, with workers being well remunerated and enjoying excellent terms and conditions of employment. There had been no incidents of industrial unrest in Telecom since the 1970s. The shared organizational culture of developers and users ensured that the team subculture was receptive to user participation in systems development.
Being a large company operating in highly competitive national and international environments, it had dynamic information systems requirements; these needs were fulfilled by its in-house information systems function—the Information Technology Directorate (ITD), later to be known as Group IT.
The ITD was a centralized functional unit whose chief responsibility was the development, integration, maintenance, and support of all corporate information systems. Based in Dublin, the ITD then had a staff of over 280 spread among its eight divisions. In 1994, however, the IS function in this company was an obscure department within the Finance Directorate. The appointment of a new CEO saw the IS function elevated to directorate status in 1995 so that it could play a pivotal role in the planned IT-enabled transformation of the company. As a consequence of this change, the power asymmetry that existed between the then IS Department and its business clients within the organization was effectively mitigated. The advent of this change in status, coupled with other related events, allowed the IS function to manage effectively its relations with business units and associated user "communities of practice" within the organization. Senior IS management were, for the first time, able to take full advantage of the company's participative policies and set about building new participative processes and structures that employed a mixture of participatory design (PD) and joint application design (JAD) in order to maximize the benefits associated with user participation in systems development. The development projects described herein were two of the first to be developed by the IS function using this novel approach. Certainly, other systems had been or were then being developed using a weak form of PD coupled with JAD. However, these systems had little impact on underlying business processes or on the role-related responsibilities and remuneration of staff; in these projects, the process of user participation, although important, was not deemed to be critical. Subsequently, the company drew on its experiences with IT development projects, such as those described in the case, to refine its participative strategies in the development of its present portfolio of corporate IS. Of particular note here is that these experiences highlighted the need to ensure that change associated with the introduction of IS was managed effectively. As of 1998, the organization had modified significantly the participative dimension of its institutional context in light of these experiences. While user participation as described in the case remains central to Telecom's approach, the company's employees, through the auspices of the labor unions, have agreed to accept changes to fundamental business processes facilitated by new IT systems in exchange for a 15% stake in the company's equity. The remainder of this paper describes the context and process of user participation and its relationship to issues of change management with reference to two systems development projects—the Generic Appointment System and the Geographic Information System. Elements of the proposed model (see Figure
1) are employed as an analytic framework and reporting mechanism for the empirical findings.
TELECOM ÉIREANN'S INSTITUTIONAL CONTEXT

Table 2 summarizes the salient dimensions of this organization's institutional framework as it applies to systems development. Each of the following sub-sections describes the components in greater detail.
Influences from Telecom Éireann's Organizational Field

The legislative and regulatory dimension to Telecom Éireann's organizational field was not unlike those of continental Europe. It was, however, radically different from that in existence in Great Britain, where the Tory government led by Margaret Thatcher had significantly blunted union power in the 1980s and had effectively dismantled participative approaches to decision making across British industry. This excerpt from The Sunday Times newspaper illustrates graphically the Irish context: Unlike Britain, where Margaret Thatcher tamed the unions after a bitter battle in the 1980s, Irish industrial relations were unreformed. If they did not join efforts to redirect the economy, the unions could prove dangerous. [Bill Attley, then general secretary of the Federated Workers Union of Ireland], mindful of the battering that British counterparts had just endured, agreed to give partnership a try. "The subtext was that we could either be part of the solution or part of the problem," he later recalled. Talks [with the Irish Government] were conducted throughout the summer of 1987, and at the end of that year the Program for National Recovery was signed. That capped wages and brought stability to the labor market, a factor that encouraged foreign investment and made Irish companies more competitive.1 It is, perhaps, ironic that this partnership approach to industrial transformation was brought about in part by measures introduced by the British government. Nevertheless, while a partnership approach had been instituted at a national level involving the Irish government, trade unions, and employer groups, it must be pointed out that Telecom Éireann had already instituted significant participative policies from 1984, when it made the transition from the telecommunications arm of a government department to a state-sponsored body under the control of the Minister for Communications.
It must be noted, however, that much of the impetus for all this came from the largest labor union involved—the Communication Workers Union. The union was also instrumental in securing two seats on the company's board of directors and had the company institute several joint fora to accommodate participative decision making. As a member of the European Community, the Irish Government had an obligation to implement Commission directives. The most significant directive to be implemented in the early 1990s was that which deregulated the telecommunications industry and introduced competition into a monopoly-dominated sector. In response to this dramatic change in its organizational field, Telecom Éireann's management realized that it had to transform the company. It chose IT as the vehicle by which it would change the company's operations and instituted innovative participative policies to help ensure that this vehicle reached its destination.
Organizational Policy on Change

Since its inception as a state-sponsored organization, Telecom has adopted a participative approach to the implementation of organizational policies and decisions. This was recently underlined when the company reiterated its position, viz., "The process of consultation with unions in regard to all the implications for staff of technological change, is one to which the company remains fully committed."2 To give effect to this policy, the company has instituted several joint bodies; for example, the Computer Liaison Committee (CLC), whose members are drawn from both company management and the labor unions, dealt exclusively with issues surrounding the introduction of information systems within the organization. Prior to the development of information systems in Telecom, the business owners/initiators submitted and presented their proposals to the CLC. The CLC would then establish a broad framework and terms of reference for the development and implementation of new systems. Here, the labor unions arranged for user participation and involvement at various levels and stages of a development project. Issues related to the management of change were also highlighted at this stage. The level of importance attributed to such issues depended on union representatives' knowledge of the impact the new system would have on their members' work and remuneration, and also on the willingness of business managers to provide an accurate impact assessment. There was also the matter of "unintended consequences" surrounding the outcomes of development plans; these matters would be addressed at later meetings of the CLC at the behest of either the IS function, the business owners, or the labor unions.
Organizational Policies on the Type, Degree, Content, Extent, Formality, and Influence of User Participation

In adherence to the company's participative approach, each systems development project within Telecom had a designated business owner or project sponsor. For large projects, a development steering group (DSG) was formed from the constituencies of interest within the organization; managers from relevant business areas and the IT Directorate (ITD) normally comprised these groups. Two project managers jointly managed each project: a user project manager drawn from the business constituency and a development project manager drawn from the ITD. The latter managed the physical development of the system; the former managed business user input into the project in areas such as the provision and management of user representatives, user groups, user test teams, and infrastructural resources. The development team normally consisted of one or more user representatives from interested constituencies within the business and a team of developers from the IT Directorate. User representatives actively participated in most development activities, apart from programming and the technical aspects of systems development. Although key users were interviewed to elicit system requirements, user groups were also formed to provide the development team with a core group of users for further requirements analysis and to verify and ensure that the system, as developed, would meet these requirements. Because of the difficulty of involving all interested parties directly in systems development, the company utilized both PD and JAD approaches to the development process. The participatory mechanisms employed within the organization therefore provided users with opportunities to express their "world views" and have political conflicts resolved, and helped negate potential power asymmetries between developer and user. As Table 2 indicates, users participated and were consulted about design issues through project-based mechanisms such as JAD, the objective being to arrive at a consensus on such matters (see Mumford, 1979). User participation in the GAS and GIS development processes, for example, ranged from "participation by advice" to "participation by strong control" (Ives and Olson, 1984), depending on the organizational status of the end users participating. User representatives on the development teams participated as support for analysts in the requirements elicitation exercises with individual users and user groups. In both projects, user representatives were trained in the CASE tools and techniques and participated in the use of these tools; Kozar and Mahlum (1987) comment on the importance of this aspect of user participation. User representatives also took an active role in the implementation of these systems. Users who did not participate on development teams did so at individual interviews and at group sessions with the systems analysts and user representatives during the requirements analysis phase.
In the GAS project, these users also participated in prototyping activities. In both projects, users participated in testing the systems once developed. Users from the management constituency also played an active role either as project managers or as project sponsors (cf. Land and Hirschheim, 1983).
PROJECT-RELATED FACTORS

As previously indicated, two systems development projects formed the embedded units of analysis in the study: the Generic Appointment System and the Geographic Information System development projects. Table 3 describes the project-related factors that the model in Figure 1 suggests as being relevant to the processes of user participation and change management. The following narrative discusses these issues and provides a description of important processual features in both projects.
General Project Characteristics: From Project Initiation to Implementation-Related Change

The Generic Appointment System (GAS) grew out of a business need in one key area of the company's operations—its telephone repair service.

Table 3: Project-related factors

Project-Related Factors / Case Findings

Initiator of the project: In both the GAS and GIS, the initiators of the projects were senior, but not executive-level, business managers in the Dublin operational area.

Top management commitment: In respect of the GAS, a high degree of support existed from organization and IS function management. With the GIS, on the other hand, a high degree of top management support existed in the first phase, but this waned in subsequent phases. There was also a lack of support from senior IS function management for the GIS.

Time for development: Although a very tight schedule was set in both projects, it did not impact negatively on the degree of user participation.

Financial resources available: Budgetary resources did not affect the degree or quality of user participation in either project.

Type of system under development: Both the GAS and GIS are operational support sub-systems.

Project complexity: The GAS was a complex project; several functional groups were involved. The GIS, on the other hand, was a highly complex project in which several functional boundaries were crossed.

Complexity of task structure: Both systems supported operational activities that exhibited medium-level task complexity and that were part of moderately defined business processes.

Expected change brought about by the system: The implementation of both systems meant a high degree of change for particular user constituencies in the operational areas concerned, as new business processes were supported.
Business managers across the organization recognized the need to make more efficient the manner in which repair service workloads were managed and associated service appointments were made with customers—more importantly, they recognized that there was "a desperate need to radically transform [Telecom's] fundamental business processes; introducing the GAS was another step in that direction" (senior middle-manager). A senior, non-executive business manager based in the company's Dublin HQ was this project's sponsor and initiator. However, according to a senior IS manager, the request for the system only received a response from, and the support of, the company's IS function when IS managers needed to choose "a small, well bounded system, so that [they] could introduce and pilot-test [their] new application development environment (ADE), called IEF." Commitment from senior business and IS managers to this project was therefore quite high. One of the goals to be achieved by introducing this new system was the elimination of unproductive visits by operational staff to customer premises when customers were absent. The GAS also assisted supervisors in their task of allocating workloads to their repair teams, which consisted of telecommunications technicians. At a corporate level, the GAS supported the operation of the company's ten fault-handling and repair centers and the telecommunications technicians employed there. These internal and external groups therefore had a keen interest in the development and implementation of this system, as it impacted a range of their basic functions. A development team that consisted of a user project manager, a development project manager, two analysts, the CASE vendor consultant, one programmer, and a user representative carried out the development of the GAS. Three user groups and several individual users formed the bulk of participating users from the constituencies of interest. A CASE-enabled rapid application development (RAD) approach saw development take place within a three-month period; that said, the implementation of the first phase of the GAS took a further six months. The GAS project operated within fixed budgetary and time constraints; however, neither of these materially affected the process or content of user participation. As a distributed IS, the GAS comprised eight relational databases that served up to 180 Windows-based PC terminals in fault-handling centers around Ireland, and a further 400 terminals in operational depots nationwide. The GAS project came in on time and on budget. The Geographic Information System (GIS) was developed to provide a graphical database of the telephone network in the general Dublin area. Prior to its implementation, the planning and drawing office functions manually recorded network-related details using paper-based records and maps.
The business manager responsible for this project recognized that there were significant improvements, in terms of economic and operational efficiencies, to be gained by using a GIS in this area of the company's operations. However, the implementation of the GIS meant that a radical change had to take place in one of Telecom Éireann's operational business processes. Accordingly, the development of the GIS posed significant challenges to the business sponsor, users, and developers alike. On the one hand, there was the issue of change management associated with the radical change in the work practices and roles of the users in operational units who performed telephone network mapping, planning, and record handling duties. On the other, there was the challenge of developing a highly complex and sophisticated information system within a proprietary application development environment. A high degree of top management support existed in the first phase of the project, but this waned in subsequent phases. As one business manager put it: "This project became a political 'hot potato' because of the amount of hassle coming from the unions. I guess we should have foreseen it, you can't just do away with the draughtsmen and expect them to go quietly … eventually [so and so] just wanted the whole thing to go away." The complex technical, processual, and political factors were recognized from the outset by IS managers, and in an effort to avoid tying up scarce developer resources in a project that had all the potential to fail, they offered what the GIS project manager described as token developer resources to staff the project. The GIS development team consisted of a user project manager, a development project manager, two analysts, three programmers, two user representatives, and a team of ten end users whose primary role was to input graphical data and carry out test functions. User groups were also drawn from the two constituencies of interest—the drawing and planning functions. Consultants from the software vendor also participated in the development process. The GIS was built around a proprietary graphical database engine that served up to 40 high-end workstations. The first phase of the GIS development took almost two years to complete; the implementation and rollout of the first phase took a further year. The project failed to meet the scheduled completion date and also exceeded its budget. As with the GAS project, there appeared to be little if any constraint placed on user participation by time or financial considerations.
Dealing with Project and Task Complexity

While the previous sub-section has described several of the project-related factors listed in Table 3, this sub-section focuses on the impact of two in particular—project complexity and task structure/complexity—which have been posited as having particular influence on the process of user participation.
Recent research questions the need for comprehensive levels of user participation across the SDLC (Guimaraes and McKeen, 1995); instead, Guimaraes and McKeen argue that there is little need for user participation when task (business process) and project complexity is low (see also Cavaye, 1995). In many respects, the GAS was characterized by a low to medium degree of complexity in relation to task structure and a moderate level of project complexity. In the GAS, task structure refers to the operational task of making appointments with customers for equipment repairs or installation and the associated work scheduling. GAS "system complexity" was low to medium from a user perspective; however, developers found certain aspects of the physical detailed design to be problematic—notably, project-related problems with the network interface protocols that the GAS was using to communicate with the existing Fault Handling System (FHS). The GAS crossed several functional boundaries, and it therefore involved a high degree of social, rather than technical, complexity in relation to task and project factors. IS and business managers were of the opinion that this necessitated a high degree of user participation. Also, due to a scarcity of in-house developers, users were encouraged to become more actively involved in systems development, particularly at the design and implementation stages. The GIS was a highly complex system in terms of system functionality and the nature of the business tasks it supported. The IT project manager commented on this: "Without user participation on this project, well we just couldn't have done it, the requirements were so complex; mapping the network to the level of detail required, and then representing it graphically, and then updating and using the maps to plan the network and provide customer service, grappling with this level of detail and trying to computerize it was a horrendous task." A GIS developer also commented on the technical aspects of the project: "Working with the GIS vendor's proprietary development approach and programming language was bad enough, but add the complex requirements and the technical headache of representing them, and you get some idea of the challenge this system posed for us … look, there's a telco in [the US] doing the same thing for the past couple of years with a project team ten times as big, and we are trying to do the same with 5 people, crazy." The practice of comprehensive user participation across the SDLC in the GIS and GAS projects was of obvious help to developers in coping with both project and task complexity, and it was greatly facilitated by the policy of on-site development at the business users' offices. Prior to the development of the GIS and GAS, most systems development took place off-site, that is, within the IS function's own business accommodation.
Senior IS function management and development project managers recognized that there were significant benefits to be gained from on-site development at the users' place of business. It was thought that this policy would provide additional opportunities for informal and indirect user participation, thereby improving user/developer communication and fostering good relations at all levels. In each project, coordination and control of developer and user activities was highlighted as being of particular importance in addressing issues of project complexity. Developers and users on the GAS and GIS project teams considered regular project meetings to be an important mechanism in the achievement of this goal. As expected, such fora helped developers to keep abreast of each other's progress and activities; moreover, the joint nature of such meetings provided user and development project managers with an opportunity to keep user representatives and developers abreast of external issues such as industrial relations problems. Nevertheless, developers commented that the informal grapevine was far more informative with respect to keeping up to date with user-related problems on both projects. User representatives felt that taking part in these project meetings made them feel part of the development team, as the user representative on the GAS project commented: "For me, the project meetings were the 'icing on the cake' when it came to being on this project. While I got on with the developers, it gave me an opportunity to air my views and discuss issues with all of the IT people at one sitting; I also took this as an opportunity to sort out problems with [the user project manager]."
PROCESS-RELATED FACTORS

Table 4 presents the factors within the participation process that impact the degree and effectiveness of user participation in systems development.

Table 4: Process-related factors

Process-Related Factors / Case Findings

User/analyst relationships: Very good across both projects. Relationships were enhanced by the existence of a common organizational culture and a favorable development climate in the project teams.

Influence and power relationships: Several institutionalized checks and balances existed that countered any power asymmetries or political opportunism that might have arisen. This was due to the implementation of organizational policy by all the constituencies involved in systems development. Positive management attitudes toward, and acceptance of, user input also helped here.

Communication: A high degree of user/analyst communication was in evidence across both projects. In the GAS project this was greatly enhanced by (a) on-site development, (b) training the user representative in the IS development method and tools, and (c) the prototyping approach adopted. In the GIS project, however, there was only some improvement in communication, brought about by user training in SSADM.
User/Analyst Relationships

The high level and quality of participation in the GIS project was commented on by one developer: "The team greatly benefited from the presence of user representatives. I was up to speed with user needs all the time." These sentiments were strongly endorsed by developers in the GAS project also. Participating users were fully aware of the favorable attitude that developers had towards their contribution and responded accordingly: as the user representative on the GAS put it, "the lads here really made me feel welcome and part of the team from the outset … I worked with John on the requirements elicitation in both the individual interviews and the user work groups … and was trained up on IEF, just like the rest of the development team." Developers in both projects also articulated a need for more active participation by certain users, such as the draughtsmen on the GIS and the fault-handling center technicians on the GAS, as it was felt that an increased level of participation by such users could have helped mitigate some of the contentious change management issues surrounding the implementation of the systems in the organization.
Influence and Power Relationships

Developers on both the GAS and GIS occupied positions of seniority relative to the operational staff who acted as participating users in both projects. There was no evidence of the "not invented here" syndrome among developers; certainly, if it did exist, users did not mention it. The climate and culture of the IS function appeared to be egalitarian in the main, and users' opinions were treated with respect by developers. The GAS user representative captured this point succinctly: "To be honest, I was surprised at being treated like an equal, people at that level on my side of the house tend to be standoffish at best … the lads outside [in the user groups, etc.] feel the same and get on well [with developers]. I guess the reason is that John and Don came from the technical side [originally], and know the score." One of the GIS user representatives also commented on this: "There is a lot of ill-feeling out there, but none of it is aimed at the IT guys; the staff on the ground know that they are just doing their jobs, and although it is a bit like 'turkeys voting for Christmas' with the draughtsmen, most of them will give all the help required … it's the union's job to sort out the problems with the company and get the best deal out of this [for all concerned]."
Communication

Because of the on-site development approach in both projects, the presence of a full-time user project manager and user representatives, including the data capture team of ten planners and draughtsmen on the GIS project, and regular project meetings, communication between developers and users was not an issue here.
There was on-site access to both business managers and users at all levels. Formal and informal communication mechanisms abounded, as one developer on the GIS project attested: "I can step outside there and call on any one of the user representatives, or get the data capture people in here any time during the day and have them clarify things for me, or I can waltz off down the corridor and speak to a planner or a draughtsman, no problem. If I feel lazy, I can always use e-mail, and if people are out or at a meeting, I know I can always catch them in the canteen upstairs." The two projects differed in the level of support for user/developer communication that their respective development approaches and CASE tools offered. In the GAS, the rapid application development (RAD) approach, with its prototyping tools, significantly enhanced user/developer communication at user representative level and at the JAD sessions. The GIS did not have the same capability, and it was only during testing and data capture that this type of helpful feedback occurred. It is clear that the process-related factors mentioned in the model operated to produce systems that matched business needs, user requirements of a technical nature, and those system preferences that did not impinge on the business objectives set for the GAS and GIS.
USER-RELATED FACTORS

Beynon-Davies et al. (1997) argue that choosing the "right type of user" to participate in systems development is critical to both the development process and its product; Leonard-Barton (1995) has also commented on this aspect of participation within wider organizational contexts. Choosing the "right type of user" to participate in systems development was a problem that exercised the minds of business and IS managers alike prior to the commencement of systems development on the GIS and GAS projects. IS managers and developers were eager to secure the most knowledgeable and proficient user project managers and representatives in order to make their "lives that much easier" in arriving at a full set of user requirements and in converting these requirements into a system that would be accepted by the business constituency. At a time when developer resources were scarce, issues like developer productivity and project life span were uppermost in the minds of IS function managers. This led one senior manager in the ITD to argue: "if the ITD were going to commit scarce and valuable resources to a project, then business managers should do likewise."
Table 5: User-related factors

User-Related Factors / Case Findings

Participation vs. involvement: In each project, only a sample of users from affected constituencies actively participated in systems development, i.e., user representatives and user group members. These users had strongly favorable attitudes toward the processual and technical features of the system they helped create. Following the distinction made by Barki and Hartwick (1989), the subjective psychological state that reflects the level of importance and personal relevance of the information system to users reflects their involvement in the development process. By this definition, all users felt involved, because of the high level of representation on the project team and user groups, but also through the auspices of the CLC.

User perceptions of organizational climate: In relation to the GAS project, users felt that a favorable development climate existed. Users involved in the GIS project were generally of the opinion that the organizational climate was negative; however, they felt that a favorable development climate existed.

Willingness to participate: In both projects, users were eager to participate for several reasons, viz., self-advancement, political power interests, and technical curiosity.

Ability to participate: The use of dual project development teams (user and developer) and the use of user groups in JAD workshops greatly facilitated users' ability to participate.

User characteristics and attitudes: Users generally held positive attitudes toward the social and technical aspects of systems development. However, user computer literacy posed problems in relation to users' ability to participate fully. It was clear that the shared organizational culture was of benefit in accommodating different "world views"; however, the users of the two systems were clearly from different constituencies within the organization and therefore possessed different characteristics and attitudes, e.g., internal and external technicians in the GAS, and planners and draughtsmen in the GIS.

User commitment to development-related change: Depending on the user constituency concerned, the level of commitment to change differed among users. In the GAS project, the internal and external technicians exhibited varying levels of commitment and enthusiasm as both groups endeavored to steer system features in particular directions that favored one group over the other. In the GIS, the draughtsmen and planning technicians were polarized in their commitment. The latter were highly committed, due to the increase in status and power bestowed upon them by the change; the former were less than happy because their roles and remuneration were going to be significantly affected.
Table 5 illustrates the user-related factors found to be of importance in the development projects studied. In the GAS and GIS projects, the selection of user representatives was perceived as a key issue due to the active role that they were expected to play throughout the development process and in the subsequent testing and implementation of the developed systems. Given the potential for change management problems surrounding both systems, users' perceptions of the prevailing organizational climate, their ability and willingness to participate, and their characteristics, attitudes, and commitment all proved pivotal to the eventual success of these development projects. As the unions had to be consulted on all issues, business managers could not appoint the individual users of their choice. The unions had an active role in selecting participating users, as there was a fear that, if the selection were left to business managers, they would choose only those who would, for whatever reason, align themselves
with the management agenda during systems development and thereby disenfranchise the broader user constituency. Surprisingly enough, the IT project teams ended up with user representatives and user groups who collaborated wholeheartedly in the "technical" development of the systems concerned, as Table 5 illustrates. In both projects, the development-related workshops normally consisted of developers from the relevant project team and end users from only one of the user constituencies participating in the development process. The manner in which this participative mechanism was implemented possessed certain flaws, however. For example, group workshops on the GAS project tended to be used as a platform for political infighting between two different user constituencies—the fault handling center technicians and the repair team supervisors and team members. Here, users from one operational area would introduce arguments to oppose or alter system features favored by users in other operational areas who were not present at the workshop sessions. User groups also tended to play on the stated objections of absent groups in order to influence the trajectory and outcome of the development process in their favor. For example, the user representative on the GAS project reported that "Staff at the fault handling centre felt that their jobs/roles were being whittled away and that the control of the fault handling system was being shifted to the repair teams." This situation engendered a negative attitude towards the new system within one user constituency and strongly influenced the deliberations of the Computer Liaison Committee (CLC). Because of the high degree of political conflict between the various groups, the user representative on the GAS project observed that there was a need in future "to have all the user groups affected by the systems development present at each of the workshops; this [would have] avoided the emergence of a 'them and us' situation between users … both the unions and management are to blame here."
CHANGE MANAGEMENT IN THE GAS AND GIS

The issue of change management associated with the implementation of both the GAS and GIS was found to exert a critical influence on the trajectory and outcomes of the development process. Even though the development project teams were embedded within the user community, and user groups were employed in the elicitation and verification of requirements in what could be described as a fully participative development exercise, change management problems arose in both projects during development and, in particular, during implementation. The findings of previous studies indicate that the level of participation observed in the case should have resulted in positive user attitudes to system quality and full acceptance of the developed systems (see, for example, Ives and Olson, 1984; Hwang and Thorn, 1999).
Nevertheless, as Lin and Shao (2000) and Jiang et al. (2000) forecast, this study reveals that other development-related aspects of an organization's institutional framework come into play and exert considerable influence on the development process and its product. Even before systems were developed in Telecom Éireann, formal negotiations were entered into with the unions at the level of the Computer Liaison Committee (CLC) regarding the implications of the systems for users and the level of input required from user constituencies in developing systems. Business managers who initiated and had ownership of a project would approach the CLC to have relevant issues surrounding the proposed development project discussed. Negotiations would then take place, and union members on the CLC would subsequently report to their executive committees, who would in turn inform the local branch organization(s) affected. The union executives would usually agree in principle and would assign to certain delegates the responsibility of liaising with local union branches to provide users to participate in the project. Once formal agreement was reached and user resources allocated, development could begin. This process was, however, dependent on two things: (a) business managers presenting to the CLC a full description of the system and its implications for the business process it would support, and (b) the absence of subsequent inter- and intra-union conflict and political infighting regarding changes affecting the user constituencies concerned. Both of these factors came into play in the cases described herein. Although the GAS had been accepted as developed by all the constituencies of interest, the CLC overrode decisions taken and agreed by the user group. This situation arose despite the fact that a CLC member had been involved throughout the development process as a member of the project's user group. A developer on the project provided an explanation for this: he reported that influential users who did not participate in the development process—i.e., technicians at the fault handling centres—had voiced their "unhappiness with system features, [and this] prompted the CLC to say no to the implementation of the system." Hence, prior to its implementation at a trial site, several modifications had to be made to the GAS in order to address these objections. The problem here was that while the CLC agreed up front to the need for such a system, detailed requirements only emerged as the project unfolded, and changes to previously agreed functionality were required and implemented. The user representative and user groups had settled on system functionality that was not agreeable to other users; significantly, the objecting users possessed enough political muscle to overturn the agreed requirements and have the system functionality changed to suit their preferences.
Consequently, perfectly legitimate user requirements were discarded for what were essentially political reasons. A very similar scenario existed in relation to the change management issues that arose during the life of the GIS project: here, business managers were aware of the potential for significant change management problems to develop when the system was implemented. These problems related to the radical nature of the change in the work-related roles, responsibilities, and remuneration of one of the user constituencies involved. Interestingly enough, while users participated in and were satisfied with the system as developed, they were unhappy with the consequences of its introduction. The issues here were quite complex, as two different unions were involved, and members of one, the draughtsmen, were losing out to the planning technicians, who were represented by the larger of the two unions. The business manager involved had left it to the CLC to sort things out instead of dealing with the issue prior to the commencement of development. The absence of adequate managerial attention to these issues meant that, although both systems were developed with the cooperation of users, both projects encountered user resistance at the implementation stage.
EPILOGUE

The problems described in the previous sections were not unique in Telecom Éireann; similar tribulations beset other instances of information systems development and implementation within the company. The weakness in Telecom's approach to change management at this time (1995/96) was commented on by the IT Director, who stated: "What we have been doing here at IT is just reacting to the business needs which up to now have been ad hoc and poorly articulated at best; this has been disastrous for us here. [The new CEO] has changed all that. One of my CSFs is to ensure that business process engineering gets off the ground in Telecom. I'm answerable to him on that. We all recognize the need to change the way we do business; if we don't we won't survive. It's as simple as that. Getting the business sponsors to sign up via a project charter is only part of the solution. We need then to take ownership … The unions are quite powerful, nothing gets done without their say-so. That's the other challenge for us. I don't mean beating them over the head or anything … we need to get them on-side as well." The results of a strategic business review conducted in the early part of 1995 helped chart future business strategy, a major outcome of which was the recognition of the need for a comprehensive approach to participative decision-making over the life of the strategy.
Accordingly, the company's new CEO set up a Joint Strategic Consultative Group (JSCG) in 1995 with the unions to give effect to a partnership approach to organizational change, particularly in relation to IT, as it was the CEO's intention to have IT systems support and enable Telecom's transformation. A framework agreement for the transformation of the company was drawn up at this forum in consultation with the unions. In this agreement, the finer details of the company's "Organizing to Compete (OTC)" strategy were fleshed out: employee participation in the achievement of all business objectives and IT-enabled change formed the basis of the strategy. However, in order to achieve commitment to the desired business process changes, the associated reduction in staff numbers and operating costs, and increased quality of service to the customer, the company entered into an Employee Share Ownership Plan (ESOP) agreement that gave employees a 14.9% stake in the company. This was tied to the achievement of the transformation goals and was effectively a blanket, up-front agreement by the unions on all issues of organizational change surrounding systems development and IT implementation. As a direct result of this strategy, the CEO established a Business Process Design Directorate in 1997; in the following years, the IT Directorate, in collaboration with the new directorate, actively developed future methods of operation (FMO) for the company's business functions with Bellcore (US). Here, mimetic and normative forces influenced the formation and reengineering of Telecom's core business processes: chiefly, best practice in US telecommunications companies, as interpreted by the Bellcore consultants, and the business processes in place at KPN and Telia, as interpreted by managers from those companies who had taken up senior positions within Telecom. It is perhaps ironic that the GAS, whose implementation caused so much controversy among different constituencies of users between 1995 and 1997, was fully integrated with the Fault Handling System in 1999 and then ported to the company's intranet platform. The new platform was implemented across the company's operational units on a phased basis over the following two years. This new platform permitted operational and repair staff full access, via GSM-enabled laptop computers, to the now integrated fault handling and appointment systems. The upshot of this piece of business process reengineering was that all but two of the company's regional fault handling centers were closed in April 1999, and the staff redeployed to other duties. This would not have been possible without the ESOP deal and the accompanying agreement to IT-related change by the unions. The political role that the CLC played in the past has all but been eliminated, along with the attendant problems described herein.
DISCUSSION AND CONCLUSIONS

The previous sections illustrate that Telecom Éireann's institutional context played a pivotal role in shaping and influencing all aspects of the content and process of user participation in systems development (RQ1); they also provide ample evidence that it was institutional mechanisms that helped resolve change management problems in this company (RQ2). Clearly, as the section on change management and the epilogue have illustrated, comprehensive organizational policies and structures are required to accommodate matters of organizational change surrounding systems development—as was the case in the organization studied. However, management in Telecom Éireann realized that even if organizational policies on user participation and change management are included as "rules of the game" (in terms of institutional mechanisms and arrangements that structure organizational actors' roles and responsibilities in systems development), they are no guarantee of actors' abilities or intentions to "play" either competently or fairly. This study revealed that problems could arise even within an institutional context that possessed mechanisms to manage and agree upon change prior to development and that addressed issues of user participation. Here, the interplay of project-, process-, and user-related factors coalesced to produce systems of high quality, which would have been acceptable to users if business managers and the unions had been competent and fair in their dealings from the outset and had properly addressed issues related to change in business processes and their effects on organizational actors. Telecom's management introduced additional institutional arrangements, described in the epilogue to this case study, to address these deficiencies. The lesson to be drawn from all this is that if organizations are to ensure that the management of change proceeds smoothly, and if the benefits of user participation are to be maximized, then a combination of process redesign—to have business managers focus properly on and provide a detailed plan of their requirements prior to development—and stakeholder agreement—to have organizational actors buy into change up front—is required. The third research question (RQ3) inquired as to which of the model's factors were critical in shaping and influencing user perceptions of product quality and user acceptance of the end product. As we have seen, an appropriate institutional context provides the bedrock on which successful user participation is based. This study's findings indicate that issues of (a) project complexity and degree of task structure; (b) user/analyst relationships and communication; and (c) users' willingness and ability to participate all critically influence users' perceptions of system quality. While these issues exert a general effect on user acceptance of an implemented system, it was found that institutionally mediated factors exert a specific influence on the level of user acceptance of systems, viz., (a) the expected change wrought by the new system; (b) user influence and power relationships; and (c) user commitment to development-related change.
With respect to RQ4, it would, as this study's findings suggest, be difficult and perhaps dangerous to attempt to disentangle and isolate the factors that relate specifically to user participation from those that relate to the management of change; for ultimate system success, both issues need to be addressed. Nevertheless, an answer to RQ4 has been provided en passant in describing the answers to the other research questions, particularly RQ3. The model presented herein, and empirically tested in a constructivist study of user participation and management of change, makes a significant contribution to understanding what are complex phenomena. It provides holistic confirmation of what were fragmented conceptualizations and findings in previous studies. This study's conclusions have laid bare one of the more common myths of the IS field, namely that user participation may lead to increased user acceptance of systems by reducing user resistance to change (Ives and Olson, 1984). As the findings of this study have illustrated, an organization's institutional context is of primary importance in this regard. Finally, while this study provides a much-needed, empirically based understanding of the issues surrounding user participation and management of change in systems development and implementation, the conceptual model presented herein can be employed by future researchers as a framework for investigating such complex phenomena in order to establish a body of cumulative research in the area.
ENDNOTES

1. The Sunday Times (Irish Edition), News Review, 15 April 2001, p. 1.
2. Statement of Company Position on Current Industrial Relations Issues, October 1995.
Chapter X
Studying the Translations of NHSnet

Edgar A. Whitley, London School of Economics and Political Science, UK
Athanasia Pouloudi, Brunel University, UK
This paper explores the ways in which innovative information systems projects take on a life of their own. The paper begins by reviewing some of the more traditional ways of making sense of this phenomenon—resistance to change, escalation, and unintended results—before introducing the sociology of translation. This provides a theoretical framework for viewing the transformations that an information systems project undergoes. The framework is then applied to the case of the NHSnet project in the United Kingdom. Using the language of sociology of translation, we consider the underlying stakeholder relations in the case study and draw more general conclusions for the responsibilities of stakeholders involved in an information systems lifecycle.
INTRODUCTION
Few information systems projects follow a straightforward path from initial idea through to widely used working system. Instead, what typically occurs is that the nature of the innovation and the purpose of the project change many times during the implementation process. Much information systems research attempts to explain what goes on over the life of the project. The purpose of this paper is to add one new element to the range of conceptual tools available to the information systems researcher trying to
understand what happens to a particular innovation and to demonstrate how the insights from using this tool can add to our understanding of information systems implementation. The paper begins by reviewing some of the main ways in which the changes that an information systems project undergoes have been conceptualized. These include unintended effects, resistance to change, and escalation. The paper then introduces the notion of translation that has been used in the field of science studies and shows how it can be applied to the study of information systems, paying particular attention to the kinds of translations that an information systems project can undergo. The paper then presents the case study, namely the introduction of a new shared network in the UK National Health Service (NHSnet). This project is seen as a series of translations, and the paper explores some of the main translations and discusses their implications for relevant stakeholders. The paper ends with a summary and discussion of the benefits of using this approach to analyze the “life” of information systems projects.
UNDERSTANDING THE LIFE OF A PROJECT There are many different ways in which information systems researchers have tried to conceptualize the life of a project. One approach is to describe the events associated with a project and to talk about them in terms of anticipated, unanticipated, and emergent changes. Another approach is to talk about the changes in terms of resistance to change and the mechanisms that can be used to counter the implementation of the system. A third approach is to consider the project as potentially escalating out of control.
Unanticipated Changes
Orlikowski (1996) describes the introduction of Lotus Notes as a groupware solution in a firm in the software industry. The firm, pseudonymously known as Zeta Corporation, is the developer of a range of powerful software products in the area of decision support and executive information. Its tools are based around the Omni fourth-generation language and allow users considerable flexibility in how to analyze their data. As a consequence, many users have technical queries about how to make the products perform particular tasks. The groupware system was introduced into the product support area to enable the sharing of information about problems among the support team (Orlikowski, 1996, pp. 25–27). The organization had previously used a stand-alone system to store details about client problems. The existing system had limitations in terms of inconsistent usage, poor data quality and limited search capabilities. The intention behind the new system was to pool
all the data in one shared system. Thus, advisors would be able to draw on the experiences of all previous interactions, rather than just their own. As an illustration of the success of this, the number of records of client problems in the database grew from 4,000 to 35,000 in the two years from December 1992. As Orlikowski notes, however, the system was successful in part because of the particularly cooperative culture in the department. Thus, if the same technology had been introduced into an organization with a less cooperative culture, it is unlikely that a similar success would have been noted. In describing the changes that arose as a result of the system, Orlikowski differentiates between anticipated changes, opportunistic changes (which are not anticipated ahead of time, but are introduced purposefully as a result of an unexpected opportunity or event), and emergent changes, which arise spontaneously out of local innovation. An example of an anticipated change arising from the system was the ability of managers to control the resources in the department more easily; by being able to monitor the number of calls, they were able to change the allocation of work dynamically. An opportunistic change that arose from this was the decision to introduce the role of support partners, who had specialist knowledge and who could support less experienced staff who handled the front line of calls. An unanticipated consequence was the way in which these front-line staff dealt with their new support partners. The organization discovered that many junior specialists were reluctant to reassign calls to their support partners; often they felt that tackling difficult problems would help them to develop their own careers, whereas on other occasions, the reluctance arose from a concern not to be seen to be dumping problems on their support partners. Unfortunately, Orlikowski’s analysis goes no further than differentiating between the three types of change. No explanation is given for why emergent changes arise, how they could be prepared for, or how they can be controlled.
Resistance to Change
A second way of conceptualizing the changes that a project undergoes is through the notion of resistance to change. This is perhaps best typified by the classic paper by Keen (1981), which outlines a variety of approaches that have been used to counter the implementation of a new information system. Amongst the counter-implementation games identified by Keen are easy money, budget, and territory, whereby a project is backed because it can be used to support some needed activity within the player’s sphere of influence (p. 29). Another game is tenacity, whereby a project is kept incomplete until one’s particular terms are satisfied. Odd man out is used by players who give only partial support and withdraw when the project faces trouble (p. 29). Other
games identified by Keen include up for grabs, where a project with only lukewarm support is taken up by another player, and reputation, whereby a manager gets credit for being a bold innovator but leaves the project before the implementation stage and hence avoids any backlash arising from any problems that exist (pp. 29-30). Thus, according to Keen, a project is under constant threat of counter-implementation, and management must be prepared to take counter-counter-implementation measures to ensure that the project succeeds. A similar argument is put forward by Markus (1983), who highlights the political aspects of any system implementation, seen from a perspective that emphasizes the effects of the interaction between people and systems.
Escalation
A third approach to understanding the phenomenon is through the notion of escalation. Keil (1995) defines escalation as continued commitment in the face of negative information about prior resource allocations, coupled with uncertainty surrounding the likelihood of goal attainment. In order to study the factors that can lead to escalation, Keil describes the experiences that CompuSys (a pseudonym) had with a project called Config. Config was a rule-based expert system that was designed to help the company’s sales force produce error-free configurations prior to producing pricing estimates. Previously, the company had made estimates based on incorrect configurations and had to bear the cost of any discrepancies itself. The organization had had a positive experience with another system (Verifier), which was used to produce correct system configurations, and was therefore expecting that this project would be successful as well. The Config project was finally terminated 13 years after it was initiated. During this time, feedback about the project was predominantly negative. Eight years after the project was initiated, usage of the system had dropped to less than 2% of all transactions. A number of explanations were given for the continued support of the project in the face of such negative assessments. Amongst the key arguments identified by Keil are the fact that the project was perceived to have a large net present value, that the project was regarded as an investment in research and development, and that the problems appeared to be temporary setbacks rather than fundamental problems of concept. Moreover, the organization had a history of successful projects in this area, and the manager of the project was taking a high degree of responsibility for the success of the project. Indeed, Keil argues that the involvement of a strong project champion meant that the project was defended at times when it might legitimately have been cancelled.
Summary
Clearly there is overlap among these approaches; for example, what one observer sees as an unanticipated change could be viewed by another as an attempt at counter-implementation, and by a third as a project that is potentially escalating out of control. What all these approaches implicitly share, however, is a feeling that these occurrences are undesirable and avoidable. In contrast to this view, the next section presents an approach which takes it for granted that a project is likely to be changed over its lifetime and instead tries to understand the ways in which these changes come about. With this understanding, it is possible to add a managerial agenda that can try to minimize these changes, but the approach still accepts that even then success is not guaranteed.
THE SOCIOLOGY OF TRANSLATION
The sociology of translation has its origins in social studies of science and the question of how statements come to be accepted as facts. Setting aside questions of ontology and epistemology (Searle, 1999; Sokal & Bricmont, 1998), a statement only becomes a fact when other people use it. A scientist may discover some phenomenon in nature, but this will only become a fact when it is accepted as such by others (Latour, 1987). Clearly there are important questions about how others come to accept the statement as a fact which are not easily answered (for an appreciation of the complexities here, see Barnes et al. [1996]; Collins and Pinch [1993]; Biagioli [1999]; Fuchs [1992]), but the social process whereby statements become transformed into facts is also important and has direct parallels with the way in which innovations come to be accepted within an organization. As Latour puts it: “(A) sentence may be made more of a fact or more of an artefact depending on how it is inserted into other sentences. By itself a given sentence is neither a fact nor a fiction; it is made so by others, later on. You make it more of a fact if you insert it as a closed, obvious, firm and packaged premise leading to some other less closed, less obvious, less firm and less united consequence” (1987, p. 25). Thus, the creation of facts is very much a collective process. If a statement is made that solves an ongoing dilemma but no one reads it, then it is as if it had never been made. “Fact construction is so much a collective process that an isolated person builds only dreams, claims, and feelings, not facts” (1987, p. 41). The issue, therefore, becomes one of making other people take up the statement and use it, and there are direct parallels for the case of an innovation. An innovation only succeeds if other people can be convinced to make use of the new system. Unfortunately, there is no guarantee that people will take up
the fact or innovation, nor that they will use it in the way intended. The innovators must therefore act at a distance (Miller, 1992) to achieve two potentially conflicting ends: they must enroll others so that they participate in the use of the innovation, and they must control their behaviors in order to make their actions predictable and commensurate with the intentions of the innovator (Latour, 1987, p. 108). The case of information systems innovations is made even more complex by the fact that the individuals who sponsor a new system are often very different from those who develop it, who are again different from those who will use the resulting system. The question of whose innovation it is, in these cases, is particularly complex. It is common to find project sponsors trying to convince the users of the benefits of a new system and then the developers trying to reconvince them of the benefits of the particular system they have ended up delivering. For the innovator to be successful, therefore, two goals must be achieved, or more accurately, the observer must be able to see the actions of the innovator as matching these goals—it is always possible that this is not actually what the innovator intended. First, the interests of the other actors must be translated into interests that match those of the innovator, and then the other actors must be kept in line and under control. The first activity, of translating interests, can be done in a number of ways, including the situation where the interests of the other actors already match those of the innovator (“Here is a system that addresses the concerns you have”). Thus, developers provide a system that is intended to address the concerns of a particular user group; experience has shown that such a straightforward solution is unlikely. Another situation arises where the innovator tries to persuade the other actors that they should want the solution proposed (“You should use this system”). The innovator may persuade the other actors that they have a problem and that the innovation provides a solution to that problem. This persuasion may require the users to redefine their identity. For example, the developers of video recorders had to persuade television viewers that they were not just people who missed a TV program, but that this was an avoidable problem. If they chose to use a video recorder, they could cease being people who missed their favorite programs and instead be people who had the opportunity to organize their lives more flexibly, as they could always record programs when they were out. Again, such situations are uncommon; a more likely scenario is where the other actors can be persuaded to adopt a new innovation that is almost like what they want (“It does most of what you want, so why not make use of it”). Thus, they can take up the innovation if they only have to change their identity slightly rather than fundamentally. Another approach is to reshuffle the interests of the other actors, to make them more amenable to the innovation.
This can be done by displacing the goals of the other actors—if they don’t appear to have a problem, then why not create a problem for them (for which you have the solution, of course)—or by creating new goals for them and then becoming indispensable for the solution (Latour, 1987, pp. 108-121). There are obvious parallels here with some of Keen’s strategies of counter-implementation outlined above. For example, the easy money, budget and territory games are used by people who are trying to translate the interests of the original project to meet their own ends. Similarly, the up-for-grabs game translates a lukewarm project into the goals of the counter-implementer. The Config case demonstrates the various ways in which the project was translated over the life of the system. What began as a system designed to provide support for the sales force by enabling them to produce accurate quotations was, at various times, a project that existed because of its huge financial potential, a project that represented a major investment in research and development and hence would provide the experience for future developments, and a project that was closely allied to the reputation of its manager. In each case, the project was translated from its initial intentions and adopted by new people for different reasons, changing the shape of the system substantially. Having translated the goals of the actors to match those of the innovation, the next stage is to maintain the innovation on the path that has been set up. Here it is important to realize that control over the innovation is only as strong as its weakest link. For example, a project may have been initiated with the support of a senior manager. If this manager withdraws that support, then control of the project may be weakened. The reverse situation occurred in the Config case, where the presence of the project champion kept the project going long beyond its feasibility. The problems of maintaining control and remaining indispensable are also apparent in the case of the “unexpected changes” in the groupware project at Zeta Corporation. The unanticipated (as opposed to emergent) changes arise when control cannot be maintained at a distance. Thus, the introduction of the system limited the control that the organization could have over its front-line help staff. Managers were able to shape these jobs by separating out tasks between front-line staff and support partners. They were unable, however, to control how the front-line staff undertook their work. Zeta was unable to stop these people from translating their work to their own ends (i.e., they didn’t transfer calls to their support partner, in part because they didn’t want to be seen to be ineffective operators, as this would affect their career development plans). What management could do, however, was revise its own behavior (e.g., revise the reward schemes) in order to encourage (or coerce) operators to work as management envisaged. An interesting aspect of the Zeta
case was the constant circle of translation, whereby the behavior of one group had an impact on the behavior and perceptions of the other.
VIEWING THE TRANSLATIONS IN AN INFORMATION SYSTEMS PROJECT
If we accept that information systems projects are likely to undergo a series of translations over their life, we now have a useful technique for viewing the life of an information systems project and understanding what happens to it. The technique involves tracing the project over its life and identifying the various translations it undergoes. For each of these translations, we are then able to determine the kind of translation involved, the reasons for it, and its effects. This approach focuses on particular events, and it may be necessary to investigate the context of each of the translations in more detail. This technique will now be used to describe the life of a project in the UK National Health Service (NHS) associated with networking the various actors into an integrated NHSnet.
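For readers who prefer a concrete rendering of this technique, the sketch below shows one minimal way a researcher might catalogue translations as structured records, one entry per translation, capturing its kind, reasons, and effects as described above. This is purely an illustrative aid, not part of the original analysis; the field names and the example entry (which paraphrases Translation 1 from the case below) are our own hypothetical choices.

```python
from dataclasses import dataclass

@dataclass
class Translation:
    """One translation in the life of an information systems project."""
    when: str     # approximate date or period of the translation
    kind: str     # e.g., technology, stakeholder roles, mode of adoption
    reasons: str  # why the translation occurred
    effects: str  # consequences for the project and its stakeholders

# Hypothetical example entry, paraphrasing Translation 1 from the case below:
t1 = Translation(
    when="1995-1996",
    kind="technical system -> confidentiality issue",
    reasons="BMA challenged the privacy safeguards of the network",
    effects="NHS Executive commissioned a report proposing encryption",
)
print(t1)
```

Recording translations in this uniform way simply makes the three analytical questions (kind, reasons, effects) explicit for each event; the analysis itself remains interpretive.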
Background to the NHSnet Case Study
The Information Management Group of the NHS Executive, the body responsible for the execution of health care policy in Britain (NHS Executive, 1997b), launched the NHS-wide networking project in 1993 as “an integrated approach to interorganizational communications within the NHS” (NHS Executive, 1994, p. 6). The objective of the project has been to enhance communication and information exchange between various healthcare providers and administrators. It was thus intended as a response to a number of problems experienced in NHS communications, including inefficient purchasing, lack of integration, fragmented networks, limited future potential, aging private radio systems, and insufficient resources (NHS Executive, 1994). The NHSnet is expected to support data communications that cover a variety of information flows across different levels. At a national level, it will support messaging between health authorities and the NHS Central Register; at a regional level, it will support access to centralized data processing (finance, payroll, etc.); at a local level, it will support links between primary care doctors (GPs) and hospitals (for the exchange of pathology test results, referral/discharge details, and waiting list inquiries), as well as between GPs and health authorities (NHS Executive, 1994). More generally, the NHSnet infrastructure is expected to cover a
variety of business areas, including patient-related service delivery, patient-related administration, commissioning and contracting, information services, management-related flows, and supplies of NHS organizations (NHS Executive, 1995). Future links across these areas will rely less on paper and telephone communication and increasingly on EDI and electronic mail messaging. Since 1996, wide area networking services for data and voice have been available for purchase: the NHSnet is operational. Yet, despite the technological success of the project, and in particular its completion within schedule, its implementation has suffered from a lack of acceptance by the medical profession. Doctors remain sceptical of the security of this network. Their concerns have been overtly voiced, primarily through the British Medical Association (BMA), the national professional body of physicians in the United Kingdom, but also by their computer security consultants. These parties fear that patient data may be misused by both NHS members and external parties (Willcox, 1995; Pouloudi & Whitley, 2000; Pouloudi, 1998). At the moment, although the network is used for administrative and purchasing purposes, its use falls well behind the initial NHS Executive plans, which regarded the exchange of patient information as an important implementation objective.
Translations in NHSnet
The NHSnet presents an interesting case of an actor-network that has undergone a series of translations. These translations were traced by recording the viewpoints of the stakeholders of the network (Pouloudi, 1998), that is, those who participate in, influence, or are affected by it, and by following the network over time, as stakeholders used it where possible to promote their interests or the interests of those they claim to represent. These stakeholders were identified using the iterative method suggested in Pouloudi and Whitley (1997). The following paragraphs present the problems in the implementation of the system and break these down into a series of translations that the project has undergone. Although the NHS Executive had piloted the project with doctors at an early stage, it was only after the network started being implemented and adopted at the local level that the doctors, through their representative body, the British Medical Association (BMA), reacted to the use of the network, arguing that it did not safeguard the privacy of medical information. Further concerns were raised when it emerged that doctors would be expected to pay for the service, even though the technological infrastructure was considered dated and unreliable. As a result of these concerns, doctors have threatened not to participate in the electronic exchange of data unless they can be convinced that the privacy of patient data is safeguarded. Yet, the NHS Executive have
argued that the proposed system is better than its predecessors, ad hoc manual and electronic exchange systems: data confidentiality was cited as one of the shortcomings of the previous situation and as something that the NHS-wide networking infrastructure would safeguard (NHS Executive, 1994). The 1996 Healthcare Computing conference (18-20 March 1996, Harrogate, UK) provided the opportunity for a direct confrontation between the two sides on the matter: “The measures we have put in place are to stop anybody who is unauthorized getting at data from, and via, the [NHS-wide networking] system, and one of the key parts of that system is a strong authentication challenge,” said Ray Rogers, then Executive Director of the NHS Information Management Group. The conflict has since receded somewhat, as the NHS adopted the BMA’s suggestion to encrypt data, published a report on data encryption (NHS Executive, 1996), and thus improved the chances of cooperation on data security with the BMA (Creasey, 1996). Given the advantages of electronic exchange of healthcare data, there has been a general optimism that the NHSnet will be used and that the debate will be resolved in a way that leaves both of the currently conflicting parties satisfied. Underlying the confidentiality debate, the most visible conflict in the NHSnet system’s implementation, we can distinguish three interesting changes in the nature of the NHS-wide networking project, as the various stakeholders understand or present the network from different perspectives in order to serve their interests.
Translation 1
First, the debate between the BMA and the Information Management Group on confidentiality translated the network from a technical system (a network infrastructure to support information exchange in the NHS) into a system threatening the privacy of medical information, an issue of confidentiality. This issue has been at the heart of the debate because the doctors consider it a key responsibility (and therefore part of the identity) of their profession. In response to this reaction, and in order to avoid the cost of another spectacular system failure in the NHS (cf. Beynon-Davies, 1995), the NHS Executive (and the government) have responded with a reconsideration of the security of the network. The “Zergo Report” (NHS Executive, 1996) proposed the use of encryption to safeguard the privacy of medical records. While the BMA debated which encryption algorithm would best satisfy the NHS needs, it is clear that, as a result of this report, the NHS Executive has tried to translate the network, and the discussion about its adoption, back into a technical problem, that of encryption. Their suppliers have supported this view: “Firewall-to-firewall encryption could potentially act as an
enhancement to NHSnet security and go some way to placating the BMA” (McCafferty, 1996). To meet this challenge, the BMA formed alliances on the one hand with privacy activists (e.g., Privacy International) and academics, in order to raise the profile of the debate, and on the other hand with security consultants, in order to challenge the technical features of the network as well (Pouloudi & Whitley, 2000). Following the debate, the NHS Executive has now made explicit its view of the NHSnet as a “secure national network” (NHS Executive, 1998a), effectively redefining the network.
Translation 2
The alliance between the BMA and security consultants resulted in the security consultant to the BMA at the time, Ross Anderson, becoming the spokesperson of the BMA on the NHSnet implementation: “We have to take a long hard look at the IM&T strategy and rewrite it so that it is centered on clinical concerns rather than administrative concerns; so that it is oriented towards patients rather than administrators and optimized for the delivery of healthcare rather than as a means of enforcing bureaucratic power and control from the center” (Dr Ross Anderson, Security Advisor, BMA). The debate about the capability of NHSnet to safeguard confidentiality was most intense when Ross Anderson was acting as security consultant to the BMA and Ray Rogers was Executive Director of the NHS Information Management Group. Both considered the NHSnet a key system: one that endangers the privacy of medical data, or one that is part of a vision to modernize the NHS. To a certain extent, the debate was perceived as a personalized issue, perhaps because both took ownership of the debate and saw the progress of the network as their “mission.” Some statements reflected an almost personal rivalry (e.g., British Journal of Healthcare Computing & Information Management, vol. 13, no. 3, 1996, p. 6). This was noted by those involved: “I regret that discussions between the Department [of Health] and the BMA have been conducted in such a public and fraught environment” (Rogers, 1996). It was also noted by those reporting on the conflict: “that debate was not at all times marked by reason and moderation” (Fairey, 1998). Ray Rogers has since been replaced by Ann Harding in the Director’s post of the Information Management Group. Subsequently, the group was also dissolved and a new NHS Information Authority established to provide effective guidance in the implementation of the NHS strategy (NHS Executive, 1998b). Ross Anderson, while still a privacy advocate, is no longer acting formally as a security consultant or spokesperson for the BMA. Interestingly, as neither of these previous protagonists of the confidentiality/security debate holds the same
position at the moment, the nature of the debate on the NHSnet has changed again and has become less intense.
Translation 3
At the same time as the Information Management Group was disbanded, the NHS put forward a requirement for all computerised general practices to connect to the NHSnet by the end of December 1999. The NHSnet is now formally described as “the best medium for the transfer of clinical information” (NHS Executive, 1998b). It is not clear, however, whether the compulsory link of GPs to the network will be equivalent to using the network as envisaged by either the doctor community or the NHS Executive. In any case, the network has undergone another translation. Rather than being a system that doctors will want to use as originally intended, because it speeds up the delivery of healthcare, facilitates communication with their peers, or is more secure than the systems used previously, it is a network that they are required to use and pay for: “Why are we still being pressurized to join a network with such poor performance and functionality, run by people without any wish to deliver what ‘the users’ want?” (GP). This is a translation that is common in information systems implementation and that often underlies resistance-to-change phenomena. In interorganizational systems in particular, where the asymmetry of power between sponsor and adopters is often prominent (Cavaye, 1995), the importance of end-user requirements tends to be undermined by the sponsor’s policy and priorities.
Translations in the Broader Context
Our discussion so far has looked at the network and those events that were directly related to its progress. However, these translations should be considered in light of a broader set of changes in the context, which can contribute to our understanding of the NHSnet translations. Because of the public and political character of the NHS, and because the government sets its strategic direction, changes in the political scene or legislation in the UK have a direct impact on the translations of the NHS. The following list gives an indication of the political scene through a series of additional events and publications with a direct impact on NHSnet:
May 1997
• Labour government elected: “In my contract with the people of Britain I promised that we would rebuild the NHS” (foreword by the Prime Minister in Department of Health, December 1997).
• “The new NHS,” which “replaces internal market with integrated care” (Department of Health, 1997).
• “Report on the review of patient-identifiable information” (Caldicott Committee Report). This report was the result of the
“increasing concern about the ways in which patient information is used in the NHS in England and Wales and the need to ensure that confidentiality is not undermined” (NHS Executive, 1997a).
• White paper on the Freedom of Information Act, “to legislate for freedom of information, bringing about more open government.”
September 1998
• Information for Health (NHS Executive, 1998b): “the Information Management Group will be disbanded and replaced with an NHS Information Authority to provide a lead for the new partnership development and to ensure effective guidance is given for successful delivery of the strategy.”
• Deadline announced for computerised GPs to link to NHSnet (by the end of 1999).
November 1998
• New Data Protection Act, which will enhance the protection afforded to patients.
It is therefore evident that, as the NHSnet underwent a series of translations, so did the NHS, prompting, in turn, further changes for the network. Also, the UK’s membership in the European Union and the need to comply with legislation on data protection have implications for the translation of the confidentiality debate, especially for the attention given to particular issues and the way in which these are “translated” in the NHSnet case. It is worth noting that the impact of each change cannot be considered in isolation. Each piece of legislation also undergoes a series of translations as a consequence of the diverse interests that stakeholders—at a European and national level—serve, or wish to be seen to serve.
FOUR MOMENTS OF TRANSLATION
The previous section illustrated some of the translations characterizing the NHSnet. These are translations in its technology (Translation 1), in the personal roles of stakeholders (Translation 2), in the mode of adoption of the system (Translation 3), as well as in the broader context. Although we have separated them out for the purposes of our analysis, these translations are closely intertwined, not least because stakeholders respond to the views and changes introduced and supported by others, thus introducing new changes. The sociology of translation literature presents and explains such changes through “four moments of translation” (Callon, 1986; Introna, 1997). It is worth noting that these “moments” are witnessed, but cannot be neatly separated, in the NHSnet case. This is because each stakeholder of the network, and each related technology or piece of legislation that has an impact on the network, goes through similar ‘moments’ at different points in time. The following paragraphs illustrate how these four moments have been witnessed in the NHSnet implementation, with supporting statements from various stakeholders.
Problematization: an actor defines an “obligatory passage point” (an actor network linked by discourses presenting the solution of a problem in terms of resources owned by the agent that proposes it [Callon, 1986; Latour, 1987]). In the NHSnet case, for example, the response of the NHS to confidentiality concerns with the publication of the Zergo report meant that encryption algorithms became, at that point, the obligatory way to discuss the confidentiality issue: “For the first time the NHS has a strong, total security package. How much more does the Department [of Health] have to do before the BMA acknowledges what a large step forward this package is, and supports what we have put in place? What else is there to do?” (Ray Rogers, Executive Director of the Information Management Group at the time). In the broader sense of this translation moment, we can consider the use of the NHSnet as the primary system for discussing information exchange in the NHS. Intéressement: actors try to impede alliances that may challenge the legitimacy of the obligatory passage point (or, conversely, form alliances to support it). In the NHSnet case, this has been evident in the rhetoric used by the NHS Executive to establish the credibility of the network: “The NHSnet is more secure than all the other networks that are out there and will continue to be used until we manage to replace them” (Ray Rogers). This perception was reinforced with the publication of the Zergo report, where, as we noted previously, the debate was translated to focus on the issue of which encryption algorithm would be appropriate for the needs of the NHS and the rights of the patients. Similarly, the formation of the Caldicott Committee obliged the BMA and its allied stakeholders to become less polemical toward governmental proposals: “The Caldicott Committee failed to lay down hard and fast rules for patient confidentiality, but because it produced a list of ‘good intentions,’ it certainly made it harder for the BMA and other concerned organizations like DIN to continue to breathe fire and brimstone about matters. In this the commission probably served its purpose well” (Chairman of the Doctors’ Independent Network). Enrollment: bargaining and concession—alliances are consolidated. In the NHSnet case, some of the stakeholders did not engage in the debate but formed, instead, an alliance with those stakeholders standing for their interests: “Each local medical committee decides whether it supports the BMA’s position and, so far, each committee has universally supported the BMA’s position on this to the point that there was no dissent and that’s because confidentiality is so closely linked to the general practitioners’ hearts, really” (Secretary to a group of local medical committees).
Mobilization: defining the legitimacy of a spokesperson. As the debate about the confidentiality of patient data has been almost monopolized by the NHS Information Management Group and the BMA, it is not surprising that some of the “by-standing” stakeholders’ perceptions of this debate reflected either acceptance (as in the view of the local medical committees above) or reservations about the role, real motivation, and legitimacy of the protagonists: “The BMA are on one hand rendering a public service: making sure that patient confidentiality is maintained. But, on the other hand, something else may come out; the BMA will seek some payoff for sharing information. Let’s not forget that the BMA is essentially a trade union, representing the interests of doctors, but cannot be accused of doing so openly because they also have professional concerns for the patients” (member of the NHS Central Communications Management Group). Other stakeholders voiced their concern about the absence of another, in their view more appropriate, spokesperson: “There was one representative of a patient association at the meeting, and I was appalled because they said the NHSnet was a good thing. We have a problem with these people. It is inconceivable that the BMA moved in this debate faster and made suggestions before the patients’ associations even made a press release. This will come down as the major anomaly in history” (Director General of Privacy International). Following from this analysis, which better highlights the tensions between stakeholder groups that underlie the translations of the network presented in the previous section, we would argue that all stakeholder perceptions become important to our understanding of translations. Indeed, they are useful in illustrating changes in alliances, attitudes, and expectations for the future. At the same time, the translations have implications for the roles and relationships of stakeholders as well. In the NHSnet case in particular, the translations have a direct impact on professionals and patients, as they have to react to the changes and reconsider their relations with other participants in the healthcare delivery process.
IMPLICATIONS BEYOND THE NHSNET
This exploration of the stakeholders in the NHSnet project, their views on the network, and the ways in which they acted to translate the project to better match their own needs allows us to raise some general issues from the paper. These are applicable to other healthcare applications, such as the GPNet in the UK, as well as to other large information systems implementations. In particular, the translations we have discussed signify for stakeholders a need to consider their role in the actor-network so that they can best promote their interests and safeguard their rights, and to do this in a way that doesn’t
shortcut due process (McMaster, Vidgen & Wastell, 1998). The other side of the coin, of course, is that they also need to respect the rights of other stakeholders. Thus, treating others as legitimate stakeholders can be considered part of being a responsible stakeholder. This view, in the information systems and management literature, is often limited to predefined notions of stakeholder roles (e.g., “a manager should make decisions that serve the organization,” or “an information systems developer should develop systems that are functional and useful to the user/customer”). Blyth (1998) defines responsibility as “a legal or moral obligation for bringing about, or maintaining, a certain state of affairs” (p. 259). Thus, responsibilities may be formal and institutionalized or informal and related to a stakeholder’s set of values. Responsibilities can be prescribed, “felt,” taken up to avoid cost or punishment, or enabled by certain factors. Blyth, for example, notes that a responsibility also implies elements of accountability, liability, trustworthiness, and blame (p. 259). However, the extent to which stakeholders are conscious of, or able to carry out, these responsibilities may also be influenced by their interest, power, or perceived legitimacy. The NHSnet case supports these different motivations for taking up a responsibility. Furthermore, these motivations are interpreted differently depending on particular stakeholder perspectives. Going a step further, stakeholders could interpret their stakeholder identity as an obligation to defend their values and interests, either directly or through some representative stakeholder. Direct involvement could mean participation in debates or meetings where their interests and values are discussed. In the case of indirect participation, stakeholders have an obligation to contribute to any formal procedures for representation and to criticize the representative bodies if they fail to represent the appropriate interests and values or if they fail to represent them appropriately (Pouloudi & Whitley, 2000). Thus, the rights of stakeholders (e.g., the right to participate, to be fairly represented, to be considered a legitimate interested party) can also be regarded as carrying an obligation for stakeholders to defend and honor these rights. This obligation will often need to be recognized by the stakeholder groups themselves rather than be expected or imposed by other stakeholders. This is a consequence of the problems relating to the asymmetry of information or other resources, access, power, or perceived legitimacy, and to the diverse interests and values of stakeholders. If the responsibility of particular stakeholders to participate or otherwise act when their interests or values are at stake were institutionalized, less informed or less powerful stakeholders could find that an inability to carry out their duties as stakeholders is interpreted as a legitimate reason for other stakeholders to override their rights. Therefore, there would be a danger of under-representation
of some stakeholder interests. Consequently, stakeholder responsibilities often need to be internalized by the stakeholder group. In practice, this is common amongst certain professional bodies that consider themselves a stakeholder group with a predominantly common set of values and interests. Healthcare professionals are a good example of such a stakeholder group, as their fundamental professional responsibilities have remained essentially unchanged (hence the use of the Hippocratic Oath to this day). Stakeholder responsibilities may be more difficult to define for groups that have been formed more recently and whose representative bodies lack an identity that is well defined or well understood by other stakeholders. Clearly, stakeholders who lack a group identity altogether, such as the patients, rely more on their individual sense of moral responsibility and on their perception of rights and responsibilities as a guide to their behavior and their expectations of other stakeholders.
CONCLUSIONS
The NHSnet case shows that information technology has become part of the day-to-day practice of many healthcare professionals. Thus, they need to be aware of the capabilities and limitations of this technology, in particular to the extent that it may affect their professional responsibilities. Clinicians are not technical experts, and it wouldn’t be fair to expect them to be. However, they need to be aware that the use of information systems is likely to have implications not only for their work processes but also for their relations with other stakeholders. If unable to evaluate these implications themselves, healthcare professionals need to be aware of stakeholders or mechanisms that can help them address technological issues. Recent research reports the case of an NHS hospital trust that was not able to learn from previous information technology failures in the healthcare area and repeated common mistakes (Mitev & Kerkham, 1998). The trust was also unaware of the facilitating role that parties such as the Information Management Group (IMG) of the NHS Executive could play in its systems procurement and development. In cases, however, where prospective systems users in the NHS are familiar with the facilitating mechanisms that stem from the IMG’s role as the information technology experts within the NHS, the perceived legitimacy of such stakeholders will also come into play. For example, the NHSnet case has damaged the IMG’s image, as the group was seen not to take on board important values of other stakeholders, and this arguably contributed to its dissolution. Certainly, the legitimacy question is complicated by other organizational and political concerns that affect stakeholder relations in healthcare. For the developers and sponsors of network systems, the responsibilities are perhaps more
complicated than those of the intended end users. Indeed, unless they succeed in convincing other stakeholders that they have taken their concerns on board, they undermine, on the one hand, the way in which other stakeholders perceive their role and their professionalism and, on the other, the chance of successful adoption and growth of the systems they deliver. The role of interorganizational systems developers can be related, to a certain extent, to the previous discussion on the problems of stakeholder representation. Developers, being knowledgeable about technology, need to understand the perspectives, interests, and values of the users and other stakeholders because, ultimately, they will need to inscribe these in the system they build. Certainly, there is an important set of informal norms that cannot be transcribed into an information system. Also, a system “grows” (cf. Atkinson & Peel, 1998) and undergoes a series of translations when it is used, as stakeholders start using it in “unexpected” ways or use the system as a mechanism to defend or establish values and procedures. It is, therefore, a major challenge for developers to provide systems that are not perceived to conflict with the interests and values of stakeholders and to “sell” those that do. In an interorganizational context, the reconciliation of diverse interests can become the developer’s responsibility. The use of the sociology of translation to analyze previous experience can improve our understanding of information systems change and of the subtleties of stakeholder relations and representation. Stakeholder analysis can help developers understand the scope and difficulties of the task and act according to the requirements of the particular context. More generally, information systems professionals face increasingly complex dilemmas as systems tend to privilege the perspectives of particular stakeholders. The information systems literature has traditionally distinguished between three key stakeholder groups: managers, users, and developers. As systems increasingly become interorganizational and are used in domains where stakeholder relationships are political and changeable rather than commercial or predictable, information systems developers need to be more sensitive to the multiple stakeholders involved, to the complicated, evolving, and context-dependent nature of their understanding of systems use, and to the implications that systems use will have for this broad spectrum of stakeholders. This paper has explored the different stages that an innovative project may undergo. Various approaches for understanding this process were reviewed, although each provides only limited assistance toward a generalizable understanding. After reviewing the notions of unexpected change, resistance, and escalation, the paper presented the sociology of translation as a mechanism for understanding the various stages in an information systems innovation. This approach, drawing from a sociological understanding of the development of
scientific facts, was then used to illustrate the various translations that the NHSnet project underwent in the United Kingdom. The language of the sociology of translation allowed us to see how the basis of the whole project was fundamentally transformed on a number of occasions, and how these transformations were related to the wider context of the system’s development. The NHSnet, at the moment, continues to be an expectation failure (Lyytinen & Hirschheim, 1987) from both the NHS Executive and the BMA perspectives. Indeed, issues like the confidentiality and privacy of medical information have not been resolved. However, as a result of the various translations, including changes in spokespersons, in priorities, and in obligatory passage points, confidentiality no longer appears to be at the heart of the NHSnet implementation problems. At the same time, many technological issues remain unresolved, including the architecture and storage of the electronic health record, as do organizational and political issues such as the responsibilities of stakeholders (e.g., “Caldicott Guardians” are to be appointed in all NHS organizations to monitor the safeguarding of confidential patient information); the network will therefore continue to undergo “translations.” In this paper we explained how these translations are the result of the actions of numerous stakeholders and, importantly, how these stakeholders have a right and an obligation to promote and protect individual rights, such as the privacy of medical information. More generally, our approach to the lifecycle of information systems as actor networks undergoing a series of translations has proved to be an interesting way to study information systems implementation. By considering different stakeholder perspectives rather than restricting our analysis to actor involvement, we had an opportunity to consider the technical, organizational, and political issues shaping an interorganizational system. Our case study is another indication that interorganizational systems are political systems. Politics, as manifested in stakeholder relations but also in the way in which various stakeholders attempt to “translate” the system to serve their interests, is unavoidable and an integral part of an information system. In practice, our approach is valuable to the stakeholders immersed in the situation, in this case healthcare professionals in particular, because it challenges them to make sense for themselves of the translations and of how these may be triggered by other stakeholders’ interests or capabilities. From a theoretical perspective, this approach also enables more general discussions about the rights and responsibilities of stakeholders, thus contributing to the normative aspect of stakeholder theory (Donaldson & Preston, 1995), which has been neglected in the information systems literature (Pouloudi, 1999).
Chapter XI
Predicting End User Performance
I. M. Jawahar and B. Elango, Illinois State University, USA
Research on the influence of attitudes toward computers on end-user performance has reported inconsistent results. The inconsistent results, at least in part, could be attributed to the lack of correspondence between the general nature of the attitude measure and the specific nature of the criterion, end-user performance. Based on Ajzen and Fishbein’s (1980) behavioral intentions model, we argue that attitudes toward working with computers match end-user performance in terms of specificity and relevance, and therefore should be consistently related to end-user performance. In this study, in addition to attitudes toward working with computers, the effects of goal setting and self-efficacy on end-user performance were also tested. Results indicate that attitudes toward working with computers, goal setting, and self-efficacy significantly influence end-user performance. The strong support for attitudes, goal setting, and self-efficacy indicates that end-user performance can be substantially enhanced by shaping end users’ attitudes toward working with computers, teaching end users to set specific and challenging goals, and strengthening end users’ beliefs in their ability to learn and use computing technology effectively. The proliferation of end-user computing (EUC) has been widely reported (e.g., Burrows, 1994). Computer literacy requirements have skyrocketed for clerical and support staff (Bowman, Grupe, & Simkin, 1995) and for many middle and senior management positions (Olsten, 1993). EUC has the potential to influence productivity, competitiveness, and profits. In recognition of this potential, organizations are devoting a substantial portion of their
information technology budget to EUC activities. Given that training can affect the success or failure of EUC within an organization (Bostrom, Olfman & Sein, 1990; Rivard & Huff, 1988), preparing the workforce to use information technology productively has become a high priority in many organizations and is reflected in increased training budgets (Aggarwal, 1998; Finley, 1996). Since the primary purpose of introducing new technology is to improve productivity, organizations expect their employees to learn and apply EUC technology to increase their job performance and contribute to organizational effectiveness. The preponderance of research on end-user performance has focused on attitudes toward computers as a predictor of end-user performance. However, these studies have generally reported inconsistent results (e.g., Kennedy, 1975; Kernan & Howard, 1990; Marcoulides, 1988; Mawhinney & Saraswat, 1991; O’Quin, Kinsey, & Beery, 1987; Roszkowski et al., 1988; Szajna, 1994). Thus, it is not clear whether attitudes in fact influence end-user performance. It is important to identify factors with the potential to influence end-user performance because such knowledge can enable educators and trainers to design better programs and enhance end-user performance. This line of research has practical significance because unless end users learn and utilize end-user technology to improve their job performance, organizations are unlikely to reap the benefits of their investments in training and in EUC technology. The primary purpose of this study is to investigate the effects of attitudes, goal setting, and self-efficacy on end-user performance. The conceptual foundation of this study is based on the dispositional paradigm, which is founded on the premise that individual differences are relatively stable both across time and across situations and can be used to explain and predict behaviors and outcomes (see Allport, 1961; Kane et al., 1995; Staw & Ross, 1985). This paper is organized into four sections, including this introductory section. In the second section, we review the studies that examined the relationship between attitudes and end-user performance and offer plausible explanations for the inconsistent findings. In the third section, we develop hypotheses for the study. Finally, we present the results of this study, offer suggestions to enhance end-user performance, and discuss avenues to extend research on end-user performance.
ATTITUDES AND END-USER PERFORMANCE
Prior research on the relationship between attitudes and end-user performance has reported inconsistent results. About one-half of the studies that
examined the relationship between attitudes and end-user performance have reported a relationship. While some of these studies reported a positive relationship, others have reported a negative relationship. Alternatively, roughly one-half of the studies failed to find a relationship between attitudes and end-user performance. Studies Reporting a Positive Relationship: In one of the early studies on end-user performance, Nickell and Pinto (1986) developed the Computer Attitude Scale (CAS) and investigated the reliability and validity of the scale in five different samples. Nickell and Pinto found that scores on the CAS were positively correlated with the final course grades of students enrolled in an introductory computer class. Scores on the CAS were also positively related to supervisors’ evaluations of the job performance of computer operators. In a study that involved retraining high school teachers to be computer science teachers, Roszkowski, Devlin, Snelbecker, Aiken, and Jacobsohn (1988) found that high school teachers’ attitudes, measured by the Computer Aptitude Literacy and Interest Profile (CALIP) and the Computer Attitude Scale (CAS), were positively related to performance. In another study, involving a large sample of undergraduate business students enrolled in an introductory Management Information Systems course, Jawahar and Elango (1998) reported that computer-related attitudes were positively related to performance in the course. Studies Reporting a Negative Relationship: Studies have also reported a negative relationship between computer attitudes and end-user performance. Rosen, Sears, and Weil (1987) conducted a study with undergraduate students enrolled in computer-based courses and found that computer-anxious students received lower course grades and were twice as likely as non-anxious students to drop out of the course. Hayek and Stephens (1989) found that pre-course and post-course computer anxiety scores were inversely related to performance in computer science courses. Studies by both Marcoulides (1988) and Mawhinney and Saraswat (1991) have also reported a negative relationship between attitudes, measured as anxiety, and performance, measured by course grade. Studies Reporting No Relationship: Researchers have also reported no differences in performance between those with favorable attitudes toward computers and those with unfavorable attitudes. Kennedy (1975), using a sample of inexperienced computer users, and O’Quin, Kinsey, and Beery (1987), using a sample of college faculty and administrative personnel, found that attitudes toward computers were not related to EUC performance. Kernan and Howard (1990) found that computer anxiety was not predictive of course grade. Szajna (1994) reported similar results and noted that the effect of
computer anxiety on performance was inconsistent. In another study involving undergraduate business students enrolled in a required computer skills course, Szajna and Mackay (1995) failed to find a relationship between computer anxiety and performance in the course.
In summary, while some studies have failed to find a relationship between attitudes and end-user performance, others have reported the relationship to be positive or negative. These studies indicate that a relationship between computer attitudes and performance cannot simply be assumed. At least three reasons could be offered for the inconsistent results reported by prior research. First, our review of prior research indicates that many previous studies have used the constructs of computer anxiety and negative attitudes toward computers interchangeably. These two constructs are distinct and, consequently, are not interchangeable; indeed, factor-analytic investigations indicate that computer anxiety and negative attitudes toward computers should be treated as separate constructs (Kernan & Howard, 1990). Second, items such as "Computers are a blessing to mankind," "Computers make it possible to speed up scientific progress and achievements," and "Computers are becoming necessary to the efficient operation of large businesses" reflect the spirit of the "attitudes toward computers" measure (Lee, 1970). Given the spirit of the items used to measure the construct, the rationale for expecting "attitudes toward computers" to be related to end-user performance is not clear. Just because a person has favorable attitudes toward computers does not automatically mean that he or she would be willing to work with computers. Willingness to work with computers could result in a higher rate of system utilization, leading to a higher level of end-user performance. Willingness to work with computers is captured by Rafaeli's (1986) Attitudes Toward Working with Computers Scale. We believe that attitudes toward working with computers will evidence a more consistent relationship with end-user performance than attitudes toward computers. Finally, we believe that besides attitudes, a myriad of individual difference factors has the potential to influence end-user performance. Given the initial stage of research on end-user performance, focusing on specific and theoretically relevant individual difference factors with the potential to influence end-user performance, as opposed to general dispositional factors (e.g., locus of control, self-esteem, conscientiousness), may be more fruitful and certainly more parsimonious. The factor-analytic evidence cited above supports our first assertion; in this study, we test our second and third assertions by
examining the effects of attitudes toward working with computers and of two well-established individual difference constructs, goal setting and self-efficacy, on end-user performance.
HYPOTHESES DEVELOPMENT

Attitudes toward Working with Computers. As argued earlier, it is possible for a person to have favorable attitudes toward computers in general and yet have negative or unfavorable attitudes toward working with one in the workplace. Additionally, according to the behavioral intentions model of Ajzen and Fishbein (1980), behaviors or outcomes are better predicted by attitudes that specifically relate to those behaviors than by more global and general attitudes. Attitude toward working with computers is much more specific and relevant to the performance of tasks that require the use of computer skills than the more general attitude toward computers. Individuals who hold favorable attitudes toward working with computers are more likely to practice and learn EUC skills and evidence higher levels of performance on tasks that require the use of those skills than those who hold less favorable attitudes. In this study, we propose and test the following hypothesis.
H1: Attitudes toward working with computers will be positively related to end-user performance.
Goal Setting. The positive effect of goal setting on task performance is one of the most robust and replicable findings in the psychological literature (Locke & Latham, 1990; Locke et al., 1981). Literally hundreds of studies have been conducted on goal setting with a range of subjects, including children, factory workers, managers, engineers, and scientists (Locke & Latham, 1990, pp. 40-62). Research on goal setting has documented that specific and difficult or challenging goals lead to higher levels of performance than the absence of goals, easy goals, or "do your best" goals (Locke et al., 1981). Locke and Latham (1990) have shown that goal setting, when combined with feedback or knowledge of results, leads to high levels of performance. Thus, goal setting is most likely to improve task performance when the goals are specific and sufficiently challenging, and feedback is provided to show progress in relation to the goal. Setting specific and challenging goals is particularly important when the task is complex or novel. Learning software packages is likely to be challenging to most individuals; consequently, individuals who set specific and challenging goals to learn and use new end-user computing technology are more likely to outperform those who do not set such goals.
H2: Goal setting (i.e., setting specific and challenging goals) will be positively related to end-user performance.
Self-efficacy. Self-efficacy is the belief in one's ability to effectively complete a task or exhibit a specific behavior (Bandura, 1977, 1982). Bandura has shown that self-efficacy beliefs are primarily shaped by performance accomplishments, vicarious experiences, verbal persuasion, and physiological states, and Jawahar, Stone, and Cooper (1992) have illustrated the mechanism through which self-efficacy beliefs affect behavior and outcomes. Theory and research on self-efficacy suggest that, in contrast to individuals with low levels of self-efficacy, the highly efficacious are less apprehensive of change, set more challenging goals, exert more effort, persist in the face of difficulty, and achieve higher levels of performance (Jawahar et al., 1992; Wood & Bandura, 1989). Prior research has also documented the predictive utility of self-efficacy in diverse settings (for a review, see Jawahar et al., 1992). Self-efficacy can be considered a potential antecedent of training effectiveness because individuals who enter training believing they are capable of mastering the training content are likely to learn more during the training. For instance, Gist, Schwoerer, and Rosen (1989) studied managers and administrators undergoing two types of training in the use of computer software. Trainees with higher self-efficacy prior to training performed better than their low self-efficacy peers on a timed computer task at the end of training. When work arrangements are computerized, highly efficacious individuals who believe that they can learn and effectively use new end-user computing technology are likely to outperform those with low levels of efficacy beliefs.
H3: Self-efficacy will be positively related to end-user performance.
METHOD

Subjects
Subjects were 467 undergraduate students enrolled in three different sections of a Management Information Systems course taught by the same instructor at a large state university. Students in MIS courses have been used as acceptable surrogates for end users in many previous studies (e.g., Bohlen & Ferratt, 1997; Palvia, 1991). In this required MIS course, subjects were taught fundamentals of microcomputer theory, word processing, spreadsheet,
and database software packages. Subjects attended two theory class periods and one lab period each week during the 15-week semester. Due to incomplete responses, the final sample was reduced to 431. Of the 431 subjects, 207 were male and 224 were female. A majority (80%) of the subjects were between 20 and 23 years of age.
Study Design and Procedure
In the first two weeks of the semester, a questionnaire was administered to 467 undergraduate students enrolled in a Management Information Systems course to measure the constructs of attitudes toward working with computers, goal setting, and self-efficacy. In addition, subjects were requested to provide demographic information (e.g., sex, age), their social security number, and their section number, but not their names. They were assured of anonymity and that only the investigators (not the instructor) would have access to their responses. At the end of the semester, scores recorded in the grade book obtained from the instructor were matched with subjects' social security numbers for data analysis.
Measures
Independent Variables: Attitudes toward working with computers were measured with a 10-item scale (α = .88) with scale points of 1 - completely disagree, 2 - strongly disagree, 3 - disagree, 4 - neither disagree nor agree, 5 - agree, 6 - strongly agree, and 7 - completely agree. Rafaeli's (1986) Attitudes Toward Working with Computers Scale was modified so that items would be relevant to the subjects. Sample items included "I would like to use computers to do part or all of my work" and "Working with computers is an enjoyable experience." Goal setting was measured with 5 items (α = .70) to ascertain the extent to which subjects set specific and challenging goals with respect to performance. Sample items included "I have set a specific goal, in terms of a grade, for this course," "I have set a challenging goal, in terms of a grade, for this course," and "I have set a specific and challenging goal, in terms of a grade, for this course." Goal setting was measured with a seven-point scale with the same anchor points as those used to measure attitudes toward working with computers. The methodology used by Bandura and his colleagues (e.g., Bandura, 1982; Zimmerman, Bandura, & Martinez-Pons, 1992) was used to measure self-efficacy. Self-efficacy was measured using 5 items (α = .71). We used the same scale points as Zimmerman et al., and subjects rated their perceived self-efficacy on a 7-point scale with scale points of 1 - not well at all, 3 - not too well, 5 - pretty well, and 7 - very well. Sample items
included "I can learn to effectively use the word processing software package," "I can learn to effectively use the spreadsheet software package," and "I can learn to effectively use the database software package."
Dependent Variable: Lab assignments relating to all three software packages, as well as exams on conceptual material, were graded from an objective key and thus did not require any judgment on the part of the grader. Neither the instructor nor the researchers did any of the grading. Assignments and exams were graded and recorded by four graduate assistants. To check the reliability of grading, a random sample of assignments and exams was independently graded and coded by the graduate assistants. Reliability of grading was very high, ranging from .87 to .95. For each subject, scores on assignments for each of the three software packages and on the written exams were computed, so each subject had four scores. Since the four scores were highly correlated, with correlations ranging from .81 to .89, and consistent with prior research (e.g., Kernan & Howard, 1990; Mawhinney & Saraswat, 1991; Roszkowski et al., 1988), we used the final course grade, expressed as a percentage, as the dependent variable. The distribution of final grades, although slightly positively skewed (i.e., more As than Fs), approached a normal distribution, with grades ranging from F to A.
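The internal-consistency figures reported above (α = .88, .70, and .71) are Cronbach's alpha coefficients. As a purely illustrative sketch of how such a coefficient is computed (the data below are randomly generated stand-ins, not the study's responses), alpha relates the sum of the item variances to the variance of the total scale score:

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for a respondents-by-items score matrix:
        alpha = (k / (k - 1)) * (1 - sum(item variances) / var(total score))."""
        k = items.shape[1]                          # number of items in the scale
        item_vars = items.var(axis=0, ddof=1)       # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Hypothetical data: 431 respondents, 10 items on a 7-point scale.
    # Uncorrelated random responses like these yield an alpha near zero;
    # real scale items must covary to approach values such as .88.
    rng = np.random.default_rng(0)
    responses = rng.integers(1, 8, size=(431, 10)).astype(float)
    print(round(cronbach_alpha(responses), 2))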
RESULTS

Pearson correlations and regression analysis were used to test the hypotheses. Means, standard deviations, and a summary of regression results are presented in Table 1. Pair-wise correlations between the independent variables varied from .210 to .230, indicating that multicollinearity was not a problem in these data. In our discussion of the literature, we asserted that attitudes toward working with computers would be a better predictor of end-user performance than attitudes toward computers. Though we had not explicitly hypothesized a relationship between attitudes toward computers and end-user performance, we measured attitudes toward computers using a modified version of Lee's (1970) Attitudes Toward Computers Scale (α = .75) and tested its relationship with end-user performance. Lee's original scale was modified because some items in the scale are not reflective of attitudes toward computers in the 1990s. Such items (e.g., "They work at lightning speed" and "There's something exciting and fascinating about electronic brain machines") were eliminated. Attitudes toward computers were measured using the same 7-point scale used to measure attitudes toward working with computers.
Table 1: Means, standard deviations, and summary of regression results

Hypo.  Independent Variable                       M     SD    R²     Stand. β   Results at p < .001
1      Attitudes toward working with computers   5.3   .94   .072   .268       Supported
2      Goal setting                              5.0   .77   .106   .326       Supported
3      Self-efficacy                             4.7   .60   .293   .541       Supported
Like many studies (e.g., Kernan & Howard, 1990; Szajna, 1994; Szajna & Mackay, 1995), we also failed to find a relationship between attitudes toward computers and end-user performance (r = .16, p = .12). However, as predicted in hypothesis 1, the score on attitudes toward working with computers (M = 5.3, SD = .94) was positively related to end-user performance, measured as the percentage of the total possible points in the course (r = .27, p < .01, R² = .07, β = .27, p < .001). As predicted in hypothesis 2, goal setting (M = 5.0, SD = .77) was positively related to end-user performance (r = .32, p < .001, R² = .11, β = .37, p < .001). Also as predicted in hypothesis 3, self-efficacy (M = 4.7, SD = .60) was positively related to end-user performance (r = .54, p < .001, R² = .29, β = .54, p < .001). No age or sex effects were found in any of the analyses. Also, self-reports of computer ability and prior computer experience had no effect on end-user performance. Together, the three individual difference factors of attitudes toward working with computers, goal setting, and self-efficacy explained 35% of the variance in end-user performance. The addition of interaction terms increased the variance explained in end-user performance by only a negligible amount (.05%), thus ruling out the possibility of interaction terms obscuring the results of the independent variables.
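The analyses above are bivariate correlations and ordinary least squares regressions of the course percentage on each predictor, followed by a combined model with interaction terms. A minimal sketch of this style of analysis, using statsmodels on synthetic stand-in data (the column names, coefficients, and noise level below are invented for illustration and are not the study's data):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 431  # final sample size in the study
    df = pd.DataFrame({
        "attitudes": rng.normal(5.3, 0.94, n),  # toward working with computers
        "goals": rng.normal(5.0, 0.77, n),      # goal setting
        "efficacy": rng.normal(4.7, 0.60, n),   # self-efficacy
    })
    # Synthetic course percentage driven by the three predictors plus noise
    df["performance"] = (50 + 2 * df["attitudes"] + 3 * df["goals"]
                         + 5 * df["efficacy"] + rng.normal(0, 8, n))

    # Pearson correlations of each predictor with the outcome (the r values)
    print(df.corr()["performance"])

    # Combined main-effects model, then one adding all two-way interactions
    main = smf.ols("performance ~ attitudes + goals + efficacy", data=df).fit()
    inter = smf.ols("performance ~ (attitudes + goals + efficacy) ** 2",
                    data=df).fit()
    print(main.rsquared)                    # cf. the 35% reported above
    print(inter.rsquared - main.rsquared)   # incremental variance from interactions

Standardized betas like those in Table 1 would be obtained by z-scoring each column before fitting.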
DISCUSSION

The major impetus for this study was the inconsistency of prior research investigating the effects of attitudes toward computers on performance. We argued and found support for a positive relationship between attitudes toward working with computers and performance. We also found strong support for the
effects of goal setting and self-efficacy on end-user performance. The amount of variance in end-user performance explained in this study compares very favorably with that explained by other studies (e.g., Evans & Simkin, 1989). For instance, in Evans and Simkin's (1989) study, two dozen independent variables explained just 24% of the variance in end-user performance, whereas in this study we were able to explain 35% of the variance with only three variables. A potential limitation of this and many other studies on end-user computing (e.g., Bohlen & Ferratt, 1997) is the use of students as end-user surrogates. Subjects in this study fit Rockart and Flannery's (1983) descriptions of non-programming and command-level end users; therefore, the study results should generalize to those populations.
Implications for Practitioners and Researchers
The results of this study suggest that managers can increase end-user performance by implementing goal setting, by shaping end users' attitudes toward working with computers, and by enhancing end users' self-efficacy beliefs. Self-efficacy beliefs can be enhanced through direct experience, vicarious experiences, and verbal persuasion (Bandura, 1982). Managers should use all three avenues to enhance self-efficacy beliefs. For instance, to enhance self-efficacy beliefs through direct experience, managers should begin by assigning tasks that can be easily performed by the end user. The next set of tasks should require the use of more difficult and complex end-user computing skills. By methodically increasing the level of difficulty and complexity of tasks, managers can build and strengthen end users' efficacy beliefs. To enhance self-efficacy beliefs through vicarious experience, managers should identify effective end users as role models and persuade others to emulate those role models. Managers could also enhance efficacy beliefs by persuading end users to learn and apply computing skills. This can be accomplished by expressing confidence in end users' ability to do so. Additionally, managers should work closely with trainers and end users to identify potential constraints in the training and in the post-training environment and remove those constraints. At the very least, trainees should be taught how to effectively cope with those constraints. There is ample evidence that individuals do poorly in situations they believe exceed their coping abilities, whereas they behave assuredly when they judge themselves capable of managing otherwise intimidating situations (Bandura, 1977, 1982). Enhancing coping efficacy will not only strengthen computer-related self-efficacy beliefs, it will also have a direct influence on end-user performance and the effective use of computer-based technologies in organizations.
The strong support for goal setting in these data suggests that trainers should set specific and challenging goals with respect to both learning and transfer. Learning goals involve the material to be learned by trainees during the training program. Transfer goals involve applying skills learned in the training program to the job; by working with managers, trainers should identify specific work assignments that provide opportunities to apply skills taught in the training program and set transfer goals accordingly. Both learning and transfer goals should be specific and challenging. To evaluate the accomplishment of transfer goals, managers and trainers should monitor end users to see if they are applying skills learned in the training program to complete job assignments that require the use of those skills. Rewards can be used to institutionalize end users' application of the skills learned in the training program to their job environments. Specific attitudes are likely to be more easily influenced than general attitudes. Moreover, the study data yielded support for attitudes toward working with computers, a specific attitude, but not for attitudes toward computers, a general attitude. Therefore, trainers and managers should focus on enhancing specific attitudes, such as attitudes toward specific computer applications or software packages. Implementing goal setting and strategies to enhance efficacy beliefs is likely to have a positive influence on end users' attitudes toward working with computers. Favorable attitudes toward working with computers can also be shaped by explicating how knowledge of specific computer applications, and the proper application of that knowledge, can help employees accomplish their work more productively and enhance their contributions to the organization.
CONCLUSION

Attitudes toward working with computers, goal setting, and self-efficacy address the motivation of end users. Our results indicate that enhancing the motivation of end users is one avenue to increase end-user performance. Future research should replicate the results of this study with end users in ongoing organizations and should also try to uncover additional factors with the potential to influence end-user performance.
Chapter XII
The Role of User Ownership and Positive User Attitudes in the Successful Adoption of Information Systems within NHS Community Trusts Crispin R. Coombs, Neil F. Doherty and John Loan-Clarke Loughborough University, UK
The factors that influence the ultimate level of success or failure of systems development projects have received considerable attention in the academic literature. However, despite the existence of a 'best practice' literature, many projects still fail. The record of the National Health Service has been particularly poor in this respect. The research reported in this paper proposes that two additional factors, user ownership and positive user attitudes, warrant further development and investigation. The current study investigated these two factors in a homogeneous organizational sector, Community NHS Trusts, using a common type of information system, in order to eliminate the potentially confounding influences of sector and system. A multiple case-study design incorporating five Community Healthcare Trusts was utilized. The key results from the analysis indicated that both user ownership and positive user attitudes were important mediating variables that were crucial to the success of a CIS. In addition, it was also identified that the adoption of best practice variables had a dual role, directly influencing the
level of perceived success but also facilitating the development of user ownership and positive user attitudes. These results will be of particular interest to practising IM&T managers in the NHS and also to the wider academic research community.
INTRODUCTION

The challenge for the NHS is to harness the information revolution and use it to benefit patients (Rt. Hon Tony Blair, 1998)1

Over the past twenty years the level of penetration and sophistication of information technology has grown dramatically, with computer-based information systems actively supporting all key business processes and significantly enhancing both the operational effectiveness and the strategic direction of organizations of all types. The UK's National Health Service (NHS) is one, particularly large and complex, organization that has been keen to harness the potential of IT to enhance its administrative, managerial and clinical performance. Unfortunately, in both the public and private sectors, the successful acquisition and introduction of information technology is still dogged by high failure rates (Lyytinen & Hirschheim, 1987; Kearney, 1990; Hochstrasser & Griffiths, 1991; Clegg et al., 1997). More specifically, there is much evidence to suggest that the NHS's record has been particularly poor with respect to the successful deployment of computer-based information systems (for example: National Audit Office, 1991; Keen, 1994; National Audit Office, 1996). There is, therefore, still a pressing need for well-focussed research to provide insights into how levels of failure can be reduced, both from a general perspective and with regard to the NHS in particular. To help investigate these issues a research project was initiated to explore the factors that affect the success of Community Information Systems (CIS) within the NHS. It was envisaged that the application of CIS within the community healthcare sector would provide a particularly fertile research domain for the following two reasons:
1. Community Trusts form a reasonably homogeneous organizational sector, distinct from the acute sector, and community information systems provide different instances of a common type of application; consequently, the number of confounding factors in the study is greatly reduced;
2. Two recent official reports (Audit Commission, 1997; Burns, 1998) have identified a high degree of variability in the quality of CIS, with many existing systems failing to deliver the anticipated benefits. Consequently, it would be possible to compare and contrast the experiences of Trusts which had experienced a range of different outcomes.
The initial phases of this project entailed a detailed case study, based upon interviews with a wide variety of staff at Central Nottinghamshire Healthcare (NHS) Trust (CNHT)2, and a survey of IT managers within Community Trusts (Coombs et al., 1998; Coombs et al., 1999). These studies provided important insights into the role and impact of CIS and helped to focus the research objectives for the later stages of the project. More specifically, in addition to confirming the importance of factors such as training, senior management commitment and participation, testing, and user involvement, the results from these preliminary studies also indicated the critical roles of user ownership and positive user attitudes. As these latter factors have not, to date, received the attention they warrant in the academic literature, a follow-up qualitative study was initiated to explicitly review their importance. The paper is organized into several sections. The following section presents a summary of the relevant information systems literature before presenting and justifying three research objectives. The justification for, and implementation of, the multiple case-study research design and research methods are discussed in section three. The research results are presented in a series of tables and diagrams, which are discussed in the fourth section. Finally, the importance of this research for the NHS and other organizations is assessed in the concluding section.
CONTEXTUAL BACKGROUND AND RESEARCH OBJECTIVES

In the past twenty years much interest has been generated in the identification of factors critical to the successful outcome of systems development projects (for example: Yap et al., 1992; Sauer, 1993; Willcocks & Margetts, 1994; Keil, 1995; Whyte & Bytheway, 1996; Flowers, 1997; Li, 1997). The aim, therefore, of this section is to review the most common best practice factors highlighted in the literature, prior to reviewing the potentially important contributions of positive user attitudes and user ownership to the successful outcome of a systems development project. In so doing, the objectives of this research are established.
Best Practice in Systems Development Projects
A number of empirical and in-depth studies have been conducted which examine success factors in the development and implementation of information systems. These and other studies have helped to focus IT professionals' attention on the importance of factors such as: user involvement (Wong &
Tate, 1994; Whyte & Bytheway, 1996); senior management commitment (Beath, 1991; Sauer, 1993); staff training (Miller & Doyle, 1987; Whyte & Bytheway, 1996); systems testing (Ennals, 1995; Flowers, 1997) and user support (Miller & Doyle, 1987; Govindarajulu & Reithel, 1998). There is, therefore, a well-documented body of 'best practice' knowledge that should guide the IT practitioner in the effective development and implementation of information systems. However, there exists a paradoxical situation in that far too many projects still fail, despite the availability of this body of knowledge, which should help to promote success. Why, in so many instances, should this be the case? It could perhaps be that the advice is blatantly disregarded, not universally appropriate, not well disseminated, or not always possible to heed. Alternatively, it might be that the adoption of existing best practice guidelines is not, by itself, sufficient to ensure the successful outcome of systems development projects.
The Importance of User Ownership
Van Alstyne et al. (1995) have stated that: 'ownership is critical to the success of information systems projects', with the key reason for this being 'self-interest; owners have a greater vested interest in system's success than non-owners' (p. 268). However, Clegg et al. (1997) suggest that in far too many projects it is the developers, rather than the users and user managers, who own the system, which may have undesirable consequences for the system's performance. Unfortunately, this apparently important concept has received relatively little explicit attention in the information systems literature. Where ownership has been addressed in studies, it has typically been in the context of increasing user acceptance (Robey & Farrow, 1982; Guimaraes & McKeen, 1993) or minimizing user resistance (Markus, 1983; Beynon-Davis, 1995). Based upon this review of the literature, and the results of the preliminary stages of this research, the following working definition of user ownership has been derived: 'The state in which members of the user community display, through their behavior, an active responsibility for an information system'. To clarify this definition, it is necessary to add the two following qualifiers. Firstly, it must be stressed that whilst it is highly desirable that user ownership should be exhibited by the whole user community, throughout all stages of the system's development and operation, this may not always be the case. Secondly, it should be noted that the users may not be able to claim exclusive ownership of the system, as ownership will be shared with members of the steering committee and the development team, especially in the system's developmental stages.
The Importance of Positive User Attitudes
In purely quantitative terms the importance of positive user attitudes has probably received more attention in the literature than user ownership. It is, for example, widely recognized that it is desirable to attain positive user attitudes, as this may have a beneficial impact upon user behavior, ultimately influencing user acceptance of the system (Lucas, 1978 & 1981; Zmud, 1983; Ginzberg et al., 1984; Joshi, 1990 & 1992). More specifically, Grantham & Vaske (1985) and Davis (1993) have suggested that positive user attitudes are an important predictor of system usage. In the context of this research, the following working definition of positive user attitudes has been derived: 'The state in which members of the user community display positive opinions and beliefs towards the information system'. It should be noted that, as for user ownership, levels of positive user attitudes may vary between different members of the user community and also between different phases of the system's development and operation. Finally, the working definition of positive user attitudes appears in many ways similar to constructs used in other studies, such as 'user satisfaction' (DeLone & McLean, 1992), 'user information satisfaction' (Bailey & Pearson, 1983; Srinivasan, 1985) or 'user reactions' (Clegg et al., 1997). However, there is one important distinction: whilst user satisfaction, user information satisfaction and user reactions are typically formulated as responses to a recently implemented system, positive user attitudes describe a state which can begin at the project's inception and continue throughout the system's working life.
Critique of the Literature and Establishment of Research Objectives
Unfortunately, despite the substantial body of knowledge with regard to best practice within systems development projects, the incidence of systems failure and systems under-performance remains stubbornly high. This may in part be due to the following limitations of the existing literature.
1. The 'best practice' literature, which is extensive, has limitations in terms of either depth or generalizability. For example, survey studies, whilst providing breadth of coverage, lack the capacity to effectively deal with the complexity of the system development process (Sauer, 1993). By contrast, case studies, whilst far better suited to handling the complexities of systems development, either relate to only one case or focus upon a number of unrelated cases; in both instances the generalizability of findings is problematic.
2. Whilst some studies have noted the importance of user ownership and positive user attitudes, little work has specifically targeted these factors to identify why they are significant and how they can be achieved. Furthermore, this research has typically been conducted in isolation from the research into best practice. For example, most studies of best practice factors (for example: Miller & Doyle, 1987; Whyte & Bytheway, 1996; Doherty et al., 1998) do not include user ownership and positive user attitudes. Consequently, it is difficult to judge the relative importance of these factors and their relationship with other best practice factors.
To overcome these weaknesses, a qualitative study was initiated, targeting the development and implementation of community information systems. This approach isolated a single organizational sector, Community Trusts, in which different instances of a standard application of IT were being applied, and ensured that the following research objectives could be addressed:
1. Identification of the relationship between the ability of the project teams to encourage user ownership and the resultant level of success or failure of the operational information system;
2. Identification of the relationship between the ability of the project teams to encourage positive user attitudes and the resultant level of success or failure of the operational information system;
3. Identification of the relationships between user ownership, positive user attitudes, other best practice factors and the resultant level of success or failure of the operational information system.
It was envisaged that through the exploration of these issues new evidence could be provided with respect to the importance and role of user ownership and positive user attitudes in systems development projects. In addition, it would be possible to provide advice to IT practitioners on the importance of taking steps to foster user ownership and positive user attitudes. More specifically, the research should provide important insights into the successful development and implementation of information systems within the NHS in general, and Community Trusts in particular.
RESEARCH METHODOLOGY

Doherty et al. (1998) have noted that existing research on best practice has tended to focus on developing a critical set of factors affecting IT implementation success, with less emphasis being placed on 'how' and 'why' these factors interact together to produce either success or failure. Gable (1994) and Pare and Elam (1997) advocate the greater use of case study
research to explore and develop an increased understanding of these complex relationships. The aim of this section is to provide an overview of the methods by which a case study-based piece of research was initiated and executed to explore the objectives described at the end of the previous section.
Research Design
The initial stages of this research project comprised a detailed case study at a single Community Trust and a survey of all Community Care (NHS) Trusts in the UK. Whilst both of these studies provided important insights and facilitated the design of the research presented in this paper, both can be criticized from a methodological perspective. For example, whilst it has been argued that 'case study research is particularly appropriate for the study of information systems development, implementation and use within organizations' (Darke et al., 1998, p. 278), Galliers (1992) notes that case studies are usually restricted to a single event or organization and that it is difficult to collect similar data from a sufficient number of similar organizations, making it difficult to generalize from case study research. Similarly, it can be argued that survey-based studies, whilst providing breadth of coverage, lack the capacity to effectively deal with the complexity of the system development process (Sauer, 1993). Multiple case studies, however, allow the study of phenomena in more diverse settings and facilitate cross-case analysis and comparison. This multiple approach allows the researcher to confirm that findings are not being unduly influenced by confounding variables unique to individual research settings (Cavaye, 1996). In addition, multiple cases may also be used either to predict similar results (literal replication) or contrasting results for predictable reasons (theoretical replication) (Yin, 1994, p. 46). Based on the desire to explore and interpret the complex relationships between best practice, user ownership, positive user attitudes and success, a multiple case study approach was undertaken.
Research Instrument Design
Past literature on best practice and organizational impact, as well as the evidence from the initial case study research and the questionnaire survey, were used to develop questions to be included in a semi-structured interview schedule. The choice of a semi-structured interview over a standardized interview was made because of the exploratory nature of the research and the fact that it would not have been possible to create a fully structured guide from current knowledge (Diamantopoulos & Souchon, 1996). The interview schedule had four sections: biographical/introductory; the adoption of best
practice variables; the attainment of user ownership and positive user attitudes; and the determinants of success. The final section of the interview consisted of a short questionnaire that used a five-point Likert scale to measure various aspects of perceived system success. The success measures were adapted from the six generic measures developed by DeLone & McLean (1992) and addressed both user and management perspectives on various aspects of the system. Each interviewee was sent a letter outlining the aims of the research project and indicating the specific areas that would be explored through the interviews.
Targeting & Execution of the Interviews
In studying the responses to the questionnaire survey it became clear that one particular community information system (Comwise, designed by Systems Team) was most common among respondents. It was therefore possible and desirable to concentrate on this sample, as it removed the confounding factor of variations in system design from the analysis. However, in order for the results from the Comwise sample to be generalized to the rest of the respondents, it was necessary to confirm that the Comwise sample was representative of all the respondents. Means and variances were calculated for each question on the survey and compared for both the main group of respondents and the Comwise sample. No significant differences between the means or variances for each group were identified, and consequently it was concluded that the Comwise sample was representative of the main group of respondents to the questionnaire. The Comwise sample group consisted of 18 Trusts and had a further advantage in that the performance of the system in different Trusts appeared to vary considerably, as perceived by the respondents to the questionnaire survey. Therefore, a multiple case study approach would enable a range of Trusts to be studied that were using the same CIS but were producing contrasting results in terms of success. On the basis of their perceived CIS performance, five Trusts were contacted; in each case the initial contact was through the respondent to the questionnaire, either the IM&T Manager or the Information Manager. An interview was conducted with each of these key informants, at the end of which requests were made for additional members of the Trust to interview. It was considered particularly important that staff from areas outside the Information and IT Departments of the Trust be interviewed to record their perspective on the use of a CIS. The clinicians form the largest stakeholder group that uses a CIS, and one of the key measurements of the success of a CIS is the clinicians' satisfaction with the system. A criticism frequently levelled at quantitative IS research is that it tends to concentrate on documenting and
studying the views of IS professionals, who have a clear vested interest in the success of the system. Consequently the opportunity to interview and document other staff views towards the system was considered to be of great importance. The key informants were asked to identify an administrator, a clinical manager and a clinical user who would be willing to participate in the study. As can be seen from the breakdown of interviewees presented in Table 1, it was not always possible, for practical reasons, to achieve the desired mix of informants, but sufficient numbers of informants participated from each Trust to ensure that the sample reflected a range of views. In addition to participating in the interviews, the IM&T Managers were asked to provide, if possible, documentary evidence, such as published articles, internal reports or newsletters, which were used to help contextualize, explain and verify the interview responses. As can also be seen from the information presented in Table 1, such information was forthcoming in a number of cases. Each interview was conducted, in situ, at the Trust and lasted approximately an hour. To ensure the validity of the interview process, the informants were asked to supply specific evidence and examples to support their assertions. In the vast majority of cases, the face-to-face interview was complemented by a follow-up phone call that was used to clarify issues and attain supplementary information. Both the initial interviews and the follow-up phone calls were tape recorded and later transcribed verbatim.

Table 1: Range of informants interviewed at each trust

Informant          Trust A*   Trust B   Trust C*   Trust D   Trust E
IM&T Manager       ✓          ✓         ✓          ✓✓        ✓
Manager                       ✓         ✓                    ✓
Clinical Manager              ✓         ✓✓         ✓         ✓
Clinical User      ✓          ✓         ✓          ✓         ✓
Totals             2          4         5          4         4

In terms of the characteristics of the Trusts visited, Table 2 shows that although there was some variation in the range of services they provided and the premises they used, they also exhibited a number of common features. The Trusts all operated on a decentralized basis, with a small number of hospitals and multiple health centres. Most importantly, the same core staff groups used the system within each Trust. Similarly, because all the Trusts are based within the community healthcare sector, they are guided by common policy and priority goals identified by the Department of Health. Furthermore, Table 2 also demonstrates that all the Trusts were using the same software package and supplier for their community information systems. Consequently, by
targeting these five Trusts it was possible to minimize the effect of confounding factors such as healthcare sector, type of system and system design.
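The representativeness check described above compared means and variances question by question between the full respondent group and the Comwise subsample. The paper does not specify which statistical tests were used; as a minimal sketch under that caveat, a two-sample t-test for means and Levene's test for variances (both available in scipy) are one reasonable choice, shown here on invented Likert-style data:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    # Hypothetical 5-point Likert responses to one survey question
    all_respondents = rng.integers(1, 6, size=100).astype(float)
    comwise_sample = rng.integers(1, 6, size=18).astype(float)  # 18 Comwise Trusts

    # Compare means: Welch's two-sample t-test (no equal-variance assumption)
    t_stat, t_p = stats.ttest_ind(all_respondents, comwise_sample, equal_var=False)

    # Compare variances: Levene's test
    w_stat, w_p = stats.levene(all_respondents, comwise_sample)

    # Non-significant p-values are consistent with a representative subsample
    print(f"means p = {t_p:.2f}, variances p = {w_p:.2f}")

In practice this comparison would be repeated for every survey question.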
Data Analysis Strategy
The analysis follows the three concurrent activities identified by Miles & Huberman (1994, p. 10) of data reduction, data display and conclusion drawing/verification. This approach is necessary to ensure that the researcher does not become overloaded by unreduced data transcripts, with their information processing abilities impaired as a result (Faust, 1982). Data reduction was conducted on each interview transcript using mainly 'in-vivo' codes, that is, codes derived from phrases used repeatedly by informants (Strauss & Corbin, 1990). In-vivo codes (as opposed to codes determined prior to the analysis) are appropriate when the research is essentially exploratory and are more useful in identifying new variables than adopting constrained literature-based codes (Diamantopoulos & Souchon, 1996). In addition, marginal remarks were used during the coding period to add clarity and meaning to the transcripts, as well as to help revise and improve the coding structure (Chesler, 1987).

Table 2: Case study trust profiles

Trust Profile                      Trust A      Trust B      Trust C      Trust D      Trust E
Service Provision                  Community and Mental Health services in all five Trusts; Acute services in three
Number of Hospitals                3            7            4            5            5
Number of Health Centres/Clinics   11           16           42           8            7
Staff Groups Using System          District Nurses*, Health Visitors, School Nurses and PAMS (Trust A);
                                   District Nurses*, Health Visitors*, School Nurses and PAMS (Trusts B-E)

System Profile                     Trust A      Trust B      Trust C      Trust D      Trust E
System Name                        Comwise      Comwise      Comwise      Comwise      Comwise
Version                            2.2          N/S          2.2+         N/S          2.2
Supplier                           Systems Team Ltd in all five Trusts
Level of Implementation            Full         Full         Partial      Full         Full
System Uses Portable Technology    Yes          Yes          Yes          Yes          No
Staff Using the System             Clinical &   Clinical &   Clinical &   Clinical     Clinical &
                                   clerical     clerical     clerical     staff        clerical

* Main staff groups using system   N/S - Not Supplied
From the codes it was possible to develop a series of within-case matrix displays for each Trust. The within-case analysis was primarily conducted using the following three displays:
1. Time ordered displays: The time ordered display was used to show the variations in each variable over time and the major events during the CIS project identified by respondents. This display is primarily descriptive, although it does have the value of preserving the historical flow and permitting a good look at the chain of events (Miles & Huberman, 1994, p. 110);
2. Conceptually ordered displays: This display was used to study the variables in more depth and generate more explanatory power. A thematic conceptual matrix was developed for each case to study the manifestation of the variable, the facilitators and inhibitors directly related to that variable, and any solutions that had been subsequently proposed or adopted (Miles & Huberman, 1994, p. 131);
3. Effects matrix: Finally, an effects matrix was also constructed for each Trust. This display concentrates on the outcomes of each of the variables concerned and their effects on other variables and areas associated with the CIS project. Each variable was analyzed for positive and negative effects on specific outcomes and whether they were considered by informants to be direct or indirect relationships (Miles & Huberman, 1994, p. 137).
Following the within-case analysis the displays were synthesized into a series of fewer cross-case displays. The cross-case analysis took the form of a composite thematic conceptual matrix (Miles & Huberman, 1994, p. 183) and a causal network display (Miles & Huberman, 1994, p. 222). The composite thematic conceptual matrix allowed us to study the similarities and differences between the facilitators and inhibitors for each variable. The causal network was developed from a series of linear sub-models that displayed the linkages between variables more clearly before they were synthesized into a single overall causal model. It is primarily the results of these cross-case displays that are presented in the following section of this paper.
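These matrix displays are, in essence, tabulations of coded interview segments. As a purely illustrative sketch (the trusts, codes and effect labels below are invented for demonstration, not the study's actual codings), a simple cross-case display could be assembled from coded transcript data as follows:

    import pandas as pd

    # Hypothetical coded transcript segments: (trust, in-vivo code, effect)
    segments = [
        ("Trust A", "user ownership", "positive"),
        ("Trust A", "access to information", "positive"),
        ("Trust D", "data quality", "negative"),
        ("Trust E", "user ownership", "negative"),
        ("Trust E", "data quality", "negative"),
    ]
    coded = pd.DataFrame(segments, columns=["trust", "code", "effect"])

    # Cross-case display: counts of each code per trust, split by effect
    matrix = pd.crosstab(coded["code"], [coded["trust"], coded["effect"]])
    print(matrix)

A display like this supports the kind of cross-case comparison of facilitators and inhibitors described above, although the published analysis was of course far richer than a frequency count.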
RESEARCH RESULTS

The research findings are reported in this section by presenting evidence in the form of specific examples and comments gathered through the interview process. To make the discussion of these findings more meaningful, they are related to the three specific research objectives identified in section 2.4.
The Relationship Between User Ownership and Success
To explore the relationship between user ownership and success, informants were specifically asked during the interviews whether user ownership was occurring within their Trust and whether achieving user ownership had been planned or was occurring as a reaction to the development and operation of the system. Furthermore, each interviewee was asked to assess the effect that achieving user ownership was having on the operation of the CIS. From the results of the cross-case analysis it was clear that there was a mixed experience across the Trusts with respect to user ownership. In four out of the five Trusts (A, B, C & D) it was found that achieving user ownership had been planned. However, only informants in one Trust (A) said that user ownership was already occurring at high levels. As one clinical user noted: 'there is ownership because we use it to inform our clinical practice' and 'I think it [user ownership] was a deliberate policy by IT and I think the new Head of Information will extend that even further than it is now'. Informants in Trust B provided contrasting views on the occurrence of user ownership, indicating that user ownership was perceived to be occurring at high levels in some areas, but not others. For example, whilst the IM&T Manager (B) stated: 'this is something the Head of Information has been working towards and this process is now coming where we are saying this is your system, what do you want us to record? Whereas before they were told what to record', a manager (B) suggested: 'I don't think staff have ever had ownership of the system and I think that is an extremely important issue'. Informants in Trust C stated that user ownership was starting to occur, but only at moderate levels so far, with one manager (C) noting: 'we may be a little hard on ourselves but we still don't believe that we have got adequate user ownership'. However, in the majority of the Trusts (A, B & C) high levels of user ownership were expected in the future, once greater access to information was provided (A & B) and the CIS was fully implemented (C). By contrast, informants in Trusts D and E indicated that there was little or no user ownership occurring. For example, in Trust D a clinical user noted: 'it has certainly been useful but I wouldn't have thought of it as our system', whilst the IM&T Manager at Trust E reflected: 'I think users see it as part of the daily grind of filling in these Daily Diary Sheets and so on and that's it really'. One manager at Trust E summarized the situation more bluntly: 'it's just a necessary evil'. However, it was only the informants from Trust E who stated that user ownership had not been planned and was unlikely to occur in the future.
However, despite the range of experiences in achieving user ownership, there was a clear consensus across informants in all the Trusts of the importance of user ownership to the ultimate success of a CIS. In three of the five Trusts (A, B & C) user ownership was identified as having particular importance in avoiding failure, with informants making the following comments:
• 'The Trust is very reliant on user ownership because it is a Community Trust and staff are very decentralized. If the staff, the clinicians on the ground, don't own the system, feel how essential it is, it would be a complete failure' (clinical user, A);
• 'Achieving user ownership will be the deciding factor of whether they use the system or not' (IM&T Manager, B);
• 'If they [clinical staff] don't get ownership they will rely on non-clinical people trying to tell them how to use it which won't work' (IM&T Manager, C).
It was interesting to note that the importance of user ownership was also recognized in Trust E, with a clinical manager commenting that: 'the Trust needs to encourage user ownership', even though all informants from the Trust recognized that they had not experienced user ownership. More specific benefits from achieving user ownership were also identified by respondents. For example, the IM&T Manager (A) stated that: 'having user ownership has meant that users are in control of their information' and a clinical user (A) noted that: 'ownership is about recognizing and seeing the potential to develop things that are going to be clinically useful. Without the ownership you wouldn't be getting the ideas being generated and pushing the development of it [CIS] which in turn is improving patient care'. Similarly, the IM&T Manager (C) stated that: 'I think once they [clinicians] own it they will try and optimize its use and they will try and explore different ways in which the system can be used to improve the service'. The importance of attaining user ownership has also been highlighted by the experiences of those Trusts that failed to achieve it. As a manager from Trust B noted: 'without ownership of the data they [clinicians] don't feel they are involved or they are controlling it then we are going to have problems with the quality'. Similar views were expressed at both Trusts D and E. For example, the IM&T Manager (D) stated that: 'I think the main problem is the quality of the data. They are not interested in what is going in so the quality is poor. That then undermines the quality of the reports that are pulled off'. Similarly, a clinical manager (E) noted: 'It's not worthwhile entering the data from their [clinicians] point of view, they can't access the data and not being able to get the answers that they want further increases the amount of cynicism associated with the system'.
The relationships that have been identified above clearly provide strong evidence to suggest that there is a relationship between user ownership and success. Furthermore, it has been demonstrated that the relationship can have either a positive or negative effect on the overall perceived success of a CIS, depending on how well user ownership is addressed during the systems development project.
The Relationship Between Positive User Attitudes and Success
The cross-case analysis of user attitudes indicated that all Trusts planned to develop positive user attitudes during the development, implementation and use of their respective community information systems. Informants at Trusts A and C stated that user attitudes were positive towards the CIS and that this positive attitude was thought likely to continue in the future. More specifically, the following examples were cited as evidence of positive user attitudes: an increased demand for reports (IM&T Manager, A; clinical manager, C); efforts by users to improve data quality (IM&T Manager, A); and general positive comments about the CIS during staff meetings (clinical user, A; clinical manager, C). Furthermore, in Trust C it was envisaged that the planned increases in access to information would further develop and enhance the users' positive attitudes. As one clinical user (C) commented: 'we are going to get more out of the system for our [the clinical staff] benefit looking at what we do in terms of monitoring things on various diagnoses and incidents'. Informants at Trusts B and D gave mixed responses as to whether they perceived user attitudes to be positive, indicating that there was variation in perceptions towards the CIS across these Trusts. However, the IM&T Manager (B) did expect user attitudes to be more positive in the future, commenting that: 'once we get the printers out there and the users start using the system and they start asking for more information it is going to be a great deal easier to give users information'. Only informants at Trust E stated that although attempts had been made to develop positive user attitudes, at the time of interviewing attitudes were not positive, and they were unsure whether attitudes would improve in the future. It was perceived by informants in Trust E that clinicians only considered their interaction with the system to be a mandatory routine that provided no personal benefit. Specific comments included: 'it's just what they call a necessary evil because of Billing' (manager, E); 'positive is not the right word. It is part of a thing that we have to do, so we do it' (clinical user, E); and 'the users tend not to ask for information. A lot of them don't seem to be interested' (IM&T Manager, E).
The quality and availability of the information output was perceived as being key to the attainment of positive user attitudes:
• 'The single most important factor is that we have access to the information' (clinical user, A);
• 'I think in terms of report writing, people are now coming to me and saying, can I get this information? How is this done? That is the best news, that they are taking it seriously and thinking maybe I can do something with it' (clinical user, C);
• 'I think there is a lot of evidence of positive user attitudes in the way people have adapted to using the system. Where it [CIS] has been a success they [clinicians] are starting to get ideas for the development of the system and people are starting to look at the information in terms of what can be collected' (clinical manager, C);
• 'If you could get meaningful information out, then I think it would fire them up and they would be interested' (manager, E).
There was general agreement, from informants in all Trusts, that there is a significant relationship between user attitudes and success. For example, it was noted that once positive user attitudes had been attained, there were significant resultant benefits with respect to the quality of the data input. As one clinical user (A) noted: 'I think the biggest benefit is staff are motivated to record and reflect what we do accurately'. This view was endorsed by the IM&T Manager (A), who noted: 'they are committed to doing it [record information] and they are committed to ensuring that their colleagues also do it [record information] and record accurately'. These views were echoed by an IM&T Manager from Trust B who noted: 'it has made the user more responsible for feeding the data in on time and correctly'. Conversely, in Trust E, where positive user attitudes were not identified, there have been severe problems with data quality. The IM&T Manager (E) stated that: 'generally the staff aren't very interested in the activity once they have done it' and, as a result, 'we discovered that anything that we tried to get from it [CIS] was corrupted by poor data quality and I think we went into a little bit of despondency then'. The negative impact of failing to achieve positive user attitudes was also recognized in Trust D, with the IM&T Manager noting that staff: 'always blame the system for the errors, it's never their own errors that have caused the problems'. The above findings suggest that information and data quality may be inextricably linked to the attainment and retention of positive user attitudes. In three Trusts (A, B & C), where information quality and accessibility were perceived as being high, positive user attitudes have resulted. This in turn encouraged the users to be more attentive to the quality of their data input,
which ultimately enhanced the effectiveness of the system. Conversely, in Trusts D and E, where the quality of the information output was poor, this contributed to negative user attitudes, which ultimately undermined the perceived success of the CIS.
The Relationship Between Best Practice, User Ownership, Positive User Attitudes and Success
During the interviews the role of several best practice variables, and their relationship with the overall perceived success of the CIS, was also reviewed. The importance of these best practice variables to information system success has already been established in the literature (see section 2), and the responses from informants strongly supported the existing research. However, the actual occurrence of these best practice variables varied between the different Trusts, as shown in Table 3. This table indicates that Trust A, whose informants were generally very positive about the impact of their system, was very successful in the adoption of best practice. By contrast, the findings for Trust E, whose informants were far less positive about the impact of their system, indicate that it was the least successful in the adoption of best practice. The remaining three Trusts were generally better than Trust E but behind Trust A, both in the adoption of best practice and in terms of the perceived success of their systems. These results suggest that those Trusts that adopt high levels of all the best practice variables are more likely to achieve higher levels of perceived success associated with their CIS.
Table 3: Level of adoption of different best practice variables at each Trust
Best Practice Variable                           Trust A   Trust B   Trust C   Trust D   Trust E
Senior Management Commitment and Participation      +         +         +         -         -
Well Balanced Project Team                          +         +         +         +         -
User Involvement                                    +         -         -         +         -
Management of User Expectations                     +         -         -         +         +
User Training                                       +         +         -
User Support                                        +         +         +         +         ~
System Testing                                      +         +         +         +         +
Success Score                                       4         2.6       3.2       3.4       2.4
Note: + denotes high occurrence of variable, - denotes low occurrence of variable, ~ denotes moderate occurrence of variable. The overall measure of success is based on a 5-point Likert scale ranging from 1, CIS is very unsuccessful, to 5, CIS is very successful.
In addition, informants were asked to discuss the main treatment approaches that had been adopted in their Trusts to achieve user ownership and positive user attitudes. In terms of achieving user ownership, a range of treatment approaches was identified; however, there was clearly a strong emphasis on the role of best practice variables as the foundations for these methods. For example, in Trust A it was highlighted that high levels of senior management commitment had led to the provision of resources, which facilitated the delivery of regular, relevant reports to clinical staff and ultimately encouraged ownership (clinical user). User involvement and user training were also cited as facilitators for developing user ownership, with a clinical user at Trust A stating that 'we are using the CIS to support our clinical issues and I think that is because of the involvement of clinicians right from the very start' and 'I think people who went to the training sessions came out recognizing that they would have to implement something that was going to be valuable to them in their clinical practice so the focus and the message from the training was very much to do with ownership'. Best practice variables were also identified as facilitating user ownership at Trusts B and C, with senior management commitment resulting in the appointment of a systems champion (B). Senior managers were also seen as making a concerted effort to give out a positive message that the CIS was for clinical staff benefit (C). User involvement demonstrated that the CIS was for staff benefit and helped to allay the fears of users (C), and training helped to introduce the users to the concepts behind using the information that would be available from the CIS (C). However, as well as being effective treatment approaches, the lack of certain best practice variables was also identified as a significant inhibitor to the development of user ownership. Low levels of senior management commitment and participation were identified as contributing to the problems of clinicians attaining access to information, ultimately resulting in low user ownership (D & E). Additionally, low levels of user involvement resulted in the CIS being seen as imposed on clinicians rather than focused upon clinical needs (E). Finally, where clinicians had not been involved in deciding what information was collected, they did not perceive the information to have any value for them (D & E). As well as affecting the development of user ownership, informants indicated that user attitudes were also frequently influenced by the adoption of best practice variables. The importance of having a well-balanced project team in developing positive user attitudes was identified in Trusts A, B and C, with informants stating that:
• 'I think the thing that has been most important is having somebody with a clinical background. I have a clinical background and I think the thing
that has made the difference is that clinicians have faith in you because they think you understand what you are doing' (IM&T Manager, A);
• 'I think they [clinicians] had more of an affinity with the Head of Information because of his clinical background. I think they felt he was one of their own and their needs would be understood and their requirements would be addressed' (manager, B);
• 'The fact that we have got a Clinical Development Advisor in place is helping to develop the system as well, which from a clinician's point of view is excellent' (clinical user, C).
Similarly, good management of user expectations (A & C) and good quality user training with friendly staff (A, C & D) were also cited as directly contributing to positive user attitudes. It was also significant to note that, as in the case of user ownership, the failure to adopt certain best practice variables was perceived to directly inhibit the development of positive user attitudes. A lack of senior management commitment to using information, low levels of training for clinicians and the failure to realize user expectations were all identified at Trust E as having a negative effect on user attitudes. Similar problems in terms of managing user expectations were also cited as causing negative user attitudes at Trusts B and D. In addition, low senior management commitment at Trust D resulted in low levels of resource provision for the CIS and frustration among clinical users, which was also cited as directly contributing to poor user attitudes. This evidence suggests that best practice variables have a dual role in systems development projects. Not only do they have a direct relationship with the perceived level of success associated with the CIS, but they are also important facilitators for managing and developing user ownership and positive user attitudes, both of which are perceived to have a positive relationship with system success. An overview of the relationship between the adoption of best practice, the attainment of user ownership and positive attitudes, and their resultant impact on system success is presented in figure 1.
Figure 1: Diagram showing the relationships perceived to exist between best practice variables, user attitudes, user ownership and their impact on the CIS project at different trusts
Note: Lines indicate where evidence of causality has been found at the trusts highlighted in brackets

DISCUSSION OF FINDINGS
Having reviewed the role of user ownership and positive user attitudes in influencing the successful application of community information systems, it is important to contextualize these findings within the relevant literature and, in so doing, establish their contribution. Furthermore, the implications of this study, both for healthcare practitioners and IT professionals, also need to be reviewed, as do the study's potential limitations.
Whilst the importance of user ownership and positive user attitudes has previously been touched upon in the information systems literature (for example, Davis, 1993; Van Alstyne et al., 1995), their precise role, and the mechanisms by which they are achieved, have not previously been explicitly explored. Consequently, this research makes two significant contributions. Firstly, it confirms the importance of user ownership and positive user attitudes to the successful outcome of information systems projects through investigating a common organization and system type. Secondly, and more importantly, it presents evidence that both user ownership and positive user attitudes have to be explicitly planned for and then facilitated through the adoption of best practice; in essence, they play an important mediating role between the adoption of best practice and the ultimate achievement of system success. Despite the relatively small sample on which these findings are based, they are given added credibility when interpreted in light of the behavioral sciences literature, especially that concerned with organizational change. For example, Pierce et al. (1991) argue that motivation and positive behavioral responses to change are the result of 'psychological ownership', and Senior (1997) highlights the importance of creating positive employee attitudes to reduce resistance to change initiatives. Furthermore, it is suggested that user ownership and positive user attitudes can be facilitated through employee participation (Bartkus, 1997), education and training (Kotter & Schlesinger, 1979) and senior management commitment (Clarke, 1994). These findings are of particular importance as they have a number of significant implications for the practice of information systems development and project management, from both a healthcare and a more general information systems perspective. Starting with the policy implications for healthcare professionals, the most recent NHS Information Strategy (Burns, 1998) suggests that clinicians: 'must deliver the new [IT] agenda', be part of a culture that is 'change focussed and able to take advantage of new technology' and have access to 'fast reliable and accurate information about the individual patients in their care'. The results of this research suggest that these policy objectives will only be achieved if levels of user ownership are improved and if individual users develop more positive attitudes towards information technology. Consequently, all future IM&T projects within the NHS must adopt coherent change management strategies explicitly focussed upon the attainment of user ownership and positive attitudes, facilitated through active user participation, senior management commitment, high quality training and education and well-balanced project teams. When it comes to the implications of this research for the wider practice of information systems development, any generalizations may have to be
qualified. The UK's National Health Service is an extraordinarily large and complex organization, which is still very labor-intensive. It has very strong traditions, cultures and subcultures running throughout, and is generally perceived as being slow to change (Handy, 1993). Consequently, when embarking upon change programs, such as the introduction of new systems, managing the human resources and the behavioral issues is probably more important than in other contexts. Whilst, therefore, it is likely that attaining user ownership and positive user attitudes is generally important, especially in labor-intensive organizations, these factors may not be as important elsewhere as they are within the NHS. Research into the adoption of innovative technology, within the organizational context, is an ambitious undertaking and therefore contains a number of inherent limitations. In particular, the adoption of the in-depth case study format limited the number of organizations it was possible to target and hence reduces the generalizability of the results. The selection of a small number of stakeholders at each Trust to participate in the study is also a source of potential bias. However, the variations in opinion recorded from the different informants, and in particular the negative comments provided by some clinicians, provided a strong indication that the results were not unduly biased, despite the IM&T Managers acting as gatekeepers to informants. Furthermore, although each Trust typically has large numbers of clinical users, it was only possible to interview one from each Trust. However, the clinical user interviewed always reflected the main user group within the Trust, and the clinical managers provided a further clinical perspective on the development and use of the system in each case. Finally, a further limitation relates to the scope of the study and the fact that it was not practical to study every possible variable that may have influenced the successful outcome of systems development projects. Consequently, although the study provides many interesting and novel insights, the aforementioned limitations should be taken into account when interpreting the results. Such limitations also highlight the need for follow-up studies, employing different methods, targeting different populations and focussing upon different combinations of variables.
CONCLUDING REMARKS
This paper provides an in-depth study of how information systems are being developed and applied within Community Trusts: an important, yet largely neglected, research domain. It explicitly explores the role of user ownership and positive user attitudes and, in so doing, provides important new
insights into how they can be achieved through the well-focused application of user participation, senior management commitment and high quality training and education. In a rapidly changing and ever more challenging organizational environment, where information technology plays an increasingly important operational and strategic role, such insights into the effective practice of information systems development have become critical.
ENDNOTES
1 All our Tomorrow's Conference, Earls Court, London, 2nd July 1998.
2 This organization was used as there is an established research link with the authors' institution.
Chapter XIII
The Effect of Individual Differences on Computer Attitudes

Claudia Orr, David Allen and Sandra Poindexter
Northern Michigan University, USA
Computer competency is no longer a skill to be learned only by students majoring in technology-related fields. All individuals in our society must acquire basic computer literacy to function successfully. Despite the widespread influx of technology in all segments of our society, the literature often reports high levels of anxiety and negative attitudes about using computers. Monitoring computer attitudes, and developing an understanding of the variables that affect them, will assist educators and adult trainers in providing appropriate learning experiences in which learners can succeed. This study examined the relationship between computer attitude and experience, demographic/educational variables, personality type, and learning style of 214 students enrolled in a university computer literacy course.
INTRODUCTION
It has become apparent that computer competency is necessary not only for citizens to function efficiently on a personal level in our society, but also to develop, advance and succeed in their professional lives. End-user computing has emerged as a significant issue affecting organizations. As Torkzadeh and Angulo (1992) caution, "the success of this end-user computing is dependent on the user's acceptance and commitment" (p. 99).
Unfortunately, despite the increasing use of computers in schools, homes, and workplaces across the United States, research continues to report high levels of anxiety, resistance and poor attitudes toward computers among students in higher education who are preparing for professional careers, as well as among those employees already well established in the workplace. In 1993, researchers Rosen and Weil estimated that technophobia afflicted as many as one-third of the 14 million college students in the country (DeLoughry, 1993). A study supported by Dell Computers concluded that 55% of Americans suffer from some degree of technophobia (Williams, 1994). Ostrowski, Gardner, and Motawi (1986) conducted a study to determine the extent of end-user attitude problems; more than 50% of the respondents indicated observing computer attitude problems, with anxiety occurring most often. A meta-analysis of computerphobia research led Rosen and Maguire (1990) to conclude that one-fourth to one-third of all people (college students, business people, and the general public) may be classified as "computerphobic." They also indicate that an additional segment of the population is uncomfortable with computers and will avoid them whenever possible. A variety of terms are used in the literature to describe the negative attitudes associated with computers: computer anxiety, cyberphobia, computerphobia, and technophobia are among those most often used. Jay (1981), one of the first to use the term "computerphobia," provided the following definition: "(a) resistance to talking about computers or even thinking about computers, (b) fear or anxiety toward computers, and (c) hostile or aggressive thoughts about computers" (p. 47). Although research has established that stress and anxiety reduce an individual's ability to perform effectively (Elder, Gardner, & Ruth, 1987; Torkzadeh & Angulo, 1992), and computer anxiety, in particular, has been found to be predictive of whether and how technology is used (Scott & Rockwell, 1997), Rosen and Weil (DeLoughry, 1993) report, "few in higher education and elsewhere in society treat technophobia as a problem worthy of their attention" (p. A25). They say that too many people are under the illusion that computer anxiety will disappear if the world is flooded with technology. Also, Torkzadeh and Angulo (1992) emphasize that "computer anxiety is not a transitory problem that will disappear as the current generation of students, who are gaining computer exposure at an early age, move in to the workforce. The computer training and exposure that young people receive in most high schools and colleges is inadequate since the current proliferation of computers will demand more – not less – computer literacy. The increasing demand for strategic use of computer applications will require even more comprehensive and
continuous training programmes" (p. 104). Computer anxiety has implications for instruction and training, both in educational environments and in the workplace (Dyck, Gee, & Smither, 1998). Harrison and Rainer (1992) explain that, given the growth of end-user computing, organizations need to understand how individual differences relate to computer skill. As stated by Loyd and Gressard (1984a), "positive attitudes increase the prospect for achievement in any academic endeavor, and negative attitudes make achievement of competency less likely; empirical study of the relationships among these attitudes will help us clarify the character and significance of computer attitudes among students" (p. 68). Maurer and Simonson (1993-94) recommend that additional research be conducted, specifically to determine personality variables that may relate to computer anxiety. Ayersman (1996) also encourages further study of computer anxiety so that more effective methods can be developed for reducing its detrimental effects. Given a better understanding of the factors that may affect computer attitude, educators and trainers may be able to identify high-risk learners and to introduce appropriate interventions that may help students and end-users improve their attitudes toward computers and realize their full potential in the classroom and on the job. This study was conducted to examine relationships between computer attitude and experience, and between computer attitude and various personality, demographic, and educational variables.
REVIEW OF LITERATURE
Since the early 1980s, researchers have been studying the computer attitude phenomenon by searching for factors that may predict computer attitudes. Studies conducted across most academic disciplines and at all educational levels, as well as in the workforce, have focused primarily on relationships between computer attitude and prior computer experience, gender, and age.
Computer Experience
A number of studies have examined the effect of formal computer instruction on attitudes towards computers, with attitude measures administered pre- and posttreatment. Results appear to indicate that formal instruction can improve computer attitudes, albeit to varying degrees. During a 16-week investigation, Pope-Davis and Vispoel (1993) measured computer attitudes of undergraduates and graduates. One group received microcomputer training while the control group received no training. Results showed that the students
who received the computer training were less anxious, more confident, and more interested in using computers than students in the control group. While the training group reported significant positive changes in their attitudes during the course, the control group did not. During a semester-long introductory college class on computers in education, Maurer and Simonson (1993-94) found significant decreases in the students' anxiety levels. Interestingly, this decrease was most pronounced for those students having less computer experience prior to the study. Ayersman (1996) also reported significant decreases in computer anxiety in undergraduate students during a 15-hour computer course and again during a 45-hour computer experience. Another study also found that a one-semester college course in computer science improved students' attitudes towards computers (Shashaani, 1997). However, Jones and Wall (1989-90) reported only a small reduction in computer anxiety as a result of a semester course on computers in society. A number of investigations have focused on the association between previous computer usage and computer attitude, but the results have been mixed. Upon measuring computer attitudes of college students enrolled in a required computer information systems course, Marcoulides (1988) concluded that computer anxiety is still present regardless of prior computer experience. In fact, two studies reported that even experienced computer users already in the workplace report symptoms of computer anxiety when they are confronted with learning new computer applications (Ostrowski, Gardner, & Motawi, 1986; Elder, Gardner, & Ruth, 1987). Using Loyd and Gressard's (1984a) Computer Attitude Scale (CAS), Pope-Davis and Twing (1991) found no significant relationship between computer experience and computer attitudes among 207 college students in an introductory computer skills course. Additionally, in separate studies of secondary students, Shashaani (1994) and Woodrow (1994) both reported that computer ownership was not related to computer attitudes. Other studies, however, report that attitudes toward computers were related to computer experience (Ayersman, 1996; Busch, 1995; Coffin & MacIntyre, 1999; Koohang, 1989; Levine & Donitsa-Schmidt, 1998; Loyd & Gressard, 1984a; Shashaani, 1997; Woodrow, 1994) and computer ownership (Houle, 1996; Levine & Gordon, 1989; Levine & Donitsa-Schmidt, 1998; Ogletree & Williams, 1990; Seyal, Rahim, & Rahman, 2000; Shashaani, 1997). Houle (1996) also found computer experience on the job to be an important discriminator of computer attitudes and anxiety. As a result of their meta-analysis of 81 studies, Rosen and Maguire (1990) believe that while computer experience alone does not cure computerphobia, past experience is
related, given that those who are highly anxious will go to great lengths to avoid computers. In a review of the computer anxiety literature, Maurer (1994) concluded that the amount of computer experience seems to have the clearest relationship to computer anxiety of any variable studied, but he cautions that further research needs to be conducted on how anxiety develops so that its development can be interrupted.
Gender
Much has been written in the literature about the prevalence of a "technological gender gap," the idea that males and females have different technology-related attitudes, behaviors, and skills (Canada & Brusca, 1993). But despite the extensive number of studies that have been conducted, the results are conflicting. Studies have often hypothesized that computers appealed more to men and boys than to women and girls, and that males were therefore more likely to have had more computer-related experience. Because many studies also showed experience and computer attitude to be positively related, the logical conclusion was that males had more positive attitudes and less anxiety in working with computers than females. Using a variety of populations, most early research reported that males were less anxious and exhibited more favorable attitudes towards computers than females (Coffin & MacIntyre, 1999; Levine & Gordon, 1989; Massoud, 1991; Shashaani, 1994; Shashaani, 1997). Other studies reported no evidence of computer attitude gender differences (Ayersman, 1996; Busch, 1995; Houle, 1996; Jones & Wall, 1989-90; Loyd & Gressard, 1984a; Pope-Davis & Twing, 1991; Shaw & Giacquinta, 2000). Interestingly, a more recent study by Ray, Sormunen, and Harris (1999) of 62 college students concluded that females are more positive about computers than males. In a meta-analysis of studies that investigated gender differences in computer-related attitudes of adult, college, high school, and grammar school populations, Whitley (1997) found that males showed greater sex-role stereotyping of computers, higher computer self-efficacy, and more positive attitudes towards computers than females. He questions their practical significance, however, because while the gender differences were statistically significant, they were small. In a recent study, McIlroy et al. (2001) also found males having more positive computer attitudes and less computer anxiety than females; again, however, these gender differences were quite small. Whitley cautions against a literal interpretation of these studies and suggests that while men and women may have different self-evaluations, both may fall within a 'normal' range. Maurer (1994) also explains that while the research suggests a relationship between gender and computer anxiety and between age and computer
anxiety, these relationships have not been investigated thoroughly enough to be clearly defined.
Age
The age of computer users has also been examined as a possible predictor of computer attitudes. The logical assumption, given their technologically intensive socialization, is that younger students have more positive attitudes and less anxiety towards computers than older students. Research tends to show, however, that age is not a significant factor in computer attitudes. According to Rosen and Maguire (1990), very little evidence exists that older people are more computerphobic than younger people. Later studies corroborate this finding. In his study of undergraduate business administration students enrolled in a required introductory computer course in Norway, Busch (1995) reported that age had no significant effect on computer attitude. Shaw and Giacquinta (2000) also report that age is not a factor in resistance to using computers for academic and professional purposes. Interestingly, using Loyd and Gressard's (1984a) Computer Attitude Scale (CAS), Massoud (1991) and Pope-Davis and Twing (1991) both found significant positive relationships between age and the Liking subscale of the CAS, indicating that older students have a greater liking of computers (no relationships were reported between age and the other subscales or total scores). In their research on traditional and reentry college students in a computer literacy course, Klein, Knupfer, and Crooks (1993) even found that older students have greater confidence and more interest in learning about computers than younger students.
Personality Type
Maurer (1994) identified the need for studying relationships between computer attitudes and various personality traits to determine anxiety reduction techniques that may be appropriate for different personalities. This factor, however, has not been studied as extensively as experience, gender, and age. Mawhinney and Saraswat (1991) studied the relationship between computer anxiety and personality type in undergraduate business students enrolled in computer courses and found a significant correlation between the two. They concluded that computer anxiety is more common in "feeling" type individuals than in "thinking" types.
Learning Style
Given that individuals achieve at higher levels and are more motivated
and less anxious when taught through their primary learning style (Dunn, Dunn, & Price, 1979; Keefe, 1979a; Kolb, 1981), computerphobia investigations have also targeted learning styles as possible predictors. According to Keefe (1979b), learning styles are cognitive, affective, and physiological behaviors that indicate how learners perceive, interact with, and respond to the learning environment. For example, Park and Gamon (1996) suggest that one person may begin learning a new computer program by experimenting with its features, while another person might prefer to observe how others are using it before trying. Bozionelos (1997) recommends studying the relationship between learning style and computer anxiety due to the continuous computer learning required of all individuals. The association between computer attitude and learning style has not been studied as extensively as other variables such as experience, gender, and age. When this variable is analyzed, however, the Kolb Learning Style Inventory (1985) has often been the instrument used to assess learning modality. Kolb classifies learners into one of four learning style groups: Converger, Diverger, Assimilator, or Accommodator. Descriptions of these groups are presented in the next section. Ayersman and Reed (1995-96) and Ayersman (1996) studied Kolb-based learning styles and computer anxiety among college students. In a short course, Ayersman and Reed (1995-96) found no significant differences in computer anxiety among the learning style groups either prior to or following the course. There were also no pre- to posttreatment changes in computer anxiety for any of the learning style groups. During a more intensive computer experience, Ayersman (1996) found significant differences in computer anxiety among learning style groups at the posttreatment point: Convergers reported lower anxiety levels than Assimilators and Divergers. Although Bozionelos (1997) administered the Kolb Inventory to a different population (204 graduate students in a British management school), his results paralleled those reported by Ayersman (1996). Students who were classified as Convergers reported significantly lower levels of computer anxiety than the students who were classified as Divergers. The lower levels of computer anxiety reported by Convergers are consistent with their technically oriented profile. Based on the inconclusive findings of predictors of computer attitudes, this study was undertaken to further define possible relationships.
STUDY
Methodology
The sample consisted of 214 students in six sections of a semester-long (15-week) computer literacy course at a regional midwestern university. Students
enrolled in one of the sections through a process of self-selection during university registration in winter 1998 (three sections), summer 1998 (one section), and fall 1998 (two sections). The course satisfied a university liberal studies requirement and was open to all majors. No prerequisites existed, and student computer experience ranged from beginner to advanced. Each section met in a regular classroom for 2½ hours per week and in a computer lab for 1¼ hours per week. A computer concepts textbook, focusing on the principles and theory of computerized information, was covered through a combination of lecture and small group discussions during the regular classroom sessions. All sections were provided demonstrations of a multimedia CD-ROM that accompanied the text, along with explanations of how its interactive features could be used to improve knowledge learning. A computer-based tutorial program, which students used to acquire software competency, was used during the computer lab sessions. Assignments included projects from the accompanying worktext.
Purpose of Study
Despite the technologically oriented society in which we live, students and employees alike continue to exhibit negative attitudes about using the computer. These attitudes can reduce performance both in the classroom and on the job. Equipped with an understanding of the variables that may affect computer attitudes, educators and trainers could identify these individuals and provide more appropriate learning environments in which success can occur. Therefore, the purpose of this study was to investigate the relationship between computer attitudes and computer experience, and selected demographic, educational, and personality variables. Based on the Computer Attitude Scale by Loyd and Gressard (1984a), three types of computer attitude were studied: (1) computer anxiety, consisting of anxiety toward or fear of computers or learning to use computers; (2) computer confidence, relating to confidence in the ability to learn about or use computers; and (3) computer liking, meaning enjoyment or liking of computers and using computers.
Hypotheses
Given the lack of conclusive evidence on predictors of computer attitude, a review of the literature suggested continued study of the variables experience, gender, age, personality type, and learning style. Additional demographic and educational factors were also analyzed to determine possible relationships with computer attitude. Based on this analysis of the literature,
the following null hypotheses were examined:
1. There is no statistically significant change in computer attitude between the beginning of a computer literacy course and the end of the course.
2. There is no statistically significant relationship between computer experience and initial computer attitude, final computer attitude, or change in attitude.
3. There is no statistically significant relationship between selected demographic/educational variables and initial computer attitude, final attitude, or change in attitude.
4. There is no statistically significant relationship between personality type and initial computer attitude, final computer attitude, or change in attitude.
5. There is no statistically significant relationship between learning style and initial computer attitude, final computer attitude, or change in attitude.
Instruments and Descriptive Statistics
Four instruments were administered to identify potential contributing factors to computer attitude. A description of the subjects follows an explanation of each instrument.
Computer Attitude. Computer attitudes were measured using the Computer Attitude Scale (CAS) by Loyd and Gressard (1984a). Gardner, Discenza, and Dukes (1993) and Woodrow (1991) both conducted studies analyzing four computer attitude scales, one of which was Loyd and Gressard's instrument, to compare reliability, dimensionality, and construct validity. Both studies concluded that the scales were comparable and that researchers would have more than adequate measures of computer attitudes using any of the instruments. After studying the internal consistency reliabilities of 14 computer attitude scales, Christensen and Knezek (2000) conclude that while most of the attitudinal subscales that were originally strong have held up well over time, a notable exception is the Confidence subscale of the Loyd and Gressard instrument. The CAS was administered as both a pre- and a post-evaluation measure. This Likert-type instrument consists of 30 items that represent computer attitudes on three subscales: (1) anxiety or fear of computers, (2) confidence in the ability to use or learn about computers, and (3) liking of computers or enjoyment of working with computers. Loyd and Gressard (1984b) report alpha reliability coefficients of .86, .91, .91 and .95 for each subscale and the total score, respectively. Subjects in the current study chose one of five ordered
responses, ranging from "strongly agree" to "strongly disagree." Half of the 10 items on each subscale are negatively phrased to reduce the effect of response bias. Negatively phrased items are reverse scored, and item scores for each scale are totaled to calculate the subscale score. Four scores, one for each subscale and one for the total, were computed for each student both prior to the course and at its conclusion. While Loyd and Gressard (1984b) code responses so that higher scores reflect more positive attitudes toward computers (i.e., lower anxiety, higher degree of confidence and liking), the coding was reversed in the present study so that lower scores reflected more positive attitudes. Mean scores for each of the three attitude subscales (anxiety, confidence, and liking) and the total are shown in Table 1. Each subscale score can range from 10 to 50; the total score is the sum of the three subscale scores and can range from 30 to 150. The total mean scores of 62.90 at the beginning of the investigation and 61.01 at the conclusion of the study indicate that, as a group, students were fairly positive in their attitudes toward computers, and that these attitudes improved slightly during the semester. Specifically, students reported less anxiety about using computers after the course. The differences in confidence and liking could be due to statistical fluctuation.

Table 1: Computer attitude mean scores

                  N     Anxiety   Confidence   Liking    Total
Pre-Attitude     214      19.69        20.13    23.07    62.90
Post-Attitude    208      17.90        19.56    23.85    61.01
Difference       208      -1.68        -0.43     0.89    -1.21
Note: lower scores = less anxiety, more confidence, and greater liking
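To make the reverse-scoring arithmetic just described concrete, a minimal sketch follows. The item numbering and the choice of which half of each subscale is negatively phrased are assumptions for illustration only; the published CAS key assigns them differently.

# Sketch of CAS-style subscale scoring with reverse-scored items.
# Assumptions (not the real CAS key): items 0-9 = anxiety, 10-19 =
# confidence, 20-29 = liking, and the first five items of each
# subscale are the negatively phrased ones.

SUBSCALES = {
    "anxiety":    (range(0, 10),  range(0, 5)),
    "confidence": (range(10, 20), range(10, 15)),
    "liking":     (range(20, 30), range(20, 25)),
}

def score_cas(responses):
    """Score one student's 30 responses (Likert codes 1-5)."""
    scores = {}
    for name, (items, reversed_items) in SUBSCALES.items():
        # Reverse-score negatively phrased items: 1 <-> 5, 2 <-> 4.
        scores[name] = sum((6 - responses[i]) if i in reversed_items
                           else responses[i] for i in items)
    scores["total"] = scores["anxiety"] + scores["confidence"] + scores["liking"]
    return scores  # each subscale ranges 10-50, the total 30-150

print(score_cas([3] * 30))  # {'anxiety': 30, 'confidence': 30, 'liking': 30, 'total': 90}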
Computer Experience. A survey instrument was developed by the researchers to determine students' levels of computer experience. Questions focused on the reason for enrolling in the course, past and present computer classes, past and present work experience using computers, and computer ownership. Students identified the software applications presently or previously used in each category. Scores for each category were determined by a simple count of the number of software applications checked. Table 2 shows the amount of computer experience reported by students in the study.

Table 2: Computer experience distribution

Experience                                                      Number   Percentage
Computer Use/Software Applications in Current Course(s)
  0                                                                161        77.0%
  1-2                                                               19         9.1%
  3-4                                                               22        10.5%
  5+                                                                 7         3.3%
Computer Use/Software Applications in Prior Computer Course(s)
  0                                                                 14         6.7%
  1-2                                                               39        18.6%
  3-4                                                               82        39.0%
  5-6                                                               63        30.0%
  7+                                                                12         5.7%
Software Applications in Current or Previous Job
  0                                                                114        54.8%
  1-2                                                               41        19.7%
  3-4                                                               26        12.5%
  5-6                                                               19         9.1%
  7+                                                                 8         3.9%
Computer Ownership
  Yes                                                               96        46.2%
  No                                                               112        53.8%
Note: N ranged from 208 to 210 due to student absences and unusable responses.

Approximately three-fourths (77%) of the subjects were not using computers in other classes during the semester in which they participated in this study. A larger proportion of students, however, reported that they had used computers in previous classes (high school or college). In fact, 39% of students indicated that they had used three or four different software application programs in prior courses. Only 6.7% of students reported no computer use in previous classes. Half of the participants had never used a computer in a job, while the other half had been exposed to at least one software program in a job. Eight students reported having used seven or more software applications at work. Slightly more than half (53.8%) of the participants indicated that they did not own computers.
Demographic and Educational Factors. Demographic and educational data were gathered via a survey posted by the researchers on the World Wide Web. Students were asked to provide qualitative responses to a number of questions about their study skills and commitment levels to the course. Table 3 shows the distribution of demographic and educational data for the 214 students who participated in this study. GPA and credit load are shown in Table 4 by mean and standard deviation due to the wide range of responses for these factors.
Of the 214 subjects in this study, slightly more than half were males (53.7%). Given that this was an introductory computer literacy course that served a liberal studies function, it is not surprising that the greatest number of students were freshmen (38%), followed by sophomores (25.3%), juniors (21.6%), seniors (14.1%), and graduates (1%). Participants ranged in age from 18 to over 27, with the greatest percentage of students between 18 and 22 years old. Most of the participants (89%) reported no dependents other than themselves, and they lived either on campus (38%) or off campus but in the city (43%); only 18% commuted. When asked to evaluate their note-taking ability, the majority of students rated themselves as "average" (59.8%), while a third of the subjects described their note-taking skills as "good" (32.7%) and only 7.5% believed they had "poor" note-taking skills. When asked to approximate the number of hours devoted to outside commitments (job, family, volunteer work, etc.), approximately 39% indicated more than 20 hours, 29% were committed for 11-20 hours, and 32% for 0-10 hours. Subjects were also asked to rate themselves on questions related to their study skills and levels of commitment to the course. When asked about their perceived learning mode, 71% indicated that they preferred hands-on assignments; the percentages preferring lecture and self-paced learning were about equal (11.7% and 10.3%, respectively), while the fewest students preferred the group approach (7%). Most of the students reported having either high or medium interest levels in the course (43.2% and 51.6%, respectively), while only 5.2% indicated a low interest level. The majority of students (72.5%) believed they would demonstrate a high level of attendance during the semester, while 7% predicted they would be absent more than twice each month. When asked how many times they typically read each chapter of the course text during the semester, just over half (51.4%) of the students reported reading each chapter once, and 15% said they scanned rather than read. Approximately half (47.2%) of the subjects believed they were active participators during class, while slightly more than half (51.4%) indicated that they were passive participators who answered questions when asked but did not volunteer. Surprisingly, half the subjects reported spending a mere one to four hours per week outside of class working on the course; 35.5% indicated that they spent 5-8 hours per week, and 14.5% estimated devoting more than 8 hours per week. As shown in Table 4, students in this study reported a mean GPA of 2.72 out of a possible 4.00 and were taking approximately 13 credit hours during
the semester of the study (12 credit hours is considered a full load).
Personality Type. Personality type was determined using the Keirsey Temperament Sorter II (Keirsey, 1998). This inventory, which was accessed by the students on the World Wide Web (Keirsey, 1998), is derived from Jung's theory of psychological types. After responding to 70 questions, participants are identified as exhibiting one of four temperament types: Guardian, Artisan, Idealist, or Rational. Keirsey (1998) provides profiles of each type. Guardians are concrete in communicating, cooperative in implementing goals, and skilled in logistics; approximately 40-45% of the population is comprised of Guardians. Artisans are described as concrete in communicating, utilitarian in implementing goals, and skilled in tactical variation; Artisans make up approximately 35-40% of the population. Idealists tend to be abstract in communicating, cooperative in implementing goals, and skilled in diplomatic integration; only about 8-10% of the population is comprised of Idealists. Rationals are abstract in communicating, utilitarian in implementing goals, and skilled in strategic analysis; very few Rationals are found, only 5-7% of the population. Table 5 shows the distribution of student personality types for the 214 participants in this investigation. An overwhelming 68% of students were classified as Guardians, while the remaining students were fairly evenly distributed among the Artisan, Idealist, and Rational types. When compared with the general population, the current sample had approximately 25% more Guardians, approximately 25% fewer Artisans, an equal proportion of Idealists, and slightly more Rationals. When chi-square analysis was performed, results showed a chi-square statistic of 69.9 with three degrees of freedom, a highly significant difference. Therefore, the current sample comprised significantly more Guardians, fewer Artisans, and slightly more Rationals than would be expected from the general population.
Learning Style. The Learning Style Inventory (Kolb, 1985) was used to measure students' individual learning styles. This relatively brief instrument consists of 12 items, in each of which the student is required to rank order four endings to a statement, from how they learn best to how they learn least. Kolb (1985) explains learning as a four-stage cycle based on the following learning modes: concrete experience (CE), reflective observation (RO), abstract conceptualization (AC), and active experimentation (AE). Although different learners start at different places in the cycle, effective learning uses each stage, and each person's learning style is a combination of the four modes. After combining scores from the four learning modes, an individual's specific learning style is determined as either Converger (AC-AE), Diverger (CE-RO), Assimilator (AC-RO), or Accommodator (CE-AE).
Table 3: Distribution of demographic and educational data (N=214)

Characteristic                                         Number   Percentage
Gender
  Female                                                   99        46.3%
  Male                                                    115        53.7%
Class Status
  Freshman (1)                                             81        38.0%
  Sophomore (2)                                            54        25.3%
  Junior (3)                                               46        21.6%
  Senior (4)                                               30        14.1%
  Graduate (5)                                              2         1.0%
Age
  18-19 (1)                                                90        42.1%
  20-22 (2)                                                80        37.4%
  23-26 (3)                                                21         9.8%
  27+ (4)                                                  23        10.7%
College
  Arts & Science (1)                                       83        38.8%
  Business (2)                                             57        26.6%
  Behavior Science/Education (3)                           45        21.0%
  Nursing (4)                                               7         3.3%
  Technology & Applied Sciences (5)                        22        10.3%
Number of Dependents (including themselves)
  1                                                       191        89.3%
  2                                                        16         7.5%
  3                                                         3         1.4%
  4+                                                        4         1.8%
Residence
  On campus                                                82        38.3%
  In city, but off-campus                                  93        43.5%
  Out of city                                              39        18.2%
Note-taking Skills
  Poor (1)                                                 16         7.5%
  Average (2)                                             128        59.8%
  Good (3)                                                 70        32.7%
Outside Commitments (approximate hours per week)
  0-10 (1)                                                 68        31.8%
  11-20 (2)                                                63        29.4%
  20+ (3)                                                  83        38.8%
Perceived Learning Mode Providing Highest Level of Learning
  Hands-on assignment work (1)                            152        71.0%
  Lecture (2)                                              25        11.7%
  Group projects (3)                                       15         7.0%
  Self-paced learning (4)                                  22        10.3%
Table 3: Distribution of demographic and educational data (N=214) (continued)

Characteristic                                                             Number   Percentage
Interest Level in the Course
  High (1)                                                                     92        43.2%
  Medium (2)                                                                  110        51.6%
  Low (3)                                                                      11         5.2%
Expected Attendance During the Semester
  High (0-3 absences for entire semester) (1)                                 155        72.5%
  Medium (1-2 or fewer absences in each month) (2)                             44        20.5%
  Low (more than 2 absences in each month) (3)                                 15         7.0%
Number of Times Each Chapter Is Typically Read During a Semester
  More than 3 times (1)                                                        12         5.6%
  2-3 times (2)                                                                60        28.0%
  Once (3)                                                                    110        51.4%
  Scanned only (4)                                                             32        15.0%
Thoroughness of Completing Software Tutorials
  Carefully read and completed all steps (1)                                  101        47.2%
  Read but didn't dwell on material (2)                                        85        39.7%
  Quickly moved (3)                                                            28        13.1%
Participation Style
  Active participator who offers comments, answers, and asks questions (1)    101        47.2%
  Passive participator who answers questions when asked,
    but does not volunteer (2)                                                110        51.4%
  Avoids participation by looking away or down when questions are asked (3)     3         1.4%
Number of Hours Typically Spent Outside of Class Working on the Course
  8+ (1)                                                                       31        14.5%
  5-8 (2)                                                                      76        35.5%
  1-4 (3)                                                                     107        50.0%

Note: Numbers in parentheses indicate coding for statistical tests.
Table 4: Mean scores and standard deviations for GPA and credit load (N=214)

                      Mean   Standard Deviation
GPA (4.00 scale)      2.72                 0.73
Credit Load          13.57                 3.53
Table 5: Distribution of personality type (N=214)

Personality Type   Number   Percentage
Guardian              146        68.2%
Artisan                27        12.6%
Idealist               21         9.8%
Rational               20         9.4%
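The chi-square figure reported above can be approximately reproduced from the Table 5 counts. The sketch below assumes the midpoints of Keirsey's quoted population ranges as the expected proportions; since the exact proportions the authors used are not stated, the statistic comes out near, but not exactly at, the reported 69.9:

# Goodness-of-fit test of the sample's temperament distribution against
# assumed population proportions (midpoints of Keirsey's quoted ranges).
from scipy.stats import chisquare

observed = [146, 27, 21, 20]            # Guardian, Artisan, Idealist, Rational
midpoints = [0.425, 0.375, 0.09, 0.06]  # assumed proportions; they sum to 0.95
expected = [sum(observed) * p / sum(midpoints) for p in midpoints]

stat, p = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.1f} (df = 3), p = {p:.1e}")  # roughly 68.7, p << .001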
Convergers are best at finding practical uses for ideas and theories; they are problem solvers who find solutions. Convergers prefer to deal with technical problems rather than social issues. Divergers view concrete situations from a variety of viewpoints. They prefer to observe rather than take action and like to acquire a wide range of ideas. Because of their imagination and sensitivity, Divergers tend to find careers in the art, entertainment, or service areas. Assimilators focus more on abstract ideas and concepts than on people. They are more theoretical than practical and find careers in information and science fields. Accommodators like to become directly involved in new experiences and to solve problems based on feelings rather than on logical analysis. Accommodators rely on other people, rather than on their own technical analysis, to help solve problems, and are effective in marketing and sales careers. Table 6 shows that the subjects in this study were fairly evenly distributed among the four learning styles.

Table 6: Learning style distribution (N=214)

Learning Style   Number   Percentage
Accommodator         51        23.8%
Assimilator          59        27.6%
Converger            58        27.1%
Diverger             46        21.5%
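The mode-combination rule described earlier, pairing the dominant perception mode (CE versus AC) with the dominant processing mode (RO versus AE), can be sketched as follows; the tie-breaking and the absence of the LSI's normed cut-off scores are simplifications:

# Sketch of Kolb-style classification from the four mode scores.
def kolb_style(ce, ro, ac, ae):
    abstract = ac >= ce  # perceives through abstract conceptualization?
    active = ae >= ro    # processes through active experimentation?
    if abstract and active:
        return "Converger"      # AC-AE
    if abstract:
        return "Assimilator"    # AC-RO
    if active:
        return "Accommodator"   # CE-AE
    return "Diverger"           # CE-RO

print(kolb_style(ce=25, ro=20, ac=35, ae=40))  # Converger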
RESULTS
A narrative of the statistical results follows. More detailed statistical data can be found in Orr, Allen, and Poindexter (2001).
Computer Attitude Change
A simple multiple comparisons test (Bonferroni test) was used to determine simultaneously whether the three components of attitude (anxiety, confidence and liking) changed during the semester. Results showed that anxiety decreased as the semester progressed. As indicated by a mean decrease of 1.678 between the pre- and post-attitude measures, students reported significantly less anxiety at the end of the course. However, confidence and liking for computers did not significantly change during the
semester. Therefore, Hypothesis 1 is partially rejected. (In the analysis and discussion of attitude results, lower scores correspond to more positive attitudes; e.g., a lower confidence score means more confidence and a lower anxiety score means less anxiety.)
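One way to read the "simple multiple comparisons (Bonferroni) test" is as paired pre/post comparisons on each subscale with the alpha level divided by three. The sketch below uses synthetic scores shaped like the Table 1 means, since the raw data are not available, and may differ in detail from the authors' exact procedure:

# Paired pre/post comparisons on the three subscales with a
# Bonferroni-adjusted alpha (synthetic data, not the study's scores).
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 208  # students with both pre and post measures
mean = {"anxiety": 19.69, "confidence": 20.13, "liking": 23.07}
shift = {"anxiety": -1.68, "confidence": -0.43, "liking": 0.89}

alpha = 0.05 / 3  # Bonferroni correction for three simultaneous tests
for s in shift:
    pre = rng.normal(mean[s], 5.0, n)
    post = pre + shift[s] + rng.normal(0.0, 3.0, n)
    t, p = ttest_rel(post, pre)
    print(f"{s}: t = {t:.2f}, p = {p:.4f}, significant at {alpha:.4f}: {p < alpha}")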
Computer Attitudes and Computer Experience
Sets of variables were analyzed to determine relationships between computer attitudes and computer experience. Attitude is a set with three elements: anxiety, confidence, and liking. Experience is a set with four elements: current courses, prior courses, job experience, and ownership. A common statistical test for examining whether two sets of variables are related is Canonical Correlation Analysis (CanCor). More extensive discussions of CanCor analysis can be found in Aaker (1981), Anderson (1958), Green (1978), Johnson and Wichern (1982), Morrison (1967), Sharma (1996), and Tabachnick and Fidell (1983). CanCor analysis revealed three experience variables that may be related to initial attitudes: prior computing courses, job experience working with computers, and computer ownership. A significant relationship was found between prior courses and anxiety (r=-0.227, p=.001), prior courses and confidence (r=-0.139, p=.048), job experience and anxiety (r=-0.195, p=.003), job experience and confidence (r=-0.220, p=.001), job experience and liking (r=-0.201, p=.004), ownership and anxiety (r=-0.209, p=.002), ownership and confidence (r=-0.186, p=.008), and ownership and liking (r=-0.182, p=.012). Given that lower attitude scores indicate more positive attitudes, the negative correlations revealed that students who have more prior computer course experience report less anxiety and more confidence than students who have less computer course experience; students who have had more computer-related job experience report less anxiety, more
confidence, and greater liking of computers than students who have had less computer-related job experience; and students who own computers report less anxiety, more confidence, and a greater liking for computers than those who do not own computers. At the conclusion of the semester course, students were again asked to respond to the Computer Attitude Scale. Results of CanCor showed that the relationship between prior courses and computer attitude was no longer statistically significant. Significant relationships continued to exist between computer ownership and attitude on all three subscales, and between computer-related work experience and attitude, but only on the liking subscale. Again, students who have more work experience using computers tend to like computers more than students with less computer work experience (r=-0.179, p=.013). The students who own computers continued to have less anxiety (r=-0.197, p=.008), more confidence (r=-0.191, p=.010), and a greater liking for computers (r=-0.175, p=.018) than the non-computer owners. When the relationship between change in attitude and computer experience was analyzed, CanCor results indicated that only prior courses was marginally significantly related to attitude change on the anxiety subscale, with r=0.150 and p=0.048. Hypothesis 2 is rejected:
1. Initial computer attitudes:
   a. more prior courses indicated less anxiety and more confidence
   b. more work experience indicated less anxiety, more confidence, and greater liking
   c. ownership indicated less anxiety, more confidence, and greater liking
2. Final computer attitudes:
   a. more work experience indicated greater liking
   b. ownership indicated less anxiety, more confidence, and greater liking
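For readers who want to replay this kind of analysis, the sketch below fits a canonical correlation between a three-variable experience set and a three-variable attitude set using scikit-learn. The data and the way the variables are constructed are assumptions; the chapter's CanCor runs also reported the individual variable correlations quoted above:

# Canonical correlation between an experience set and an attitude set
# (synthetic data; lower attitude scores mean more positive attitudes).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n = 214
prior_courses = rng.poisson(3.0, n)   # software apps used in prior courses
job_experience = rng.poisson(2.0, n)  # software apps used on the job
ownership = rng.integers(0, 2, n)     # owns a computer? (0/1)
X = np.column_stack([prior_courses, job_experience, ownership])

# Make the attitude set weakly depend on experience so a correlation exists.
Y = 25.0 - X @ np.array([[0.8, 0.5, 0.4],
                         [0.6, 0.9, 0.5],
                         [2.5, 2.0, 2.2]]) + rng.normal(0.0, 4.0, (n, 3))

cca = CCA(n_components=1).fit(X, Y)
U, V = cca.transform(X, Y)
print(f"first canonical correlation: {np.corrcoef(U[:, 0], V[:, 0])[0, 1]:.2f}")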
Computer Attitudes and Demographic/Educational Variables
Canonical Correlation Analysis was conducted to determine which demographic and educational variables may be associated with initial attitudes, final attitudes, and change in attitudes. Results showed that note-taking skills, interest level, hours spent on the class, class status, credit load, and age appear to be important variables affecting attitudes upon entering the course. Self-reported participation was marginally significant. The results indicate that at the beginning of the semester, the older students were more positive about computers (less anxious, more confident, and greater liking). However, those who had completed the least education (i.e., freshmen) and those who were carrying more credits were less anxious, more confident, and reported a
greater liking for learning about and using computers. Interestingly, good note-taking skills were also associated with a more positive attitude set. Also, higher interest in the class (the higher the score, the lower the interest) was associated with positive attitudes, while, not surprisingly, spending more time working on the class was associated with negative attitudes (higher anxiety, less confidence, and a greater dislike for computers). At the conclusion of the investigation, class status and age continued to play a role in predicting computer attitudes; however, credit load dropped out and GPA entered the equation. Of the educational variables, only interest level was significant in predicting final attitude, and this significance was consistent on all three attitude subscales. These results indicate that at the conclusion of a computer literacy experience, higher interest levels (a lower score) predict more positive attitudes. Therefore, students who report more positive computer attitudes at the completion of a computer literacy experience have the following profile: older students with less education (i.e., freshmen) who have earned higher GPAs and report greater interest in using computers. And, marginally, those with good note-taking skills appear to have more confidence in using the computer. Results indicated that change in computer attitudes is related only to the variable interest on the anxiety (r=0.190, p=.008) and confidence (r=0.229, p=.001) subscales; the variable hours spent on class is related to change in attitudes on the confidence (r=0.175, p=.024) subscale. Change in anxiety and change in liking might be associated with hours spent; those who spent more hours reported a change to a more positive association with computers. Hypothesis 3, which refers to relationships between demographic/educational variables and initial, final, or change in computer attitudes, is rejected on the basis of the following results:
1. Initial computer attitudes:
   a. lower class status indicated less anxiety, more confidence, and greater liking
   b. more credits indicated less anxiety, more confidence, and greater liking
   c. older students were more confident and reported greater liking
   d. better note-taking skills indicated greater confidence
   e. greater interest in the course indicated less anxiety, more confidence, and greater liking
   f. more hours spent working on the class indicated more anxiety and less confidence
2. Final computer attitudes:
   a. lower class status indicated less anxiety, more confidence, and greater liking
   b. older students were less anxious, more confident, and reported greater liking
   c. higher GPA indicated more confidence
   d. greater interest in the course indicated less anxiety, more confidence, and greater liking
3. Change in computer attitudes:
   a. greater interest in the course indicated less anxiety, more confidence, and greater liking
   b. more hours spent working on the class indicated less confidence
Computer Attitudes and Personality Type
The fourth question focused on the relationship between personality type, as measured by the Keirsey Temperament Sorter II, and initial computer attitude, final computer attitude, or change in attitude. Analysis of variance results failed to reject the hypothesis. No significant relationships between the attitude subscales and personality type were shown for initial attitude (anxiety: F=0.78, p=0.508; confidence: F=0.47, p=0.704; liking: F=2.16, p=0.094); final attitude (anxiety: F=1.00, p=0.392; confidence: F=1.26, p=0.290; liking: F=1.79, p=0.150); or change in attitude (anxiety: F=0.44, p=0.721; confidence: F=0.92, p=0.434; liking: F=0.02, p=0.997).
Computer Attitudes and Learning Style
Analysis of variance procedures were conducted to address the fifth hypothesis: whether relationships existed between learning style, as measured by the Kolb Learning Style Inventory, and initial attitude, final attitude, or change in attitude. Results indicated no significant relationships between initial attitude on any of the subscales and learning style (anxiety: F=1.40, p=0.243; confidence: F=0.92, p=0.433; liking: F=1.77, p=0.154); no significant relationships between final attitude on any of the subscales and learning style (anxiety: F=1.61, p=0.187; confidence: F=1.58, p=0.196; liking: F=1.17, p=0.322); and no significant relationship between change in attitude on any of the subscales and learning style (anxiety: F=0.98, p=0.404; confidence: F=0.91, p=0.437; liking: F=0.25, p=0.864). Therefore, Hypothesis 5 was not rejected.
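Both of the preceding tests are one-way analyses of variance of attitude scores across categorical groups (the four Keirsey temperaments, the four Kolb learning styles). Purely as an illustration of the mechanics, with made-up scores rather than the study's data, such a test can be run as follows:

    from scipy import stats

    # Hypothetical anxiety-subscale scores grouped by Keirsey temperament.
    guardians = [12, 15, 11, 14, 13, 16]
    artisans  = [14, 13, 15, 12, 14]
    idealists = [11, 16, 13, 15, 12]
    rationals = [13, 12, 14, 15, 13]

    f_stat, p_value = stats.f_oneway(guardians, artisans, idealists, rationals)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
    # A p-value above .05, as in the results reported here, gives no grounds
    # for rejecting the null hypothesis of equal group means.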
DISCUSSION
Consistent with the findings reported by previous researchers, the present study showed that computer anxiety can be reduced through formal
computer instruction. The relationship between computer attitudes and prior computer experience corroborates the findings of numerous studies (cited in the section, Review of Literature). The fact that at the beginning of the course students who had more software application experience in prior courses were significantly less anxious and more confident than students who had less prior coursework, but that these differences disappeared following the semester computer literacy course, is additional evidence of the positive effect of formalized computer instruction on computer anxiety.

Also consistent with the results of previous studies, it appears that students report more positive computer attitudes when they have had at least some work experience using computers. This emphasizes the importance of practical, real-world experiences. Most importantly, the finding that students who own computers consistently report less anxiety and higher levels of confidence and liking both at the beginning of the course and at the end of the course may provide evidence to parents and other consumers of the added benefit of this purchase.

Contrary to the literature which suggests a "technological gender gap," the current findings do not support a difference in computer attitudes between males and females. These results are, however, consistent with the conclusions of many studies that women are no more computerphobic than men. Corroborating the findings of previous studies which found that older students enjoyed working with computers more than younger students, the older students in this study reported more confidence in and greater liking for computers at the beginning of the course and again at the conclusion of the course. That older students reported less anxiety about computers at the end of the semester (but not at the beginning) suggests that formal computer instruction can reduce computer anxiety in the adult learner.

Interest level appears to be a strong and consistent predictor of computer attitude. Self-reported higher interest levels were associated with less anxiety, more confidence, and a greater liking for computers both at the beginning and at the end of the course. Other variables found to be associated with computer attitudes that have not been suggested in the literature included class status, credit load, notetaking skills, GPA, and hours spent working on the class. Of these, only class status was consistently found to be a factor on all three attitude subscales both prior to and at the end of the course. As expected, freshmen tend to report better attitudes than seniors. Typically, students who like to use computers or who may be interested in technology-oriented careers will enroll in the computer literacy course early in their education. "Computerphobics," however, will avoid technology-oriented classes for as long as possible.

It was expected that personality type and learning style would have been
related to computer attitude, but the results did not show these associations. Perhaps the distribution did not lend itself to statistical analysis: 68% of the current sample reported the same personality type (Guardians), and the distribution of all four types was dissimilar to that of the general population (this study had 25% more Guardians, 25% fewer Artisans, and 2 to 4% more Rationals than the general population). Contrary to the findings of previous studies, no relationships were found between computer attitude and learning style. However, many different modes of learning were utilized during this computer literacy course – individual hands-on, lecture, and small group discussion – which may have met the needs of most learners.
CONCLUSIONS
The following conclusions are offered based on the findings of this study:
(1) anxiety associated with computers may be reduced somewhat through formal classroom instruction;
(2) students who have prior computer course experience are more positive about computers at the beginning of an introductory computer course than their peers with less computer-related course experience, but by the conclusion of the semester of instruction, this difference is negligible;
(3) students who have work experience using computers have less anxiety, more confidence, and a greater liking of computers at the beginning of a computer course, but this work experience only affects the amount they like to use computers by the conclusion of the course;
(4) students who own computers consistently report more positive attitudes toward computers;
(5) males and females do not differ in their attitudes toward computers;
(6) older students tend to have more positive attitudes toward computers than younger students; and
(7) freshmen tend to be more positive about computers than upperclassmen.
IMPLICATIONS FOR EDUCATION AND TRAINING
If computer anxiety can be reduced through a semester of formal computer instruction as was shown in this study, and given the inverse relationship between achievement and anxiety (Elder, Gardner, & Ruth, 1987; Torkzadeh & Angulo, 1992), educators and trainers must continue to emphasize the benefit of formal computer instruction for students and employees. While the present study focused on computer attitudes of university students in a computer literacy experience, these findings may indeed be applicable to training needs in the workplace as well as for the life-long
learning essential in our technologically-intensive society. The results suggest that institutions of higher education as well as organizations must provide relevant, structured computer instruction for students and employees. Given the relationship between work experience and computer attitude, classroom teachers and industry trainers are advised to integrate practical applications into their classroom instruction. Encouraging learners to apply the computer to real-world problems provides an important and often overlooked aspect of the educational process.

Additionally, given the confirmation that students do experience computer anxiety, educators should not ignore its existence. Levine and Donitsa-Schmidt (1998) recommend that teachers periodically evaluate students' attitudes, levels of anxiety, and computer-related self-confidence. Short questionnaires can be administered, followed by classroom dialogue to explore the extent and nature of the negativism. Through quantitative as well as qualitative assessment of computer attitudes, teachers can develop a better understanding of their students' attitudes and be able to recommend coping strategies.

Many colleges and universities are now recommending or requiring that students own or rent a computer upon admission. Given the positive attitudes exemplified by students who own a computer, a by-product of this policy may in fact be that students who had no previous work experience or access to computers at home will develop more positive attitudes about this technology. However, it is imperative not to assume that students will be motivated to apply this tool in their work without formal training. The fact that classroom instruction plays a significant role in reducing computer anxiety cannot be ignored.
Chapter XIV
Computer Viruses: Winnowing Fact from Fiction

Stu Westin
University of Rhode Island, USA
It would be difficult to find a veteran end user who is unwilling to share at least one "war story" concerning a computer virus. Viruses are, and undoubtedly will continue to be, a fact of life in the end user computing community. Many tales of bouts with computer viruses contain a good measure of embellishment, and many computer mishaps attributed to viruses are truly due to "pilot error." Regardless of these facts, computer viruses are a problem worth addressing. This paper considers the past and current status of computer viruses and "defensive computing," and the degree to which the situation has been clouded by hype, misinformation, and misunderstanding.

While the coining of the term computer virus is attributable to Fred Cohen in conjunction with his 1983 academic research on a DEC VAX platform (Cobb, 1998), the phenomenon did not become a concern to users of application systems until several years later. In 1987, occurrences first appeared in several universities, and shortly thereafter in corporate settings. In today's environment, the computer virus threat clearly impacts every computer user in one way or another. The degree of impact is not as clear, however.

The severity of the virus threat is particularly difficult to pin down. One reason is that a major source, if not the major source, of literature and information on computer viruses is the vendors of anti-virus (AV) software
products. I do not mean to imply that these vendors knowingly disseminate erroneous information, but let's face it, they have a vested interest in your perceiving the virus threat as a major one. Their stock prices and revenues are known to rise rapidly in reaction to virus scares (Wired News, 2000). AV vendors also have a vested interest in your perceiving that their products can protect you from a large number of viral threats.

Consider this latter issue. The anti-virus vendors often list virus strains with minimal, inconsequential differences as being distinct viruses. In doing so, their product can be touted as protecting against more viruses than would otherwise be the case. An early example of this is the case of the Marijuana virus. When first released, the virus contained the phrase "legalise marijuana" as part of its message (this message is called the virus payload). In a later incarnation of the virus, the payload phrase was changed to "legalize marijuana" (note the Americanized spelling). Most anti-virus vendors have listed these two versions as unique viruses, although the detection and removal procedures are identical (Rosenberger & Greenberg, 1996).

Related to this is the fact that many single virus strains are known by multiple handles (sometimes dozens). This is often because viruses are "discovered," named, and reported simultaneously from several different locations. The pseudonym problem is not a trivial one, and virus identification experts, understandably, tend to focus their energies on identification and collection rather than on nomenclature (Wells, 2001). The result of this situation is that it is difficult, if not impossible, to evaluate the true impact and infection rate of any particular virus.

Another possible source of confusion lies in the fact that a potentially harmful virus need not pose any real threat to the end user community. This is because only a small portion of known computer viruses actually exist in the wild – the term used to denote viruses that are doing their evil deeds in the real world of computing applications. An article in Information Security Magazine (Cobb, 1998) noted that there were, at that time, fewer than three hundred viruses in this wild category (for the purpose of comparison, at the time of this current writing there are 214 viruses officially considered to exist in the wild). Compare this to the 16,000-plus viruses that existed in virus research facilities (this virus category is referred to as in the zoo). According to Rosenberger & Greenberg (1996), however, most AV vendors use this latter (zoo) number when reporting the state of the virus situation.
MEDIA HYPE
It was noted above that there are surprisingly few unique computer viruses posing a real threat to our application systems. Also, even the worst virus outbreaks have had a relatively short-lived, minor impact on the computing world. Media hype, however, would have us believe something quite different. Overblown, unsubstantiated virus reports seem to have been around as long as the virus phenomenon itself. Like Chicken Little declaring that the sky is falling, reporters tend to greatly overestimate the negative consequences of relatively benign events when it comes to computing technology. (Who can forget the Y2K predictions?) Noted virus hype expert Rob Rosenberger (2000d) sums it up this way: "The media has a deep-rooted fetish for computer virus stories."

An example is a rather bizarre proclamation that was common in the media in the early 1990s. In reaction to Peter Tippett's research on the possible future spread of computer viruses, the popular press often asserted that one quarter of all IBM PCs were infected each month (Rosenberger & Greenberg, 1996). Some simple calculations reveal that, in such a world, each user would have expected his machine to be rendered helpless three times per year. An article in Forbes stated that at least a half dozen new viruses are released into the wild by computer vandals each and every day (Rao, 1996). Again, a little simple arithmetic indicates that, under this scenario, we would have to deal with nearly 2,200 new viruses each and every year! This just is not the case. The most recent statistics (6/01) indicate that there are currently 214 wild viruses, with four newcomers and eleven dropouts from the previous month (Wildlist Organization International, 2001).

The granddaddy of all virus hypes was arguably the Michelangelo scare of 1992. According to the media, this virus was poised to bring the digital world to its knees every March 6th, starting in 1992 (this trigger date happened to fall on the birthday of the Renaissance artist Michelangelo, and thus the name). Discovered in 1991, this relatively trivial virus lived in obscurity until a coincidence in January of 1992 (the story of computer viruses is sprinkled with a good measure of strange fluke events). A PC manufacturer, Leading Edge, admitted that it had accidentally shipped 500 units infected with the virus. This information was picked up by the press, by coincidence, on the same day that another PC manufacturer announced that it had decided to include free AV software on their products (Rosenberger, 2000a). For some reason the news media latched on to the story and wouldn't let it die. As the so-called M-
day (or V-day) approached, the media frenzy intensified to the point that the story earned top billing on just about all of the major news media. "Experts" from such institutions as the U.S. Department of Defense and the world's major universities espoused horror stories of the impending dire consequences. John McAfee, the anti-virus software guru, reportedly claimed that as many as five million computers had already been infected worldwide (Gordon, Ford, & Wells, 1997). Symantec and the other anti-virus manufacturers were capitalizing on the story as well. On March 7th, the world of computing returned to normal – nothing much had happened (other than the fact that almost everyone had learned how to spell Michelangelo). While statistics varied, the highest count of infected systems was 20,000 worldwide. Michelangelo incidents have decreased steadily since this time. On M-day of 1998, there were two documented cases of the virus (Rosenberger, 2000a).

The Hare virus scare of 1996 can also be blamed on media hype. This poorly written, otherwise inconsequential virus owes its worldwide reputation to publicity rather than to performance. In this case, the virus program became famous because it was purportedly being spread through sex-oriented Usenet groups on the Internet.

The Melissa virus and the Chernobyl (a.k.a. CIH) virus hit within a month of each other in early 1999. With each of these came media reports of the "most catastrophic virus in computing history." Melissa struck on March 26. By April 4, reports finally admitted that the virus was "relatively benign" overall (Lemos, 1999a). Chernobyl was programmed to strike on April 26. This, too, was greatly overblown by the media. For several days prior to the Chernobyl attack, electronic and print media were once again filled with predictions of impending global calamity. In some cases, this continued through the days following the virus strike. For example, one source (TechWeb, 1999) stated that the Chernobyl virus had "hit over 150 million companies, universities, and other organizations worldwide"! More believable statistics peg the total worldwide count at slightly over one half million Windows machines (Lemos, 1999b). One (unexaggerated) ex post report appropriately described the final impact this way: "Despite the disruptions, no large-scale system failures were reported, and the virus, while widespread, seemed aimed more at disrupting daily life than destroying it." (Mercury News, 1999). The title of another news story, written after the dust had settled, reiterates this sentiment: "Scary Name, No Big Deal" (Allbritton, 1999).

At the time of this writing the I Love You virus, also known as Love Bug, is still fresh in the minds of most computer end users. A self-propagating
email virus, I Love You (ILY), struck on May 4 of 2000. Admittedly, this virus attack did have a sizable impact in various parts of the globe, affecting the Pentagon and the CIA as well as the British Parliament. Once again, however, the true effect of this "mediocre worm/virus" (Rosenberger, 2000b) was clouded in media hype and mass hysteria. On the evening of the attack, NBC's Tom Brokaw opened the news with a story attributing the "death of the Internet" to ILY (Rosenberger, 2000b)! (The last time I checked, my Internet connection was still alive and well.) In the weeks following the strike, news reports cited virus damage estimates exceeding $15 billion (Howell, 2000; Rosenberger, 2000d). Even as these estimates were being released by the press, noted virus guru Peter Tippett, in testimony before the US Congress, reported the actual monetary damage at somewhat above $700 million (Tippett, 2000). Note that this amount is about five percent of the damage reported in the media. Two weeks following the strike, after the ILY hysteria had died down, various polls of computer users indicated that the virus was, in most cases, nothing more than a minor nuisance, if it was seen at all (Rosenberger, 2000c). This, then, is the final legacy of what is undoubtedly the worst virus outbreak in computing history.
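The arithmetic behind the hype checks above is simple enough to make explicit. Both inputs come straight from the claims quoted in this section; the sketch merely carries out the "simple calculations":

    # Claim: "one quarter of all IBM PCs are infected each month."
    expected_infections_per_machine_per_year = 0.25 * 12
    print(expected_infections_per_machine_per_year)  # 3.0 -- "rendered helpless three times per year"

    # Claim: "at least a half dozen new viruses are released into the wild each day."
    new_wild_viruses_per_year = 6 * 365
    print(new_wild_viruses_per_year)  # 2190 -- versus the 214 actually listed in the wild (6/01)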
GETTING AT THE REAL FACTS
An obvious question to ask at this point has to do with where reliable virus statistics can be found. As I indicated earlier, the media are not the answer, nor are the Web sites of most AV software vendors (note that AV Web sites are an excellent source of information on the mechanics and treatments of viruses; I just do not recommend that they be used for virus threat evaluation). There are, however, many organizations that dedicate resources toward providing accurate information concerning the impact and threat of computer viruses.

An important, though somewhat cryptic, source of information is the well-known WildList. This project, headed by virus expert Joe Wells, aims to provide an unbiased monthly list of actual virus infections throughout the world (i.e., virus incidents in the wild). This well-respected list is often used as an evaluation and certification tool for AV software. For example, ICSA Anti-Virus Certification demands a 100% detection rate for the cumulative WildList. Since the list is updated monthly, it provides an excellent source of time series data on viruses and infection rates. WildLists dating back to 1993 can be found at http://www.wildlist.org/WildList/.

The anti-virus group at IBM maintains a Web site
(www.research.ibm.com/antivirus/) focused on understanding and stopping computer viruses. This page contains links to dozens of scientific papers written by virus experts. Topics include social/cultural as well as technical issues. The page also provides links to recent virus stories in the press and to recent interviews with virus authorities.

The CERT Coordination Center (CERT/CC) at the Carnegie Mellon Software Engineering Institute maintains a Computer Virus Resource page at http://www.cert.org/other_sources/viruses.html. Here, you can access databases on viruses and on virus hoaxes, virus FAQs, various virus-related papers and publications, and links to major AV software vendors. Also, through the Other Resources link, you can join virus-specific Usenet groups and mailing lists.

Vmyths.com (http://vmyths.com/) is a Web site dedicated to exposing virus myths, hoaxes, and misconceptions. In an extensive, well-maintained site, Webmaster and virus authority Rob Rosenberger articulates his position passionately and effectively. It is well worth a visit by anyone who is interested in investigating the impact of computer viruses. Also, each visit to this Web site will make you crack a smile at least once. Visitors can sign up for various newsletters, including a Virus Hysteria Alert (no explanation should be necessary).

Another excellent source of information is the ICSA Labs Anti-Virus Site (http://www.icsalabs.com/html/communities/antivirus/index.shtml). ICSA Labs is the security industry's central AV product testing and certification facility. They are not affiliated with any AV product or vendor. One valuable link on this page brings you to their Virus Alerts page, which provides detailed information on current viral threats, including vulnerability and threat assessment, and mitigation measures. Another interesting link takes you to their Hoax page. Here, you will find information on over fifty virus hoaxes. One of ICSA Labs' major contributions in this arena is the ICSA Annual Computer Virus Prevalence Survey. The results of the current (2000) study can be accessed through http://www.truesecure.com/html/tspub/index.shtml. This 61-page document can be an invaluable resource in evaluating the current state of, as well as the trends in, the area of computer viruses.
THE BOTTOM LINE
To close this piece, I will provide a brief summary of the relevant findings from the aforementioned ICSA Virus Prevalence Survey (Bridwell & Tippett, 2000). Please keep in mind that the study is restricted to Intel-based machines
in the North American Commercial, Industrial, and Government sectors. Survey sites were required to have at least 500 PCs, two or more LANs, and two or more remote connections. The study was conducted early in 2000 and the survey period covered the prior two years. Through an unfortunate twist of fate, interviews were scheduled to begin in early May. This was just one week after the I Love You virus (ILY) struck, and thus surveys were administered in the midst of the ILY virus hype (yet another strange coincidence). Considering the survey results in light of this fact provides an interesting story of how paltry the damage really was in most cases.

The 2000 study reports an annual virus infection rate of 160 virus encounters per 1,000 machines – the fifth consecutive year of infection rate increase. Forty-one percent of respondents indicated the perception that the virus situation was much worse than the previous year. This is likely due to the mass-mail-payload viruses that were prevalent during the study period (e.g., Melissa, ILY). The fact that 87% stated that email attachments were the source of their most recent infection lends credence to this. The next most common source, a diskette from home, reached only 4%. The most common categories of infection were, in this order, macro virus, VB Script virus, and Java Script virus. The former category was more than nine times more prevalent than the combined script categories, however. Note that all three of these virus types can be carried through email payloads.

It appears as though boot sector viruses that were so prevalent in the 1990s are becoming a thing of the past. There was fewer than one encounter per 1,000 PCs per month in the first few months of 2000. At first blush one would expect that this drop was due to the prevalence of AV software products. An alternate explanation, however, is that the design of newer operating systems has rendered these viruses unable to propagate. Also, consider how infrequently diskettes, the main distribution vector for this type of virus, are now used. Indeed, AV software is quite pervasive in industry. At least 90% of the PCs were covered by AV software in 80% of the cases. There was 100% AV coverage in 55% of the cases.

In this study the operational definition of a virus disaster was "an incident in which 25 or more machines, media, or files experienced a single virus on or about the same time." Slightly over half of the respondents (51%) reported having such a disaster within the previous fifteen months. Of these, 81% reported disasters in May of 2000 (i.e., the ILY virus). While in one extreme case the most recent disaster involved a reported 4,000 machines, in 81% of the cases fewer than 100 machines were affected.

Loss of productivity was the most commonly noted organizational effect
of computer viruses, being cited by 70% of the respondents. However, in evaluating the cumulative person-days lost as a result of the latest virus disaster, almost 75% reported ten or fewer person-days lost. Related to this is the fact that the median dollar cost for the disaster was only $10K. These statistics do not really seem too bad, considering the fact that the data were collected just days after the most severe virus outbreak in history. (Remember? May 4, 2000 was the day that the Internet "died"!)

One point that can be gleaned from the statistics presented in this study is that viruses tend to present continuous, but relatively small, problems for most companies. A very few organizations suffer extreme consequences. The overall situation, then, does not seem to warrant the apocalyptic scenarios often presented in the press. Granted, the overall frequency of virus encounter is indeed increasing, with the lion's share of incidents being due to rapidly propagating mass mail payloads. Our increasingly Internet-based society is surely responsible for this change in infection vector. This suggests the need for a shift in strategy for battling computer viruses.

The traditional AV software approach based on "fingerprint" identification of known viruses is very much a virus-specific, reactive strategy. It requires that each individual virus be identified and isolated, and its fingerprint added to a list which then must be imported by each user who hopes to be protected. All this must occur before the virus invokes damage. With self-propagating Internet-based viruses and worms, this virus-specific reactive strategy can be less than effective, as demonstrated by the increasing infection rate despite high AV software coverage on organizational systems.

A complementary, proactive, generic strategy is founded on the philosophy that some viruses, particularly mass mail viruses, work too quickly for the reactive approach. The proactive generic strategy is based on careful application software configuration (web browser, word processor, email, etc.), on file attachment filtering, and the like. Examples can be as simple as disabling the automatic opening of email attachments and turning off the automatic firing of macros in word processing and spreadsheet programs (a minimal illustration of attachment filtering appears at the end of this chapter). Detailed steps for the full array of these measures can be found in the TruSecure Anti-Virus Policy Guide (TruSecure, 1999). These so-called synergistic controls are inexpensive and easy to implement, and they tend to be relatively unobtrusive to the end user. They can be quite effective in combating the new breed of Internet-based viruses and worms, however (Bridwell & Tippett, 2000).

Also, a promising new heuristic-based AV technology works in the background by identifying virus-like behavior and warning users or system
administrators of such behavior. The obvious downside of this heuristic approach is the likelihood of occasional false positives and the resource costs related to such false alarms. The rapid, exponential propagation of the newer computer viruses relies on reaching lots of defenseless machines in a short period of time. As more and more organizations inoculate their systems with the full spectrum of AV approaches (reactive and proactive, virus-specific and generic), the seemingly unchecked dispersion of these new network-based viruses should be greatly hindered. The only remaining problem, then, would seem to be convincing the media that the sky is not really falling.
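As promised above, here is a minimal sketch of the generic, proactive idea of attachment filtering. The extension blocklist and function name are illustrative assumptions, not part of any particular product or of the TruSecure guide; a real deployment would hook such a check into the mail gateway:

    # Quarantine inbound mail attachments whose extensions are commonly
    # used by mass-mail viruses. The blocklist here is illustrative only.
    BLOCKED_EXTENSIONS = {".vbs", ".js", ".exe", ".scr", ".pif", ".bat"}

    def should_quarantine(filename: str) -> bool:
        """Return True if the attachment should be held for review."""
        name = filename.lower()
        # Also catches double extensions such as LOVE-LETTER-FOR-YOU.TXT.vbs,
        # the trick used by the ILY virus.
        return any(name.endswith(ext) for ext in BLOCKED_EXTENSIONS)

    print(should_quarantine("LOVE-LETTER-FOR-YOU.TXT.vbs"))  # True
    print(should_quarantine("quarterly-report.txt"))         # False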
Section III
Decision Support Systems and Artificial Neural Networks
Chapter XV
Success Surrogates in Representational Decision Support Systems

Roger McHaney
Kansas State University, USA

Timothy Paul Cronan
University of Arkansas, USA
When corporate difficulties arise, technology and new software development are often embraced as part of the solution. The modern manager has a wide variety of decision-making aids at his or her disposal. One such aid, classified as a representational decision support system, is discrete event computer simulation. In order to assess the organizational impact of discrete event computer simulation, an instrument capable of measuring success is required. The importance of such assessment cannot be overemphasized. While empirical measurement of various information system inputs or independent variables, such as information system budget expenditures or user participation, is relatively straightforward, the development of corresponding output or dependent variables has been difficult. In an attempt to overcome these difficulties, researchers have suggested a variety of measurable surrogates. Work in this area has paved the way for the development of instruments used to assess success. This chapter focuses on external validity aspects of two popular information system instruments, the Davis measure of User Acceptance of Information Technology and the Doll and Torkzadeh measure of End-User Computing Satisfaction (EUCS).
These instruments were designed for general purpose use and tested across a variety of settings, times, and persons. To ensure this generalizability extended to a very specific form of information technology, these instruments were administered to discrete event computer simulation users and tested for psychometric stability. This study provides additional evidence that the Doll and Torkzadeh measure of End-User Computing Satisfaction retained its psychometric properties when applied to users of discrete event computer simulation and therefore provides a reasonable surrogate measure for success in the implementation of this technology. An initial assessment of the Davis measure of User Acceptance of Information Technology (Perceived Ease-of-Use, Perceived Usefulness) returned poorer scores on the fit indexes, but the evidence did indicate that the expected factor structure was supported to some extent. The managerial implications of these findings are discussed.
INTRODUCTION
End user application of representational decision support systems is a popular technology that is in widespread use in business and industry (McHaney and White, 1998). A primary manifestation of the representational DSS is computer simulation (McHaney and Cronan, 2000). A computer simulation involves "the modeling of a process or system in such a way that the model mimics the response of the actual system to events that take place over time" (Schriber, 1987). In other words, simulation is simply using a computer to imitate the behavior of a complicated system and thereby gain insight into the performance of that system under a variety of circumstances. Within this context, computer simulation can be classified as a decision support tool.

Discrete event computer simulation can be broken into two categories, simulation languages and simulators. A simulation language is a versatile, general purpose class of simulation software that can be used in a multitude of different modeling applications. These languages are comparable to FORTRAN, BASIC, COBOL or C, but have specific features to facilitate the modeling process. Some examples of simulation languages are GPSS/H, SLAM II, SIMSCRIPT II.5, and SIMAN V. Simulation language features aid in the modeling process and free the simulation analyst from the drudgery of recreating certain software procedures used by virtually all modeling applications. As a result these specialized languages have become powerful tools for modeling. Most simulation languages provide the features illustrated in Table 1.
Table 1: Simulation language features

Statistics Collection: Tools which gather data for purposes of inferential statistics about the model
Resource Modeling: A means for the representation of a constrained resource in the model
Transaction Modeling: A means for representation of the participants in the simulation model
Simulation Clock: Tools for analysis and step processing of the coded model
Random Number Generators: A means for producing random number streams for randomization of events within the simulation model
Model Frameworks: Generalized frameworks for the rapid development of a model
A simulator is a user-friendly software package that will aid in the development of a model for a particular application. Simulators and simulation languages are generally differentiated by several key features. Brunner (1988) characterized several of these as Ease of Use - designed specifically for the non-programmer; Tools for Quick Model Development - a fast method of model construction; and Base System Simulation Already Complete - a general model has already been constructed.

Discrete event computer simulation offers many benefits. Foremost among these is the reduction of risk associated with decision making under uncertain conditions. Because simulation is descriptive instead of normative, it allows users to ask 'what-if' questions. Other benefits include the detection of costly logic flaws prior to systems implementation, forced completion of systems design, the collection and convergence of disparate pieces of information, time compression, the ability to experiment with ideas too expensive or risky to implement, and the capability to include real-world complexities in problem analysis. Implementations and use of computer simulation by end users have been reported with varying levels of success and failure. Underlying factors affecting these outcomes have been the subject of empirical investigation (McHaney and Cronan, 1998; McHaney and Cronan, 2000).
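The core machinery enumerated in Table 1 (a simulation clock, random number streams, statistics collection) can be made concrete in a few lines of code. The sketch below is purely illustrative and assumes nothing about the packages named above; it models a generic single-server queue in Python, keeping a future-event list ordered by time and advancing the clock from one scheduled event to the next:

    import heapq, random

    # A minimal discrete event simulation core: a future-event list ordered
    # by time, and a clock that jumps from event to event. The model is a
    # single-server queue with random arrivals and random service times.
    random.seed(1)
    events = [(random.expovariate(1.0), "arrival")]   # (time, kind)
    clock, queue_len, server_busy, served = 0.0, 0, False, 0

    while events and clock < 1000.0:
        clock, kind = heapq.heappop(events)           # advance the simulation clock
        if kind == "arrival":
            heapq.heappush(events, (clock + random.expovariate(1.0), "arrival"))
            if server_busy:
                queue_len += 1                        # wait for the constrained resource
            else:
                server_busy = True
                heapq.heappush(events, (clock + random.expovariate(1.25), "departure"))
        else:                                         # a departure frees the server
            served += 1
            if queue_len > 0:
                queue_len -= 1
                heapq.heappush(events, (clock + random.expovariate(1.25), "departure"))
            else:
                server_busy = False

    print(f"Customers served in {clock:.0f} time units: {served}")

Simulation languages wrap essentially this event-scheduling loop in higher-level constructs for transactions, resources, and statistics collection, which is what frees the analyst from rebuilding it for every model.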
In order for discrete event computer simulation and related technologies to continue growing in value, a better understanding of the elements constituting successful implementation for end users needs to be developed. While efforts to empirically study success have only recently started to appear in the discrete event computer simulation literature, similar topics have been researched in the area of information systems (DeLone and McLean, 1992; Mahmood, 1987) and DSS (Guimaraes, Igbaria, and Lu, 1992). Although much of this research is general, its intent is to provide a dependent variable and give researchers and end users the ability to assess specific applications of information technology (Doll and Torkzadeh, 1988).

The identification of a dependent variable is a problem that both plagues and motivates information system (I/S) researchers. DeLone and McLean (1992, p. 61) echo the importance of this sentiment by stating, "if information systems research is to make a contribution to the world of practice, a well-defined outcome measure (or measures) is essential... without a well-defined dependent variable, much of I/S research is purely speculative." Without a dependent variable, measurable in the context of valid and reliable instruments, meaningful comparisons of competing software packages, implementation approaches, system attributes, and software features become impossible.

While much progress toward the identification of a dependent variable has been made, no single standard has gained widespread acceptance in the I/S community. Researchers have operationalized dependent variables according to various criteria. DeLone and McLean (1992) surveyed this literature and discovered most studies can be classified into six general categories: system quality, information quality, use, user satisfaction, individual impact, and organizational impact. They suggest researchers might develop a single comprehensive success instrument to account for all six dimensions. Although this comprehensive, standard I/S instrument for success does not yet exist, several very respectable measures are presently available and in use. Among these are the Davis (1989) measure of User Acceptance of Information Technology and the Doll and Torkzadeh (1988) measure of End-User Computing Satisfaction. Past research has demonstrated instrument validity (content validity, construct validity, and reliability; Straub, 1989) as well as internal validity and statistical conclusion validity for both instruments. Researchers have applied these instruments to various forms of information technology, both in and out of lab settings at various times (Adams, Nelson and Todd, 1992; Davis, 1989; Doll, Hendrickson, and Deng, 1998; Doll and Torkzadeh, 1988; McHaney and Cronan, 1998). The successes of these tests add evidence to the argument that external validity and
generalizability are present. However, until recently, neither instrument had been applied to discrete event computer simulation, and the developers of these measures advise caution in their application to different forms of information technology.

The purpose of this study is to compare two general measures of information system success when applied to a group of professionals using discrete event computer simulation within the context of their jobs as decision makers and simulation analysts. This research seeks to determine if the Davis (1989) measure of User Acceptance of Information Technology (Perceived Ease-of-use, Perceived Usefulness) and the Doll and Torkzadeh (1988) measure of End-User Computing Satisfaction maintain psychometric stability when used to measure discrete event computer simulation success. The investigation focuses on establishing construct validity, internal validity, and reliability. If the hypothesized psychometric properties of these instruments are consistent with prior studies (Doll, et al., 1998; Doll, et al., 1994; McHaney and Cronan, 1998), the use of these instruments can be extended to the measurement of success in discrete event computer simulation. While EUCS has been shown to be psychometrically sound when used in a discrete event computer simulation environment (McHaney and Cronan, 1998), this study determines which measure is best suited to this population.
BACKGROUND
The implementation of information system technology has traditionally been an uncertain process. Some systems are successful. Others are not. In order to identify the determinants of success, a researcher must first be able to operationalize success. Many empirical studies in the area of information systems have been concerned with this task (Guimaraes, Igbaria, and Lu, 1992; Mahmood, 1987). DeLone and McLean (1992) present an organized view of this quest for a dependent variable in information success. They state, "Different researchers have addressed different aspects of success, making comparisons difficult and building a cumulative tradition for I/S research similarly elusive." This sentiment is also reflected in a statement by Jarvenpaa, Dickson and DeSanctis (1985): "Another factor that has contributed to weak and inconclusive results is the use of a great number of measuring instruments, many of which may have problems with reliability and validity."

A simple approach would be to ask users of computer simulation a single line-item question such as 'Was your simulation a success?' At first glance,
this may seem to be an easy, straightforward method of obtaining a dependent variable for success. Upon deeper reflection, obvious problems come to light. In fact, the single-item approach has been criticized as ambiguous or prone to misunderstanding (Straub, 1989). Modern instrument construction techniques and the ideas of construct validity and reliability are based on ensuring these problems do not diminish the veracity of the measures (Cook and Campbell, 1979).

Another approach for obtaining a measure of simulation success would be to construct a new instrument from scratch. The process of building a new instrument is not an easy one. Straub (1989) outlines a procedure which includes accommodations for instrument validation, internal validity, statistical conclusion validity, and external validity. While this approach may have been employed in the construction of a measure of simulation success, yet another success instrument would have been added to the great number already in existence. DeLone and McLean's (1992) concern for a cumulative tradition of consistent information system research and Jarvenpaa, Dickson and DeSanctis' (1985) plea for a standardization of information system instruments would have to be ignored. For these and other reasons, a decision was made to use existing surrogate measures for success.

The first existing instrument considered for use with discrete event computer simulation success was the Davis (1989) measure of User Acceptance of Information Technology. Davis' (1989) research centers on the pursuit of "better measures for predicting and explaining use" in information systems. He hypothesizes perceived usefulness and perceived ease-of-use to be determinants in user acceptance of information technology. His instrument is meant to be most germane for early assessments during development or brief initial exposures to technology (Doll, et al., 1998). He supports this hypothesis both theoretically and empirically. Davis' (1989) original study has been corroborated by other research citing sound psychometric properties of the constructs (Adams, Nelson and Todd, 1992; Doll, et al., 1998; Hendrickson, Massey, and Cronan, 1993). While optimistic about the findings in his studies, Davis (1989) cautions against adopting the instrument without further research into "how the measures such as those introduced here perform in applied design and evaluation settings."

The purpose in studying Davis' instrument within the context of discrete event computer simulation use is to test instrument validity and reliability (Straub, 1989) of the ease-of-use and usefulness scales to determine if it is generalizable to this particular area. Although researchers have validated the Davis instrument on a variety of information system technologies (Adams, Nelson and Todd, 1992; Hendrickson, Massey and
Cronan, 1993), discrete event computer simulation has not yet been tested. Upon establishing sound psychometric properties of the instrument in this area, further research into the factors influencing success in discrete event computer simulation can be developed.

The second instrument considered for use in conjunction with discrete event computer simulation is the Doll and Torkzadeh (1988) measure of End-User Computing Satisfaction. Doll and Torkzadeh (1988) proposed an instrument for measuring end-user computing satisfaction (EUCS). Like Davis (1989), Doll and Torkzadeh (1988) developed a construct consisting of ease-of-use. In addition, they proposed a usefulness or information product component consisting of content, accuracy, format and timeliness. These five constructs comprise an instrument for end-user computing satisfaction. However, unlike Davis, their instrument is specifically designed to work within the end-user computing environment, consistent with current trends. Doll and Torkzadeh (1988) used a multi-step process to validate their instrument and found it to be generalizable across several applications. Additional validation studies were also conducted (Doll, Xia, & Torkzadeh, 1994; McHaney and Cronan, 1998; Torkzadeh and Doll, 1991). The Doll and Torkzadeh (1988) instrument is of particular interest in this study because many applications of discrete event computer simulation can be categorized as end-user computing and because of prior validation within a population of discrete event computer simulation users (McHaney and Cronan, 1998). In the McHaney and Cronan (1998) study, 411 participants using a variety of discrete event computer simulation software packages completed EUCS questionnaires detailing their ongoing experiences. The data collected indicated the instrument retained its psychometric properties and provided a valid success surrogate for end-users beyond the introductory stages of using representational DSSs.
METHODOLOGY
The goal for this study was to establish whether the Davis (1989) and the Doll and Torkzadeh (1988) instruments are generalizable to the measure of success in discrete event computer simulation. In addition, the researchers will provide evidence regarding which instrument is better suited for use with discrete event computer simulation.
Instruments
The survey administered included Davis' (1989) Ease-of-use and Perceived Usefulness scales as well as the EUCS measures (Doll and Torkzadeh, 1988).
The exact form of these instruments is illustrated in Figures 1 and 2. Five-position Likert-type scales were used to score the responses.
Reliability, Validity, and Model Fit
Simple statistics and correlations were calculated for each element of the instrument. Cronbach's alpha was used to gauge internal consistency of the measures and overall reliability for the instruments. Construct validity was assessed using confirmatory factor analysis to determine if the hypothesized factor structure exists in the collected data (Bollen, 1989).

Figure 1: Davis instrument. Two correlated factors: Perceived Usefulness (items U1-U6) and Perceived Ease-of-use (items E1-E6).

E1: Learning to operate simulation would be easy for me.
E2: I would find it easy to get simulation to do what I want it to.
E3: My interaction with simulation would be clear and understandable.
E4: I would find simulation to be flexible to interact with.
E5: It would be easy for me to become skillful at using simulation.
E6: I would find simulation easy to use.

U1: Using simulation in my job would enable me to accomplish tasks more quickly.
U2: Using simulation would improve my job performance.
U3: Using simulation in my job would increase my productivity.
U4: Using simulation would enhance my effectiveness on the job.
U5: Using simulation would make it easier to do my job.
U6: I would find simulation useful in my job.
Figure 2: EUCS instrument. Five first-order factors (Content: C1-C4; Accuracy: A1-A2; Format: F1-F2; Ease of Use: E1-E2; Timeliness: T1-T2) loading on a single second-order EUCS factor.

C1: Does the simulation system provide the precise information you need?
C2: Does the simulation output information content meet your needs?
C3: Does the simulation provide reports that seem to be just about exactly what you need?
C4: Does the simulation system provide sufficient information?

A1: Is the simulation system accurate?
A2: Are you satisfied with the accuracy of the simulation system?

F1: Do you think simulation output is presented in a useful format?
F2: Is the simulation output information clear?

E1: Is the simulation system user friendly?
E2: Is the simulation system easy to use?

T1: Do you get the output information you need in time?
T2: Does the simulation system provide up-to-date information?
The hypothesized Davis measurement model was tested for fit against the collected data using LISREL VIII (Hayduk, 1987; Jöreskog & Sörbom, 1993; Marsh, 1985; Marsh & Hocevar, 1985; Marsh & Hocevar, 1988). Figure 1 contains the a priori factor structure that was tested. Likewise, the EUCS instrument was tested using confirmatory factor analysis. Doll and Torkzadeh (1988) originally proposed a five-scale factor structure. However, subsequent research and additional analysis has provided evidence that EUCS is a multifaceted construct consisting of five subscales and a single overall second-order construct (Chin & Newsted, 1995; Doll et al., 1994). The second-level structure is a single factor called End-User Computing Satisfaction, which is composed of the original factor structure of Content, Accuracy, Format, Ease of Use, and Timeliness. Results of the current study are compared to the Model 4 (Five First-Order Factors / One Second-Order Factor) solution recommended by Doll, et al. (1994). Again, LISREL VIII (Hayduk, 1987; Jöreskog & Sörbom, 1993) was used to test the fit of the hypothesized model against the collected data (Marsh, 1985; Marsh & Hocevar, 1985; Marsh & Hocevar, 1988). Figure 2 contains the a priori factor structure that was tested.

Studies in this area have reported a variety of statistics as evidence of model adequacy or fit. As a result, several indexes will be examined and used to develop an interpretation of this study's results. The chi-square statistic has long been considered a global test of a model's ability to reproduce the collected data's variance/covariance matrix. It is sensitive to sample size and departures from multivariate normality and must be interpreted with caution (Jöreskog & Sörbom, 1993). In spite of these flaws, it was reported for ease of comparison to prior studies. Other fit indexes which provide a sense of congruence between a hypothesized model and collected data were also assessed. Included are NNFI and CFI, two comparative fit indexes not affected by sample size. CFI is a normed relative noncentrality index that estimates each noncentrality parameter by the difference between its T statistic and the corresponding degrees of freedom (Bentler, 1990). NNFI is reported as being useful in situations where a parsimony-type index is needed to account for the number of parameters in a model (Bentler and Bonnet, 1980). Good-fitting models generally yield fit indexes of .9 or above, leaving only a relatively small amount of unexplained variance (Bentler and Bonnet, 1980). Root mean square error of approximation (RMSEA) is an additional measure of model fit. Smaller RMSEA values are generally associated with better fitting models. Scores below .05 are considered evidence of good fit, and those between .05 and .08, reasonable fits (Browne and Cudeck, 1993). The single
sample cross-validation index (ECVI) will also be assessed (Browne and Cudeck, 1993). This simple function of chi-square and degrees of freedom measures the discrepancy between the fitted covariance matrix in the analyzed sample and the expected covariance matrix that would be obtained in another sample of the same size (Jöreskog & Sörbom, 1993). An indication that the model fits well is that the ECVI for the collected data is less than the ECVI for the saturated model. A confidence interval can be computed to facilitate this assessment. For any models that appear to have a poor fit, modification indexes (Jöreskog & Sörbom, 1993) will be examined. Modification indexes exist for each unspecified path in the model. If a particular path is added, the resulting improvement in model fit is noted. This will provide additional insight into why model fit is poor. Cross loadings, correlated error terms, and information pertaining to responsible items will be made more apparent and allow a thorough analysis.
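Of the statistics named above, Cronbach's alpha is the simplest to compute directly from a respondents-by-items score matrix. A minimal Python sketch, using hypothetical Likert responses rather than the study's data:

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Internal consistency for a respondents-by-items score matrix."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1).sum()
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances / total_variance)

    # Hypothetical responses: four respondents, four Likert items.
    scores = np.array([[4, 5, 4, 4], [2, 2, 3, 2], [5, 4, 4, 5], [3, 3, 2, 3]])
    print(round(cronbach_alpha(scores), 2))  # 0.93 for this made-up matrix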
Factor Loadings and Structural Coefficients
In both models, factor loadings were calculated. Larger factor loadings, as compared to standard errors, provide evidence that the collected data represent the underlying constructs more closely. Factor loadings above .9 are considered excellent; those above .7, reasonable (Bollen, 1989). In the EUCS model, structural coefficients for the latent variables of Accuracy, Ease of Use, Format, Timeliness, and Content were calculated and evaluated using the same criteria (Bollen, 1989).
Discriminant Validity
Discriminant validity distinguishes the degree to which scales are differentiable from each other and the degree to which the scales distinguish between different technologies used by those reporting. True MTMM (multitrait-multimethod) analysis, in which multiple traits (Usefulness, Ease-of-use, and EUCS scales) are measured using different techniques over different technologies (simulation packages), could not be conducted, since all data was collected via questionnaire and respondents identified a single software package for system development. Instead, a procedure similar to Davis' (1989) was followed to determine which items had higher correlations with non-trait, non-technology items, with technologies broken into two classes: simulation language and simulator. The scales were also correlated across simulation packages.
Sample
Subjects were five hundred randomly selected users of computer simulation drawn from a seven-thousand-name mailing list owned by the Society for Computer Simulation. Any known student members were deleted from the list prior to this selection process. Mail surveys were distributed to the subjects. Two forms were offered to each subject. One form asked questions from the perspective of the individual responsible for the development of a computer simulation (referred to as the analyst group). The second form asked questions from the perspective of the individual who uses simulation-generated outputs in decision making, but does not develop the model (referred to as the decision maker group). Due to the context of the questions, the analyst group was asked to respond to both the Doll and Torkzadeh (1988) instrument and the Davis (1989) instrument. The decision maker group was asked only to respond to the Doll and Torkzadeh (1988) instrument.
RESULTS

Demographics
Since the only known characteristic of the group being studied was an interest in computer simulation, demographics and other information were collected from both groups. Of the 171 respondents, 116 indicated they used discrete event computer simulation. The other 55 used different types of simulation. All other statistics and reporting in this research are based on only the discrete event computer simulation users.

Nearly 61% of the survey respondents reported having between three and fifteen years of computer simulation experience. Twenty-eight percent reported more than fifteen years of experience and the remaining 11% had two years or less. Most respondents had advanced degrees (77.4%) and worked either in engineering (27.8%), education (19.1%), computer analysis/programming (14.8%), consulting (11.3%), research (11.3%) or management/planning (8.7%). This breakdown compares favorably with prior studies that determined typical simulation usage (Eldredge & Watson, 1996; McHaney and White, 1998). Only 46.8% of simulation use was mandatory. Many respondents reported their use of discrete event computer simulation as high (43.4%), while fewer reported moderate (35.4%) or low (21.2%) use. Micro computers were the most popular hardware platform for simulation, with 64% of respondents fitting into this category. The primary software packages used by the respondents were GPSS/H, SIMAN, SIMSCRIPT, C, SLAM, GPSS/
Success Surrogates in Representational Decision Support Systems 255
World, Extend, and AutoMod. These packages were classified as either simulation languages (54%) or simulators (46%). Approximately 79% of the respondents returned the analyst form, and 21% returned the decision maker form.
Davis Instrument
Due to the nature of the Davis (1989) instrument and the context of its questions, it was only administered to analysts. Ninety of the ninety-one analysts responding to the survey completed the Davis instrument. The objective of this part of the study was to assess the reliability and construct validity of the two scales comprising the Davis (1989) instrument: Perceived Ease of Use and Perceived Usefulness. The exact form of the distributed instrument is illustrated in Figure 1. Seven-point Likert-type scales were used to score the responses. Reliability for the Davis instrument was calculated with Cronbach's Alpha. The alpha for the Ease of Use scale was .93, and the alpha for the Usefulness scale was .94. These values compare favorably to reliabilities calculated by Davis (1989) and others (Adams, Nelson, and Todd, 1992). Table 2 reports simple statistics for each element of the instrument together with item and subscale correlations. In all cases, overall reliability (alpha) following removal of each item remains at .91 or above, supporting reliability. Construct validity was assessed to determine if Ease of Use and Usefulness form two distinct scales (Davis, 1989). This assessment used confirmatory factor analysis with LISREL VIII to test the fit of the hypothesized model against the collected data. Table 3 reports the results of the factor analysis and compares them with the original Davis study. The factor loadings in the current study range from .715 to .954; this compares to a range from .63 to .98 in the original Davis (1989) study. While the twelve items appear to have divided into the two hypothesized factors, fit indexes indicate this may not be the case. The fit of the data to the hypothesized model was assessed using several measures. The first of these is the chi-square goodness-of-fit measure. Analysis indicates the data collected from discrete event computer simulation users may not fit the hypothesized factor structure (chi-square = 235.97). The chi-square divided by the degrees of freedom confirms this marginal fit at 4.45 (Wheaton, Muthen, Alwin and Summers, 1977). The NNFI and CFI are .78 and .82 respectively, again indicating a poor fit. The RMSEA is poor at .197, and the ECVI is problematic, with the model's value (3.21) higher than the value for the saturated model (1.75).
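As a point of reference, Cronbach's alpha is computed from the item variances and the variance of the summed scale. The short sketch below illustrates the standard formula only; the data are synthetic and the function is a generic illustration, not the authors' analysis code:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scale scores."""
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic example: 89 respondents x 6 Usefulness items on a 7-point scale.
# (Random, uncorrelated responses will yield an alpha near zero; real scale
# data with inter-item correlation produces the high alphas reported above.)
rng = np.random.default_rng(0)
scores = rng.integers(1, 8, size=(89, 6)).astype(float)
print(round(cronbach_alpha(scores), 2))
```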
Table 2: Davis instrument: Correlation matrices and simple statistics (Sample size = 89)

a. Item Correlations

      U2   U3   U4   U5   U6   E1   E2   E3   E4   E5   E6
U1   .62  .82  .68  .72  .57  .34  .39  .38  .30  .27  .36
U2        .82  .84  .73  .73  .25  .27  .28  .28  .30  .26
U3             .71  .88  .61  .35  .40  .41  .39  .32  .36
U4                  .59  .77  .37  .36  .37  .34  .39  .31
U5                       .61  .35  .39  .42  .38  .28  .33
U6                            .34  .40  .40  .36  .47  .31
E1                                 .76  .57  .46  .65  .58
E2                                      .79  .72  .68  .66
E3                                           .77  .73  .77
E4                                                .61  .66
E5                                                     .83

b. Subscale and overall instrument correlations

              Usefulness   Overall Davis
Ease of Use      .474          .822
Usefulness                     .890

c. Individual Items: Simple Statistics

Item   Mean   Variance
U1     2.31     1.71
U2     2.22     1.62
U3     2.21     1.58
U4     2.10     1.44
U5     2.33     1.54
U6     1.84     1.27
E1     1.89     1.21
E2     2.33     1.40
E3     2.20     1.20
E4     2.36     1.43
E5     1.83     1.07
E6     1.99     1.17

d. Subscales: Simple Statistics

Factor          Mean    Variance
Ease of Use     12.62     6.44
Usefulness      13.03     8.07
Overall Davis   25.47    12.46
Table 3: Davis standardized parameter estimates and t values

         Current Study                             Davis (1989)
Item   Factor Loading  (t value)  R-Square      Factor Loading
                                  (Reliability)
U1        .830          ( 9.55)      .69             .91
U2        .864          (10.17)      .75             .98
U3        .954          (12.06)      .91             .98
U4        .790          ( 8.85)      .62             .94
U5        .880          (10.49)      .78             .95
U6        .715          ( 7.69)      .51             .88
E1        .716          ( 7.65)      .51             .97
E2        .863          (10.08)      .74             .83
E3        .899          (10.79)      .81             .89
E4        .793          ( 8.85)      .63             .63
E5        .841          ( 9.67)      .71             .91
E6        .849          ( 9.83)      .72             .91
Although reliability is present within the data analyzed, the fit indexes fail to provide evidence for a factor structure consistent with the hypothesized outcome. Validity was further assessed through a correlation analysis of the Ease of Use and Usefulness scales with measures of expected use and actual use (Davis, 1989). Although significant, the correlations in this study were much lower than those reported by Davis in his original study. The Ease of Use scale correlated at .37 with both the expected use and actual use measures; this compares to Davis' findings of .63 and .85. Usefulness correlated at .47 and .45 in this study, whereas Davis reported .45 and .59. The modification indexes were examined to determine why the Perceived Usefulness and Perceived Ease of Use instruments did not achieve a good fit. Cross-loadings between items E5 and U6 were detected, as well as strongly correlated error terms for many items within the Usefulness scale, particularly U2/U4 and U3/U5.
EUCS Instrument
The Doll and Torkzadeh (1988) measure of End-User Computing Satisfaction was administered to both the analysts and the decision makers. No significant difference between the responses of the two groups was discovered. One hundred and sixteen respondents completed this portion of the survey. Reliability for the Doll and Torkzadeh (1988) instrument was calculated with Cronbach's Alpha and found to be .91. This compares favorably to an overall alpha of .92 in the original study. Table 4 reports simple statistics for each element of the instrument together with individual and subscale correlations. The subscale correlations are all significant, ranging from a low of .434 to a high of .713. Upon the removal of each individual item, alpha remains above .92. Item reliability for each subscale is .84. These alphas indicate reasonably good reliability. A second-level construct together with five first-level constructs was tested using confirmatory factor analysis with LISREL VIII. Table 5 reports the results of the first-order factor analysis. All constructs were significant, with factor loadings ranging from .594 to .944. This compares favorably with the Doll, Xia, and Torkzadeh (1994) findings, in which loadings were reported between .72 and .89. In addition, the structural coefficients and their t values are also reported. Although the accuracy and ease of use items report slightly lower reliabilities (.35) than Doll et al. (1994), the structure is significant. The fit of the data to the hypothesized factor structure was assessed using several measures. The chi-square statistic divided by the degrees of freedom indicates a good fit at 1.23 (Wheaton, Muthen, Alwin and Summers, 1977). The NNFI and CFI were above the desired value of .9, with values of .98 and .99. The RMSEA is excellent at .049. Both reliability and construct validity appear to be present. Validity was further assessed through a correlation analysis of the overall construct, End-User Computing Satisfaction, with a single-item measure of success and a single-item measure of satisfaction; the correlations were .62 and .66, respectively. The ECVI appeared satisfactory, with the model's value (.233) lower than the value for the saturated model (.265).
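For reference, the relative chi-square and RMSEA cited in this and the preceding section are conventionally defined as follows. This is the standard textbook formulation, with N the sample size and df the model degrees of freedom, rather than a formula reproduced from the original studies:

```latex
\frac{\chi^2}{df} \quad \text{(values near 1--3 suggest acceptable fit)},
\qquad
\mathrm{RMSEA} \;=\; \sqrt{\frac{\max\!\left(\chi^2 - df,\; 0\right)}{df\,(N-1)}}
```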
Discriminant Validity
In the test for discriminant validity, individual items should correlate more closely with items meant to measure the same trait than with items from other, unrelated constructs. In this assessment, items from each of the five EUCS scales and the two Davis scales all correlated more highly with related items than with any others. In a related correlation analysis across technology (simulation language versus simulator, and simulation package), the Davis scales showed no significant correlation. The accuracy scale of EUCS, however, correlated strongly (p<.001) with the simulation package used, indicating that particular software packages appeared to have a strong influence on end-user perceptions of system accuracy.
Table 4: EUCS instrument: Correlation matrices and simple statistics (Sample size = 116)

a. Item Correlations

      C2   C3   C4   A1   A2   E1   E2   F1   F2   T1   T2
C1   .77  .54  .61  .48  .42  .38  .42  .32  .53  .51  .53
C2        .59  .63  .45  .50  .36  .42  .40  .60  .52  .62
C3             .46  .24  .19  .41  .40  .39  .48  .45  .49
C4                  .58  .59  .37  .36  .38  .59  .57  .59
A1                       .69  .10  .19  .19  .48  .42  .31
A2                            .25  .30  .23  .49  .41  .41
E1                                 .84  .33  .43  .52  .45
E2                                      .29  .49  .55  .46
F1                                           .40  .49  .39
F2                                                .41  .50
T1                                                     .59

b. Subscale and overall instrument correlations

              Accuracy   Ease of Use   Format   Timeliness   Overall EUCS
Content         .549        .489        .625       .713          .905
Accuracy                    .236        .434       .472          .651
Ease of Use                             .459       .536          .711
Format                                             .578          .773
Timeliness                                                       .836

c. Individual Items: Simple Statistics

Item   Mean   Variance
C1     3.64     0.87
C2     3.74     0.89
C3     3.32     0.97
C4     3.73     0.77
A1     3.84     0.74
A2     3.88     0.71
F1     3.35     0.80
F2     3.59     0.85
E1     3.34     1.05
E2     3.54     0.99
T1     3.78     0.83
T2     3.77     0.80

d. Subscales: Simple Statistics

Factor          Mean    Variance
Content        14.43      2.93
Accuracy        7.72      1.34
Format          6.95      1.46
Ease of Use     6.88      1.96
Timeliness      7.54      1.45
Overall EUCS   44.71      7.26
Table 5: EUCS confirmatory factor analysis

a. Standardized Parameter Estimates and t values

         Current Study                       Doll, Xia, & Torkzadeh (1994)
Item   Loading  (t value)  R-Square       Loading  (t value)  R-Square
                           (Reliability)                      (Reliability)
C1      .826    (10.53)      .68           .826       *         .68
C2      .876    (11.54)      .77           .852    (20.36)      .73
C3      .648    ( 7.52)      .42           .725    (16.23)      .53
C4      .774    ( 9.56)      .60           .822    (19.32)      .68
A1      .844    ( 9.76)      .71           .868       *         .76
A2      .816    ( 9.39)      .66           .890    (20.47)      .79
E1      .944    (12.09)      .89           .848       *         .72
E2      .889    (11.11)      .79           .880    (16.71)      .78
F1      .594    ( 6.43)      .35           .780       *         .61
F2      .929    (10.23)      .86           .829    (17.89)      .69
T1      .763    ( 8.93)      .58           .720       *         .52
T2      .771    ( 9.04)      .60           .759    (13.10)      .58

Note: * indicates a parameter fixed at 1.0 in the original solution. t values for item factor loadings are indicated in parentheses.

b. Structural Coefficients and t values

              Current Study                        Doll, Xia, & Torkzadeh (1994)
Factor       Std. Coefficient (t value) R-Square   Std. Coefficient (t value) R-Square
                              (Reliability)                        (Reliability)
Content         .870 (11.05)    .76                   .912 (17.67)    .68
Accuracy        .591 ( 6.58)    .35                   .822 (16.04)    .73
Ease of Use     .589 ( 6.55)    .35                   .719 (13.09)    .68
Format          .719 ( 8.47)    .52                   .993 (18.19)    .53
Timeliness      .823 (10.20)    .68                   .883 (13.78)    .76

Note: t values for factor structural coefficients are indicated in parentheses.
DISCUSSION
The objective of this study was to determine if general information systems measures of success retain their psychometric properties when applied to discrete event computer simulation. The data collected in this survey indicate marginally poor results for the Davis (1989) instrument and stronger results for the Doll and Torkzadeh (1988) instrument. The findings of this study in no way diminish previous use of the Davis (1989) instrument in other information system applications. They do, however, indicate the Davis (1989) measure may not be appropriate for use with discrete event computer simulation. Several reasons may exist for this lack of construct validity and the subsequent inability to generalize the instrument for measuring discrete event computer simulation success. One is the small sample size used in the analysis, although it is consistent with prior studies and fit indexes less sensitive to sample size were used. Another is that discrete event computer simulation is a highly specialized field, and few other tools can be applied to the problems solved with simulation; in other words, simulation users have very little choice when it comes to use. This appears to be confirmed by the low correlations discovered between use, expected use, and the Davis instrument. Another problem may be related to the varied number of packages analyzed. Prior studies conducted with the Davis instrument looked at one or two specific software packages rather than a class of software packages. Results of this study show little discrimination between packages for the Davis instruments and significant discrimination by the accuracy scale of the EUCS instrument. Other reasons may relate to theoretical application criteria. The Davis instruments were developed for early assessments during development, based on brief initial exposure. Most respondents to this study were seasoned simulation users, and a wide variety of similar technology was used. Davis (1989) reported that ease of use declines in its predictive power as users gain experience, which appears to be consistent with this study. Respondents appeared to be more concerned with accuracy of outcomes than with ease of use or usefulness. The Davis instruments have not done well with ongoing operational applications; they have done much better with initial exposure data (Doll, et al., 1998). This study included very little initial exposure data, with most respondents being seasoned professionals with a great deal of application experience.
The Doll and Torkzadeh instrument findings support the McHaney and Cronan (1998) study, with a good fit and reasonable factor loadings. The discrete event computer simulation environment includes a wide variety of ongoing operational applications. The instrument's specific aim is to measure end-user computing satisfaction. Since most simulation users are end users by Doll and Torkzadeh's (1988) definition, the target population is more appropriate. In addition, Doll and Torkzadeh's questions focus more on the outcome of the simulation process, while Davis' questions focus on the process itself. This reflects the beliefs of many simulation users that the "success of a computer simulation is very much based on the correctness of the analyst's model."
CONCLUSIONS
The purpose of this research was to determine if information system instruments commonly used as surrogate measures for success can be applied within the area of discrete event computer simulation. The Davis (1989) Perceived Ease of Use and Perceived Usefulness instruments and the Doll and Torkzadeh (1988) End-User Computing Satisfaction instrument were distributed to computer simulation users. An analysis of the returns indicated construct validity and reliability consistent with prior mainstream information system studies for the Doll and Torkzadeh (1988) instrument. The Davis (1989) instrument did not fare as well: while reliability appeared to be present, construct validity was not. Although EUCS was previously shown to be reliable and valid in the simulation area (McHaney and Cronan, 1998), this has been the first effort to determine whether Davis' measures can be used in conjunction with discrete event computer simulation. This research shows EUCS to be a more appropriate surrogate success measure in ongoing simulation software use. By using the Doll and Torkzadeh (1988) End-User Computing Satisfaction measure, researchers interested in comparing alternative computer simulation languages or implementation schemes can measure independent variables and establish correlations with success. This research indicates future studies wishing to compare various independent variables in the area of discrete event computer simulation would be well served in using the Doll and Torkzadeh (1988) End-User Computing Satisfaction instrument as a dependent variable. Based on the findings of this study, the Davis instrument should be used to evaluate discrete event simulation success only after additional validation work has been performed.
Chapter XVI
A Decision Support System for Prescriptive Academic Advising Louis A. Le Blanc Berry College, USA Conway T. Rucks and W. Scott Murray University of Arkansas at Little Rock, USA
A decision support system (DSS) was constructed to assist the academic advising staff of a college of business. The microcomputer-based system identifies any remaining unsatisfied degree program requirements, selects courses in which the student can enroll and then prioritizes them. Advisors are then able to spend time on more substantive or developmental advising issues, such as choice of electives, career options and life career goals. Using this system, a student with a minimum of computer knowledge can obtain an optimized course listing, without the assistance of a human advisor, in less than five minutes. A high-end spreadsheet (i.e., a DSS generator) permits a workable and effective academic advising DSS. The database is the most significant part of this DSS. Since the modeling component is difficult to separate from the structure of the data itself, a database management system might be a better choice as the DSS generator. That platform would provide a more flexible user interface as well as superior data handling capability, but at some sacrifice in cost and implementation time.
A recent development in the management of university and college organizations is the integrated software system, also known as enterprise resource planning (ERP) software. This integrated administrative software for higher education (e.g., CMDS [Computer Management Development Services]), operating on various hardware platforms, provides student advising data to a variety of prototyping and application development tools, such as Powersoft's InfoMaker and Microsoft's Access.
INTRODUCTION
This paper deals with computerized support for prescriptive or traditional advising. This type of advising conveys institutional requirements to college and university students. Presently, faculty and staff at the university usually perform this task. Designing and implementing a microcomputer system to aid in this process [4] [17] would permit the human advisors to pay more attention to substantive or developmental advising issues [8], such as career counseling. There is a rather limited body of literature about decision support systems (DSS) in the higher education environment. Later in this initial section, the current DSS literature will be referenced. First, however, the next paragraphs describe the past research most influential on this particular development effort.
Twenty years ago, a special issue of Decision Sciences focused on higher education appeared. In that issue, Cox and Jesse (1981) applied a backward scheduling technique from manufacturing (i.e., material requirements planning, or MRP) to the scheduling of university classes for each semester. In their study, a degree plan corresponded to a manufacturing "bill of material" (BOM). The objective was to determine which courses should be offered in each semester, over a multi-year planning horizon, based on the number and types of major fields of study. That study was macro in nature. This field research project applies the same backward scheduling logic at an individual or micro level, but for the delivery of a "service" known as a business degree. Scheduling of the individual courses for an undergraduate accounting major, for example, is driven by the accounting curriculum as represented by the degree plan, the BOM. This prototype system determines which specific courses an accounting major should take in a particular semester, or possibly over several semesters, in order to meet the objective of graduating in the least amount of time.
The purpose of this prototype and limited field trial was to determine if a microcomputer-based academic advising DSS could be constructed with general package software such as a spreadsheet or database. The choice of a spreadsheet as the DSS generator was made because of the widespread acceptance and understanding of the workings of a spreadsheet and because of how sophisticated the current versions of the software have become. Their reasonable cost to purchase and operate was also an important factor in this choice of system development platform. Conceivably, if the academic advising process could be automated, a thirty-minute advising session could be reduced to less than five minutes. Further, students could have access to error-free advising whenever they desired it and with no geographic limitations. In effect, students could keep an (unofficial) copy of their academic records of courses completed versus courses required on a diskette or on their own fixed media storage. Academic advisors, faculty and/or staff, would be freed from paper-based record keeping and counseling by a machine-based and automated advising process without time or place restrictions. In essence, perpetual advising would be possible as a result of a computer-automated academic advising DSS.
Current DSS Research for the Higher Education Environment
In the most relevant recent DSS research in higher education, Golumbic et al. (1991) employed intelligent systems to investigate the design of a general requirements model for university degrees. This research investigated the use of knowledge-based techniques in advising and assisting university students in planning their studies. More current publications in the last decade, however, have not shown interest in determining degree requirements or academic advising. Examples of this body of research include a paper by Dimopoulou and Miliotis (2001) that reported on the design and implementation of a PC-based computer system to aid the construction of a combined university course and examination timetable. Their system was constrained by the availability of classrooms and the increased flexibility of the students' choices of courses. In less current publications, Mukherjee (1994) described a DSS that integrated the strengths of optimization and human judgement to schedule instructors to executive development programs at a university. Knowledge-based heuristics act automatically to revise the schedule in the direction specified by the user. Lau and Kletke (1994) employed a nonlinear integer programming algorithm to allocate on-campus recruiter interview slots through a student bidding system; interview slots for each firm are assigned beginning with the highest bidder until all slots are filled. Johnson (1993) constructed a support system for the task of creating teaching schedules (i.e., timetables) in a business school. The development tool, i.e., the DSS generator, was a database management system (DBMS). Wu et al. (1992) reported the development of a database application for selecting an engineering college from 239 such schools and 26 engineering disciplines in North America. A second phase of their research developed a point-rating and multi-criteria decision-making model to rank the universities for the user. Elimam (1991) reported a DSS to develop student admission policies at a university. This DSS has three modeling components: an academic performance analysis model; a model to estimate the supply of secondary school graduates and the demand for university graduates; and a student allocation model.
Expert System or Institutional DSS
This academic advising system is an institutional DSS. Donovan and Madnick (1977) suggested that DSS could be meaningfully classified as either "institutional" or "ad hoc" depending on certain characteristics of the decision being supported. DSS which deal with decisions of a recurring nature are considered institutional, while ad hoc DSS deal with specific decisions which are not usually anticipated or recurring. According to Donovan and Madnick, institutional DSS are most appropriate for operational control applications, while ad hoc DSS are most useful for strategic planning applications. The academic advising application described herein more closely meets the characteristics of the institutional DSS. As described by Sprague and Carlson (1982), a DSS has three major components (i.e., database, modeling and interface), and these are the unique attributes of this type of information system. This academic advising system has all three of these necessary components to be accurately labeled as a DSS. However, the modeling component is rule based, which suggests that this may be an expert system (ES). Sprague and Carlson noted that the modeling component of a DSS could use rules (e.g., if-then-else) as processing logic. For the academic advising DSS, the rules are directly a function of the BOM or degree structure, rather than the rare logic of an expert. Further, expertise is defined by Harmon and King (1985) as very high level capabilities possessed by only a few, in a very narrow domain, where the majority of the members of the group are far below the top performers. The BOM for each degree program establishes the course scheduling logic. This information is published in academic catalogs in large quantities with wide distribution. Some simple rules determine if a student has met the prerequisites for the courses listed by the degree program or BOM. A significant part of the academic advising problem is the large number of students that need advising several times a year, as well as the accuracy of the advice provided by human advisors to the students.
Complexity of the Decision
The process of selecting courses can be a complicated one. There are several types of requirements students must meet, plus there are sequencing rules that must be observed.

Types of Course Requirements
The requirements for each major at the university are given in the university bulletin. The requirements are broken down into four parts and are summarized in Table 1. General university requirements are the courses that are taken by all students, regardless of major. These include basic English, mathematics, social science, and natural science courses, and constitute about one-third of the total requirements. The second type of requirement, called a core requirement, includes the classes taken by all students in a particular broad area. Since this project focuses on the curriculum of the college of business, the core requirements of interest are those for business students. These courses, making up about 40 percent of the curriculum, include basic introductory classes in accounting, economics, etc., as well as the set of advanced courses taken by all business students, such as finance, organizational behavior, business strategy, etc. To reiterate, the distinguishing characteristic of core courses is that they must be taken by all students in the college of business.

Table 1: Types of courses required for a business degree

General Education:    Courses in the liberal arts and sciences required of all students
Business Core:        Foundation business courses required of all business students
Major Requirements:   Specific courses required of students in a particular program
Electives:            Courses that may be taken in any area
The third type of requirement is a course taken for a specific major. These courses vary according to the major and tend to be quite specific. For business students, major courses compose about 15 to 20 percent of the total courses. The fourth type of requirement for a degree is an elective. Electives are courses that the student is free to select. The number of elective courses required varies by major and generally ranges from one to four. Some majors require a certain number of general electives that may be taken in any area as well as a number of restricted electives that must be taken in a specific area. For this paper, "electives" refer only to the former; the latter are considered major requirements.

Rules Regarding Prerequisites
In order to complete a degree, a student must complete all requirements in each of the four types. However, the sequencing of courses is complicated by the necessity that certain courses be completed prior to others. For example, before a student can take business statistics, they must have already completed basic algebra and calculus. Courses that must be taken prior to another course are called prerequisites. Beginning courses have no prerequisites; however, some advanced courses require that five or more prerequisite courses be completed first. For highly structured programs, including most of those in a college of business administration, course sequencing is made further difficult by the overlapping and layered nature of prerequisites [7]. Some required courses have prerequisites that, in turn, have prerequisites. For example, before a student can take the quantitative methods course (Eco2312), they must have already completed business statistics. But before the student can take statistics, they must complete business calculus. And, before the student can take calculus, they must take college algebra. In some majors, this layering of prerequisites is six levels deep, as the sketch below illustrates. For a diagram of this phenomenon, refer to Figure 1, which uses the accounting major as an example. This layering of prerequisites forces students and advisors to plan several semesters, or even years, in advance to ensure that the student finishes the degree on time.
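To make the layering concrete, a prerequisite structure can be represented as a simple mapping and its depth computed recursively. This is an illustrative sketch only: the chain follows the chapter's Eco2312 example, but the data structure and function are hypothetical rather than the authors' spreadsheet implementation.

```python
# Prerequisite map: course -> list of direct prerequisites.
# Chain from the chapter's example; an empty list marks a beginning course.
PREREQS = {
    "Mat1302": [],            # college algebra
    "Mat1342": ["Mat1302"],   # business calculus
    "Eco2310": ["Mat1342"],   # business statistics
    "Eco2312": ["Eco2310"],   # quantitative methods
}

def layering(course: str) -> int:
    """Number of levels in a course's prerequisite chain (1 = no prerequisites)."""
    pres = PREREQS.get(course, [])
    return 1 if not pres else 1 + max(layering(p) for p in pres)

print(layering("Eco2312"))  # -> 4 levels: algebra, calculus, statistics, Eco2312
```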
Figure 1: Course contingency chart: accounting major requirements, prerequisite contingency diagram. (#1 courses: His1311, His1312, The2200, Art2200, Mus2200, SocSci1, SocSci2. #2 courses: Mar2380, Spe1300, His/Govt, NatSci1, NatSci2. Solid lines indicate prerequisites; dotted lines indicate corequisites.)

Benefits of an Automated Advising System
Planning the proper courses to take is a complex matter. More than 40 courses are required for most undergraduate business degrees. Because of unsatisfied prerequisites, however, only ten of those courses might be appropriate for a business student to take in a given semester. A DSS for academic advising could easily check for which courses a student has satisfied the prerequisites and list those courses. Of course, the student could determine this by identifying the degree requirements and their prerequisites in the university bulletin. In addition, skilled advisors probably have memorized most of the requirements and their prerequisites. However, an academic advising DSS allows students themselves to quickly determine which degree requirements they are eligible to take and eliminates the possibility of any error. In addition to the courses in which a student is eligible to enroll, the DSS could suggest the order in which the courses should be taken. This ordering minimizes the amount of time required to complete the degree by ensuring that prerequisites are completed prior to the time the student needs to enroll in a particular required course. This is also a skill human advisors possess. However, utilizing an academic advising DSS will eliminate errors and permit the advisors to focus on the developmental aspects of academic advising. As described by Saving and Keim (1998), developmental advising forms a bond between the student and the advisor in a working and learning relationship. Ender (1994) and Kozloff (195) both believed that developmental advising was linked with higher retention rates.
Problems Addressed by Automated Advising
The functions of student academic advising that are possible to automate can be divided into three hierarchical components; refer to Figure 2 for a diagram. At the lowest level, it is necessary to identify the courses required for a certain business degree that are not yet satisfied. This must be performed first, and without error, for the DSS to provide value. The published university bulletin lists the requirements for each major. Thus, to model this function in the DSS, it is only necessary to encode the data from the bulletin. The DSS would then compare the student's record with the list of requirements and extract the courses that have not yet been completed. Once the unsatisfied requirements for a degree have been identified, the next function identifies course requirements for which the student can register (i.e., is cleared for registration). Course requirements for which the student has not completed all necessary prerequisites would not be available. This process involves evaluating the prerequisite rules and eliminating any requirements for which the student has not met prerequisites. The remaining requirements in which the student is able to enroll are then considered "eligible" courses. After identifying program requirements and selecting those that have no unsatisfied prerequisites, the top-level function prioritizes the list of "eligible" courses based upon their importance in meeting prerequisites for the remaining courses. Highest priority would go to courses that serve as prerequisites for the greatest number of subsequent courses.

Figure 2: Functions of the academic advising DSS, from lowest to highest level:
1. Identifying remaining unsatisfied program requirements.
2. Selecting courses the student is eligible to enroll in.
3. Prioritizing courses in optimal order.
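Viewed end to end, the three functions form a small pipeline. The sketch below is a compact, hypothetical illustration over plain sets; the final step uses a simple direct-dependent count as a stand-in for the full three-rule prioritization described later in the chapter:

```python
def advise(required: set[str], completed: set[str],
           prereqs: dict[str, list[str]]) -> list[str]:
    """Identify unmet requirements, select those whose prerequisites are
    all complete, and prioritize by how many later courses depend on them."""
    remaining = required - completed                        # 1. identify
    eligible = {c for c in remaining
                if set(prereqs.get(c, [])) <= completed}    # 2. select

    def direct_dependents(course: str) -> int:             # 3. prioritize (proxy)
        return sum(course in p for p in prereqs.values())

    return sorted(eligible, key=direct_dependents, reverse=True)
```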
OVERVIEW OF THE ACADEMIC ADVISING SYSTEM

System Typology
This system can be classified as a DSS. It was designed to assist with decisions that are generally ill-structured and non-recurring [3] [16], such as the selection and sequencing of courses. The ES concept also may be germane in this case, since this is a rather narrow problem domain. A specific solution or decision(s), rather than just general assistance, will be provided by the computerized support [18]. However, the advice on course selection and sequencing would probably not be considered "expertise." With either concept, DSS or ES, the computerized support is composed of three parallel components: a user interface, a data or knowledge base, and a model base or inference engine [14]. The user interface of the system involves quizzing the student on which courses the student has completed: the system lists all requirements of the major and asks the student to indicate which courses they have completed. The database consists of the courses required for various degrees and all rules regarding prerequisites. The model component is the reasoning or calculating part and is discussed later in the paper.
Benefits of Support for Academic Advising
This computerized support will permit students to conveniently extract a list of course requirements in which they are eligible to enroll. Without error, the DSS displays all remaining program requirements, noting in which of these the student is currently eligible to enroll. In addition, the DSS will sequence the courses in such a way as to minimize the number of semesters necessary for the student to complete the degree. The DSS could be integrated into the student advising process. For example, when students meet with advisors to plan the next semester's schedule, this system could be used to save time. Before consulting the advisor, students could obtain a printout, which could be referred to during the advising session.
Limitations of the Advising Support
This DSS is designed to consider scheduling of courses only. It does not take into account other issues that might be significant in selecting courses. While classes are ranked in the order that the system suggests they be taken, this ordering is based only on the course sequence that will minimize the number of semesters required to complete the degree. It ignores course content issues. For example, since the two courses on macroeconomics and microeconomics serve a similar role as prerequisites for higher courses, these courses have similar priorities, and the DSS would suggest that they be taken simultaneously. This does not consider that a student might not wish to take two such related courses in the same semester. In addition, the system ignores individual attributes of students. For example, the DSS would recommend that college algebra be taken in the first semester. However, for a student with a weak mathematics background, it might be wise to postpone college algebra until the second semester. The system only considers a single factor in selecting courses. However, by performing the routine mechanical analysis, the DSS relieves the human advisor from the mechanics of course selection and allows them to consider other important advising issues. Within a typical college of business administration, there are usually about a dozen standard undergraduate majors. This prototype system has only been implemented for a limited number of these. Similarly, while the requirements for a major change every year or two (usually in minor ways), this DSS has only been implemented for the current catalog. Both of these limitations could easily be overcome by simply implementing the course requirements and prerequisites unique to other majors and other catalog years. Perhaps this should be considered less a weakness than a potential for enhancement. Other potential weaknesses are discussed later in the paper.
Support Provided by the System
This DSS queries the user (student or human advisor) and produces an optimized listing of eligible courses. Using both a menu-driven and a question-and-answer interface, the system first obtains information on courses already completed by the student. Next, the system extracts and optimizes the recommended courses for the student. The student then has the option of printing this report. Four lists are generated by the system; these are summarized in Table 2. The first and most important list is called Eligible Courses. This is the register of remaining required courses the student is presently permitted to sign up for, given in priority order. The second list, termed Additional Courses, provides the remaining required courses that the student is not yet eligible to take due to prerequisite restrictions. The third list, Completed Courses, is simply a slate of the courses that the student identified as completed. And the final list, Violations, is an exception report listing courses that the student indicated as completed but with unsatisfied prerequisites. The four lists are composed automatically and are available for on-screen viewing as well as printing.

Table 2: Course lists generated by the system

Eligible Courses:     Remaining degree requirements that the student is eligible to enroll in, listed in priority order
Additional Courses:   Additional degree requirements that the student is not currently eligible to enroll in
Completed Courses:    Courses the student reported as completed
Violations:           Courses completed without prerequisites
THE ADVISING DATABASE
The first function of this DSS is to list the courses in which a student is eligible to enroll. Before this can be done, the student must provide the input for the system. This is done on the user input screen, which lists all courses required for the degree. The student enters a Y to the right of the course if the class has been completed or an N if not. The DSS converts this information into numeric codes and transfers it to the database. The database remains hidden from the user. In order to explain how the DSS extracts eligible courses, it is necessary to explain the structure of the database [9]. The database consists of six basic fields, plus up to eight prerequisite fields. Each requirement for the degree is one record in the database. The first field is the course number. This identifies the particular course and is an abbreviated form of the department name and the course number. As an example, college algebra is listed as Mat1302. The second field is the course title. This is simply a longer description of the course. For example, Mat1302's title is "College Algebra." The third field is status, which indicates whether the course has been completed or not. This information is mapped from the user input screen. A one in the status field indicates that the course has not been completed, and a zero indicates that the course has been successfully completed. Generally, in coding dichotomous data, zero represents "no" and one represents "yes." This scheme is reversed in this system for technical reasons: using a one to indicate "no" allows the DSS to sum the prerequisite fields to determine the number of remaining prerequisites before a particular course may be taken. The fourth field is titled eligible and is discussed in detail below. The fifth field contains the number of credit hours the course is worth, and the sixth field is the number of upper-level credit hours that the course is worth. For each record (course), multiple prerequisite fields may be provided, one for each prerequisite for the course. Each prerequisite field contains a pointer to the status field of the prerequisite course. For example, since college algebra (Mat1302) is a prerequisite for business calculus (Mat1342), the record for business calculus includes a prerequisite field pointing to the status of college algebra. Status fields, as discussed above, contain zero or one; thus a prerequisite field contains a zero if the prerequisite course has been completed or a one if it has not. There may be several prerequisite fields, each pointing to a different prerequisite. The field labeled eligible (noted earlier) is simply the sum of the values in the prerequisite fields for the course's record. A zero in the eligible field indicates either that there are no prerequisites for the course or that all prerequisites have been successfully completed. Any other value indicates how many prerequisites must still be completed before the course may be taken. This field is used by the DSS in extracting courses to recommend to the student.
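As a rough in-memory sketch of this record layout and the derived eligible field: the actual system stores the same information as spreadsheet rows and pointer formulas, so the Python structures below are hypothetical stand-ins rather than the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class CourseRecord:
    number: str                # e.g., "Mat1302"
    title: str                 # e.g., "College Algebra"
    status: int                # 1 = not completed, 0 = completed (reversed coding)
    hours: int                 # credit hours
    upper_hours: int           # upper-level credit hours
    prereqs: list[str] = field(default_factory=list)  # prerequisite course numbers

def eligible_value(course: CourseRecord, db: dict[str, CourseRecord]) -> int:
    """Sum of the prerequisite courses' status fields: 0 means the course is
    eligible; any other value is the count of prerequisites still remaining."""
    return sum(db[p].status for p in course.prereqs)

# Hypothetical two-record database mirroring the Mat1302 -> Mat1342 example
db = {
    "Mat1302": CourseRecord("Mat1302", "College Algebra", 0, 3, 0),
    "Mat1342": CourseRecord("Mat1342", "Business Calculus", 1, 3, 0, ["Mat1302"]),
}
print(eligible_value(db["Mat1342"], db))  # -> 0: algebra done, calculus eligible
```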
THE MODELING COMPONENT
In a DSS, the modeling component generates a recommendation for the decision maker, relying on information in the database and input from the user. The model is the "driving force" behind the system, although its presence may be more obscure than either the database or the user interface [5].
Modeling Versus Data Component
For the academic advising DSS, the database component contains a data structure known in manufacturing terms as the "BOM" (Orlicky, 1970). In an academic or "service" environment, this BOM is the specific business degree program, such as accounting, finance, management, marketing, or general business. The architecture of this data structure is inherently hierarchical, just like a standard manufacturing BOM or product structure. Its number of levels and its number of unique part numbers measure the complexity of the degree program or BOM. In the academic sense, levels refer to the number of semesters, and part numbers correspond to the number of courses. Undergraduate degree programs are eight levels or semesters deep, with the second semester of the senior year labeled as the first level and the first semester of the freshman year labeled as the eighth level. If a degree program requires 120 hours of three-hour courses, then there are 40 part numbers, each identified by a departmental prefix and number (e.g., Mgmt 100 and Acct 405). Since a business degree is constructed from the bottom (freshman year) to the top (senior year), the data structure is inherently vertical or hierarchical, and the levels are further connected vertically by prerequisites that may extend across several levels. (IBM's first DBMS, IMS, was developed in hierarchical fashion for manufacturing applications.) The modeling component of the academic advising DSS has to process this hierarchical structure to determine which courses ought to be taken in a particular semester. Consequently, and quite naturally given the vertical data structure of a business degree program, rule-based processing was selected as the modeling approach. As in much data processing, the data structure determines the processing method, and that is certainly the case here. As suggested by Cox and Jesse (1981), the backward scheduling logic of MRP, combined with the rule-based processing applied in this DSS, determines the courses (or parts) needed to complete the degree program in the least amount of time. In this DSS, the modeling component selects eligible courses and suggests an effective and efficient sequence for completing these courses. Unlike the traditional DSS, however, this system has much of the processing function built into the structure of the data itself. The arrangement of the requirements in the DSS database (i.e., the structure of the records in the database) is the most important means of generating recommendations. In this DSS, the modeling function cannot be completely separated from the database.
Selecting Eligible Courses
Now that the basic structure of the database has been described, the process of selecting eligible courses can be addressed. After the student has informed the DSS of all courses completed, the processing commences. The list of eligible courses, as shown in Table 3, is composed of those courses that the student has yet to complete but for which all prerequisites have been completed. In order to extract these courses, the DSS selects all courses for which the status field is one (i.e., the course has not yet been taken) and the eligible field is zero (i.e., there are no unsatisfied prerequisites remaining). Once the database is properly designed and all prerequisite contingencies are included, the extraction of eligible courses is easy. This DSS also produces three other lists besides eligible courses. Completed Courses are courses already completed by the student; this list is created by extracting all records with the status field set to zero. Other Requirements (see Table 4) lists the additional requirements for which the student has not met the prerequisites. These are courses that must be taken in future semesters, but not the current one. This list is created by selecting records with the status field set to one and the eligible field set to anything other than zero. And finally, the system produces a list of Violations. These are courses completed by the student for which the prerequisites have not been completed. This list is produced by the DSS extracting all courses with the status field set to zero and the eligible field set to anything other than zero.
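Given the record layout sketched in the database section, the four lists reduce to simple filters on the status and eligible fields. This continues the earlier hypothetical sketch (reusing CourseRecord and eligible_value) and is not the spreadsheet's actual extraction macros:

```python
def four_lists(db: dict) -> dict:
    """Partition the course records into the system's four report lists."""
    out = {"eligible": [], "other": [], "completed": [], "violations": []}
    for c in db.values():
        remaining = eligible_value(c, db)      # 0 when all prerequisites done
        if c.status == 1 and remaining == 0:
            out["eligible"].append(c.number)   # not taken, prerequisites satisfied
        elif c.status == 1:
            out["other"].append(c.number)      # not taken, prerequisites pending
        elif remaining == 0:
            out["completed"].append(c.number)  # taken, prerequisites satisfied
        else:
            out["violations"].append(c.number) # taken without prerequisites
    return out
```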
Table 3: Eligible course sample screen

Course #    Title
Act 2310    Principles of Accounting I
Man 2101    Computer Fundamentals
Man 2102    Business Software
Mat 1342    Business Calculus
Eco 2321    Macroeconomic Principles
Eco 2322    Microeconomic Principles
Eng 1312    Comp 2
His 1311    History of Civilization 1
His 1312    History of Civilization 2
Nat Sci     Bio 1400, Ear Sci 1402 or Chem 1409
The 2200    Theater
Art 2200    Art
Mus 2200    Music
Soc Sci     Geog 2312, Psy 2300 or Soc 2300
Soc Sci     Geog 2312, Psy 2300 or Soc 2300
Mar 2380    Legal Environment of Business
URElect     Unrestricted Elective

Recommendation: This lists all courses you are eligible to take, ranked in priority. Use UP and DOWN arrows to move. Press ENTER for menu.

Table 4: Additional requirements sample screen (OTHER REQUIREMENTS – Additional remaining courses. Press ENTER for menu.)

Course #    Title                        Prerequisites
Act 2330    Principles of Accounting II  Act 2310
Act 3311    Interm Financial Act 1       Act 2330
Eco 2310    Business Statistics 1        Man 2101, Man 2102, Mat 1302, Mat 1342
Act 3330    Interm Cost and Mgr Act 1    Act 2330
Man 3300    Org Behavior and Mgmt        Over 54 hours credit
Act 3312    Interm Cost and Mgr Act 2    Act 3311
Eco 2312    Quantitative Methods         Eco 2310
Fin 3310    Business Finance             Eco 2310, Eco 2321, Eco 2322, Act 2330
Act 3341    Act Info Systems             Act 3330 (Act 3312 is coreq)
Mar 3350    Principles of Marketing      Over 54 hours credit
Man 3380    Business Communication       Over 54 credit hours
Eng Lit     Eng 2316/2317 Literature     Eng 1312
Man 3305    Mgmt Info Sys Development    Man 2101, Man 2102, Man 3300
Eco 3310    Money and Banking            Over 54, Eco 2321
Act 3321    Federal Taxation 1           Over 54, Act 2330
Man 4308    Production Management        Eco 2312, Man 3300
Act 4314    Advanced Financial Acct      Act 3312
Act 4351    Auditing Theory Pract 1      Act 3312, Act 3330, Act 3341

Sequencing of Courses
The above procedure ensures that courses on the eligible list are courses for which the student is, in fact, eligible. It does not, however, guarantee that the list is sorted so that the student knows in which order eligible courses should be taken. The sequencing of courses is less structured and mechanical than the extraction of eligible courses. As noted above, prerequisites are often layered, and sometimes this layering is deep. For example, for a student majoring in accounting, Act2310 must be taken before Act2330, which must be taken before Act3311, which must be taken before Act3312, which in turn must be taken before Act4314. (Refer to Figure 1.) Each layer of prerequisites represents one semester. If the accounting program has five layers of courses, and a student wants to graduate in four years or eight semesters (excluding summer or other special enrollments), the student must begin taking courses in this chain of prerequisites by the fourth semester.

Deepest Layer Rule
The method of ordering courses is based upon three hierarchical rules. The first and highest priority rule is the Deepest Layer Rule. Courses that are on the deepest level of prerequisites should be taken first; other courses are then suggested in descending order of their respective layers. If the accounting major has a maximum of five layers of courses, the first course(s) that should be taken is/are positioned on the fifth level. All eligible fifth-level courses appear first, followed by level four, then level three, and so on. Figure 1 illustrates the layered approach to requirements. The Deepest Layer Rule is applied first in determining the order in which courses should be taken.
Maximum Dependency Rule
The second rule is the Maximum Dependency Rule, used for sorting eligible courses within a given layer. For example, if the deepest incomplete layer is the fourth and there are three eligible courses on layer four, this rule determines the respective priority of these three courses. The rule states that eligible courses on the same layer should be taken in order of the number of prerequisites served. This requires that each course be analyzed to see how many other higher-layer courses depend upon it as a prerequisite. The intra-layer course with the greatest number of such higher-layer dependencies is preferred, followed by the course with the next highest number of dependencies. Figure 1 indicates the number of higher-layer dependencies for each course by the number of stars present (i.e., more stars indicate more dependencies and higher priority). The Maximum Dependency Rule is used to determine priority for courses within the same layer, but only after the Deepest Layer Rule has been applied.

Lower Course Number Rule
The third and final rule for sequencing of courses is the Lower Course Number Rule. This rule prioritizes courses on the same layer with the same number of dependencies. The DSS simply ranks those courses in ascending order by course number. For example, the results of the application of the other rules being equal, the Lower Course Number Rule would select His1311 before Art2200 because 1311 is less than 2200. The logic is that lower-numbered courses are intended for less advanced students and, all things being equal, should be taken prior to higher-numbered courses. Sometimes, two or more courses are tied on all three rules: they are on the same layer and have the same number of higher-layer dependencies, as well as equal course numbers. In these cases, the sequencing is undefined and these courses may be listed in any order.
Table 5: Rules for optimal course sequencing

Deepest Layer Rule:        Courses with the greatest number of layers of courses above have top priority
Maximum Dependency Rule:   Courses serving as direct and indirect prerequisites for the greatest number of higher layer courses have the highest priority within layers
Lower Course Number Rule:  Courses on the same layer with an equal number of higher layer dependencies are selected in order of ascending course numbers
In summary, three rules are applied in determining course sequencing priority. In order of application by the DSS, these rules are the Deepest Layer Rule, the Maximum Dependency Rule, and the Lower Course Number Rule. The rules are summarized in Table 5. Firing in this order, these rules determine an optimized sequence for the courses. They may not always provide the optimal sequencing, but they are nevertheless effective and relatively simple to design and implement. It is not necessary that all of these rules be applied in every case. The Deepest Layer Rule should probably always be applied, but there could be a more efficient or effective means of sorting courses within the same layer.
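The three rules translate naturally into a composite sort key. The sketch below reuses the hypothetical CourseRecord database from the advising-database sketch; it illustrates the rule ordering and is not the spreadsheet macro logic itself:

```python
def layers_above(num: str, db: dict) -> int:
    """Deepest Layer Rule input: length of the longest chain of courses
    that build on this one as a prerequisite."""
    above = [c.number for c in db.values() if num in c.prereqs]
    return 0 if not above else 1 + max(layers_above(a, db) for a in above)

def dependency_count(num: str, db: dict) -> int:
    """Maximum Dependency Rule input: direct and indirect dependents."""
    found, frontier = set(), {num}
    while frontier:
        frontier = {c.number for c in db.values()
                    if any(p in frontier for p in c.prereqs)} - found
        found |= frontier
    return len(found)

def course_digits(num: str) -> int:
    """Lower Course Number Rule input: the numeric portion of a course number."""
    digits = "".join(ch for ch in num if ch.isdigit())
    return int(digits) if digits else 0

def sequence(eligible_courses: list, db: dict) -> list:
    """Apply the three rules, in priority order, as one composite sort key."""
    return sorted(eligible_courses,
                  key=lambda n: (-layers_above(n, db),      # Deepest Layer Rule
                                 -dependency_count(n, db),  # Maximum Dependency Rule
                                 course_digits(n)))         # Lower Course Number Rule
```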
THE USER INTERFACE

Designing an Effective Interface
In this DSS, the user interface is probably the weakest component. While the basic modeling and database components could easily be extended to represent additional student majors, the interface needs more flexibility. The interface is relatively weak because of limitations in the underlying software package (i.e., the DSS generator) [23]. This DSS is constructed from a high-end spreadsheet package. Ideally, this DSS would query the user with questions. Since this style of interface is very difficult to implement using spreadsheet software as a DSS generator, this system instead allows the user to scroll through the list of courses and mark the ones completed. A weakness of this interface style is that inexperienced users (e.g., students) could become confused. Several steps were taken to rectify the weaknesses of the interface. First, extensive instructions are provided on the input screen; these instructions explain where the user should mark answers and what to do when finished (refer to Table 6). Second, the user can only enter or change data in regions that are designed to be changed, which protects against accidental modification of the system.
The Main Menu
This DSS is completely controlled by command macros. The macros contain sequences of instructions that tell the DSS generator how to perform the most common functions. As soon as the system is invoked, a macro automatically invokes the main menu for the indicated academic major. Macros also handle the tasks of querying the user, processing user input, extracting the lists of courses, and displaying and printing reports.
Table 6: User input sample screen

ACCOUNTING ADVISOR
Type Y next to completed courses and N next to ones not complete. Then press the DOWN arrow to move to the next course. You can also use the UP arrow to back up on the list. Press ENTER when finished.

Eng 1311                          N
Mat 1302                          N
Spe 1300                          N
His 2311, His 2312 or Pol 1310    N
Nat Sci 1                         N
His 1311                          N
Art 2200                          N
Eng 1312                          N
Mat 1342                          N
Nat Sci 2                         N
Act 2310                          N
Eco 2310                          N
The 2200                          N
His 1312                          N

Note: Two 4-hour natural science courses are required. Eligible courses are: Ast 1301 & Ast 1102; Bio 1400 + Bio 1401; Ear Sci 1402 + Ear Sci 1403; Chem 1409.
Table 7: Main menu sample screen

COLLEGE OF BUSINESS ADMINISTRATION
STUDENT COURSE ADVISING SYSTEM
MAJOR: ACCOUNTING

  Input Courses
  Eligible Courses (Recommendation)
  Additional Courses
  Completed Courses
  Violations of Prerequisite
  Print
  Majors Menu
  Help

Press H for Help
The main menu for the accounting major is shown in Table 7 and contains the following options:

Input Courses: Allows the student to indicate which classes are complete. The student types Y if the requirement is complete or N if it is not. An abbreviated sample input screen appears in Table 6; the user scrolls down with the arrow keys to see the remaining courses.

Eligible Courses: Displays the system's recommendation; that is, it lists all remaining requirements the student is currently eligible to take, ranked in the order the system deems optimal. An abbreviated sample screen appears in Table 3.

Additional Courses: Displays the remaining requirements that the student is not yet eligible to take, along with their prerequisites. (All prerequisites may not appear due to screen space limitations.) Refer to Table 4 for a sample screen.

Completed Courses: Displays the list of courses the student indicated had been completed.

Violations of Prerequisite: Displays the list of courses the student has taken without taking the necessary prerequisites.

Print: Causes the four reports discussed above to be printed.

Majors Menu: Returns the user to the list of majors.

Help: Provides rudimentary user assistance in entering courses.
Overall, the user interface of this system is adequate. Students should be able to use the system with the instructions provided on screen.
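The logic behind the Eligible Courses and Violations of Prerequisite options can be made concrete with a small sketch. The Python fragment below is purely illustrative; the actual system implements this logic with spreadsheet macros over the curriculum BOM, and the course names and prerequisite links shown here are hypothetical stand-ins.

    # Illustrative sketch only: the real DSS drives this logic with
    # spreadsheet macros over the curriculum BOM. Course names and
    # prerequisite links here are hypothetical.
    PREREQS = {
        "Eng 1312": {"Eng 1311"},
        "Mat 1342": {"Mat 1302"},
        "Act 2310": set(),
        "Act 3310": {"Act 2310"},  # hypothetical advanced course
    }

    def eligible(completed):
        """Remaining courses whose prerequisites are all completed."""
        return sorted(c for c, pre in PREREQS.items()
                      if c not in completed and pre <= completed)

    def violations(completed):
        """Completed courses taken without all of their prerequisites."""
        return sorted(c for c in completed
                      if not PREREQS.get(c, set()) <= (completed - {c}))

    done = {"Eng 1311", "Act 3310"}   # courses the student marked Y
    print(eligible(done))             # ['Act 2310', 'Eng 1312']
    print(violations(done))           # ['Act 3310']

Ranking the eligible list in the order the system deems optimal would add a priority key (for example, how many later courses each one unlocks), which is where the BOM structure earns its keep.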
STAKEHOLDER IMPACT From a limited rollout of the prototype DSS, the effect of the DSS varied by the type of major and classification of the student. Fifteen trials were made on five different types of major at three classifications (i.e., sophomore, junior
and senior). Students in the business school cannot declare a major until they are sophomores who have completed not only the 30 hours necessary for that status but also certain pre-business courses. The more structured a degree program, the more useful the DSS appeared to be. Accounting is the most "structured" curriculum: it requires more total hours than other degree programs, has more courses in the major, and many of these accounting courses serve as prerequisites for subsequent, more advanced courses; this interconnectedness begins at the sophomore level and continues until graduation. The least "structured" is general business, which has the opposite attributes of accounting. The other majors, such as finance, management, and marketing, fall between the extremes. In general, accounting majors indicated that the advising DSS made the prerequisite linkages more apparent and suggested different course schedules, and therefore course sequences, than what they and their human advisor had deemed the best choices. For the least structured general business majors, the advising DSS recommended about the same class schedules but did so in a small fraction of the time taken by the human advising process; there was also more student confidence, across all majors, in the computerized schedule. The advising DSS always took less than five minutes for input and only moments to process and produce output. The confidence was in a schedule that would be on the "critical path" (least time) to graduation. Sophomores and juniors indicated that the computerized course schedules were beneficial and almost eliminated confusion about which courses to take in order to expedite obtaining a degree; tuition was high, and working while going to school left little slack in their hectic schedules, so there was no time for mistakes in scheduling needed courses. The lower classifications have more classes left to take, and therefore the choices are more plentiful and more confusing. As a student approaches graduation (i.e., moves higher up the academic bill-of-material), the choices become fewer, and as the puzzle is completed the choices are clearer and less ambiguous. Most of this evidence is anecdotal, based on interview responses and direct observation, but the prototype established that the system was viable and flexible enough to easily handle multiple business majors. The BOM changes necessary to schedule a different major required changing only eight to ten courses. The great majority of the BOM was the same for all majors (i.e., general education requirements, the pre-business block and the business core); only the courses constituting the major and a very few electives differed between degree plans.
It is intuitively obvious that students would like this approach to advising, as it takes just minutes and can be performed anywhere, anytime. Faculty who conduct academic advising like it as well, since their time is freed for more substantive advising work, such as recommending relevant electives according to a student's major and personal interests, and for teaching and research responsibilities. In interviews, two registrars liked the system because of its potential to reduce, if not eliminate, errors in the advice given to students, including the advice students give themselves. According to the registrars, the legal implications differ little between paper catalogs and the same information in electronic media. If the database structure (BOM) were 100 percent accurate and the processing logic were flawless, there would be no chance of inaccurate academic advice being given to students. All stakeholders should like this potential environment. With the current advising system, there are many human advisors who can easily make errors, and, unfortunately, many of these errors are not discovered until a student applies for graduation under a particular catalog and curriculum (BOM). Only then, at the last moment, is the advising error discovered, with the potential of delaying graduation. As with any project, the longer it takes to uncover an error in "constructing" the student's major, the more expensive and time-consuming it is to fix the mistake.
CONCLUSIONS AND RECOMMENDATIONS TO PRACTITIONERS
This system does meet the functional requirements for which it was designed. Using this DSS, a student with a minimum of computer knowledge and a little common sense can obtain an optimized listing of courses in which to enroll, without the assistance of a human advisor and in about five minutes. The system also allows an advisor to verify the mechanical requirements and thus focus more on substantive advising issues. While the system has only been implemented for a few majors and for one year's catalog, it would be quite easy to implement the other majors and bulletins for the college of business administration [1]. To be complete, the DSS would need one file per major per bulletin issue. The modules currently implemented are internally documented in a way that makes it easier to modify them to reflect other majors.
Alternative DSS Generators
This DSS is effective, but probably not optimal. A better automated academic advising system would be built on a more appropriate software architecture [9] [23]. A DBMS, such as Access or Paradox on a microcomputer hardware platform, would be preferable. The database is the most significant part of this DSS, and the modeling component is difficult to separate from the structure of the data itself. Such a DSS generator would provide a more flexible user interface as well as superior data handling capability. A DBMS would also facilitate the updating of requirements for each major and catalog. In addition, using a microcomputer-based DBMS would make it easier to connect this DSS to other systems, such as the centralized student information system that maintains the official student records. The downside of using a DBMS is that it would likely be more difficult and time-consuming to implement.
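To make the DBMS alternative concrete, a minimal sketch using SQLite from Python appears below. The schema, table names and course identifiers are invented for illustration; they are not drawn from the chapter.

    import sqlite3

    # Hypothetical relational layout for the course BOM. Supporting a new
    # major or catalog would mean adding rows, not building a new file.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE course    (id TEXT PRIMARY KEY, title TEXT);
        CREATE TABLE prereq    (course_id TEXT, prereq_id TEXT);
        CREATE TABLE completed (course_id TEXT);
        INSERT INTO course VALUES ('ENG1311', 'English I'),
                                  ('ENG1312', 'English II');
        INSERT INTO prereq VALUES ('ENG1312', 'ENG1311');
        INSERT INTO completed VALUES ('ENG1311');
    """)

    # A course is eligible when it is not yet completed and none of its
    # prerequisites is missing from the completed table.
    rows = db.execute("""
        SELECT c.id FROM course c
        WHERE c.id NOT IN (SELECT course_id FROM completed)
          AND NOT EXISTS (
              SELECT 1 FROM prereq p
              WHERE p.course_id = c.id
                AND p.prereq_id NOT IN (SELECT course_id FROM completed))
    """).fetchall()
    print(rows)  # [('ENG1312',)]

Connecting such a database to the institution's student records system would then be a conventional data-integration task rather than a spreadsheet rebuild.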
Advising and Registration
The academic advising and course registration functions are not simultaneous. They are sequential, with advising preceding registration. Once a student has established the courses that he or she needs to take in a given semester, the student then registers for those courses. If the advising period were reduced from several weeks to the potential minimum of just seconds, registration could take place much earlier. One difficulty could arise if a class recommended by the advising DSS is not offered when the student needs it; but this is not a problem that the DSS can alleviate. Its solution must come from the academic department that offers the course.
Source Data for Advising
A recent development in the management of university and college organizations is the integrated software system, also known as enterprise resource planning (ERP) software. This integrated administrative software for higher education (e.g., CMDS [Computer Management Development Services]), operating on various hardware platforms, provides student advising data for a variety of prototyping and application development tools, such as Powersoft's InfoMaker and Microsoft's Access.
Chapter XVII
Geographic Information Systems: How Cognitive Style Impacts Decision-Making Effectiveness

Martin D. Crossland Oklahoma State University, USA
Richard T. Herschel St. Joseph's University, USA
William C. Perkins Indiana University, USA
Joseph N. Scudder Northern Illinois University, USA
A laboratory experiment is conducted to investigate how two individual cognitive style factors, field dependence and need-for-cognition, relate to decision-making performance for a spatial task. The intent of the investigation is to establish a methodology for measuring cognitive fit for spatial tasks. The experiment assesses the performance of 142 subjects on a site location task where the problem complexity and availability of a geographic information system are manipulated on two levels. Significant relationships are found for both field dependence and need-for-cognition with the two dependent performance variables, solution time and percent error.
INTRODUCTION
The question being investigated in this study is whether two measures of cognitive style, field dependence and need-for-cognition, are related to individual decision-maker performance for a spatial task, particularly when a geographic information system is involved as a graphical decision support aid. This study is exploratory in nature. In a comparison of the various forms of applications of computer graphics in business, Ives (1982) asserts that maps gain the most from the availability of computer graphics. He states that the time needed to manually produce maps has restricted their use to a limited set of well-funded business applications. Today, however, maps generated via computers can be developed at a fraction of the time and cost, and they can be quickly updated to reflect changes in boundaries or represented data. These developments led DeSanctis (1984) to argue that research into the use of computer maps is important, because it contributes to our understanding of the use of visual aids, including graphics, in the acquisition and comprehension of information and in problem solving. While the use of computer-generated maps has the potential to enhance analyses and improve decision-making performance, research into the factors impacting these issues has been extremely limited. In fact, research in computer-assisted spatial problem solving has been all but ignored in the information systems literature. This study examines the interactions of cognitive style with graphical representations of problem elements and spatial relationships of real objects. The focus of this research is to determine how this interaction contributes to the understanding and solution of problems requiring thought and reasoning. Currently, there are no studies that focus on the issue of cognitive style as it relates to decision-making on spatial problems. Hence, there are no studies that relate cognitive style to the use of geographic information systems (GIS). Although Huber (1983) questions the utility of studies of cognitive style, Robey (1983) counters that failure to address cognitive differences may lead to neglect in the evaluation of human interface issues, which impact system usage and effectiveness.
PRIOR RESEARCH
An important area of graphics research that lends assistance in asserting the importance of cognitive style is Image Theory. Image Theory is an established and active agenda for researchers (e.g., Addo, 1989; Crossland,
1992; Crossland, Wynne, & Perkins, 1995; Davis, 1986; Joyner, 1989; Lauer, 1986; Tan & Benbasat, 1990; Yoo, 1985). Image Theory argues that graphical constructions such as "images" or "figurations" can be used to increase the efficiency of information assimilation and problem solving. Efficiency is defined by the proposition that if, in order to obtain a correct and complete answer to a given question, all other things being equal, one construction requires a shorter observation time than another construction, then it is more efficient for this question. This proposition is in accord with Zipf's (1935) notion of "mental cost" applied to visual perception. Hence, efficiency is important when someone seeks to answer a question by reading a drawing. In most cases, the difference between an efficient drawing construction and an inefficient one is extremely clear, in that there is a considerable difference in the amount of time it takes the reader to answer the question. Our point is that the efficiency of a graphical representation affects one's ability to comprehend graphical information. The idea of enhancing perceptual efficiency contributes to the formulation of Image Theory (Bertin, 1967). Image Theory states that graphical constructions used for problem solving can be categorized into two groups:
1. An image, which is a meaningful visual form, perceptible in the minimum instant of vision. In other words, an image is a graphical construction that is singularly sufficient to answer a question posed of it after simply a single, brief viewing of the graphic. Bertin asserts that an image will not accommodate more than three variables.
2. A figuration, which is a more complex construction comprising multiple images. Figurations are inherently less efficient than images. Image Theory predicts that viewers will require a longer viewing time for figurations than for images. Moreover, the more complex the figuration, the longer the viewing time (because complex graphical constructions become less efficient as they become increasingly more complex). Image Theory also asserts that the more complex the figuration, the more problematic the contribution of the information to a decision-making task.
Using Image Theory, we can speculate that use of a graphical image to represent tabular data should facilitate understanding, depending on the nature of the graphical construction. Research by Bertin (1983) appears to support this idea. His work showed that the ability of a graphical representation to enhance cognitive efficiency might depend on the nature of the graphical representation. To demonstrate this, Bertin constructed one hundred graphical representations from the same set of tabular data, categorizing the various representations into two major groups (see Figure 1).
Figure 1: Types of graphic constructions (after Bertin, 1983). [Figure: the same tabular data rendered as alternative constructions, posing the choice "Should we use..." one diagram or three diagrams; a scatterplot or a distribution; a radar diagram or a scatterplot; circles or columns; a cartogram or a map; one map or four maps.]
The study reported herein employs Image Theory to predict that use of a Geographic Information System (GIS) should enable more efficient perception than paper-based maps, because GIS can be used to alter an image to make it easier to read and understand. Further, this increase in efficiency should enable individuals using GIS to realize shorter problem solution times and/or greater decision-making accuracy compared to individuals using paper maps. Ives (1982) and Liberatore, Titus, and Dixon (1988) note that it is important to consider individual differences in any information systems decision-making research study. Also, Keen and Bronsema (1981) assert that cognitive skills are particularly relevant when performance rather than preference measures are being obtained. Therefore, since the study reported herein focuses on the impact of GIS relative to individual performance (decision speed and accuracy), cognitive skills are acknowledged as germane to this investigation.
Cognitive Style and Field Dependence Cognitive style is an umbrella term referring to enduring patterns of an individual’s cognitive functioning that remain stable across varied situations.
In some cases, cognitive style refers to patterns of processing such as field dependence and cognitive complexity. These constructs suggest that individuals have varying levels of cognitive ability to recognize different patterns and constructs. Other measures of cognitive style, such as Petty and Cacioppo's (1986) need-for-cognition construct, suggest that individuals vary in regard to their predisposition to exercise cognitive effort. Cognitive style is also used to describe an individual's preferred sensory channel for information acquisition: some persons may be more adept at processing visual information, while others may prefer auditory or tactile information. Some confusion has been created in the literature with the introduction of a learning style construct based upon multiple kinds of intelligence rather than the traditional intelligence quotient (I.Q.) approach. Overlap between cognitive and learning styles probably exists, but the demarcation is not clear at this time. We used need-for-cognition and field dependence as representations of cognitive skill in this research. In so doing, we were able to address a commonly held proposition that performance equates to ability times effort. Indeed, such a relationship was clearly articulated by Vroom, Grant and Cotton's (1969) Expectancy Theory three decades ago. In terms of cognitive skills, field dependence is clearly the ability-related variable and need-for-cognition is the effort-related variable. Hence, while we acknowledge that field dependence and need-for-cognition do not exhaust the realm of potential cognitive skills measures, we believe that they are very relevant in helping to understand individual performance differences. Research by Zmud and Moffie (1983) suggests that field dependence is a cognitive skill construct that consistently discriminates among decision performances in information systems research. Field dependence concerns the ability to separate an item from an organized field or to overcome an embedded context (Witkin, Lewis, Hertzman, Machover, Meissner, and Wapner, 1954). Field-dependent persons are comparatively passive receivers of information and tend not to structure or restructure a field, given situational demands. One would expect them to be disrupted most when information becomes more complex. Zmud and Moffie (1983) maintain that people with lower field dependence tend to outperform those with higher field dependence in structured decision tasks and that they tend to make more effective use of transformed information (e.g., aggregated values and graphical formats). Liberatore et al. (1988) state that field dependence has often been used by management and IS researchers as a measure of cognitive style and personality differences on performance-related tasks. However, in their own research Liberatore et al. report no significant differences in individual
performance related to field dependence and they claim that their findings extend those of prior comparative studies. Benbasat and Dexter (1985) conducted a study where field dependence of subjects was an independent variable and individual performance was a dependent variable. In their study, performance was measured in terms of both decision time and decision accuracy. Benbasat and Dexter find no significant relationship between decision time and field dependence, but they do find one between field dependence and task performance accuracy. They attribute poor individual performance on a visual computer-based task to a mismatch between information presentation and personality type. They suggest that proponents of graphical information presentation must qualify their applicability to task environments by ensuring that (1) there is a clearly defined rationale for the potential benefits of graphics usage, and (2) graphical reports are organized in a manner well suited to the task at hand. Both criteria were employed as necessary conditions for the research reported here. Lusk and Kersnick (1979) scrutinized the relevance of cognitive style, including field dependence, to decision making. Lusk and Kersnick focused on the relationship of cognitive style and report format to task performance, using the field dependence measure to classify subjects as high (low field dependence) and low (high field dependence) analytics. In a manner comparable to Liberatore et al. (1988), Lusk and Kersnick considered only solution accuracy as their performance measure (all subjects had the same amount of time to complete their task). They report no support for the claim that task accuracy depends on one’s assessed cognitive style.
Cognitive Style and Need-for-Cognition
Need-for-cognition (NFC) is very different from field dependence because it focuses more on an individual's motivational orientation. That is, even if an individual has the ability to discriminate accurately among visual stimuli, his or her performance is still contingent upon his or her cognitive motivation. Cohen, Stotland, and Wolfe (1955) describe NFC as a need to structure relevant situations in meaningful, integrated ways, and as a need to understand and make reasonable the experiential world. Interestingly, Petty and Cacioppo (1986) have demonstrated that need-for-cognition is not correlated highly with the traditional concept of intelligence. Various researchers have explored the need-for-cognition construct. For example, Cohen (1957) describes research that supports his hypothesis that individuals with high NFC are more likely to organize, elaborate upon, and evaluate information than are persons judged to have low NFC. And research by Scudder, Herschel, and Crossland (1994) reveals a direct link between
NFC and assessment of solution alternatives, which in turn appears to affect decision quality and process satisfaction. In an effort to relate the relevance of cognitive style to technology issues, Goodhue and Thompson (1995) propose a model of the relationship of task-technology fit (TTF) and individual performance. In doing this, they effectively define and operationalize the role of cognitive style (vis-à-vis field dependence and NFC). In their model, they contend that individual characteristics (e.g., cognitive style) and task characteristics (such as spatiality) are antecedents to task-technology fit. Task-technology fit is important because it affects whether a technology will be adopted and, if adopted, how it will be used. Hence, Goodhue and Thompson argue that task-technology fit has a direct bearing on the performance of an individual using the technology.
HYPOTHESES AND METHODOLOGY
A Geographic Information System (GIS) is a technology that is capable of creating and displaying graphical images based on tabular data that can be used to facilitate decision making. Pursuant to the tenets of Image Theory, GIS may be considered to make a positive contribution to the decision-maker's task if it enables the technology user to reach (a) a more accurate solution and/or (b) a faster solution to a given problem. Because a GIS is capable of generating less complex graphical displays (as defined in Image Theory) than are conventional paper maps, a user of GIS should benefit from greater efficiencies as predicted by Image Theory. However, given prior research findings, it is also possible that the ability to realize differences in efficiencies via use of GIS may vary as a function of differences in cognitive style.
Hypotheses
This study views maps as complex fields that contain embedded information. Highly field-dependent persons are comparatively passive receivers of information who tend not to structure or restructure a field, given situational demands. One would expect them to be disrupted most when graphical information becomes more complex. Therefore, we expect that they will perform less well than people whose cognitive style is less field dependent, and we argue that field dependence may be viewed as a measure of aptitude for spatial problems. With this in mind, we explore the following hypotheses:
H1: Individuals who are less field dependent will solve the experimental problem faster than individuals who are more field dependent.
H2: Individuals who are less field dependent will solve the experimental problem with a lower percent error than individuals who are more field dependent. This study also investigates the relationship of another cognitive style dimension to decision-making, the individual’s need-for-cognition (NFC) as described by Cacioppo and Petty (1982). Again, NFC measures an individual’s internal motivation to pursue and enjoy thinking activities. It is possible that NFC may affect decision task performance, aptitude notwithstanding. Therefore, it is hypothesized that individuals with higher NFC should exhibit a higher level of performance on a task than individuals with lower NFC. Hence, the following hypotheses are examined: H3: Individuals who score higher on the need-for-cognition (NFC) scale will solve the experimental problem faster than individuals scoring lower on the NFC scale. H4: Individuals who score higher on the NFC scale will solve the experimental problem with a lower percent error than individuals scoring lower on the NFC scale.
Methodology
This study employs a laboratory experiment designed to assess the relationship between GIS availability, problem complexity, field dependence, and need-for-cognition relative to decision-maker performance on a spatial task. Testing these relationships requires the manipulation of GIS availability and problem complexity. In addition, measures of cognitive style relative to individual field dependence and need-for-cognition must be captured. The four independent variables in this study are listed and explained below:
1. Presence/absence of GIS technology. This variable is manipulated on two levels. On one level, the subjects have only paper maps and tabular data to determine a solution to the experimental problem. On the second level, subjects are also provided with a GIS that displays graphical results of common data manipulations that are typically available in most GIS.
2. Problem complexity. Task complexity is manipulated on two levels. The first level requires subjects to rank order five facility sites using three spatial criteria. The second level requires rank ordering of ten facility sites using seven spatial criteria.
3. Field dependence. Prior to working the problem, each subject's degree of field dependence is measured by administering the Group Embedded Figures Test (Witkin et al., 1971).
4. Need-for-cognition. Prior to engaging in the experiment, each subject's NFC is assessed by administering the NFC questionnaire (Petty & Cacioppo, 1986).
Two dependent variables were measured and analyzed in the study:
1. Decision time. The overall time to process the problem statement, arrive at a solution, and record the solution is measured unobtrusively via computer. Subjects are given an unlimited amount of time to address the experimental problem. A subject's start time and end time are automatically recorded in a computer database.
2. Decision quality. The solution determined by each subject is captured directly in a database. Because the problem is objective and has a predetermined correct solution, the computers are programmed to automatically score each subject's solution against the correct solution. That is, a subject's solution either meets or does not meet task parameters, and, based on these results, the computer assigns a predefined point score. The task requires subjects to prioritize a series of alternative facility sites based on various spatial criteria. An error score is generated by summing, over the total problem space, the absolute number of rank positions away from its correct position that each site was placed in a subject's ranking. Because two levels of problem complexity are being considered, with two different site counts, the error score is converted to a percentage of total possible error for comparisons across cells in the research design matrix (see the sketch following the control-variable list below).
Variables that could have a confounding impact on the study and that require control include:
1. Nature of the task. Each subject solves the same stated problem. Only problem complexity (number of items to rank and number of criteria to consider) and the presence or absence of a decision aid (the GIS) are manipulated.
2. Training. All subjects receive the same training by working the same short training problem. The only difference is that subjects with the GIS available receive additional instructions on how to retrieve the necessary GIS displays from the computer.
3. Experimental setting. All subjects participate in the experiment in the same room, under the same physical conditions. All subjects use a computer with the same software for answering questionnaires and recording solutions. The GIS software used for the study was MapInfo from MapInfo Corporation of Troy, New York. MapInfo was the first company to develop and market PC-based mapping software for business applications, and is a multi-million-dollar worldwide company with over 200,000 clients who use MapInfo in a variety of computing environments, including standalone desktops, client/server networks, data warehouses and the Internet.
4. Solution scoring rule. All subjects employ the same point-score solution rule to solve the experimental task. Each site is assigned specific point values based on each criterion. The total criterion points for each site are summed, and then sites are ranked based upon the point totals.
5. Subject pool and assignment to design cells. All subjects are recruited from various sections of the same introductory computer course, and each is randomly assigned to one and only one of the experimental design cells.
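As noted in the description of the decision quality measure above, the point-score ranking and the percent-error measure can be sketched concisely. The following Python fragment is a reconstruction under stated assumptions (site names and point values are invented, and the maximum possible error is taken to be that of a fully reversed ranking); it is not the experiment's actual scoring program.

    # Hypothetical sites with their summed criterion points.
    points = {"Site A": 14, "Site B": 9, "Site C": 17,
              "Site D": 6, "Site E": 11}

    # Correct ranking: sites in descending order of total points.
    correct = sorted(points, key=points.get, reverse=True)
    # -> ['Site C', 'Site A', 'Site E', 'Site B', 'Site D']

    subject = ["Site C", "Site E", "Site A", "Site B", "Site D"]

    # Error score: sum of absolute rank displacements from the correct
    # position, taken over the total problem space.
    error = sum(abs(subject.index(s) - correct.index(s)) for s in correct)

    # Normalize by the worst case (assumed here to be the exact reversal
    # of the correct order) so 5-site and 10-site problems are comparable.
    n = len(correct)
    worst = sum(abs(i - (n - 1 - i)) for i in range(n))
    print(error, worst, round(100 * error / worst, 1))  # 2 12 16.7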
Experimental Procedure The group of subjects receives brief introductory comments, followed by the administration of the NFC questionnaire (see Appendix). Next, the Group Embedded Figures Test (GEFT — a timed, written test used to measure field dependence) is administered to each participant. A short practice task is then given to the subjects to familiarize them with the methodology to be used, the organization of the printed materials, the nature of the task, and to show them how to manipulate the computer to record task answers. Subjects in the two design cells with GIS technology are also given instructions on how to retrieve and manipulate displays required to address the experimental task. Subjects are then administered the experimental task. The task requires subjects to assume the role of an operations analyst in an electric power company. According to the scenario, the company desires to replace some of its older coal-fired generating stations with a new fuel cell technology that is much cleaner and more efficient. The subjects then prioritize several potential sites (five in the less complex problem, ten in the more complex problem) against several criteria (three in the less complex problem, seven in the more complex problem) that include spatial components. The criteria deal with such factors as proximity of a natural gas pipeline, regions with endangered species, proximity to parks and recreational areas, and other such spatially referenced information. The priority ranking is based on a scoring rule that
assigns points to sites based on each criterion. Figure 2 shows one of the digital maps considered by subjects that worked the more complex problem using a GIS. Members of each of the four experimental groups are offered a cash performance prize, based first on decision accuracy, then on elapsed time as a tie breaker.
Experimental Design and Analysis
A four-cell, 2x2 factorial design is employed in this study, with the unit of analysis being the individual decision-maker. The four experimental design cells for the treatments are:
1. No GIS, less complex problem
2. No GIS, more complex problem
3. GIS, less complex problem
4. GIS, more complex problem

Figure 2: More complex problem, population/major markets criterion GIS screen. [Figure: a MapInfo display overlaying population (over 50,000) and major markets, with map-layer menu options Population, Pipeline, Recreation, Fish, Pop-Mkts, Prks/Polit, Endangered, Zoom, and Quit.]
To analyze the individual cognitive style variables, each of these treatment cells is further subdivided once for analysis by categorizing each subject as high or low field dependent, then alternatively as high or low NFC. This process yields a pair of 2x2x2 factorial experimental designs (Campbell & Stanley, 1963). Field dependence and NFC are measured as interval level variables and then systematically categorized to nominal level, based on interval ranges. The two dependent variables, decision time and quality, are both interval level variables. Data are analyzed using descriptive statistics and univariate analysis of variance techniques.
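For readers who want to reproduce this kind of analysis with current tools, a minimal sketch using pandas and statsmodels follows. The file name and column names are assumptions made for illustration; they do not come from the study.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # One row per subject; column names are illustrative assumptions:
    # complexity ('low'/'high'), gis ('no'/'yes'),
    # field_dep ('low'/'high'), time_min (solution time in minutes).
    df = pd.read_csv("subjects.csv")

    # Three-way ANOVA with all interactions, as in Tables 1 and 2.
    model = ols("time_min ~ C(complexity) * C(gis) * C(field_dep)",
                data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))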
RESULTS OF ANALYSIS The research design follows Campbell & Stanley’s Design 6 (1963), the posttest-only control group design, with two dimensions of manipulated factors and two dimensions of subject variables (Underwood, 1957). Thus, an analysis of variance statistical model, as recommended by Campbell & Stanley for such designs, is employed as the primary model for comparing the means of the dependent variables. A total of 142 subjects completed the experiment, including 88 men and 54 women. Again, the subjects were randomly assigned to one of the four experimental treatment groups.
Field Dependence Field dependence is measured pre-task by administering the Group Embedded Figures Test (Witkin et al., 1971). It is scored using the published scoring guide for the test. Scoring is accomplished by simply counting the number of correct tracings of the embedded geometric figures on the test, with possible scores ranging from 0 to 18. Scores of the 142 subjects in this study range from 0 to 18, with a median of 14.0, a mean of 12.35, and standard deviation of 4.40. Scores closer to 0 imply greater field dependence, while scores closer to 18 imply less field dependence. As the subjects are divided into four treatment groups, it is deemed appropriate to maintain similar groupings in the analysis of field dependence. Also, because the concept of high and low scores on the GEFT implies additional groupings for field dependence, a three-way ANOVA is appropriate to analyze both dependent variables against field dependence. Several methods of splitting the subjects into high versus low field dependence are analyzed, including at the sample mean, higher versus lower thirds, and both one-half and one standard deviation above and below the sample mean. The
strongest contrast in performance is observed when the subjects are split at one-half standard deviation below the mean. This process yields scores of 0-10 [high field dependence] and scores of 11-18 [low field dependence]. The results of ANOVA using this split are shown in Table 1 and Table 2. Of particular importance in this analysis is the significant (p<.01) main effect of field dependence on solution time and a significant (p<.05) interaction of field dependence with problem complexity. Thus, H1 is supported, whereas H2 is not.
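The half-standard-deviation cutoff can be checked against the reported sample statistics (mean 12.35, standard deviation 4.40); a tiny sketch of the arithmetic, before the ANOVA results in Tables 1 and 2 below:

    # GEFT split at one-half standard deviation below the sample mean.
    mean, sd = 12.35, 4.40
    cutoff = mean - 0.5 * sd
    print(cutoff)  # 10.15: integer scores 0-10 fall below the cutoff
                   # (high field dependence); scores 11-18 fall above
                   # it (low field dependence)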
Table 1: Field dependence ANOVA for solution time (minutes)

SOURCE                   SS        df    MS        F        p
Problem complexity (A)   12828.1   1     12828.0   299.00   .000
GIS availability (B)     392.1     1     392.1     9.14     .003
Field dependence (C)     387.5     1     387.5     9.03     .003
A x B                    105.4     1     105.4     2.50     .119
A x C                    242.9     1     242.9     5.66     .019
B x C                    14.1      1     14.1      0.33     .567
A x B x C                18.8      1     18.8      0.44     .510
Residual                 5749.0    134   42.9
Total                    19780.9   141

Table 2: Field dependence ANOVA for percent error (percent)

SOURCE                   SS        df    MS        F        p
Problem complexity (A)   79.0      1     79.0      0.74     .390
GIS availability (B)     1659.5    1     1659.5    15.64    .000
Field dependence (C)     64.4      1     64.4      0.61     .437
A x B                    63.3      1     63.3      0.60     .441
A x C                    20.4      1     20.4      0.19     .662
B x C                    5.8       1     5.8       0.54     .816
A x B x C                1.8       1     1.8       0.02     .897
Residual                 14223.3   134   106.1
Total                    16114.7   141

Need-For-Cognition
Need-for-cognition is measured by a pre-task questionnaire developed by Petty and Cacioppo (1986). The questionnaire is a set of eighteen questions that are answered on a nine-point Likert-type scale. After reverse-coding nine of the questions, the NFC score is derived by simply summing the values of the answers for each of the eighteen questions. This gives a potential minimum score of 18 and a potential maximum score of 162. Scores for the 142 subjects in this study range from a minimum of 42 to a maximum of 156, with a median of 107.5, a mean of 105.9, and standard deviation of 20.65.
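The scoring procedure just described is straightforward to sketch. Which nine items are reverse-coded is inferred here from the negatively worded statements in the Appendix; the chapter itself does not list them, so treat that set as an assumption.

    # NFC scoring sketch: nine-point answers, nine negatively worded
    # items reverse-coded, then summed (possible range 18-162).
    REVERSED = {3, 4, 5, 7, 8, 9, 12, 16, 17}  # inferred; see note above

    def nfc_score(responses):
        """responses maps item number (1-18) to an answer from 1 to 9."""
        return sum(10 - r if item in REVERSED else r
                   for item, r in responses.items())

    # Answering 5 on every item lands on the scale midpoint:
    print(nfc_score({i: 5 for i in range(1, 19)}))  # 90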
In a manner comparable to the analysis for field dependence, several different methods for splitting the subjects into high versus low NFC are scrutinized. The greatest contrast is observed at a split one standard deviation above the sample mean. This yields scores of 126-156 classed as high, and of 0-125 classed as low need-for-cognition. The results of ANOVA using this split are shown in Tables 3 & 4. These results, albeit interesting, are surprising. A significant (p<.01) main effect is observed for NFC on percent error, and a significant (p<.05) interaction between NFC and availability (or not) of GIS. However, the direction of the main effect difference is opposite that stated in hypothesis H4. The data related to the interaction indicate that high NFC (i.e., NFC score > 125) subjects experienced on average about a three times higher percent error than subjects with lower NFC, both with and without GIS. Therefore, hypothesis H3 is not supported and hypothesis H4 is not supported.
Table 3: Need for cognition ANOVA for solution time (minutes)

SOURCE                   SS        df    MS        F        p
Problem complexity (A)   12828.1   1     12828.1   271.61   .000
GIS availability (B)     392.1     1     392.1     8.30     .005
Need for cognition (C)   32.2      1     32.2      0.68     .411
A x B                    118.9     1     118.9     2.52     .115
A x C                    7.8       1     7.8       0.17     .685
B x C                    29.7      1     29.7      0.63     .429
A x B x C                30.7      1     30.7      0.65     .422
Residual                 6328.7    134   47.2
Total                    19780.9   141

Table 4: Need for cognition ANOVA for percent error (percent)

SOURCE                   SS        df    MS        F        p
Problem complexity (A)   79.0      1     79.0      0.84     .361
GIS availability (B)     1659.5    1     1659.5    17.68    .000
Need for cognition (C)   833.7     1     833.7     8.88     .003
A x B                    15.8      1     15.8      0.17     .683
A x C                    55.4      1     55.4      0.59     .444
B x C                    463.7     1     463.7     4.94     .028
A x B x C                323.9     1     323.9     5.38     .065
Residual                 12580.1   134   93.9
Total                    16114.7   141

DISCUSSION AND CONCLUSIONS
Field dependence is related to task performance in this experiment. Two relationships are hypothesized, in H1 and H2, to relate field dependence with
solution time and percent error, respectively. Support is found for H1: more field-dependent people have higher solution times for the task. However, no support is found for H2: high field dependence does not yield a higher percentage of errors on the task. The findings of the current study are congruent with Lusk and Kersnick (1979) with regard to the lack of a significant relationship between field dependence and solution accuracy. However, the study findings conflict somewhat with evidence reported by Zmud and Moffie (1983). Zmud and Moffie find only a minimal association of field dependence with accuracy and confidence, and that association is only measurable for tasks of low complexity. Liberatore et al. (1988) report no significant differences in performance related to field dependence and they claim that their findings extend those reported in prior studies. However, the findings in this study indicate that field dependence does have a significant impact on solution time, but not on decision accuracy. Two relationships are hypothesized, in H3 and H4, to relate NFC with solution time and percent error, respectively. However, the results of this investigation show significant relationships in the opposite direction from what is stated in H3 and in H4. A significant interaction of NFC with problem complexity indicates that subjects with higher NFC took significantly longer to complete the task than did subjects with lower NFC. A significant main effect of NFC on percent error indicates that people with higher NFC had a significantly higher percent error than did people with lower NFC. A possible explanation for this finding may be that high NFC people approach the experimental task in this study with so much thoughtful consideration that they end up making the task more difficult than it actually is. In other words, they think too much about the problem. Indeed, Cohen (1957) found that individuals with high NFC are more likely to organize, elaborate on, and evaluate information to which they are exposed. Therefore, it may be that, in actuality, additional opportunities for cognitive activity exact a task performance penalty for high NFC individuals. This study has two limitations for researchers wishing to generalize its results. First, the task is specialized. However, we suggest that there may be opportunities to compare the results of this study to other experimental situations where the impact of technology on decision-making tasks involving spatial components is examined. Moreover, the study establishes a task environment and experimental methodology that should be portable to other similar tasks. Second, the decision-makers in the experiment were college sophomores. Generalizability of results obtained from using such surrogate
"real world" decision-makers has come under question (Gordon, Slade, & Schmitt, 1986). However, the validity of using students as surrogates for more experienced decision makers has also been vigorously defended in the literature (e.g., Greenberg, 1987; Briggs, Balthazard, & Dennis, 1996). The function of a GIS is as simple as its execution is complex: data from spreadsheets and databases are precisely tagged so that they can be portrayed in graphical map form. The power of the geographic information system is in its ability to store, integrate, model, analyze, and depict many kinds of data in many ways. Geographic information systems (GIS) can be an especially powerful information management tool for certain business applications, including site location, strategic decision making, and marketing. Increased corporate use of geographic information systems technology to improve customer service and cut costs is expected to significantly expand this market in the future (Jacobs, 1996). Also, CIOs are finding GIS to be a significant tactical tool for identifying product niches by geography, facilitating presentations, improving comprehension, identifying logistical problems, and developing marketing strategies (Hamilton, 1996). As visualization tools and the hardware required to support them fall in cost, more enterprises will draw on the power of advanced (3-D and beyond) visualization to provide insights into complex data. Visualization is no longer the exclusive domain of "Type A" organizations. In addition to traditional markets such as science, engineering, and medicine, the technology is becoming accessible to smaller and more cost-conscious enterprises; emerging markets include telecommunications, financial analysis, and data mining. Therefore, in the years to come, increasingly dazzling graphics, including 3-D and animation, will make the use of GIS more vital in communicating information and in drafting corporate strategies. We argue that GIS can potentially also support recent organizational activities emphasizing knowledge management. Knowledge management is a discipline that promotes an integrated approach to identifying, managing, and sharing all of an enterprise's information assets in order to harness the intellectual capital of the organization. Davenport and Prusak (1998) argue that one of the most efficient and effective ways of dealing with organizational knowledge is through the creation of knowledge maps that act as guides to people with implicit knowledge, or to documents and databases that contain explicit knowledge. We believe that the application of GIS to this process offers a unique opportunity to portray organizational expertise and knowledge in a spatial context.
A leading computer-industry researcher argued that technologists often get too far into the design of a system without understanding who the target users are, the work that they do, and the context in which they do that work (Wildstrom, 1998). While we sympathize with this complaint, we also suggest that more research is needed to better understand the relationship between technology and differences among end users. Towards that end, we hope that this research serves to enhance our understanding about the use of GIS in decision-making activities.
APPENDIX
The need-for-cognition questionnaire (from Petty & Cacioppo, 1986). Each statement is answered on a nine-point scale from 1 (Do not agree at all) to 9 (Agree completely).

1. I would prefer complex to simple problems.
2. I like to have the responsibility of handling a situation that requires a lot of thinking.
3. Thinking is not my idea of fun.
4. I would rather do something that requires little thought than something that is sure to challenge my thinking abilities.
5. I try to anticipate and avoid situations where there is a likely chance I will have to think in depth about something.
6. I find satisfaction in deliberating hard and for long hours.
7. I only think as hard as I have to.
8. I prefer to think about small, daily projects to long-term projects.
9. I like tasks that require little thought once I've learned them.
10. The idea of relying on thought to make my way to the top appeals to me.
11. I really enjoy a task that involves coming up with new solutions to problems.
12. Learning new ways to think doesn't excite me very much.
13. I prefer my life to be filled with puzzles that I must solve.
14. The notion of thinking abstractly is appealing to me.
15. I would prefer a task that is intellectual, difficult, and important to one that is somewhat important but does not require much thought.
16. I feel relief rather than satisfaction after completing a task that required a lot of mental effort.
17. It's enough for me that something gets the job done; I don't care how or why it works.
18. I usually end up deliberating about issues even when they do not affect me personally.
Chapter XVIII
Are Remote and Non-Remote Workers Different? Exploring the Impact of Trust, Work Experience and Connectivity on Performance Outcomes

D. Sandy Staples Queen's University, Canada
Information technology (IT) is enabling the creation of virtual organizations and remote work practices. As this practice of employees working remotely from their managers and colleagues grows, so does the importance of making these remote end-users of technology effective members of organizations. This study tested a number of relationships that were suggested in the literature as being relevant in a remote work environment. Interpersonal trust of the employees in their managers was found to be strongly associated with higher self-perceptions of performance, higher job satisfaction and lower job stress. There was weak support for the impact of physical connectivity (i.e., the availability of IT) on job satisfaction, supporting the enabling role of IT. These findings were similar for both remote employees (i.e., those that worked in a different building than their managers) and non-remote employees. However, more frequent communication between the
manager and employee was associated with higher levels of interpersonal trust only with the remote workers. Cognition-based trust was also found to be more important than affect-based trust in a remote work environment, suggesting that managers of remote employees should focus on activities that demonstrate competence, responsibility and professionalism.
INTRODUCTION
Working remotely is becoming more common with advances in information technology (IT). Information technology is enabling distributed work, both for IS professionals and other professionals. Many, if not all, remote workers will be end users of information technology. Making these end users effective in a remote environment holds many challenges for organizations. The purpose of this paper is to explore some of these challenges and issues. In recent years, there has been some research on telecommuting to understand one type of remote work practice, that of working remotely from home (Belanger & Collins, 1998; DeSanctis, 1984; Duxbury & Haines, 1991; Duxbury, Higgins, & Irving, 1987; Igbaria & Guimaraes, 1999; Johnson, 2001; McCloskey & Igbaria, 1998; Neufeld, 1997; Olson, 1988; Switzer, 1997). A key issue in telecommuting and virtual organizational structures is the management of employees who are located remotely from their manager (Beyers, 1995; Kepczyk, 1999; Pearlson & Saunders, 2001; Pinsonneault & Boisvert, 2001; Rooney, 2000; Tapscott & Caston, 1993). Managers' roles are changing as traditional, hierarchical methods are no longer appropriate (Bogdanski & Setliff, 2000; Cascio, 2000; Grenier & Metes, 1995; Jenner, 1994; Lucas & Baroudi, 1994; Pearlson & Saunders, 2001; Raines & Leathers, 2001; Snell, 1994). The fear of lost managerial control is reported to be a significant factor preventing widespread adoption of telecommuting (DeSanctis, 1984; Duxbury et al., 1987; Duxbury & Haines, 1991; Goodrich, 1990; Pearlson & Saunders, 2001; Phelps, 1985; Risman & Tomaskovic-Devey, 1989; Roderick & Jelley, 1991). The objective of this research was to study remote work and remote management and to explore differences among remote workers and non-remote workers. This information can potentially assist organizations and managers in making their remote workers more effective. For this study, remote workers were defined as employees who work in a physically separate location from their managers. The employee's location could vary considerably, from working at another company office or in their home, to working at a customer's location or out of their car. Employees working at home are by definition telecommuting; however, telecommuting is just one work arrangement
that results in remote management. Telecommuting is only a small part of the virtual workplace, in which people work together while being physically distant from each other (Belanger & Collins, 1998; Jenner, 1994; Pearlson & Saunders, 2001).
A series of hypotheses identifying potentially important IT and management issues in remote work were developed based on the literature and suggestions from exploratory research carried out for this study. The hypotheses were tested with data gathered via a questionnaire. The exploratory research was carried out in order to identify key issues of working remotely, both from workers' and managers' viewpoints. The development of the hypotheses is presented in the next section. This is followed by a discussion of the methodology used for testing the hypotheses and then the findings are presented. The last section discusses the findings, their contributions and limitations.
DEVELOPMENT OF HYPOTHESES A series of testable hypotheses were developed based on suggestions in the literature and results from exploratory research carried out for this study. The purpose of the exploratory research was to identify key issues of working and managing remotely and possible practices to address these issues. Details of this exploratory research are briefly described below, and the hypotheses that were developed are then presented.
Exploratory Research
The exploratory research was conducted using focus group interviews to collect the views of both people who were working remotely and managers who were managing remote workers. A total of 104 people from five different organizations participated in nineteen focus groups, split fairly evenly between managers of remote workers (58 participants; 56%) and remote workers (46 participants; 44%). Sixty percent (n=63) of the participants worked in Canada, 37% (n=38) worked in the United States, and 3% (n=3) of the participants worked in England. Each focus group lasted for an average of 1.5 hours. After brainstorming about remote environment issues for about the first half of the meeting, each participant in the focus groups identified the top three issues from their perspective and ranked them in descending order. The last half of the focus group was spent discussing possible actions organizations could take to address the issues (see Staples [1997] for a full report on the results of these focus groups).
Hypotheses The first four hypotheses deal with the role of trust in remote work. Both the literature and participants in the exploratory research suggested that trust between the manager and employee is an important factor for making remote work effective. Developing trust and minimal supervision expectations are important since it is very difficult to supervise and control remote employees due to limited face-to-face contact (Duxbury et al., 1987; Handy, 1995a; Lucas & Baroudi, 1994; Pearlson & Saunders, 2001; Savage, 1988; Snell, 1994). However, trusting employees often goes against a managerial tradition of control and a tradition that believes control and efficiency are closely linked and that control is necessary for efficiency (Handy, 1995a). Trust is the belief or confidence in a person or organization’s integrity, fairness and reliability (Lipnack & Stamps, 1997). In a remote work setting, where employees are working in different locations than their managers, the opportunity for face-to-face contact is limited. This means that the manager has significantly fewer opportunities to view employee behaviour than would exist in a conventional work setting (i.e., where the manager and employee work in the same building). Observing behaviours is no longer a feasible coordination and control mechanism in a remote workplace; trust needs to be used instead. From the remote employees’ perspective, interpersonal trust with their managers is very important since the potential for isolation is high. The informal communication and information-gathering opportunities for employees in virtual work environments are typically less than in non-virtual settings. The employees rely on their managers to keep them informed of necessary information and to support their activities with effective feedback and recognition. Davidow and Malone (1992) suggest that trust is the defining feature of a virtual enterprise and that all types of management in the era of virtual enterprises must be built on trust. Lipnack and Stamps (1997) suggest that “In the networks and virtual teams of the Information Age, trust is a ‘need to have’ quality in productive relationships.” (p. 225). Although the literature contains many suggestions about the importance of trust in remote work (Bogdanski & Setliff, 2000; Brown, 1994; Cascio, 2000; Caswell, 1995; Caudron, 1992; Duratta, 1995; Duxbury et al., 1987; Grensing-Pophal, 1997; Handy, 1995a; Hartman, Stoner & Arora, 1992; Klein, 1994; Miles & Snow, 1995; Posch, 1994), there has been little empirical research done on this to-date. The results of the exploratory research support the views in the literature. In the focus groups, remote employees were concerned about how to remotely build trust and a relationship between managers and employees. Managers
who managed remote employees also identified performance management issues as being common problems (ranking second in terms of weighted frequency). Many of these performance management issues involved trust. Key issues identified included how to build trust between managers and employees such that the manager feels confident about what their employees are doing, as well as how to measure productivity and shift towards a result-based focus. From the above, the importance of trust in a remote work environment, where the employee works remotely from his/her manager, appears clear. However, in a non-remote environment, trust is also important and has been suggested to be related to performance and effectiveness (Golembiewski & McConkie, 1975; McAllister, 1995; McCauley & Kuhnert, 1992; Rotter, 1967). McAllister (1995), in his study of cognitive and affect-based trust, found significant correlations between both types of trust and performance. Therefore, a positive relationship was hypothesized between employee/manager trust and employee perceptions of the effectiveness of working remotely.
Hypothesis 1. Higher levels of trust between the manager and employee will be associated with more positive perceptions of self-performance.
Employees in the focus groups suggested that trust of the manager in their abilities and being able to trust the manager increased the enjoyment and satisfaction they received from their job. McCauley and Kuhnert (1992) lend support to these focus group participants' ideas by suggesting that trust in management is associated with a number of job satisfaction dimensions, including development opportunities, job security and performance appraisal systems. Driscoll (1978) and Robinson (1996) also suggested that trust impacts satisfaction. Hollon and Gemmill (1977) found a significant positive association between trust and job satisfaction. Thus, the second hypothesis is:
Hypothesis 2. Higher levels of trust between the manager and employee will be associated with higher levels of job satisfaction.
As reviewed above, trust potentially has important impacts on the ability of an employee to perform effectively. Therefore, examining possible things that impact levels of trust is warranted. It was suggested in the focus groups that trust is developed through effective communications, both formal and informal. Voss (1996) supports this view by suggesting that open and spontaneous communication is the basis for building trust and establishing relationships. Grenier and Metes (1995) also suggest that communication builds trust, which in turn builds better communication. Therefore, it was hypothesized that:
Hypothesis 3. More frequent communication between employee and manager will be associated with higher levels of trust.
Trust has also been found to impact things other than performance and effectiveness. Hollon and Gemmill (1977) and Ross (1994) found significant negative relationships between trust and job stress. Employees who have high job stress can experience sleepless nights, work under a great deal of tension, and possibly show feelings of nervousness. Potentially, high levels of trust can reduce these feelings and behaviours. High levels of interpersonal trust imply that the manager and employee have an effective relationship in which they care about each other and listen to problems, and the manager provides coaching advice and consistent feedback. This can potentially reduce feelings of isolation, an important issue identified by remote employees in the focus groups. Participants in the exploratory focus group research also specifically suggested that job stress declines as trust between the manager and employee increases. Therefore, it is suggested:
Hypothesis 4. Higher levels of trust between the manager and remote employee will lead to lower levels of job stress for remote employees.
Telework researchers have argued that working at home leads to new sources of stress and increased stress levels for teleworkers, generally attributed to difficulties in attempting to balance work and family responsibilities (Di Martino & Wirth, 1990; Olson & Primps, 1984; Pearlson & Saunders, 2001). In support of this, McCormick's (1992) study found that more than 70 percent of teleworking participants reported increased stress as a result of trying to deal with family issues during work hours. However, telework also has the potential to reduce job stress (Igbaria & Guimaraes, 1999; McCloskey, 2001). Explanations for decreased stress include: decreased commute time (Bailey, 1989; Cassidy, 1992; Maynard, 1994; Mayor, 1994; McNerney, 1994; Meall, 1993); relaxed social and political pressures (Metzger & Von Glinow, 1988; Olson & Primps, 1984); decreased interruptions (Olson & Primps, 1984); and improved ability to manage work and family demands (Cosgrove, 1992; Pierce, Newstrom, Dunham & Barber, 1989). Participants in the focus groups conducted for this study suggested that the extra burdens imposed by working remotely can create higher job stress for remote employees than for non-remote employees. Potential contributors to the extra burdens were travel, more formal communications, and increased work/family conflict. While there appear to be both positive and negative impacts on job stress caused by remote work, and some of these are specific to work-at-home employees, for this study it was hypothesized that:
Hypothesis 5. Remote employees will have higher job stress than non-remote employees.
The focus group participants also suggested that job stress declines as job experience increases. The logic suggested for this was that as employees become more experienced, they develop ways to deal with the potential for work/family conflict. Routines for communication become established, which is less stressful. Isolation also declines as the employee builds networks and contacts. Travel demands may decline as the employee learns to be more discriminating in choosing when a face-to-face meeting is really necessary. Consistent with the focus group suggestions, Pearlson and Saunders (2001) also suggest that a balance between managerial and employee controls will develop as both parties gain more experience with the remote relationship. The amount of tension potentially decreases over time. Therefore,
Hypothesis 6. Remote work experience will be negatively associated with job stress.
Information technology (i.e., the level of connectivity) was suggested to be an important enabler of effective remote work by many of the focus group participants and by various authors in the literature (Freedman, 1993; Greengard, 1994; Handy, 1995a; Kepczyk, 1999; Lucas & Baroudi, 1994; O'Hara-Devereaux & Johansen, 1994; Rooney, 2000; Zabrosky, 2000). The technology allows tasks to be distributed in different places and executed at different times while providing integration and control for the process (Mowshowitz, 1994). The virtual workplace provides access to the information needed to do a job anywhere, anytime, anyplace, and the latest in communication technology is used to accomplish this (Jenner, 1994). Information technology issues were the second most frequently identified class of issue for remote employees. Focus group participants who were not well connected wanted more capabilities, including voice mail, electronic mail, groupware, and the perceived ultimate capability, videoconferencing, as well as reliable, constant IT support and access to networks. Participants who were well connected recognized its value. One participant summarized the feeling succinctly by stating that "IT was their lifeline" to the rest of their work group and the organization. Therefore, examining the level of connectivity and its impact on the employee's perception of the effectiveness of working remotely and their job satisfaction was warranted.
Hypothesis 7. Higher levels of connectivity will be positively associated with the remote worker's performance.
Hypothesis 8. Higher levels of connectivity will be positively associated with the remote worker's job satisfaction.
Several authors have suggested that working remotely may change the perceptions of the corporate culture in remote employees or that remote
employees may develop different views of the corporate culture (Gainey, Kelley & Hill, 1999; Greengard, 1994; Illingworth, 1994; Raines & Leathers, 2001; Voss, 1996; Walsham, 1994). Consistent with these suggestions in the literature, maintaining or developing an appropriate corporate culture in a dispersed work environment was found to be a key issue with some of the focus group participants. This was especially true in companies where the corporate culture was explicitly viewed as a valuable asset. The informal sharing of values and stories that occurs naturally when people are physically together has to be replaced in a virtual setting by explicit efforts, which will typically fall upon the manager's shoulders. This can be difficult to do. Therefore, it may be that remote employees develop different or weaker perceptions of the organization's culture than do non-remote employees. In order to test this idea, the following hypothesis was posited.
Hypothesis 9. Remote employees will have a different perception than non-remote employees of the organization's corporate culture.
Two of the nine hypotheses developed above explicitly compare remote workers with non-remote workers (i.e., hypotheses 5 and 9). The other seven hypotheses deal with relationships between various concepts that were developed specifically for remote workers, although all seven relationships could also apply to non-remote workers. Participants of the focus groups suggested that working remotely is considerably different from working locally. The remote work literature also implicitly supports this suggestion, although only a few studies (e.g., Igbaria & Guimaraes, 1999; McCloskey, 2001) have explicitly examined empirical differences between teleworkers and non-teleworkers. Igbaria and Guimaraes (1999) found significant differences between teleworkers and non-teleworkers in their study of turnover intentions and their determinants. McCloskey (2001) found statistically significant differences between the two groups in terms of perceived autonomy, work-family conflict, career support and advancement prospects, and boundary spanning activities. Therefore, the strength and directions of the relationships suggested by the seven hypotheses may be different in the two settings. In order to examine if the relationships are different for remote versus local workers, the following proposition is suggested:
Proposition 1. Hypotheses 1 through 4, and 6 through 8, will be supported more in a remote work setting than in a non-remote setting.
RESEARCH METHODOLOGY
The Sample
Data were gathered by sending a questionnaire to 1,343 individuals working in 18 North American organizations, which both (1) employed individuals who worked remotely from their managers, and (2) were interested in participating in a study of remote workers. A total of 631 questionnaires were returned, for an overall response rate of 47%. Although this response rate is somewhat low, raising potential concerns about non-response bias, use of the procedure suggested by Armstrong and Overton (1977) indicated no significant differences between respondents and non-respondents on a variety of demographic variables included in the questionnaire. Thus, non-response bias did not appear to be a major problem. A total of 376 of the returned questionnaires were from remotely-managed employees, defined as employees having their office in a different building from their manager. Forty-seven percent of these remote respondents worked in private sector high technology firms, 22% worked in private sector financial service firms, and the remaining 31% worked in the public sector. About 44% of the respondents had been with their organization 11 or more years. Approximately half of the respondents had been in their present position three or more years, and about 60% had worked for their present manager 2 years or less. Seventeen percent of the remotely-managed respondents worked at home, with the vast majority of these indicating that it was easy for them to do so. The median distance between the respondents' office and their manager's office was 483 kilometres. The remaining respondents were locally managed (n=255; 40.4%). The demographic characteristics of the remotely-managed respondents were similar to those of the locally-managed respondents. As would be expected, more remotely-managed respondents worked from their home, which meant that, on average, the remotely-managed respondents had a shorter commute time. The remotely-managed respondents also appeared to have longer tenure in their position than the locally-managed respondents. MANOVA and chi-square tests of independence were conducted to test if the differences between remotely-managed and locally-managed respondents were statistically significant. Only two significant differences were found. On average, remotely-managed respondents had been in their position longer, and a higher proportion of remotely-managed respondents were married than were locally-managed respondents (87.3% versus 79.0%, respectively). It was judged that these two differences were not critical to the issues being studied here, so it was acceptable to proceed with testing the hypotheses that dealt with differences between remote and non-remote employees.
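To illustrate the group-comparison step just described, the following sketch shows a chi-square test of independence of the kind used to compare the demographic profiles of the two groups. The counts are reconstructed from the reported percentages (87.3% of the 376 remote respondents and 79.0% of the 255 local respondents married) rather than taken from the study's raw data, so this illustrates the procedure rather than reproducing the analysis.

from scipy.stats import chi2_contingency

# Rows: remotely-managed, locally-managed; columns: married, not married.
# Counts are approximations derived from the reported percentages.
observed = [[328, 48],    # ~87.3% of 376 remote respondents
            [201, 54]]    # ~79.0% of 255 local respondents

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")  # a small p indicates the proportions differ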
Analysis
Analysis of variance techniques were used to test the hypotheses. Specifically, MANOVA, a technique for analyzing differences between group means of categorical variables (i.e., the independent variables) in situations where there is more than one dependent variable, was used. MANOVA results are assessed in two steps. First, the overall test of significance (i.e., the omnibus test), which takes into account the intercorrelations of the dependent variables, is examined. If the overall test is significant, the dependent variables can then be examined individually. If an individual item's F-test is statistically significant, then there are differences between the groups for that dependent variable. The rejection criteria for the individual F-tests are adjusted using a Bonferroni procedure (i.e., the nominal alpha is divided by the number of dependent variables [Bray & Maxwell, 1985]).
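A minimal sketch of this two-step procedure is shown below, using hypothetical column names ("dv1" to "dv3" for the dependent variables, "trust_group" for the dichotomous independent variable) and an assumed data file; it illustrates the logic described above, not the study's actual analysis code.

import pandas as pd
from scipy.stats import f_oneway
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("survey.csv")      # assumed data layout
dvs = ["dv1", "dv2", "dv3"]         # hypothetical dependent variables

# Step 1: omnibus MANOVA across all dependent variables at once.
omnibus = MANOVA.from_formula(" + ".join(dvs) + " ~ trust_group", data=df)
print(omnibus.mv_test())            # reports Hotelling's trace, Wilks' lambda, etc.

# Step 2: if the omnibus test is significant, examine each dependent
# variable with an individual F-test, rejecting at a Bonferroni-adjusted
# level (nominal alpha divided by the number of dependent variables).
alpha = 0.05 / len(dvs)
for dv in dvs:
    groups = [g[dv].dropna() for _, g in df.groupby("trust_group")]
    f_stat, p = f_oneway(*groups)
    print(f"{dv}: F = {f_stat:.2f}, p = {p:.4f}, significant: {p < alpha}")

With a two-group independent variable, each follow-up F-test is equivalent to the individual item tests reported in Table 3.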
Construct Measurement
Where possible, the constructs were measured with proven scales taken from the literature. In order to achieve acceptable levels of measurement reliability and validity, both a pretest and a pilot study were carried out, following the guidelines suggested by Dillman (1978). Questionnaire pretesting was first completed using faculty, graduate student, and practitioner input. This information was used to refine the original survey instrument. A preliminary pilot study questionnaire was then administered to remote employees in one insurance firm, resulting in 64 responses. The resulting data were analyzed and used to further modify the questionnaire items for the full study. Measurement of the dependent variables is described below, followed by a description of the independent variables.
The Dependent Variables. In the nine hypotheses, six unique dependent variables are used. Four of these dependent variables were measured with existing scales taken from the literature that had demonstrated acceptable psychometric properties in previous studies. These were: trust, job satisfaction, job stress, and organizational climate. Trust was measured using an 11-item scale developed by McAllister (1995). Warr, Cook and Wall's (1979)
15-item scale was used to assess job satisfaction. A five-item scale developed by Rizzo, House and Lirtzman (1970) was used to measure job stress. Fineman's (1975) Job Climate Questionnaire was initially used to measure organizational climate. A measure of organizational climate was chosen over a culture measure since organizational climate was seen to fit better with this study. Organizational climate has a somewhat shorter time frame (relatively enduring) than organizational culture (highly enduring), and climate is more practice oriented, operating at the level of attitudes and values (Moran & Volkwein, 1992). However, the results of the pilot test indicated that the Fineman scale had poor psychometric properties even though it was reported to have good reliability in the literature. Therefore, it did not appear to work in this context, and it was replaced in the final version of the survey with a five-item scale that Higgins and Duxbury developed and validated in several in-house company surveys involving several thousand respondents. The five items of this organizational climate scale consistently demonstrated good reliabilities (loadings of .8 or higher) in Higgins and Duxbury's work (C.A. Higgins, personal communication, May 28, 1996). As shown in Table 1, Cronbach's alpha values for the four constructs were all above 0.8 in this study, indicating adequate internal consistency.
The dependent variable for hypotheses 1 and 7 was the respondents' perceptions of their performance. Performance was operationalized via two measures that were developed for this study. The first measure collected information on remotely-managed employees' perceptions of the effectiveness of working remotely. Six items were used to do this. The second measure assessed overall perceived productivity. Respondents were asked to indicate their agreement with eight statements regarding their overall effectiveness (2 items), efficiency (2 items), quality of work (3 items), and productivity (1 item). Internal consistency of these two measures was found to be adequate (Cronbach's alpha of 0.82 and 0.87, respectively).

Table 1: The reliability of the scales used to measure the dependent variables

Name                                       Number of items   Cronbach's alpha
Perception of remote work effectiveness          6                 .82
Perception of overall productivity               8                 .87
Job satisfaction                                15                 .89
Trust                                           11                 .94
Job stress                                       5                 .84
Organizational climate                           5                 .87

Face validity was assessed
during the pretest and found to be acceptable. Principal components analysis with varimax rotation was conducted to examine the construct validity of these measures. The eight productivity items broke into two factors. One factor dealt with the 5 items that asked about the respondents' own beliefs about their productivity, and the other factor dealt with the 3 items that asked about the respondents' beliefs about how others (i.e., their manager and co-workers) view their productivity. The loadings were high within these factors (i.e., ranging from .70 to .90) and the cross-loadings on other factors were low (i.e., a maximum of .30). This indicated good discriminant validity and reasonable internal consistency within the factors. Overall, the results suggested that the productivity construct, as measured, had two sub-dimensions. This appeared reasonable given the items used. Since all the items dealt with productivity, and adequate internal consistency was indicated by the Cronbach's alphas, using the 8 items together for the MANOVA analysis was judged to be reasonable. Similar results were found for the remote work effectiveness measure. The six items broke into two factors, one of which was comprised of the three items asking about changes in the respondent's productivity since they started working remotely. The other factor was made up of three items asking about perceptions of working remotely in general. The loadings were high within these factors (i.e., ranging from .77 to .91) and the cross-loadings on other factors were low (i.e., a maximum of .39). These results also suggested that the effectiveness of working remotely construct, as measured, had two sub-dimensions, which appeared reasonable given the items used. Since all the items dealt with perceptions of working remotely, and adequate internal consistency was indicated by the Cronbach's alphas, using the 6 items together for the MANOVA analysis was again judged to be reasonable. Principal components analysis was then conducted using the 14 performance items and the items that measured trust and connectivity (i.e., the independent variables in the relevant hypotheses). The results indicated good discriminant validity between the measures, as all eleven of the trust items collapsed into one factor, as did the three connectivity measures (described below). Cross-loadings of items onto constructs that they were not designed to measure were all low.
The Independent Variables. To test the nine hypotheses, categorical variables were required to measure five constructs: trust, frequency of communications, remote work, experience, and connectivity. A dichotomous measure of trust was created by summing the 11 trust items and splitting the respondents into two groups at the midpoint. A dichotomous measure of the frequency of communication was created. Respondents indicated how frequently
they used six different media (face-to-face meetings, written correspondence, telephone, e-mail, groupware, and videoconferencing) for four different activities (i.e., receiving coaching feedback and performance feedback, discussing other information, and staying in touch with the manager). The responses were summed to create one variable that was split at the midpoint to create two groups. Remote workers were defined as those who indicated that their primary office was in a different building than their manager's. Experience was assessed in three different ways. A dichotomous variable measuring the length of time the respondent had been remotely managed was used, with the break point being 3 years. The length of time the respondent had been working for their company was assessed with a variable that had five categories, ranging from "less than one year" to "over 20 years". The experience the respondents had in their current position was assessed with a four-category variable, with the responses ranging from "less than one year" to "over 5 years". Three items were developed to measure the connectivity construct. The measures used in this study only assessed physical connectivity (i.e., the degree to which IT tools were available). Multiple items were used to allow a more complete assessment of the degree to which the respondents had IT tools available. The first item assessed respondents' access to IT communication systems, specifically voice mail, e-mail, groupware, and videoconferencing systems; the item was created by summing responses to four questions which determined whether or not respondents had access to each of the specific technologies/systems at their place of work. The second item was a sum of the responses to questions dealing with respondents' use of various IT tools associated with enabling remote work (i.e., laptops, modems, fax, cellular phones, and pagers). The third item addressed remote-access capability and was created by summing items which asked respondents about their ability to use their e-mail, groupware, and telephone/voice-mail systems from remote locations. All three of the connectivity items loaded onto one factor in the principal components analysis, described previously.
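The following sketch illustrates two of the measurement steps described above: computing Cronbach's alpha for a multi-item scale and collapsing the summed trust scale into a dichotomous variable. The column names, the data file, and the assumption of 7-point response items are all hypothetical; the text does not state whether the split point was the scale midpoint or the sample median, so the midpoint variant shown is one plausible reading.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: one column per scale item, one row per respondent."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

df = pd.read_csv("survey.csv")                          # assumed layout
trust_items = df[[f"trust_{i}" for i in range(1, 12)]]  # the 11 McAllister items
print(f"alpha = {cronbach_alpha(trust_items):.2f}")     # .94 is reported in Table 1

# Dichotomize trust: sum the 11 items and split respondents into low/high
# groups at the scale midpoint (4 per item on an assumed 7-point scale,
# i.e., 44 for the summed 11-item scale).
trust_sum = trust_items.sum(axis=1)
df["trust_group"] = np.where(trust_sum > 4 * trust_items.shape[1], "high", "low")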
RESULTS
The results of the MANOVA analysis used to test the hypotheses are presented in Table 2. Support was found for five of the nine hypotheses: hypotheses 1 through 4, and hypothesis 8, which are described more fully below. The other four hypotheses were not supported. Remote employees did not have significantly different levels of stress (H5) nor different perceptions
of organizational climate (H9) than did the locally-managed employees. Greater experience, whether with the organization, with the current position, or with working remotely, did not significantly reduce the job stress perceived by remotely-managed employees (H6). Hypothesis 7 was also not supported, although the omnibus test was significant in two of the connectivity tests (H7c and H7d). However, in both of these cases, there were no significant differences between the group responses for the individual items, which means that the hypotheses were not supported.
Hypotheses 1 through 4 were supported. Trust was significantly associated with remote employees' perceptions of their performance in a remote work environment, both in terms of perceptions of overall productivity (H1a) and perceptions of remote work (H1b). Trust was also significantly associated with the remote employees' levels of job satisfaction (H2) and job stress (H4). Individuals with high levels of trust had significantly higher job satisfaction, more positive perceptions of working remotely, and lower job stress. The last column in Table 2 indicates how many of the dependent variables were found to have statistically significant differences via the individual F-tests. The individual items that were significantly different are listed in Table 3. Three of the eight items regarding overall productivity were significant (H1a). Three of the six attitudes towards remote work items (H1b) had significant differences between respondents with high trust and those with low trust. In all cases, the perceptions were more positive for those people who had higher levels of trust between themselves and their manager. Thirteen of the 15 job satisfaction items had significant differences between high and low trust respondents (H2). Again, in all cases, job satisfaction scores were higher for the high trust respondents than for the low trust respondents. Two of the five items that measured stress (H4) had significant differences between the high and low trust groups (see Table 3). Respondents with high trust levels between their managers and themselves felt that they worked under less tension and felt less fidgety and nervous than did the respondents with lower trust levels.
Partial support was also found for the impact of connectivity on remote workers' job satisfaction (H8). Specifically, hypothesis 8b, where the independent variable was the use of remote access tools, was found to be significant, and one individual item was found to be significant (Table 3). Employees who used remote access tools more frequently in their job had higher satisfaction with the freedom to choose their own method of working.
Proposition 1 was examined by testing hypotheses 1 through 4 and 6 through 8 using the responses from the respondents who did not work remotely. The results of the omnibus MANOVA tests are presented in
Table 2: A summary of the results of the analysis of variance
(each row gives the hypothesis, its description, the omnibus test statistic, its significance level, and the number of individually significant items, where applicable)

H1a: Impact of trust on remote employees' perceptions of overall productivity. Hotelling's T2 = 0.12, F(8,358) = 5.41, p < .001. Significant items: 3 out of 8.
H1b: Impact of trust on perceptions of remote work. Hotelling's T2 = 0.05, F(6,359) = 3.27, p = .004. Significant items: 3 out of 6.
H2: Impact of trust on perceptions of remote employees' job satisfaction. Hotelling's T2 = 0.67, F(15,356) = 16.00, p < .001. Significant items: 13 out of 15.
H3: Impact of frequency of communications on employee/manager trust. Hotelling's T2 = 0.094, F(11,361) = 3.08, p = .001. Significant items: 6 out of 11.
H4: Trust reduces job stress. Hotelling's T2 = 0.04, F(5,367) = 3.04, p = .011. Significant items: 2 out of 5.
H5: Remote employees have higher job stress than do locally-managed employees. Hotelling's T2 = 0.01, F(5,621) = 1.04, p = .395.
H6a: Greater tenure in organization reduces job stress. Wilks' Lambda = 0.94, F(20,1212) = 1.10, p = .345.
H6b: Greater experience in the current position reduces job stress. Wilks' Lambda = 0.97, F(15,1008) = 0.79, p = .685.
H6c: More experience at working remotely reduces job stress. Hotelling's T2 = 0.01, F(5,363) = 0.94, p = .454.
H7a: Impact of having IT communication systems on remote worker's perceived productivity. Wilks' Lambda = 0.98, F(24,995) = 1.09, p = .345.
H7b: Impact of use of remote access tools on remote worker's perceived productivity. Hotelling's T2 = 0.04, F(8,357) = 1.93, p = .055.
H7c: Impact of remote access to systems on remote worker's perceived productivity. Hotelling's T2 = 0.06, F(8,359) = 2.55, p = .010. Significant items: none.
H7d: Impact of having IT communication systems on remote worker's attitudes toward remote work. Wilks' Lambda = 0.92, F(18,976) = 1.72, p = .030. Significant items: none.
H7e: Impact of use of remote access tools on remote worker's attitudes toward remote work. Hotelling's T2 = 0.01, F(6,358) = 0.80, p = .570.
H7f: Impact of remote access to systems on remote worker's attitudes toward remote work. Hotelling's T2 = 0.03, F(6,360) = 1.58, p = .151.
H8a: Impact of having IT communication systems on remote worker's job satisfaction. Wilks' Lambda = 0.81, F(45,1017) = 1.66, p = .005. Significant items: none.
H8b: Impact of use of remote access tools on remote worker's job satisfaction. Hotelling's T2 = 0.12, F(15,356) = 2.74, p = .001. Significant items: 1 out of 15.
H8c: Impact of remote access to systems on remote worker's job satisfaction. Hotelling's T2 = 0.05, F(15,358) = 1.27, p = .218.
H9: Remote employees have a different perception of organizational climate than do locally-managed employees. Hotelling's T2 = 0.01, F(5,621) = 1.55, p = .173.
Table 4. Support was found for hypotheses 1, 2, and 4. Partial support was found for hypothesis 8. The results for both the remotely-working respondents and the non-remote respondents are summarized in Table 5. This shows
Table 3: The significant dependent variables from the MANOVA analysis
(each row gives the individual F-test result, the item description, and the mean values for the low and high score groups)

Hypothesis 1a: Impact of trust on perceptions of remote employees' productivity
F(1,365) = 8.89, p < .05. I am a highly productive employee. Low: 5.65; High: 5.96.
F(1,365) = 27.20, p < .01. My manager has recently (i.e., within the last three months) been impressed with the quality of my work. Low: 5.03; High: 5.69.
F(1,365) = 30.85, p < .01. My manager believes I am an efficient worker. Low: 5.34; High: 5.92.

Hypothesis 1b: Impact of trust on perceptions of remote work
F(1,364) = 15.03, p < .01. Working remotely is an effective way to work. Low: 5.09; High: 5.66.
F(1,364) = 10.27, p < .01. It is not difficult to do the job being remotely managed. Low: 5.45; High: 5.98.
F(1,364) = 12.32, p < .01. Working remotely is an efficient way to work. Low: 5.10; High: 5.64.

Hypothesis 2: Impact of trust on perceptions of remote employees' job satisfaction
F(1,370) = 13.54, p < .001. Satisfaction with the freedom to choose your own method of working. Low: 5.60; High: 6.04.
F(1,370) = 50.51, p < .001. Satisfaction with the recognition you get for good work. Low: 4.16; High: 5.20.
F(1,370) = 203.14, p < .001. Satisfaction with your immediate boss. Low: 4.53; High: 6.21.
F(1,370) = 53.56, p < .001. Satisfaction with the amount of responsibility you are given. Low: 5.13; High: 6.06.
F(1,370) = 15.00, p < .001. Satisfaction with your rate of pay. Low: 4.03; High: 5.68.
F(1,370) = 39.00, p < .001. Satisfaction with the opportunity to use your abilities. Low: 4.85; High: 5.71.
F(1,370) = 37.01, p < .001. Satisfaction with industrial relations between management and employees in your firm. Low: 4.40; High: 5.16.
F(1,370) = 23.30, p < .001. Satisfaction with your chance of promotion. Low: 3.48; High: 4.27.
F(1,370) = 143.74, p < .001. Satisfaction with the way you are managed. Low: 4.22; High: 5.76.
F(1,370) = 119.65, p < .001. Satisfaction with the attention paid to the suggestions you make. Low: 4.43; High: 5.79.
F(1,370) = 8.94, p < .05. Satisfaction with your hours of work. Low: 4.90; High: 5.39.
F(1,370) = 18.02, p < .001. Satisfaction with the amount of variety in your job. Low: 5.25; High: 5.80.
F(1,370) = 19.42, p < .001. Satisfaction with your job security. Low: 4.24; High: 4.93.

Hypothesis 3: Impact of frequency of communications on interpersonal trust
F(1,371) = 12.42, p < .01. My manager and I have a sharing relationship. We can both freely share our ideas, feelings and hopes. Low: 4.74; High: 5.32.
F(1,371) = 9.83, p < .05. We would both feel a sense of loss if one of us was transferred and we could no longer work together. Low: 3.75; High: 4.28.
F(1,371) = 9.90, p < .05. My manager approaches his/her job with professionalism and dedication. Low: 5.46; High: 5.88.
F(1,371) = 9.86, p < .05. Given my manager's track record, I see no reason to doubt his/her competence and preparation for the job. Low: 5.11; High: 5.60.
F(1,371) = 12.03, p < .01. Most people, even those who aren't close friends of my manager, trust and respect him/her as a coworker. Low: 4.74; High: 5.28.
F(1,371) = 16.90, p < .01. Work associates of mine who must interact with my manager consider him/her to be trustworthy. Low: 4.83; High: 5.43.

Hypothesis 4: Impact of trust on job stress
F(1,371) = 9.25, p < .05. I work under a great deal of tension. Low: 4.71; High: 4.19.
F(1,371) = 9.63, p < .05. I have felt fidgety or nervous as a result of my job. Low: 3.99; High: 3.40.

Hypothesis 8b: Impact of using remote access tools on remote worker's job satisfaction
F(1,370) = 12.31, p < .05. Satisfaction with the freedom to choose your own method of working. Low: 5.62; High: 6.03.
that the pattern of results is quite similar. Hypotheses 1, 2 and 4 were supported for both groups of respondents. Also, hypothesis 8 was partially supported for both groups. The only difference was that hypothesis 3, the impact of the frequency of communications on trust, was only supported in the remote workers’ analysis. Given the fairly similar results between the remote and non-remote groups, it was concluded that there was little support for proposition 1. The relationships between the independent variables and the dependent variables specified in the hypotheses do not appear to be very different for remote workers versus local workers.
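The omnibus statistics in Tables 2 and 4 can be reconciled as follows: assuming the tabled Hotelling's T2 values are Hotelling's trace statistics, they convert to the reported F-values (for a two-group MANOVA with p dependent variables and N respondents in total) as

    F(p, N - p - 1) = \frac{N - p - 1}{p} \, T^2

For example, for H3 in Table 2, (361/11)(0.094) = 3.08, which matches the tabled F(11,361); the other tabled values are consistent with this relation up to rounding.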
Table 4: A summary of the results of the analysis of variance for non-remote workers
(each row gives the hypothesis, its description, the omnibus test statistic, its significance level, and the number of individually significant items, where applicable)

H1a: Impact of trust on employees' perceptions of overall productivity. Hotelling's T2 = 0.13, F(8,238) = 3.98, p < .001. Significant items: 4 out of 8.
H2: Impact of trust on perceptions of employees' job satisfaction. Hotelling's T2 = 0.81, F(15,235) = 12.76, p < .001. Significant items: 13 out of 15.
H3: Impact of frequency of communications on employee/manager trust. Hotelling's T2 = 0.04, F(11,239) = 0.88, p = .556.
H4: Trust reduces job stress. Hotelling's T2 = 0.16, F(5,244) = 7.99, p < .001. Significant items: 5 out of 5.
H6a: Greater tenure in organization reduces job stress. Wilks' Lambda = 0.90, F(20,800) = 1.23, p = .224.
H6b: Greater experience in the current position reduces job stress. Wilks' Lambda = 0.99, F(15,668) = 0.36, p = .987.
H7a: Impact of having IT communication systems on worker's perceived productivity. Wilks' Lambda = 0.87, F(24,682) = 1.44, p = .079.
H7b: Impact of use of remote access tools on worker's perceived productivity. Hotelling's T2 = 0.06, F(8,238) = 1.69, p = .101.
H7c: Impact of remote access to systems on worker's perceived productivity. Hotelling's T2 = 0.07, F(8,236) = 2.12, p = .035. Significant items: none.
H8a: Impact of having IT communication systems on worker's job satisfaction. Wilks' Lambda = 0.72, F(45,693) = 1.82, p = .001. Significant items: 3 out of 15.
H8b: Impact of use of remote access tools on worker's job satisfaction. Hotelling's T2 = 0.10, F(15,236) = 1.61, p = .073.
H8c: Impact of remote access to systems on worker's job satisfaction. Hotelling's T2 = 0.10, F(15,234) = 1.64, p = .066.

Table 5: Proposition 1 - Comparing the results of hypothesis testing between remote workers and non-remote workers

H1a: Impact of trust on employees' perceptions of overall productivity. Remote workers: Supported. Non-remote workers: Supported.
H2: Impact of trust on perceptions of employees' job satisfaction. Remote workers: Supported. Non-remote workers: Supported.
H3: Impact of frequency of communications on employee/manager trust. Remote workers: Supported. Non-remote workers: Not supported.
H4: Trust reduces job stress. Remote workers: Supported. Non-remote workers: Supported.
H6: Experience reduces job stress. Remote workers: Not supported. Non-remote workers: Not supported.
H7: Impact of connectivity on worker's perceived productivity. Remote workers: Not supported. Non-remote workers: Not supported.
H8: Impact of connectivity on worker's job satisfaction. Remote workers: Partially supported. Non-remote workers: Partially supported.
DISCUSSION
Information technology (i.e., the level of connectivity) was suggested to be an important enabler of effective remote work by many of the focus group participants and by various authors in the literature (Freedman, 1993; Greengard, 1994; Handy, 1995a; Kepczyk, 1999; Lucas & Baroudi, 1994; O'Hara-Devereaux & Johansen, 1994; Rooney, 2000; Zabrosky, 2000). Partial support was found for this suggestion from the empirical data in this study. For the remote employees, frequent communications were significantly related to higher levels of interpersonal trust (H3). Since communications in a remote setting are often done via IT, this finding supports the need for good connectivity. However, connectivity did not seem to significantly impact perceptions of performance (H7), although if the significance criterion were relaxed somewhat (i.e., to 0.10), the relationship between connectivity and perceived overall productivity would become significant in many of the tests. Partial support was found for an effect of connectivity on job satisfaction (H8). Overall, these findings are consistent with previous research suggesting that the impact of IT is one of many things influencing outcomes (Barley, 1990; Kling, 1980 & 1987; Symons, 1991). IT does appear to be a necessary enabler, but not a sufficient condition, to strongly impact individual outcomes. Fulk, Flanagin, Kalman, Monge and Ryan (1996) suggest that there are two different types of connectivity: physical and social. The definition (and set of measures) of connectivity used in the current study dealt with the level of physical connectivity available to the respondents. However, individuals must also be willing and able to use such connectivity. Future research on remote work should broaden the definition of connectivity to include social connectivity, as well as physical connectivity. Social connectivity would capture the individual's willingness and ability to use the physical connectivity. This type of connectivity may well have a stronger impact on individual outcomes such as performance.
The strongest finding of this study centered on the role of trust. Trust was found to significantly impact perceptions of performance, job satisfaction and job stress, as was hypothesized. The findings were consistent for both remote and non-remote workers. Since trust was found to be a key variable, additional analysis was carried out to see if different types of trust had different impacts in remote settings versus non-remote settings. The trust instrument used in this study was McAllister's (1995) interpersonal trust scale. McAllister suggested that there are two dimensions of interpersonal trust: cognition-based and affect-based trust. Cognition-based trust is based on "what we take to be 'good reasons' constituting evidence of trustworthiness
such as demonstrated responsibility and competence" (Lewis & Weigert, 1985, p. 970). Affect-based trust consists of emotional bonds between two parties who express genuine care and concern for each other's welfare (McAllister, 1995). Both of these dimensions of trust were part of the instrument used in this study, so it was possible to refine the analysis to examine the impact of affect-based trust versus the impact of cognition-based trust. Hypotheses 1, 2 and 4 (the impact of trust on performance, job satisfaction, and job stress) were retested for each of the two trust sub-dimensions and for each of the datasets (remote and non-remote workers). The results are summarized in Table 6.

Table 6: Comparing the results of hypothesis testing between remote and non-remote workers for the two different dimensions of trust
(for each hypothesis and trust dimension: whether the omnibus test supported the hypothesis, the test statistic, and the number of individually significant items)

H1a: Trust to employees' perceptions of overall productivity
  Cognition-based trust. Non-remote workers: supported (Hotelling's T2 = 0.09, F(8,238) = 2.81, p = .005; 2 out of 8 items significant). Remote workers: supported (Hotelling's T2 = 0.08, F(8,358) = 3.79, p < .001; 2 out of 8 items significant).
  Affect-based trust. Non-remote workers: supported (Hotelling's T2 = 0.16, F(8,238) = 4.73, p < .001; 6 out of 8 items significant). Remote workers: supported (Hotelling's T2 = 0.16, F(8,358) = 6.99, p < .001; 2 out of 8 items significant).

H1b: Trust to remote employees' perceptions of remote working
  Cognition-based trust. Non-remote workers: not applicable. Remote workers: supported (Hotelling's T2 = 0.05, F(6,359) = 2.78, p = .012; 3 out of 6 items significant).
  Affect-based trust. Non-remote workers: not applicable. Remote workers: supported (Hotelling's T2 = 0.05, F(6,359) = 3.04, p = .006; 4 out of 6 items significant).

H2: Trust to perceptions of employees' job satisfaction
  Cognition-based trust. Non-remote workers: supported (Hotelling's T2 = 0.83, F(15,235) = 12.97, p < .001; 12 out of 15 items significant). Remote workers: supported (Hotelling's T2 = 0.47, F(15,356) = 11.14, p < .001; 10 out of 15 items significant).
  Affect-based trust. Non-remote workers: supported (Hotelling's T2 = 0.75, F(15,235) = 11.68, p < .001; 14 out of 15 items significant). Remote workers: supported (Hotelling's T2 = 0.65, F(15,356) = 15.52, p < .001; 12 out of 15 items significant).

H4: Trust reduces job stress
  Cognition-based trust. Non-remote workers: supported (Hotelling's T2 = 0.10, F(5,244) = 4.65, p < .001; 3 out of 5 items significant). Remote workers: supported (Hotelling's T2 = 0.07, F(5,367) = 5.30, p < .001; 3 out of 5 items significant).
  Affect-based trust. Non-remote workers: supported (Hotelling's T2 = 0.12, F(5,244) = 5.88, p < .001; 4 out of 5 items significant). Remote workers: not supported (Hotelling's T2 = 0.02, F(5,367) = 1.60, p = .159).
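A sketch of this sub-dimension re-test follows. The assignment of McAllister's (1995) items to the cognition-based and affect-based subscales shown here is hypothetical (the chapter does not reproduce the item keying), as are the column names and data file; only the structure of the test, a separate MANOVA per trust sub-dimension, follows the description above.

import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.read_csv("survey.csv")                         # assumed layout
subscales = {
    "cognition": [f"trust_{i}" for i in range(1, 7)],  # hypothetical keying
    "affect": [f"trust_{i}" for i in range(7, 12)],
}
stress_formula = " + ".join(f"stress_{i}" for i in range(1, 6)) + " ~ trust_group"

for name, items in subscales.items():
    # Dichotomize each sub-dimension at its scale midpoint (assumed 7-point items).
    df["trust_group"] = df[items].sum(axis=1) > 4 * len(items)
    result = MANOVA.from_formula(stress_formula, data=df)
    print(f"--- {name}-based trust ---")
    print(result.mv_test())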
The impact of cognition-based trust on the dependent variables appears to be similar for both remote workers and non-remote workers. For both groups, cognition-based trust was statistically significantly associated with perceptions of overall productivity, job satisfaction and job stress. However, the results suggest that the role of affect-based trust is stronger for non-remote employees than it is for remote employees. Affect-based trust was significantly related to overall productivity, job satisfaction, and job stress for non-remote employees, while it was only found to be related to overall productivity and job satisfaction for remote employees. Affect-based trust was not significantly related to job stress for the remote workers, and it seemed to have a somewhat weaker effect on job satisfaction and overall productivity perceptions, as evidenced by fewer individual items being significant. Therefore, it appears that managers of remote workers should concentrate on building cognition-based trust, since that has a bigger impact than affect-based trust. Cognition-based trust can be built by focusing on activities that lead to employees trusting managers based on their demonstrated competence, responsibility and professionalism.
The results suggest that managers and employees should work hard at developing a relationship based on trust, both in a remote and a non-remote setting. Knowing what can be done to build this trust is therefore an important avenue for future research. In the current study, one antecedent of trust was hypothesized: frequency of communication (H3). Support for this hypothesis was found in the remote workers' responses but not in the non-remote workers' responses. Splitting trust into the two dimensions and reanalyzing the hypothesis yielded similar results (i.e., the frequency of communication significantly impacted both dimensions of trust for the remote workers and had no significant effect for the non-remote workers). These findings support suggestions in the literature that frequent communications are an important step toward building trust, presumably because they facilitate the sharing of information between the manager and employee on each other's activities and feelings. This sharing builds a relationship between the two parties over time. The findings also suggest that communications in a non-remote setting are not as important for building trust, possibly because the non-remote employee has other avenues for gathering the information that he/she uses to form trust judgements. Examination of other factors that affect the creation of trust in both remote and non-remote settings is an important avenue for future research.
Nonsignificant results were found for Hypothesis 9, suggesting that remote employees and non-remote employees had similar perceptions of organizational climate. In the study, the responses from several organizations
were combined to test the hypotheses. Gainey, Kelley and Hill's (1999) suggestions imply that this could have had a confounding effect (i.e., not controlling for organization could have led to the nonsignificant finding). Gainey et al. (1999) suggest that the impact of remote work on organizational culture will vary depending on the type of organizational culture present. They used Handy's (1995b) four types of organizational cultures in their discussion. Handy based his typology on Greek mythology: the Zeus culture, the Athena culture, the Apollo culture, and the Dionysus culture. The Zeus culture depends heavily on similarity of members, powerful, central leaders, extensive personal contact, and very few procedural guidelines. The Athena culture values creativity, enthusiasm, and unique solutions, and is characterized by frequent interactions within a loose network of teams. The Apollo culture is a bureaucratic culture where there are carefully documented policies and procedures. In the Dionysus culture, individual talents and accomplishments are key, leading to minimal contact among individuals. Gainey et al. (1999) suggest that the Zeus culture will be most heavily impacted by remote work, followed by the Athena culture. Both the Apollo and Dionysus cultures would be more suited to remote work, since isolation would be less likely to weaken those cultures.
In order to further explore the potential confounding effect of combining responses from several organizations, post-hoc analysis was carried out by splitting the sample on organizational membership and retesting Hypothesis 9 for the sub-samples from each organization. Only three of the organizations in the current study had enough respondents to make up a suitably-sized sub-sample for analysis (i.e., n = 86, 243, and 104). MANOVA analysis was run on the 5 organizational climate items for each of the three sub-samples. The omnibus MANOVA test was significant for one sub-sample (Hotelling's T2 = 0.18, F(5,80) = 2.89, p = .019); however, after Bonferroni adjustment, none of the individual items had statistically significant F-tests. The omnibus MANOVA tests for the other two sub-samples were not statistically significant (Hotelling's T2 = 0.04, F(5,236) = 2.02, p = .077 and Hotelling's T2 = 0.11, F(5,97) = 2.20, p = .061). Therefore, controlling for the organization in the current study did not create statistically significant differences between remote and non-remote respondents' perceptions of organizational climate. However, it is worth noting that controlling for organizational membership did increase the statistical significance of the findings (i.e., p was .173 for the entire sample and considerably smaller for each sub-sample), lending some support to the suggestions of Gainey, Kelley and Hill. It is also worth noting that we did not explicitly look for organizations that had the types of climates/cultures that Gainey, Kelley and Hill
(1999) suggest would be significantly impacted by isolation and remote work (i.e., Zeus and Athena cultures). Future research should be done to select organizations that fit each culture profile and study the impact of remote work on organizational culture within these four types of cultures.
As with all studies, this one has a number of limitations and opportunities for future studies, some of which have been previously identified. Since this was a cross-sectional study, the ability to make causal statements is severely limited. In addition to replicating the cross-sectional research to enhance the external validity of the findings, more qualitative research, such as longitudinal case studies, would be valuable. While questionnaires lend themselves to quantitative analysis, case studies would gather richer, deeper information. This information would be valuable in examining issues such as the role of informal communications in a remote work setting and the way trust can be built effectively in a remote environment. The sample used in this study included respondents from several organizations who performed a wide variety of job functions. While this is a strength in terms of the ability to generalize the results, it can also be considered a limitation since there was no control for potential confounding factors such as job task or organizational culture (as previously discussed). Consequently, it must be left to future research to determine, for example, whether the results are consistent between high technology and non-technology workers, or whether remote work in the public and private sectors is fundamentally different.
In summary, the results from this research study make a contribution for practitioners by showing that increasing the trust that employees, both remote and non-remote, have in their managers will be beneficial to organizations, since it positively impacts a number of important outcomes. Managers of remote employees can do this through frequent communication. Managers of remote employees should also focus on building cognition-based trust with their employees, since it has a potentially greater payback to the organization. The findings also help guide future research by identifying important aspects of remote work to focus on, such as the causes of trust. As the practice of working remotely grows, it will become increasingly important for organizations to understand what they can do to make their remote work environment effective.
Chapter XIX
Measuring the Impact of Information Systems on Organizational Behavior R. Wayne Headrick New Mexico State University, USA George W. Morgan Southwest Texas State University, USA
Information systems design's traditional concentration on short-term, readily quantifiable functional factors has resulted in the development of systems that are usually quite capable of manipulating data in the desired manner to produce the required output, but often fail to promote the general behavioral climate objectives of the organization. Failure to consider such behavioral objectives in the design process can result in information systems that have an impact on the organization that is intrusive in nature. To design information systems that not only meet functional objectives, but also promote objectives related to the organization's behavior, their impact on organizational behavior must be understood and quantified. Toward that end, a methodology that can measure the impact an information system has on the behavioral climate of the organization has been developed and tested. Utilizing pre- and post-implementation assessments of an organization's behavioral climate, this methodology enables information systems developers to identify specific potential design criteria which, when implemented, will increase the degree to which the organization's behavioral goals and objectives are met. Consideration of such organizational behavior goals and
objectives when designing information systems can result in significant progress toward ensuring the acceptance and long-term survival of those information systems.
Information systems, like any other production-oriented system, have as their goal the transformation of raw materials (data) into finished goods (information) in an effective, efficient manner. In the context of information systems, this means meeting informational requirements with a minimum expenditure of available resources. To develop such information systems, design and development strategies ranging from the traditional top-down or bottom-up techniques to Joint Application Design (JAD) to object-oriented methods have been advanced. Traditional top-down systems design revolves around the functional decomposition of the business activity under consideration until the resultant sub-activities are of manageable size and complexity. This emphasis on functionality often overshadows other organizational considerations in the information systems design and development process. Along this line, Zmud (1983) noted that in traditional systems design, the use of a set of functional-support-oriented requirements to aid the design process was considered to be the key to a successful information system, with little consideration given to the impact of the system on the organization. JAD, one of a number of systems design methods developed to increase user involvement in the design process while ensuring the functionality of the resultant information systems, makes use of a structured mechanism to increase the voice of the user community in the design of the information system. By actively involving the users of the system in the design effort, the focus on business needs they bring to the process should result in a system that is more organizationally relevant than would otherwise be the case. Although not specifically developed to enhance user interaction in systems design efforts, object-oriented systems design methods tend to model business operations in such a way that users can more easily relate to and understand them. Because the functionality of individual objects, as well as the interactions among them, is based on the actual functions and processes of the organizational activities being modeled, the organization's behavior should, to some extent, be reflected in the information system developed from that model. Implicit in this recognition of the need to involve system users in information systems design efforts is the understanding that the system is an integral part of the organization within which it exists. As such, it is important that the system fit into the behavioral climate of that organization.
CONSIDERING ORGANIZATIONAL BEHAVIOR
Early advocates of incorporating organizational goals and objectives into information systems evaluation, Ahituv and Neumann (1982) noted that, while functional objectives are important to the development of an information system, such an effort is not merely a technological project; it also has significant managerial, organizational and behavioral implications. Zmud (1983) developed an informal mechanism for appraising expected organizational impacts (both positive and negative) in the specification of system requirements. Atteberry and Doke (1982) suggest that personal/organizational considerations should be included in the evaluation of information systems performance, with user attitude toward the systems and the rate of consumption of human resources being typical of the criteria by which such considerations can be measured. Burch and Grudnitski (1990) noted that an organization may be readily viewed as a system made up of several sub-systems, including an information sub-system. Because the information sub-system is an integral part of the organization, changes in it are likely to affect the entire organization, possibly bringing about changes in the organization's behavioral characteristics. To understand those changes, the behavioral characteristics of the organization must be identified, quantified and measured.
In an early effort to explain the behavior of social systems such as organizations, Getzels (1958) concluded that the observed behavior of such systems was a function of the interactions between two dimensions that exist within systems: the institutional (nomothetic) dimension and the individual (idiographic) dimension. The two-dimensional model developed by Getzels, graphically depicted in Figure 1, illustrates the interactions, and potential conflicts, between the structural, norm-producing and sanction-bearing features of the institution and the internal motivation systems of the individuals within the organization. Because the interactions across these two dimensions lead to the observed behavior of the system, supplanting institutional roles and expectations with the processes and/or procedures of an information system creates a parallel model which should help explain the behavioral climate as it relates to an information system.
Extending the work of Getzels, Halpin (1966) developed an organizational behavior evaluation measure that focuses on eight factors, four that relate to the behavior of the employee and four that relate to the employee's perception of the behavior of the organization. The four factors related to the behavior of the employee are:
• Disengagement - refers to the employee's tendency to be "not with it" or just "going through the motions" with respect to the task at hand.
• Hindrance - refers to the employee's feeling that the system burdens them with requirements that are viewed as unnecessary busywork.
• Esprit - refers to the level to which employees' social needs are being satisfied, and the extent to which they enjoy a sense of accomplishment on the job.
• Intimacy - refers to the employees' enjoyment of friendly social relations with each other.
The four factors that relate to the employee's perception of the behavior of the organization are:
• Aloofness - refers to formal and impersonal guidance, and is characterized by the use of rules and procedures rather than informal, face-to-face situations in dealing with employees.
• Production emphasis - refers to behavior associated with communication that is downward only, with no sensitivity to employee feedback.
• Thrust - refers to behavior characterized by evident effort directed toward trying to "move the system." High thrust implies that the system expects no more out of the employees than is rational for job/system goal congruence.
• Consideration - refers to behavior characterized by an inclination to treat the employee "humanly."
The behavioral climate of the organization should be congruent with management desires and expectations, and the organization's information systems should foster the desired behavioral climate. To do that, information systems development efforts must be concerned with aligning each of Halpin's eight behavioral factors with the desired levels. This means the information systems design process must be concerned with encouraging movement in one or more of the eight factors while maintaining the positions of the remaining factors. In turn, this requires that the organization's behavioral climate, as described by the eight behavioral factors, be measured using an appropriate instrument prior to the implementation of the information system. Of course, to determine the information system's impact on the organization's behavioral climate, the instrument must also be administered subsequent to the system's implementation.
METHODOLOGY
To effectively integrate organizational behavior considerations into the information systems design process:
• the current behavioral climate must be measured and understood,
• the desired behavioral climate must be determined,
• the new/modified information system must be designed using the knowledge of the current and desired behavioral climates, and
• the behavioral climate must be measured after implementation of the new/modified information system to determine the effectiveness of the design in fostering the desired organizational behavior.
Measurement of the present behavioral climate can be accomplished by viewing the organization in terms of Halpin's eight factors. Using an instrument designed to measure the behavioral climate that exists between any subordinate level and its related superior level, knowledge of the degree of association that exists for each of the eight factors can be developed, with a statement of perceived behavioral climate being developed from the resultant factor classifications. This knowledge can then be presented to the organization's management, not for the purpose of making judgements regarding the style of management, but rather to allow management to decide if the current behavioral climate is the one desired. At this point, management is able to determine the desired priority and disposition for each of the eight factors. This prioritization, in turn, enables the designers of the new/modified information system to understand management's perception of the importance of each factor as well as how it should be made to behave.
After the new information system has been implemented and declared operational, the behavioral climate can again be assessed using the same instrument used to evaluate the pre-implementation climate, with the results of this post-implementation assessment being compared to both the pre-implementation assessment results and the desired behavioral characteristics as determined by management. These comparisons would provide management with an excellent view of the impact that the information system has had on the behavioral climate. If the review indicates that the desired levels for each of the factors have been met, a positive result is indicated. If not, the direction in which each of the factors at variance should be moved can be determined. This knowledge can, in turn, be used to provide direction for further adjustments in the information system.
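The case study that follows operationalizes this pre/post comparison with a chi-squared goodness-of-fit test on employee agreement with each factor's items. The following sketch, using placeholder counts rather than the study's data, shows the form of that comparison for a single factor: the pre-implementation agreement proportion supplies the expected distribution, the post-implementation counts the observed one, giving d.f. = 1 per factor.

from scipy.stats import chisquare

# Placeholder agree/disagree counts for one behavioral factor
# (illustrative values, not the chapter's data).
pre_agree, pre_total = 60, 100      # pre-implementation agreement
post_agree, post_total = 45, 100    # post-implementation agreement

pre_rate = pre_agree / pre_total
expected = [pre_rate * post_total, (1 - pre_rate) * post_total]
observed = [post_agree, post_total - post_agree]

chi2, p = chisquare(f_obs=observed, f_exp=expected)  # df = 2 - 1 = 1
print(f"chi2(1) = {chi2:.2f}, p = {p:.4f}")          # compare against alpha = 0.01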
Situation

A Southeastern United States recreation-oriented facility was preparing to change the process through which its employees accepted, changed, and verified customer reservations from one that was completely manual to one that was completely automated. The goal of the company was to accomplish the implementation of the new on-line, real-time reservation system without adversely affecting the current organizational climate. Due to the radical
nature of the changes, the employees’ relative unfamiliarity with computers, and the requirement for a massive amount of data entry upon start-up, the stated goal seemed a bit optimistic.
Pre-Implementation Climate

Using an instrument developed to measure Halpin's eight factors, the employees were asked to indicate their perceptions of the organization's behavioral climate prior to the introduction of the new information system. As indicated in Figure 2, the employees felt average levels of disengagement, hindrance and intimacy, and somewhat higher than average esprit. They viewed the organization as exhibiting high consideration and thrust, while having low levels of production emphasis and aloofness. After determining the organization's pre-implementation behavioral climate, and establishing the degree to which management's expectations were met, directional goals for each of the eight behavioral factors were established. Toward producing the desired results when implementing the new information system, design considerations were identified for each of the factors:
• Reduce disengagement by providing initial and on-going training, personalized man-machine-procedures interfaces, and user-oriented procedure manuals.
• Reduce hindrance by including functions such as prompting command structures for systems operation, system-generated record keeping and exception reporting, inquiry capability for displaying information and automated data editing.
• Maintain esprit by making employees aware of their importance to the system, and by incorporating self-correcting and self-regulating features in the system.
• Increase intimacy by minimizing isolating system interface activities and rotating work schedules.
• Maintain low levels of aloofness by dealing with employees face-to-face whenever possible.
• Maintain low levels of production emphasis by minimizing perceived system size and complexity, and incorporating formal feedback mechanisms.
• Maintain thrust by enhancing the employee's identification with the system and making clear the job's value to the overall success of the system.
• Maintain consideration by including error messages that are threat free, having the system perform repetitive operations, etc., and by allowing the employee to function primarily in the role of customer service rather than as simply a button pusher.
Post-Implementation Climate

Six months after implementation of the new information system, the instrument used to ascertain the organization's pre-implementation behavioral climate was applied again, this time to determine the post-implementation behavioral climate. A cursory comparison of the pre- and post-implementation levels of the eight factors did not highlight any major changes (ref. Figure 2). However, comparing the expected (pre-implementation) and observed (six-month post-implementation) levels of employee agreement/disagreement with the individual statements on the instrument using a Chi-Squared "Goodness of Fit" test with d.f. = 1, five of the factors show a statistically significant (α = 0.01) change. Table 1 indicates the results of these comparisons. As a shift in the organization's behavioral climate was indicated by changes in several of the eight organizational climate factors, it was important to gain an understanding of the reasons behind those changes. It was especially important to the organization being studied because all of the changes were in a direction away from the climate desired by the organization's management. Further study identified some reasons for the changes in the organization's behavioral climate as perceived by the employees:
• Disengagement increased because employees were too busy interfacing with new system requirements and less able to concentrate on assigned tasks.
• Esprit decreased because the new job activities were more demanding than the old ones, and because these activities were more system oriented than people oriented. The employees were not as happy about coming to work as before.
• Intimacy decreased as employees felt more isolated on the job. Concentrating on learning new job skills left less opportunity for the fulfillment of social needs. Also of interest was a decrease in conflict among employees resulting from less employee interaction.
• Aloofness increased as supervisory personnel spent less time engaging in interpersonal activities and more time on technical problems associated with the new information system.
• Production emphasis increased because employees were no longer consulted; the system "knows all." As supervisors worked less with the employees and more with the system, barriers to upward communication were constructed.
Taken individually, none of these changes seems particularly important. Viewed together, however, it becomes apparent that the employees felt they were becoming isolated from both their fellow employees and
their supervisors, with the cause of that isolation being the new information system. If allowed to continue without being adequately addressed, the problems would almost certainly grow and adversely affect the effectiveness of the new information system. Follow-up analyses of the training procedures, operations manuals, workplace design, system/employee interaction, etc., were accomplished to develop system design change proposals specifically directed toward alleviating the problems previously identified. Proposed system design changes that were not dependent upon software modifications were implemented as quickly as practical, beginning shortly after the results of the six-month post-implementation behavioral climate evaluation were presented to management. Those design change proposals that required modification of the information system software were incorporated into the system functionality enhancement change proposals provided to the software developer for inclusion in the next system upgrade.
Approximately six months after the original software and non-software system design change proposals had been implemented, and two years after the original implementation of the information system, a third application of the instrument was accomplished to ascertain employee perceptions of the organization's behavioral climate at that point in time. When viewed from an "average" employee perspective (ref. Figure 3), five of the eight measured factors moved some distance one way or the other. The perceived levels of disengagement and hindrance were reduced and esprit increased, all changes that were desired by the organization's management. However, the employees' perceived level of organizational aloofness continued to increase despite efforts by supervisory personnel to spend more time on interpersonal activities and less time working with the new information system. Also, organizational thrust decreased, indicating that the employees' identification with the system and understanding of their work's contribution to the overall success of the system needed to be improved.
Although it is useful to evaluate changes in the organization's behavioral climate from an "average" employee perspective, additional information can be gained by analyzing the results in terms of employee agreement/disagreement with the instrument's statements. Analyzed using a Chi-Squared "Goodness of Fit" test with d.f. = 1, six of the factors show a statistically significant difference (α = 0.01) between expected (six-month post-implementation) and observed (two-year post-implementation) levels (ref. Table 2). It is interesting to note that the average perceived level of intimacy in the organization increased only a relatively small amount and the level of production emphasis was essentially unchanged. However, the statistical comparison highlights significant increases in the percentage of employees who perceived positive levels of both organizational intimacy and production emphasis compared to their perceptions six months after implementation of the new information system. On the other hand, a relatively large decrease in the average perceived level of hindrance in the organization is not reflected in the statistical analysis of the employees' positive vs. negative perceptions.
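The agreement/disagreement comparison described above is a standard chi-squared goodness-of-fit test, which can be reproduced as follows. The counts below are invented placeholders for a single instrument statement, not the study's data; only the d.f. = 1 setup and the α = 0.01 criterion come from the chapter.

    from scipy.stats import chisquare

    observed = [62, 18]              # two-year counts: agree, disagree (hypothetical)
    expected_props = [0.55, 0.45]    # six-month agree/disagree split (hypothetical)
    n = sum(observed)
    expected = [p * n for p in expected_props]

    stat, p = chisquare(f_obs=observed, f_exp=expected)   # d.f. = 2 - 1 = 1
    print(f"chi-squared = {stat:.2f}, p = {p:.4f}, significant at 0.01: {p < 0.01}")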
CONCLUSIONS

Many management information systems development efforts have been unsuccessful because of the lack of acceptance of the system by the organization's management and/or employees, often because the information systems designers failed to understand the organization and its behavioral climate. Incorporating organizational behavior goals and objectives into the design of new information systems and modifications to existing systems will almost certainly enhance the likelihood of the acceptance and long-term survival of those systems. This paper has demonstrated that the information systems designer can measure the behavioral climate of an organization. By measuring the behavioral climate prior to designing a new information system or modifications to an existing system, identifying management behavioral goals, and then using that knowledge to guide the information systems design process, the resultant system can be made to impact the organization's behavioral climate in the desired direction. It has also been shown that the post-implementation behavioral climate can be measured, and the results can be used to make subsequent enhancements to the system calculated to again move the organization's behavioral climate in the desired direction. Incorporation of behavioral objectives into the information systems design process will inevitably lead to the development of information systems that are tailored to the organization. Such information systems will not only be able to accomplish the necessary functional (data processing) activities, but will also be viewed as integral parts of the organization.
Chapter XX
A New Approach to Evaluating Business Ethics: An Artificial Neural Networks Application

Mo Adam Mahmood and Gary L. Sullivan, University of Texas at El Paso, USA
Ray-Lin Tung, Taiwan
Stimulated by recent high-profile incidents, concerns about business ethics have increased over the last decade. In response, research has focused on developing theoretical and empirical frameworks to understand ethical decision making. So far, empirical studies have used traditional quantitative tools, such as regression or multiple discriminant analysis (MDA), in ethics research. More advanced tools are needed. In this exploratory research, a new approach to classifying, categorizing and analyzing ethical decision situations is presented. A comparative performance analysis of artificial neural networks, MDA and chance showed that artificial neural networks predict better in both training and testing phases. While some limitations of this approach were noted, in the field of business ethics, such networks are promising as an alternative to traditional analytic tools like MDA.
INTRODUCTION

Stimulated by the proliferation of incidents such as tax evasion, defense contractor scandals, insider trading, golden parachutes, executive salaries and bonuses and the savings and loan fiasco, concerns about business ethics have increased significantly over the last decade. Consequently, practitioners and academics are showing increased interest in ethical issues in business. Businesses are updating codes of ethics. Academics are authoring an increasing number of research articles and books. Research studies have focused on developing theoretical and empirical foundations for understanding the ethics of decision making. Empirical studies have used traditional quantitative analytic tools such as multiple regression and multiple discriminant analysis to investigate ethical issues. The present research considers a new procedure, artificial neural networks (ANNs), to analyze ethical decision data. It investigates whether ANNs can outperform discriminant analysis in understanding ethical dilemmas. This comparative test uses ethical judgment data obtained from college students. Using ANNs and discriminant analysis, relationships between these judgments and attitudinal variables are assessed. Several studies of ethical decision making are summarized next. A short presentation of the ANNs and discriminant analysis used in analyzing students' ethical perceptions follows. Then, the results of the empirical test are presented along with a discussion of implications. Concluding remarks, including suggestions for future research, complete the paper.
LITERATURE REVIEW

As stated earlier, both public and scholarly interest in business ethics has increased significantly over the past decade (Vogel 1991). In the next few paragraphs, some recent empirical work in business ethics is reviewed. Empirical work is emphasized in this study because of its centrality to the present research. This review of empirical studies focuses on business students' and practitioners' judgments regarding ethical issues. For example, DePaulo (1987) examined students' perceptions of the incorrectness of sellers' deceptive bargaining tactics. Interestingly, deception was judged more harshly when practiced by sellers than by buyers. Claypool, Fetyko and Pearson (1990) compared the responses of CPAs and theologians to ethical dilemmas. When faced with potential ethical dilemmas, both groups indicated that the concepts of "confidentiality" and "independence" were more consequential than "seriousness of breach" and "recipient of responsibility."
Stanga and Turpen (1991) investigated judgments of male and female accounting students on ethical situations relevant to accounting practice. They found no significant gender differences in ethical judgments. Using a nationwide sample of small business employees, Serwinek (1992) investigated the effects of demographic variables such as age, gender, marital status, education, dependent children status, region of the country and years in business on ethical perception. Age was found to be the most significant factor in predicting ethical perception. Premeaux and Mondy (1993) used marketing managers to investigate the link between ethics and management behavior. The authors established that, even with recent heightened concerns for ethical issues in business, this link has not changed much since the mid-1980s. When making business decisions, practitioners still depend on utilitarian ethical philosophy. Galbraith and Stephenson (1993) studied business policy students to investigate whether males and females use different decision rules when making ethical value judgments. The authors found that there are situations where the genders use different decision rules and situations where they use the same rules.
In the past, empirical ethics research studies used traditional statistical tools such as discriminant analysis, factor analysis, cluster analysis, regression analysis and Linear Structural Relations (LISREL) analysis. For example, Cohen, Pant and Sharp (1993) used factor analysis and regression analysis to validate a multidimensional ethics scale proposed by Reidenbach and Robin (1990). Souter, McNeil and Molster (1994) used discriminant analysis and cluster analysis to examine ethical conflict experienced by employees in Western Australia. Akaah and Lund (1994) used a LISREL model to investigate the influence of personal and organizational values on marketing professionals' ethical behavior.
The present research begins the analysis and discussion of ANNs' applicability in the field of business ethics. Even though prior research in the area has addressed ethical issues concerning AI (Khalil 1993; Dejoie, Fowler and Paradice 1991; Mason 1986), no study has yet used ANNs to study ethical decisions. Additional considerations favor ANNs over traditional tools like discriminant analysis:
1) ANNs improve by using one data element at a time (increasing learning potential), while discriminant analysis considers all training data simultaneously.
2) ANNs do not require a priori model specification (increasing the potential for serendipity in exploratory studies), while discriminant analysis needs an a priori model.
3) Lippman (1987) has suggested that ANNs may be more robust than alternative models.
Because ANNs are untested in ethics research, it is difficult to predict their performance compared to more established methodologies. Because ANNs appear to have great, if untested, potential for ethical inquiry, ANN was pitted
against multiple discriminant analysis (MDA) to compare predictive potential. Relationships between students’ ethical judgments and their attitudes were investigated. The context of this research is timely on content grounds because “researchers within the field of business ethics are quite interested in business students’ and practitioners’ ethical judgments regarding ethical issues within business” (Barnett, Bass and Brown 1994, p. 470). The methodological dimension of the present research also contributes by providing procedures to design, train, and test ANNs for ethical decision making. Future researchers interested in using ANNs in ethical studies should find these helpful.
RESEARCH METHODS

Artificial Neural Networks

Artificial neural networks (ANNs) are information processing tools. An ANN consists of processing elements called neurons (also known as neurodes). Neurons communicate with each other through weighted paths called connections. Each connection can be characterized as excitatory or inhibitory (Zahedi 1993). An excitatory connection from a neuron increases the strength of a receiving neuron, while an inhibitory connection reduces its strength. The network learns connection weights through a training process in which cases from a training set are repeatedly fed to the network. Neurons in an ANN are organized in layers. The first layer is called the input layer; the last layer is known as the output layer. The middle layer is designated as the hidden layer; there can be more than one hidden layer, although the most common type of neural network consists of one hidden layer. Through the repetitious feeding of training examples, the input layer neurons receive facts about a decision problem or an opportunity. These neurons, as a result, become energized and send outputs to neurons in the hidden layer(s). The hidden layer(s)' neurons facilitate generalization on the part of the network by transforming externally derived input facts into higher level features. These results are then sent to neurons in the output layer. The output layer neurons, in turn, communicate system results to the user. Each neuron has an activation level, which refers to the strength of the neuron. It is derived by a linear or nonlinear function associated with the neuron that combines the neuron's weighted inputs into a single output. The design of an artificial neural network for a given problem consists of deciding on (Zahedi 1993): a network topology (i.e., number of neuron layers as well as interlayer and intralayer connections), an activation function
for the nodes to convert inputs to outputs, and a training process to allow the network to learn from the training examples fed to it. The network designed for the present research is a fully connected feedforward backpropagation network with three layers: an input layer, an output layer and a hidden layer. deVilliers and Bernard (1992) investigated the classification and training performance of fully connected feedforward backpropagation artificial neural networks with one and two hidden layers. Their conclusion was that artificial neural networks with two hidden layers do not perform better than artificial neural networks with one hidden layer. Cybenko (1989) and Funahashi (1989) also found that one hidden layer is enough to approximate any continuous function. In the network designed for the present research, each layer is fully connected to the succeeding layer and there are no intralayer connections. The fully connected configuration means that each neuron on each layer is connected to every neuron on the succeeding layer. The present network also uses feedforward connections, in which the neurons on each layer send their output to neurons on the succeeding layer. Thus, data flows are all in one direction and there are no feedback loops from a neuron or layer to a previous one. The artificial neural network designed for the present research uses the backpropagation learning algorithm (Rumelhart, Hinton and Williams 1986). Backpropagation networks with feedforward connections have been identified as highly efficient categorizers (Waibel et al. 1989; Barnard and Casasent 1989; Burr 1988). Though the algorithm does not ensure an optimal solution, the solutions it generates are found to be close to optimal (Rumelhart, Hinton and Williams 1986). Backpropagation is a gradient-descent-based algorithm which uses a training sample to arrive at an optimal output. During the training phase, the network is provided with the input and the desired output, which is compared against the actual output generated by the network. If any difference exists between the actual and desired output, the network adjusts the relevant connection weights to reduce this difference and, in the process, uncovers the pattern underlying the relationship between the input and the desired output (Kirrane 1990). The artificial neural network designed for the present research uses the generalized delta rule to adjust connection weights because, for most backpropagation networks, it is the "common choice" (Using NWorks 1991). Another issue in building an artificial neural network is the type of activation function used in generating the output of a neuron. The network designed for the present research uses the sigmoid activation function, which converts each neuron's combined input into its output; its derivative is also used when adjusting the connection weights. It is common to use the sigmoid transfer function with the backpropagation learning algorithm (Using NWorks 1991).
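For readers who want to see the mechanics, the kind of network just described (one hidden layer, fully connected feedforward, sigmoid activations, and the generalized delta rule with a momentum term) can be sketched in NumPy as below. The layer sizes and parameter values mirror the SQ1 configuration reported later (85 inputs, 6 hidden and 6 output nodes, learning coefficient 0.5, momentum 0.4); the code is an illustration, not the NeuralWorks package the authors actually used.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    class BackpropNet:
        def __init__(self, n_in=85, n_hidden=6, n_out=6, lr=0.5, momentum=0.4, seed=0):
            rng = np.random.default_rng(seed)
            # Training begins with small arbitrary (random) starting weights.
            self.W1 = rng.uniform(-0.5, 0.5, (n_in, n_hidden))
            self.W2 = rng.uniform(-0.5, 0.5, (n_hidden, n_out))
            self.lr, self.momentum = lr, momentum
            self.dW1_prev = np.zeros_like(self.W1)
            self.dW2_prev = np.zeros_like(self.W2)

        def forward(self, x):
            # Feedforward: each layer sends its output only to the next layer.
            self.h = sigmoid(x @ self.W1)
            self.o = sigmoid(self.h @ self.W2)
            return self.o

        def train_case(self, x, target):
            o = self.forward(x)
            # Output-layer error term; o*(1-o) is the sigmoid derivative.
            d_out = (target - o) * o * (1.0 - o)
            # Error propagated backward to the hidden layer.
            d_hid = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
            # Generalized delta rule: gradient step plus a momentum fraction
            # of the previous weight change.
            dW2 = self.lr * np.outer(self.h, d_out) + self.momentum * self.dW2_prev
            dW1 = self.lr * np.outer(x, d_hid) + self.momentum * self.dW1_prev
            self.W2 += dW2
            self.W1 += dW1
            self.dW2_prev, self.dW1_prev = dW2, dW1
            return float(np.sum((target - o) ** 2))   # squared error for this case

Repeatedly feeding the training cases through train_case (one pass over the set being an epoch, in the chapter's terms) drives the weights toward values that reduce the difference between actual and desired outputs.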
The software package NeuralWorks Professional II/PLUS, developed by NeuralWare, Inc. and running on a 486 IBM-compatible personal computer, was used for designing, training and testing the artificial neural networks used in this study. This particular package was used because it provides a complete artificial neural network development and deployment environment (Using NWorks 1991, p. 7). The system is especially appealing because it allows a fast and easy way to create a backpropagation network and to modify, during the training phase, the learning parameters which may influence the network's learning time and performance (Reference Guide to NeuralWorks Professional II/Plus 1991). Once trained, the system allows the user to view connection weights and outputs related to the network.
Instrument

Because this research is exploratory and focuses on the application of a new analytic tool, with theory development a secondary consideration, no reliability or validity tests were conducted on the instrument, which measured students' attitudes and reactions to ethical situations. There were three parts to the questionnaire. Part I of the instrument consisted of six demographic questions (see Appendix A). Three open-ended questions assessed age, major and high school location. Three closed-ended items sought information on gender, student status and work experience. Part II of the instrument presented respondents with four situations (SQ1, SQ2, SQ3 and SQ4) faced by consumers which may have ethical content. These situations were developed following a review of relevant literature. The intent was not to include all ethical situations faced by consumers but to get a representative sample of such situations. Clearly, it would be impossible to cover the complete range of ethical situations faced by consumers within a single research instrument. Part III asked participants six attitudinal questions (SQ5, which has 15 subquestions, and SQ6, SQ7, SQ8, SQ9 and SQ10). These questions sought insight into students' attitudes toward different groups of people, businesses and work environments.
The Data Set

One hundred sixty responses to a survey investigating the ethical perceptions of students attending a western U.S. college were gathered. Participants were asked to evaluate the ethics of a variety of groups and to forecast their own behavior in various ethical situations.
Four ethical situations (depicted by SQ1, SQ2, SQ3 and SQ4) were analyzed. Responses to SQ5, SQ6, SQ7, SQ8, SQ9 and SQ10 were used as inputs to the MDAs and ANNs designed for the present research. Responses to SQ1, SQ2, SQ3 and SQ4 served as outputs for the ANNs and MDAs. The objective was to investigate the possible relationship between students' perceptions of an ethics-related situation and their attitudes toward different people, businesses and work environments.
Before it can be used in MDA or to design, train and test an ANN, a data set must be carefully evaluated. Generally, good data contains a representative sample with sufficient examples and known inputs and outputs for each example. In other words, for the network to comprehend an ethical decision situation, the data set must contain sufficient examples in each of its subsamples (Lawrence and Andriola 1992). Following this general rule, categories with 10 or fewer observations were eliminated. This left 6, 5, 3 and 5 subcategories under SQ1, SQ2, SQ3 and SQ4, respectively.
ANNs create their unique classifying rules by training on part of the available data. These rules are then tested on the rest of the data. A similar practice is followed for MDAs: a part of the available data set is used to create initial MDA models, and following model estimation on this data, the models are tested using the remaining data. To facilitate this process, the data set under each of the ethical situations (SQ1, SQ2, SQ3 and SQ4) was randomly divided into two equal-size groups, a training group and a testing group.
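The category filtering and random half-and-half split just described can be sketched as follows. The pandas DataFrame layout and the column name are assumptions made for illustration; the chapter does not describe the authors' actual data handling code.

    import pandas as pd

    def prepare(df: pd.DataFrame, outcome: str, seed: int = 0):
        # Drop response categories with 10 or fewer observations.
        counts = df[outcome].value_counts()
        usable = df[df[outcome].isin(counts[counts > 10].index)]
        # Randomly divide the remaining cases into equal-size
        # training and testing groups.
        train = usable.sample(frac=0.5, random_state=seed)
        test = usable.drop(train.index)
        return train, test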
Training the Artificial Neural Networks

In this research, a set of backpropagation ANNs with feedforward connections was developed. The inputs to the ANNs are business students' attitudes toward a variety of people, businesses, and work environments, and the outputs from the ANNs are their judgments regarding ethical issues within business. More specifically, using the aforementioned software, an ANN was designed for each set consisting of responses to SQ5, SQ6, SQ7, SQ8, SQ9 and SQ10 as inputs and responses to SQ1, SQ2, SQ3 or SQ4 as outputs. Supervised learning (in which the network was told the correct answer) was conducted to train the networks. Because the total number of subcategories for SQ5 to SQ10 was 85 (see Appendix B), the number of nodes in the input layer, for all networks, was specified as 85. And because the subcategories under SQ1 through SQ4 were 6, 5, 3 and 5, the numbers of nodes in the output layer were 6, 5, 3 and 5 for SQ1 through SQ4, respectively. Unfortunately, there is no definitive method (since artificial neural networks are still under development) to determine the number of nodes in the
hidden layer. This is done by trial and error. The present research used the following procedure. For each network, the number of output nodes was selected as the initial number of hidden nodes. One node was added in each of the next eleven steps. Because each of SQ1 to SQ4 has a different number of output nodes, the ranges attempted varied across the data sets. The numbers tried were:

Set    Nodes in Hidden Layer
SQ1    6, 7, ..., 17
SQ2    5, 6, ..., 16
SQ3    3, 4, ..., 14
SQ4    5, 6, ..., 16
The objective was to select, for each set, the network with the highest accuracy rate. Multiple networks (having different numbers of hidden nodes) could tie at the highest accuracy rate; if so, the best network would be the one with the fewest nodes, as it takes less time to train and is less sensitive to the available degrees of freedom. Further attempts were made to improve the results obtained through the networks. For example, the networks having the best performances in the previous step were selected and the number of hidden nodes was then doubled at each step (subject to the maximum imposed by the number of nodes in the input layer) until classification error was no longer reduced (Marquez et al. 1991). This produced no further improvement in results. Another approach, suggested by Salchenberger, Cinar and Lash (1992), was also tried. They suggested that the number of nodes in the hidden layer should be 75% of the number of nodes in the input layer. This formula resulted in 64 as the theoretical number of hidden nodes for all four ethical decision artificial neural networks (recall that the number of nodes in the input layer was 85). Using 64 hidden nodes failed to improve results. Next, a sequence of refinements on the previous results was attempted by varying the momentum, learning coefficient, epoch, and threshold. The effectiveness and convergence of the backpropagation learning algorithm depend on the values of these parameters. The momentum and learning coefficients were used to accelerate the convergence of the backpropagation learning algorithm. The momentum term represents the proportion of the previous weight change that is carried forward into the current adjustment. Typically, the momentum term is chosen between 0.1 and 0.8 (Zurada 1992). The learning coefficient scales the weight adjustment attributed to the current classification error and thus governs the rate of progress
towards successful convergence of the backpropagation learning algorithm. There is no single learning coefficient value suitable for all training cases (Zurada 1992); this value depends on the problem being solved. Default values for the momentum and learning coefficient are given by NeuralWorks Professional II/PLUS and were used as the starting point: the learning coefficient and momentum were set at .5 and .4, respectively. Both the learning coefficient and momentum were adjusted upward and downward during the training phase in an effort to improve performance. These adjustments introduced no improvements and, in fact, impaired the networks' performance. Threshold refers to a convergence threshold which, when reached, is used to stop learning. Epoch, on the other hand, is the number of sets of training data (learning cycles) presented to the network between weight updates. Training begins with arbitrary values for the weights (they might be random numbers) and proceeds iteratively. Each iteration is called an epoch. In each epoch, the network adjusts the weights in the direction that reduces error (the difference between the current outputs and the target outputs). As the iterative process of incremental adjustments continues, the weights gradually converge on the optimal set of values. Usually, many epochs are required before training is complete (Smith 1993). Default values of threshold and epoch are also given by NeuralWorks Professional II/PLUS and were used as the starting point: threshold and epoch were set at .05 and 16, respectively. Both threshold and epoch were adjusted upward and downward during training to improve the performance of the network. These adjustments produced a slight improvement in classification rates, ranging from 1.3 percent to 2.9 percent. The final network configurations are presented in Table 1.
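The trial-and-error search for hidden-layer size can be expressed compactly. The sketch below uses scikit-learn's MLPClassifier as a modern stand-in for the NeuralWorks networks (an assumption, since the original software is long obsolete), with the chapter's starting learning coefficient and momentum, and with ties at the best accuracy broken in favor of fewer hidden nodes, as the chapter recommends.

    from sklearn.neural_network import MLPClassifier

    def search_hidden_sizes(X_train, y_train, X_test, y_test, n_out):
        best = None  # (accuracy, hidden_size, fitted model)
        for h in range(n_out, n_out + 12):      # e.g., 6, 7, ..., 17 for SQ1
            net = MLPClassifier(hidden_layer_sizes=(h,),
                                activation="logistic",    # sigmoid units
                                solver="sgd",
                                learning_rate_init=0.5,   # chapter's learning coefficient
                                momentum=0.4,             # chapter's momentum
                                max_iter=2000,
                                random_state=0)
            net.fit(X_train, y_train)
            acc = net.score(X_test, y_test)
            # Strictly better accuracy wins; a tie keeps the smaller network.
            if best is None or acc > best[0]:
                best = (acc, h, net)
        return best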
EMPIRICAL RESULTS

Table 2 presents the comparative performance of the ANNs on the training data and the test data. These results indicate that the ANNs had 99.32 percent mean predictive accuracy for the training data set versus 48.48 percent for the test data. Table 4 presents the results of the MDAs. These results show that the MDAs had 62.44 percent mean predictive accuracy for the training data set compared to 32.35 percent for the test data. Although the accuracy rate for the ANNs fell sharply under test conditions, the ANNs' accuracy was significantly higher than chance levels (see Table 3).
Table 2: Actual performance of ANN

                  SQ1      SQ2      SQ3      SQ4      Mean
Training Phase    98.67    98.61    100      100      99.32%
Testing Phase     36.0     41.7     70.6     45.6     48.48%
Table 3: Chance performance of ANN

                      SQ1      SQ2      SQ3      SQ4      Mean
Chance Probability    16.67    20.00    33.33    20.00    22.50
Table 4: Actual performance of MDA

                  SQ1      SQ2      SQ3      SQ4      Mean
Training Phase    57.33    58.33    72.06    62.03    62.44%
Testing Phase     29.33    26.39    38.24    35.44    32.35%
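The three-way comparison these tables summarize (ANN, MDA, and the chance criterion) can be reproduced in outline as follows. LinearDiscriminantAnalysis stands in for the authors' MDA and MLPClassifier for their NeuralWorks ANN; both substitutions are assumptions, and the function illustrates the procedure rather than reproducing the original runs.

    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.neural_network import MLPClassifier

    def compare(X_train, y_train, X_test, y_test, n_categories):
        ann = MLPClassifier(hidden_layer_sizes=(6,), activation="logistic",
                            solver="sgd", learning_rate_init=0.5,
                            momentum=0.4, max_iter=2000, random_state=0)
        mda = LinearDiscriminantAnalysis()
        ann.fit(X_train, y_train)
        mda.fit(X_train, y_train)
        return {
            "ANN training": ann.score(X_train, y_train),
            "ANN testing":  ann.score(X_test, y_test),
            "MDA training": mda.score(X_train, y_train),
            "MDA testing":  mda.score(X_test, y_test),
            "Chance":       1.0 / n_categories,   # e.g., 1/6 = 16.67% for SQ1
        }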
DISCUSSION

The present exploratory research suggests that results obtained through artificial neural networks (ANNs) are superior to both the chance criterion and the multiple discriminant analysis (MDA) approach in both the training phase and the testing phase of model development. More specifically, in predicting students' judgments regarding ethical issues within business, ANNs outperformed MDAs for all four data sets. In the training phase, ANNs exceeded MDA's mean predictive accuracy by about 37 percentage points. In the testing phase, ANNs topped MDA's mean predictive accuracy by over 16 percentage points. More importantly, ANNs outscored the chance probability by about 77 percentage points in the training phase and about 26 points in the testing phase of model development. As stated earlier, the mean accuracy rate on the test data for the ANNs is 49 percent of the level attained using the training data (the MDAs' decline was comparable). There are many possible causes for this decline in predictive power. First, the 80 training cases and 80 testing cases (with 85 network input nodes) may not adequately represent the sampling population.
Table 1: Final configurations and results of ANN models

       Input PEs   Hidden PEs   Output PEs   Epoch   Threshold   Momentum   Learning Coefficient
SQ1    85          6            6            16      0.02        0.4        0.5
SQ2    85          5            5            32      0.05        0.4        0.5
SQ3    85          9            3            16      0.02        0.4        0.5
SQ4    85          9            5            16      0.05        0.4        0.5
Lawrence and Andriola (1992) stated that the data must include examples of sufficient variety for a network to generalize. Second, in contrast to other ANN applications (e.g., banking, bankruptcy and stock prediction), ethical perception is subjective. The subject matter may have introduced contradictions and ambiguities in the data which lessened the networks' predictive ability. Third, the degrees of freedom were highly constrained by the quantity of available cases.
While the findings of the present research are preliminary, it appears that ANNs may be more helpful in developing accurate predictive models in business ethics than traditional analytic tools like MDA. The investigators believe that the ANN methodology holds promise as a tool for empirical research in the field of business ethics. More specifically, ANNs have important research advantages, including the ability to adjust rapidly to reflect changes in the real world. And, unlike MDA, networks do not require that data be normally distributed or have equal variance.
Unfortunately, current applications of ANN technology have several limitations. An inherent problem of ANNs is that there are no formal guidelines for selecting a network configuration for a given prediction task. Without guidelines, the selection of structures and parameters will continue as a trial-and-error process. It is hoped that more systematic guidelines will be developed soon. Explanatory capability is another problem associated with ANNs. Once trained, ANNs are essentially "black boxes," even though there are methods in the literature which try to extract rules from a network. The internal structure of the black box makes it difficult to trace the steps by which the output is reached. The information is contained in the network's nodes, links and weights, which cannot be translated into an algorithm that would be intelligible or useful outside the ANN. Thus, there is no way to verify whether the network's structure
conforms with theorized concepts and causal relationships. The only way to evaluate the network for consistency and reliability is to monitor its output. This may not be a serious disadvantage when the concern is mainly prediction, as in the present ethical perception study. But if the characteristics of each group and the significance of each input are a concern, the present technology may not be adequate. Finally, networks require advanced hardware and software, and training an ANN demands more computation time than other classification methods.
CONCLUSIONS AND RECOMMENDATIONS FOR FUTURE RESEARCH

In this exploratory research, we have presented a new approach to classifying, categorizing and analyzing ethical decision situations. A comparative performance analysis of ANNs versus MDA and chance indicated that artificial neural networks are better predictors in both training and testing phases. While some limitations of this approach were noted, in the field of business ethics, these networks possess considerable potential as an alternative to traditional analytic tools like MDA. More empirical research studies must be conducted before the full potential of ANNs can be established in the ethics field. Future research should utilize carefully designed and validated instruments for data collection and larger samples for more stable results. Techniques such as factor analysis and stepwise multiple discriminant analysis should be used to purify input data (to reduce noise) prior to using ANNs to analyze ethical decision making. Later studies should seek input from practitioners as well as students to broaden understanding of issues in business ethics.
APPENDIX

Graduate Research Student Ethics Survey Questions

Your Age: ___
Sex (check one): M ___  F ___
Major: ___
Student status (check one): Grad ___  Undergrad ___
Where did you graduate from high school (state or country)?
How would you describe the majority of your work experience?
A. Part-time  B. Full-time  C. No work experience

Please answer the following questions honestly. Circle the choice that describes what YOU WOULD DO in the situation, not what you think is right to do:

SQ 1. You go to a university that has an arrangement allowing students to buy IBM software at a 35% discount provided they sign a written certification that it is for personal use. A good friend asks you to buy some software for him at a discount. He will pay you. He says he needs the software but that he cannot afford to pay the full price ($295). He says if he can't get it at a discount he will find someone who has the software and make an illegal copy. You know that many students buy software for friends and relatives and you assume the school knows this. You would:
a. Buy it for him. It is a common practice and if IBM or the school wanted to stop it they would have developed better safeguards.
b. Buy it for him. Since you would have bought the software for yourself, neither the school nor IBM loses anything.
c. Buy it only if you believe your friend really needs it and can't afford to pay the full price, though you think it is wrong to say it is for your personal use when it isn't.
d. Buy it though you think it is wrong to do so, since it would be worse if your friend made an illegal copy; at least this way IBM gets something.
e. Make up an excuse as to why you can't buy it for him; for example, tell him that there are additional requirements about the software relating to a current course.
f. Tell him that you won't do it but try to help him get an illegal copy from someone who has the software.
g. Tell him politely that you won't do it because using the discount in this way is wrong.
h. Tell him politely but firmly that you won't lie for him and that you think it would be wrong to make an illegal copy.

SQ 2. Your car is rear-ended by another car, damaging your rear bumper. The other driver is insured. When you go to a body shop for an estimate, the estimator suggests that he can also fix a rear fender dent that you had before the accident. He says that you can claim that the damage was caused by the recent collision. Otherwise, fixing the fender will cost $375. He assures you that he has done it many times before and that you will have no trouble with the insurance company. You would:
a. Include the fender in your claim if you think your insurance rates are too high and you have not had any previous claims.
b. Include the fender in your claim since the estimator suggested it and it is apparently an expected and common practice.
c. Include the fender in your claim because it is an unexpected opportunity and you could not otherwise afford to fix the fender, though you think it is wrong to do so.
d. Politely decline the opportunity and only put in a claim for the actual damage because no matter what he says you could get caught and be accused of fraud.
e. Politely decline the opportunity and only put in a claim for the actual damage because it would be wrong to file a false claim.
f. Firmly decline the suggestion and tell him that you think it is wrong.
g. Firmly decline the estimator's suggestion and report him to the insurance company.
SQ 3. You work for a large toy manufacturer. Two months before Christmas you discover that your company's best-selling toy has a defect, making it potentially dangerous to children. Your boss says the risk of injury is small and that a recall is out of the question. You disagree. He adds that your job could be in jeopardy if you pay further attention to the situation. What would you do?
a. Ignore the situation and hope for the best.
b. Write a memo outlining your concern to your boss and your boss's superior, suggesting that the toy be recalled.
c. Quit your job.
d. Make a confidential call to the federal Consumer Products Safety Commission (CPSC) and tell them what is going on.
e. Make a confidential call to your local newspaper and tell them what is going on.
f. Ignore the situation for now, but work toward change in the future.
SQ 4. You work in a foreign office of an American manufacturer of heavy equipment. One of your goals is to get a substantial contract from the local government. The competition is fierce and to get the contract you've been told to pay off several foreign officials. What would you do?
a. Make the payoffs.
b. Refuse to participate.
c. Quit and get another job.
d. Ask to be transferred to the United States.
e. Other.
SQ 5. How do you rate the overall honesty and ethics of the following groups:
5 = Excellent  4 = Very Good  3 = Good  2 = Bad  1 = Very Bad
(CIRCLE ONE for each group)
1. Elected public officials          1 2 3 4 5
2. Successful business executives    1 2 3 4 5
3. Journalists                       1 2 3 4 5
4. Judges                            1 2 3 4 5
5. Lawyers                           1 2 3 4 5
6. Police Officers                   1 2 3 4 5
7. Famous Athletes                   1 2 3 4 5
8. Famous Musicians                  1 2 3 4 5
9. Teachers at Your College          1 2 3 4 5
10. Your Parents                     1 2 3 4 5
11. Students at Your College         1 2 3 4 5
12. Your Friends                     1 2 3 4 5
13. People over 30                   1 2 3 4 5
14. People under 30                  1 2 3 4 5
15. Yourself                         1 2 3 4 5
SQ 6. Would you invest in a company with substantial holdings in South Africa?
1. Yes  2. No
SQ 7. If you were employed by a company that produced/manufactured goods that were harmful to human beings, would you quit?
1. Yes  2. No
SQ 8. Is it okay to put your personal mail (1 or 2 pieces) in the company mail basket?
1. Yes  2. No
SQ 9. Is it okay to have a company pay for your education knowing that you don't plan on staying with them?
1. Yes  2. No
SQ 10. Would you hire a friend of yours, who was qualified for a job you had open, even though there are applicants who are more qualified?
1. Yes  2. No
Comprehensive Bibliography

The contents of this bibliography by no means represent an exhaustive list of references in the end user computing area. They do, however, provide a cross-section of some of the advanced research studies conducted in the area, in spite of the fact that most of the research studies listed in the bibliography came from the chapters included in this book. It is believed that these references will be immensely helpful to researchers who are interested in conducting research in the end user computing area. We also believe that they will be helpful to end user managers who are interested in using end user computing technology to help end users accept and understand information technology better.
Aaen, I. (1986). Systems Development and Theory—In Search of Identification. Quality of Work Versus Quality of Information Systems, Report of the Ninth Scandinavian Research Seminar on Systemeering, Lunds Universitet, 203-223.
Aaker, David A. (1981). Multivariate Analysis in Marketing. Palo Alto, CA: Scientific Press.
Adams, D. A., Nelson, R. R., & Todd, P. A. (1992). Perceived usefulness, ease of use, and usage of information technology: A replication. MIS Quarterly, 227-247.
Addo, T.B.A. (1989). Development of a valid and robust metric for measuring question complexity in computer graphics experimentation. Unpublished doctoral dissertation, Indiana University.
Aggarwal, A.K. (1998). End user training - revisited. Journal of End User Computing, 10(3), 32-33.
Ahituv, N. & Neumann, S. (1982). Principles of Information Systems for Management. Dubuque, IA: W.C. Brown Company.
Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. Englewood Cliffs, NJ: Prentice-Hall.
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179-211.
Al-Jabri, I.M., & Al-Khaldi, M.A. (1996). Effects of end user characteristics on computer attitude among undergraduate business students. Journal of End User Computing, 9(2), 16-22.
Alavi, M., and J. C. Henderson (1981). "Evolutionary Strategy for Implementing a Decision Support System," Management Science, 27(11), 1309-1323.
Allbritton, C. (1999). Scary Name, No Big Deal, ABCNEWS.com, April 26, http://www.abcnews.go.com/sectiona/tech/DailyNews/virus990426.html.
Allwood, C. M. (1990). "Learning and Using Text-Editors and other Application Programs," In Falzon, P. (Ed.), Cognitive Ergonomics: Understanding, Learning and Designing Human-Computer Interaction, Academic Press, New York, 85-105.
Alter, S. L. (1971). "A Taxonomy of Decision Support Systems," Sloan Management Review, 19(1), 39-56.
Amoroso, D.L. (1992). Using end user characteristics to facilitate effective management of end user computing. Journal of End User Computing, 4(4), 5-15.
Amoroso, D.L., & Cheney, P.H. (1991). Testing a causal model of end user application effectiveness. Journal of Management Information Systems, 8(1), 63-89.
Anderson, T. W. (1958). An Introduction to Multivariate Statistical Analysis. New York: John Wiley and Sons.
Anderson, E. A., Bergeron, D., & Crouse, B. J. (1994). Recruitment of family physicians in rural practice. Minnesota Medicine, 77, 29-32.
Anderson, J. (1997). Clearing the way for physicians' use of clinical information systems. Communications of the ACM, 40(8), 83-90.
Anshel, Jeffrey. (1997). Computer Vision Syndrome: Causes and Cures. Managing Office Technology, (July), 42(7), 17-19.
Arinze, B. (1991). "A Contingency Model of DSS Development Methodology," Journal of Management Information Systems, 8(1), 149-166.
Armstrong, J.S. & Overton, T.S. (1977). Estimating non-response bias in mail surveys. Journal of Marketing Research, 14, 396-402.
Atkinson, C., & Peel, V. (1998). Transforming a Hospital by Growing not Building an Electronic Patient Record. Methods of Information in Medicine, 37, 285-293.
Atkinson, C.J. and Peel, V. (1997). Transforming a hospital through growing, not building, an Electronic Patient Record system, Pre-publication paper, Health Services Management Unit, University of Manchester, Manchester.
Atkinson, C.J. (1992). The information function in health care organisations, Health Services Management Unit, University of Manchester, Manchester.
Atteberry, J.W. & Doke, E.R. (1982). A model for evaluating information system performance. Proceedings of the 14th Annual AIDS Meeting.
Audit Commission. (1997). A Study of Information Management in Community Trusts. London: Audit Commission for Local Authorities and the National Health Service in England and Wales.
Avgerou, C. (2000). IT and organizational change: an institutionalist perspective. Information Technology and People, 13(4), 234-262.
Avgerou, C. (2001). The significance of context in information systems and organizational change. Information Systems Journal, 11(1), 43-63.
Avison, D. and Wood-Harper, A. (1990). An Exploration of Information Systems Development. Oxford: Blackwell Scientific Publications.
Ayersman, D.J. & Reed, W.M. (1996). The effects of learning styles, programming, and gender on computer anxiety. Journal of Research on Computing in Education, 28(2), 148-161.
Aydin, C. E. & Forsythe, D. E. (1998). Implementing computers in ambulatory care: Implications of physician practice patterns for system design. Proceedings of the 1998 AMIA Annual Fall Symposium, Orlando, FL, 677-81.
Babbie, E. (1990). Survey research methods (2nd edition). Belmont, CA: Wadsworth Publishing Company.
Bachner, John Philip. (1997). Eliminate Those Glaring Errors. Managing Office Technology, (July), 15-16.
Bailey, A. (1989). A Hard Day Ahead for the Office? Director, 43, 31-32.
Bailey, J.E., & Pearson, S. (1983). Development of a tool for measuring and analysing computer user satisfaction. Management Science, 29(5), 530-545.
Bandura, A. (1991). Social cognitive theory of self-regulation. Organizational Behavior and Human Decision Processes, 50, 248-287.
Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37, 122-147.
Bandura, A. (1977). Social learning theory. Englewood Cliffs, NJ: Prentice-Hall.
Barley, S. R. (1990). The alignment of technology and structure through roles and networks. Administrative Science Quarterly, 35, 61-103.
Barnes, B., Bloor, D., & Henry, J. (1996). Scientific knowledge: A sociological analysis. London: Athlone.
Barki, H. & Hartwick, J. (1989). Rethinking the Concept of User Involvement. MIS Quarterly, 13(1), 53-63.
Barki, H. & Hartwick, J. (1994). Measuring user participation, user involvement and user attitude. MIS Quarterly, 18(1), 59-82.
Bartkus, B.R. (1997). Employee ownership as catalyst of organisational change. Journal of Organisational Change Management, 10(4), 331-344.
Bates, D.W., Cohen, M., Leape, L. L., Overhage, J.M., Shabot, M.M., & Sheridan, T. (2001). Reducing the Frequency of Errors in Medicine Using Information Technology. J Am Med Informatics Assoc, 8, 299-308.
Bates, J. (1995). Implementation planning. In Information management in health care, Series of Handbooks, Handbook D, Issue 3, Section D1.4, The Institute of Health Management, Longman Health Management, London.
Beath, C. (1991). Supporting the information technology champion. MIS Quarterly, 15(3), 354-371.
Belanger, F. & Collins, R. W. (1998). Distributed work arrangements: A research framework. The Information Society, 14(2), 137-151.
Benamati, J. & Lederer, A.L. (2001). Rapid Information Technology Change, Coping Mechanisms, and the Emerging Technologies Group. Journal of Management Information Systems, 17(4), 183-202.
Benbasat, I., Goldstein, D.K. and Mead, M. (1987). The case study research strategy in studies of information systems. MIS Quarterly, September, 369-386.
Benbasat, I., and A. S. Dexter (1982). "Individual Differences in the Use of Decision Support Aids," Journal of Accounting Research, 20, 1-11.
Benbasat, I. & Dexter, A.S. (1985). An experimental evaluation of graphical and color-enhanced information presentation. Management Science, 31(11), 1348-1364.
Bentler, P.M. (1990). Comparative Fit Indexes in Structural Models. Psychological Bulletin, 107(2), 238-246.
Bentler, P.M. & Bonnet, D.G. (1980). Significance Testing and Goodness-of-fit in the Analysis of Covariance Structure. Psychological Bulletin, 88(3), 588-606.
Berger, P. L. & Luckmann, T. (1966). The Social Construction of Reality. Doubleday and Company Inc., Garden City, NY.
Berry, J. (1996). 'Database marketing: A Potent New Tool for Selling', in Lovelock, C., Service Marketing, Prentice Hall.
Bertin, J. (1967). Semiologie Graphique. Paris: Mouton-Gautier.
Bertin, J. (1983). Semiology of graphics: Diagrams, networks, maps. Trans. Berg, W.J. Madison: University of Wisconsin Press.
Beyers, M. (1995). Is there a future for management? Nursing Management, 26(1), 24-25.
Beynon-Davies, P. (1995). Information systems 'failure': the case of the London Ambulance Service's Computer Aided Despatch project. European Journal of Information Systems, 4, 171-184.
Beynon-Davies, P., MacKay, H. & Slack, R. (1997). User Involvement in Information Systems Development: The Problem of Finding the Right User. In R. Galliers, C. Murphy, H. R. Hansen, R. O'Callaghan, S. Carlsson and C. Loebbecke (Eds.), Proceedings of the 5th European Conference on Information Systems, Volume II, University College Cork, Ireland, 659-675.
Biagioli, M. (Ed.). (1999). The science studies reader. London: Routledge.
Blair, E., & Burton, S. (1987). Cognitive processes used by survey respondents to answer behavioral frequency questions. Journal of Consumer Research, 14, 280-288.
Blignaut, P.J. & McDonald, T. (1999). The user interface for a computerized patient record system for primary health care in a third world environment. Journal of End User Computing, 11(2), 29-33.
Bloom, A. J., & Hautaluoma, J. E. (1990). Anxiety management training as a strategy for enhancing computer user performance. Computers in Human Behavior, 6, 337-349.
Bloomfield, B.P., Coombs, R., Cooper, D.J. and Rea, D. (1992). Machines and manoeuvres: responsibility accounting and the construction of hospital information systems. Accounting, Management and Information Technology, 2(4), 197-219.
Bloomfield, B.P. (1995). Power, machines and social relations: delegating to information technology in the National Health Service. Organization, 2(3/4), 489-518.
Blyth, A. J. C. (1998). Identifying requirements for the management of medical information technology. International Journal of Technology Management, Special Issue on Management of Technology in Health Care, 15(3/4/5), 256-269.
Bogdanski, C. & Setliff, R. J. (2000). Leaderless supervision: A response to Thomas. Human Resource Development Quarterly, 11(2), 197-201.
Bødker, K. & Pedersen, J. (1991). Workplace cultures: looking at artifacts, symbols and practices. In J. Greenbaum and M. King (Eds.), Design at Work: Collaborative Design of Computer Systems, Lawrence Erlbaum Associates, Hillsdale, NJ, 121-136.
Bohlen, G.R., & Ferratt, T.W. (1997). End user training: An experimental comparison of lecture versus computer-based training. Journal of End User Computing, 9(3), 14-27.
Boland, R.J. (1978). The Process and Product of Systems Design. Management Science, 24(9), 887-898.
Bollen, K.A. (1989). Structural Equations with Latent Variables. New York: Wiley.
Bonczek, R. H., C. W. Holsapple, and A. B. Whinston (1980). "The Evolving Roles of Models in Decision Support Systems," Decision Sciences, 11(2), 337-356.
Borgman, C. (1986). "The User's Mental Model of an Information Retrieval System: An Experiment on a Prototype On-Line Retrieval Catalog," International Journal of Man-Machine Studies, 24(1), 47-64.
Bostrom, R., Olfman, L., & Sein, M. (1990). The importance of learning style in end-user training. MIS Quarterly, 14(1), 101-119.
Bowman, B.J., Grupe, F.H., & Simkin, M.G. (1995). Teaching end user applications with computer-based training: Theory and an empirical investigation. Journal of End User Computing, 7(2), 12-18.
Bozionelos, Nicholas (1997). Psychology of computer use: XLIV. Computer Anxiety and Learning Style. Perceptual and Motor Skills, 84, 753-754.
Bray, J. H. & Maxwell, S. E. (1985). Multivariate analysis of variance. Beverly Hills, CA: Sage Publications.
Bridwell, L., and P. Tippett (2000). ICSA Labs 6th Annual Computer Virus Prevalence Survey 2000, http://truesecure.com/html/tspub/index.shtml.
Briggs, R., Balthazard, P., & Dennis, A. (1996). Graduate business students as surrogates for executives in the evaluation of a social technology. Journal of End User Computing, 8(4), 11-19.
British Tourist Authority (1994). Visits to Tourists Attractions. London: British Tourist Authority Research Services.
British Tourist Authority (1995). Visits to Tourists Attractions. London: British Tourist Authority Research Services.
Brooks, L. W. & Dansereau, D. F. (1987). "Transfer of Information: An Instructional Perspective," In Cormier, S.M. & Hagman, J.D. (Eds.), Transfer of Learning: Contemporary Research and Applications, Academic Press Inc., San Diego.
Brown, M. S. (1998). Physicians on the Internet: ambivalence and resistance about to give way to acceptance. Medicine on the Net, January 1998.
Brown, T. L. (1994). Managing the invisible employee. Industry Week, 243(13), 27.
Browne, M.W. & Cudeck, R. (1993). Alternative Ways of Assessing Model Fit. In K.A. Bollen & J.S. Long (Eds.), Testing Structural Equation Models, Sage Publications.
Brunner, D.T. (1988). Easy to Use Simulation 'Packages:' What Can You Really Model? (Panel Discussion). In Proceedings of the 1988 Winter Simulation Conference, Society for Computer Simulation, San Diego, California, 887-891.
Buchanan, D. A., & Bessant, J. (1985). Failure, uncertainty, and control: The role of operators in a computer integrated production system. Journal of Management Studies, 22, 292-308.
Burch, J.G. & Grudnitski, G. (1990). Information Systems: Theory and Practice, 5th ed. New York: John Wiley and Sons.
Busch, T. (1995). Attitudes towards computers. Journal of Educational Computing Research, 12(2), 147-158.
Butler, M.A. and Bender, A.D. (1999). Intensive care unit bedside documentation systems: Realizing cost savings and quality improvements. Computers in Nursing, January-February, 17(1), 32-40.
Butler, T. (1998). Towards a hermeneutic method for interpretive research in information systems. Journal of Information Technology, 13(4), 285-300.
Butler, T. (2000). Transforming information systems development through computer-aided systems engineering (CASE): lessons from practice. Information Systems Journal, 10(3), July, 167-193.
Butler, T. & Murphy, C. (1999). Shaping Information and Communication Technologies Infrastructures in the Newspaper Industry: Cases on the Role of IT Competencies. In 'An IT Vision of the 21st Century', Proceedings of the 20th International Conference on Information Systems, Charlotte, NC, 364-377.
Butler, T. & Fitzgerald, B. (1997). A Case Study of User Participation in the Information Systems Process. In E. R. McClean and R. J. Welke (Eds.), Proceedings of the 18th International Conference on Information Systems, Atlanta, Georgia, 411-426.
Burns, F. (1998). Information for Health. Leeds: NHS Executive.
Cacioppo, J.T. & Petty, R.E. (1982). The need for cognition. Journal of Personality and Social Psychology, 42(1), 116-131.
Callon, M. (1986). Some elements of a sociology of translation: domestication of the scallops and the fishermen of St Brieuc Bay. In J. Law (Ed.), Power, Action and Belief: A New Sociology of Knowledge? (pp. 196-233). London: Routledge & Kegan Paul.
Campbell, D.T. & Stanley, J.C. (1963). Experimental and Quasi-Experimental Designs for Research. Boston: Houghton Mifflin Company.
Canada, K. & Brusca, F. (1993). The technology gender gap: Evidence and recommendations for educators and computer-based instruction designers. Educational Technology Research and Development, 39(2), 43-51.
Carroll, J.M. (1990). The Nurnberg Funnel: Designing Minimalist Instruction for Practical Computer Skill. The MIT Press, Cambridge, Massachusetts.
Carter, J.B. and Banister, E.W. (1994). Musculoskeletal Problems in VDT Work: A Review. Ergonomics, 37(10), 1623-1648.
Cascio, W. F. (2000). Managing a virtual workplace. Academy of Management Executive, 14(3), 81-90.
Cassidy, T. (1992). Commuting-Related Stress: Consequences and Implications. Employee Counselling Today, 4(2), 15-21.
Caswell, J. G. (1995). Going virtual: How we did it. Journal of Accountancy, 180(6), 64-67.
Caudron, S. (1992). Working at Home Pays Off. Personnel Journal, 71, 40-49.
Cosgrove, N.D. (1992). The Office at Home Has Inviting Sound. Office, 115, 42-43.
Cavaye, A. L. M. (1995). The Sponsor-Adopter Gap: Differences Between Promoters and Potential Users of Information Systems that Link Organizations. International Journal of Information Management, 15(2), 85-96.
Cavaye, A.L.M. (1996). Case study research: A multi-faceted research approach for IS. Information Systems Journal, 6, 227-242.
Ceniceros, Roberto (1997). California OKs New Standard on Ergonomics. Business Insurance, 31(24), (June), 1 and 33.
Center for Office Technology (1998a). Office Place RMIs Decrease for Third Year in a Row Despite Increased Computer Usage. http://www.cot.org/blsmay.html.
Center for Office Technology (1998b). Jury Again Finds No Link Between Keyboards and MSD Conditions. June 17. http://cot.org/digital.html.
Center for Office Technology (1997). The Laptop User's Guide: Practical Recommendations for the Road Warrior. Metaphase Publishing.
Cervero, R. M. (1988). Effective continuing education for professionals. San Francisco, CA: Jossey-Bass.
Chan, S. L. (1995). Computerized Image Processing System (CHIPS) - Ward Ordering Subsystem: System Manual, Operations Manual, Procedure Manual, Test Plan, Implementation Plan, and Contingency Plan, Version 2.
Checkland, P. B. (1981). Rethinking a Systems Approach. Journal of Applied Systems Analysis, 8(3), 3-14.
Checkland, P. B. (1993). Systems Thinking, Systems Practice. Chichester: John Wiley and Sons Ltd.
Cheney, P. H., Mann, R. I., & Amoroso, D. L. (1986). Organizational factors affecting the success of end-user computing. Journal of Management Information Systems, 3, 65-80.
Chesler, M. (1987). Professionals' views of the "dangers" of self-help groups (CRSO Paper 345). Centre for Research on Social Organisation, Ann Arbor, MI.
Chin, W.W., & Newsted, R.N. (1995). The Importance of Specification in Causal Modeling: The Case of End-User Computing Satisfaction. Information Systems Research, 6(1), 73-81.
Christensen, Rhonda & Knezek, Gerald (2000). Internal consistency reliabilities for 14 computer attitude scales. Journal of Technology and Teacher Education, 8(4), 327-36.
Clarke, L. (1994). The Essence of Change. Hemel Hempstead: Prentice-Hall.
Clarke, S.A. and B. Lehaney (1998). Information Systems Intervention: A Total Systems View. In R. Paul and S. Warwick (Eds.), Modeling for Added Value. London: Springer-Verlag, 103-115.
Clarke, S. A. and B. Lehaney (1999). Human-Centred Methods in Information Systems Development: Is There a Better Way Forward? In Managing Information Technology Resources in Organisations in the Next Millennium. Hershey, PA, USA: Idea Group Publishing.
Clegg, C., Axtell, C., Damadoran, L., Farbey, B., Hull, R., Lloyd-Jones, R., Nicholls, J., Sell, R., & Tomlinson, C. (1997). Information Technology: A study of performance and the role of human and organisational factors. Ergonomics, 40(9), 851-871.
COACH (1996). Resource Guide. Healthcare Computing & Communications Canada, 10(3).
Cobb, S. (1998). Taming wild code. Information Security Magazine, April.
Gordon, S., R. Ford, and J. Wells (1997). Hoaxes & Hypes. http://www.research.ibm.com/antivirus/SciPapers/Gordon/HH.html.
Coffin, R.J. & MacIntyre, P.D. (1999). Motivational influences on computer-related affective states. Computers in Human Behavior, 15, 549-569.
Cohen, J., & Cohen, P. (1983). Applied multiple regression/correlation analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum Associates.
Cohen, A.R., Stotland, E., & Wolfe, D.M. (1956). An experimental investigation of the need for cognition. Journal of Abnormal and Social Psychology, 51, 291-294.
Cohen, A.R. (1957). Need for cognition and order of communication as determinants of opinion change. In C. I. Hovland (Ed.), The order of presentation in persuasion. New Haven, CT: Yale University Press.
Coiera, E. (1995). Medical informatics. BMJ, 310(6991), 1381-1387.
Coleman, W.P., Siegel, J.H., Giovannini, I., Sanford, D.P. and De Gaetano, A. (1993). Computational logic: a method for formal analysis of the ICU knowledge base. International Journal of Clinical Monitoring and Computing, February, 10(1), 67-69.
Collins, H. M., & Pinch, T. (1993). The golem: What everyone should know about science. Cambridge: Cambridge University Press.
Compeau, D. (1992). Individual reactions to computing technology: A social cognitive theory perspective. Unpublished Ph.D. dissertation, The University of Western Ontario, London, Ontario, Canada.
Compeau, D. R., & Higgins, C. A. (1995). "Application of social cognitive theory to training for computer skills." Information Systems Research, 6, 118-143.
Conte, S. J., Imershein, A. W., & Magill, M. K. (1992). Rural community and physician perspectives on resource factors affecting physician retention. The Journal of Rural Health, 8(3), 185-196.
Cook, T.D., & Campbell, D.T. (1979). Quasi-Experimentation: Design and Analysis Issues in Field Settings. Boston: Houghton Mifflin Company.
Coombs, C.R., Doherty, N.F., & Loan-Clarke, J. (1998). The factors determining the success of a CIS: A case study. Information Technology in Nursing, 10(3), 9-15.
Coombs, C.R., Doherty, N.F., & Loan-Clarke, J. (1999). Factors affecting the level of success of community information systems. Journal of Management in Medicine, 13(3), 142-153.
Cormier, S.M. & Hagman, J.H. (1987). "Introduction." In Cormier, S.M. & Hagman, J.D. (Eds.), Transfer of Learning: Contemporary Research and Applications, Academic Press Inc., San Diego.
Creasey, D. (1996). BMA says data security problems can be solved. British Journal of Healthcare Computing and Information Management, 13(4), 6.
Crookston, B. B. (1972). "A Developmental View of Academic Advising as Teaching." Journal of College Student Personnel, 13(1), 12-17.
Crossland, M. D. (1992). Individual decision-maker performance with and without a geographic information system: an empirical investigation. Unpublished doctoral dissertation, Indiana University.
Crossland, M., Wynne, B. & Perkins, W. (1995). Spatial decision support systems: An overview of technology and a test of efficacy. Decision Support Systems, 14(3), 219-235.
Cox, J. F., and R. R. Jesse (1981). "An Application of Material Requirements Planning to Higher Education." Decision Sciences, 12(2), 261-275.
Czaja, S.J., Hammond, K., Blascovich, J.J., & Swede, H. (1989). Age related differences in learning to use a text-editing system. Behavior and Information Technology, 8(4), 309-319.
Dambrot, F.H., Silling, S.M., & Zook, A. (1988). Psychology of computer use: Sex differences in prediction of course grades in a computer language course. Perceptual and Motor Skills, 66, 627-636.
Darke, P., Shanks, G., & Broadbent, M. (1998). Successfully completing case study research: Combining rigour, relevance and pragmatism. Information Systems Journal, 8, 273-289.
Davenport, T. & Prusak, L. (1998). Working Knowledge. Boston: Harvard Business School.
Davidow, W. H. and Malone, M. S. (1992). The Virtual Corporation: Structuring and Revitalizing the Corporation for the 21st Century. New York: Harper Collins Publishers.
Davis, D.L., & Davis, D.F. (1990). The effect of training technique and personal characteristics on training end users of information systems. Journal of Management Information Systems, 7, 93-110.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(4), 319-340.
Davis, F. D. (1993). User acceptance of information technology: system characteristics, user perceptions and behavioral impacts. International Journal of Man-Machine Studies, 38, 475-487.
Davis, F. D., & Venkatesh, V. (1995). Measuring user acceptance of emerging information technologies: An assessment of possible method biases. Proceedings of the 28th Annual Hawaii International Conference on System Sciences, 729-736.
Davis, F. D., & Venkatesh, V. (1996). A critical assessment of potential measurement biases in the technology acceptance model: three experiments. International Journal of Human-Computer Studies, 45, 19-45.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35(8), 982-1003.
Davis, L.R. (1986). The effects of question complexity and form of presentation on the extraction of question-answers from an information presentation. Unpublished doctoral dissertation, Indiana University.
Davis, S. & Bostrom, R. (1993). "Training End-Users: An Experimental Investigation of the Roles of the Computer Interface and Training Methods." MIS Quarterly, 17(1), 61-85.
de Bono, E. (1977). Lateral Thinking. Aylesbury, UK: Pelican Books, Hazell Watson & Viney Ltd.
DeLone, W.H., & McLean, E.R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60-95.
DeLoughry, Thomas J. (April 28, 1993). 2 researchers say 'technophobia' may afflict millions of students. Chronicle of Higher Education, A25-A26.
De, P., and A. Sen (1981). "Logical Data Base Design in Decision Support Systems." Journal of Systems Management, 32, 28-33.
DeSanctis, G. (1984). Computer graphics as decision aids: directions for research. Decision Sciences, 15, 463-487.
Department of Health (1997). The New NHS: modern, dependable (Cm 3807). The Stationery Office.
Detmer, W.M. and Friedman, C.P. (1994). Academic physicians' assessment of the effects of computers on health care. Proceedings of AMIA, 558-62.
Diamantopoulos, A., & Souchon, A.L. (1996). Instrumental, conceptual and symbolic use of export information: An exploratory study of UK firms. Advances in International Marketing, 8, 117-144.
Dillman, D. A. (1978). Mail and Telephone Surveys: The Total Design Method. New York: Wiley.
DiMaggio, P.J. & Powell, W.W. (1991). The Iron Cage Revisited: Institutional Isomorphism and Collective Rationality in Organizational Fields. In W.W. Powell and P.J. DiMaggio (Eds.), The New Institutionalism in Organizational Analysis. London: University of Chicago Press.
Di Martino, V. & Wirth, L. (1990). Telework: A New Way of Working and Living. International Labour Review, 129(5), 529-554.
Dimopoulou, M., and P. Miliotis (2001). "Implementation of a University Course and Examination Timetabling System." European Journal of Operational Research, 130(1), 202-213.
Dixon, P. & Gabrys, G. (1991). "Learning to operate complex devices: effects of conceptual and operational similarity." Human Factors, 33(1), 101-120.
Doherty, N.F., King, M., & Marples, C.G. (1998). Factors affecting the success and failure of Hospital Information Support Systems. Failure and Lessons Learned in Information Technology Management, 2, 91-105.
Doktor, R.H. & Hamilton, W.F. (1973). Cognitive style and acceptance of management science recommendations. Management Science, 19(8), 884-893.
Doll, W.J., Hendrickson, A., & Deng, X. (1998). Using Davis's Perceived Usefulness and Ease-of-use Instruments for Decision Making: A Confirmatory and Multigroup Invariance Analysis. Decision Sciences, 29(4), 839-869.
Doll, W. J., & Torkzadeh, G. (1988). The Measurement of End-User Computing Satisfaction. MIS Quarterly, 12(2), 259-274.
Doll, W.J., Xia, W., & Torkzadeh, G. (1994). A Confirmatory Factor Analysis of the End-User Computing Satisfaction Instrument. MIS Quarterly, 18(4), 453-461.
Donaldson, L.J. (1996). From black bag to black box: will computers improve the NHS? British Medical Journal, 312(7043), 1371-1372.
Donaldson, T., & Preston, L. E. (1995). The stakeholder theory of the corporation: concepts, evidence, and implications. Academy of Management Review, 20(1), 65-91.
Donovan, J., and S. Madnick (1977). "Institutional and Ad Hoc DSS and Their Effective Use." Data Base, 8(3), 79-88.
Driscoll, J. W. (1978). Trust and participation in organizational decision making as predictors of satisfaction. Academy of Management Journal, 21, 44-56.
Drucker, P.F. (1996). The information executives truly need. Harvard Business Review, January-February, 54-62.
Dunn, Rita, Dunn, Kenneth, & Price, Gary E. (1979). Identifying individual learning styles. In Student Learning Styles: Diagnosing and Prescribing Programs. Reston, VA: National Association of Secondary School Principals, 39-61.
Durutta, N. (1995). Communicating for real results in the virtual organization. Communication World, 12(9), 15-19.
Duxbury, L. & Haines, G., Jr. (1991). Predicting alternative work arrangements from salient attitudes: A study of decision makers in the public sector. Journal of Business Research, 23(1), 83-97.
Duxbury, L. E., Higgins, C. A. & Irving, R. H. (1987). Attitudes of managers and employees to telecommuting. INFOR, 25(3), 273-285.
Dyck, Jennifer L., Gee, Nancy R., & Smither, Janan Al-Awar (1998). The changing construct of computer anxiety for younger and older adults. Computers in Human Behavior, 14(1), 61-77.
East, T.D. (1992). Computers in the ICU: panacea or plague? Respiratory Care, February, 37(2), 170-180.
Elder, V.B., Gardner, E.P. & Ruth, S.R. (1987). Gender and age in technostress: Effects on white-collar productivity. Government Finance Review, 3, 17-21.
Eldredge, D.L., & Watson, H.J. (1996). An Ongoing Study of the Practice of Simulation in Industry. Simulation and Gaming, 27(3), 375-386.
Elimam, A. A. (1991). "A Decision Support System for University Admission Policies." European Journal of Operational Research, 50(2), 140-156.
Ellis, H.C. (1965). The Transfer of Learning. Macmillan Publishers, New York.
Ender, S. (1994). "Impediments to Developmental Advising." NACADA Journal, 14(2), 105-107.
Ennals, R. (1995). Preventing Information Technology Disasters. London: Springer.
Erlandson, D. A., Harris, E.L., Skipper, B. L. & Allen, S.D. (1993). Doing Naturalistic Inquiry: A Guide to Methods. Sage Publications Inc., London.
Euchner, J., Sachs, P. & The NYNEX Panel (1993). The Benefits of Internal Tension. Communications of the ACM, June, 36(4), 53.
Evans, G.E., & Simkin, M.G. (1989). What best predicts computer proficiency? Communications of the ACM, 32(1), 1322-1327.
Fairey, M. (1998). Editorial: "... is paved with good intentions". British Journal of Healthcare Computing and Information Management, 15(7), 3.
Faust, D. (1982). A needed component in prescriptions for science: Empirical knowledge of human cognitive limitations. Knowledge: Creation, Diffusion, Utilisation, 3, 555-570.
Figura, Susannah Zak (1996). Healthy Keyboarding: What You Should Know. Managing Office Technology, (July), 41(7), 27-28.
Findlay, P. N. (1990). "Decision Support Systems and Expert Systems." Computers and Operations Research, 17(6), 535-543.
Finley, M. (1996). What's your techno type – and why you should care? Personnel Journal, January, 107-109.
Fineman, S. (1975). The influence of perceived job climate on the relationship between managerial achievement motivation and performance. Journal of Occupational Psychology, 48, 113-124.
Fisher, Sandra Lotz (1996). Are Your Employees Working Ergosmart? Personnel Journal, (December), 75(12), 91-92.
Fleishman, E.A. (1987). "Foreword." In Cormier, S.M. & Hagman, J.D. (Eds.), Transfer of Learning: Contemporary Research and Applications, Academic Press Inc., San Diego.
Fletcher, K. (1995). Marketing Management and Information Technology. London: Prentice Hall.
Fletcher, Meg (1997). Core Components Crucial in Ergonomics Programs. Business Insurance, (September 15), 31(37), 71.
Flowers, S. (1996). Software failure: management failure. John Wiley & Sons, London.
Flowers, S. (1997). Towards predicting information systems failure. In D. Avison (Ed.), Key Issues in Information Systems, Maidenhead: McGraw-Hill, 215-228.
Ford, J. K., Quinones, M., Sego, & Speer, J. (1991). Factors affecting the opportunity to use trained skills on the job. Paper presented at the Annual Conference of the Society of Industrial and Organizational Psychology, St. Louis, Missouri.
Forti, E. M., Martin, K. E., Jones, R. L., & Herman, J. M. (1995). Factors Influencing Retention of Rural Pennsylvania Family Physicians. JABFP, 8(6), 469-474.
Freedman, D. H. (1993). Quick change artists. CIO, 6(18), 32-28.
Friesdorf, W., Gross-Alltag, F., Konichezky, S., Schwilk, B., Fattroth, A. and Fett, P. (1994). Lessons learned while building an integrated ICU workstation. International Journal of Clinical Monitoring and Computing, May, 11(2), 89-97.
Fuchs, S. (1992). The professional quest for truth: a social theory of science and knowledge. Albany: State University of New York Press.
Fulk, J., Flanagin, A. J., Kalman, M. E., Monge, P., & Ryan, T. (1996). Connective and communal public goods in interactive communication systems. Communication Theory, 6, 60-87.
Gable, G.G. (1994). Integrating case study and survey research methods: An example in information systems. European Journal of Information Systems, 3(2), 112-126.
Gagné, R.M. (1985). The Conditions of Learning and Theory of Instruction. New York: Holt, Rinehart & Winston.
Gainey, T. W., Kelley, D. E. & Hill, J. A. (1999). Telecommuting's impact on corporate culture and individual workers: Examining the effect of employee isolation. SAM Advanced Management Journal, 64(4), 4-10.
Gallagher, K., & McFarland, M. A. (1996). The wired physician: Current clinical information on the Internet. Missouri Medicine, 93(7), 334-339.
Galliers, R.D. (1992). Choosing information systems research approaches. In R.D. Galliers (Ed.), Information Systems Research, London: Blackwell Scientific Publications, 144-162.
Gammack, J.G. (1999). Constructive design environments: implementing end-user systems development. Journal of End User Computing, 11(1), 15-23.
Gardner, Donald G., Discenza, Richard & Dukes, Richard L. (1993). The measurement of computer attitudes: An empirical comparison of available scales. Journal of Educational Computing Research, 9(4), 487-507.
Garner, Rochelle (1997). Painful Lessons. Computerworld, (January 20), 31(3), 89.
Gatewood, R.D., & Field, H.S. (1998). Human resource selection (4th edition). Fort Worth, Texas: The Dryden Press.
Gatian, A.W. (1994). Is user satisfaction a valid measure of system effectiveness? Information and Management, 26, 119-131.
Gentner, D. (1983). "Structure mapping: a theoretical framework for analogy." Cognitive Science, 7, 155-170.
Gentner, D. & Stevens, A. (Eds.) (1983). Mental Models. Lawrence Erlbaum Associates Publishers, Hillsdale, New Jersey.
Getzels, J.W. (1958). Administration as a social process. In A.W. Halpin (Ed.), Administration Theory in Education. Chicago: Midwestern Administration Center, 150-65.
Gick, M.L. & Holyoak, K.J. (1987). "The Cognitive Basis of Transfer." In Stephen M. Cormier and Joseph D. Hagman (Eds.), Transfer of Learning: Contemporary Research and Applications, Academic Press Inc., San Diego.
Ginzberg, M.J. (1981). Key Recurrent Issues in the MIS Implementation Process. MIS Quarterly, 5(2), 47-59.
Ginzberg, M.J., Schultz, R., & Lucas, H.C. (1984). A structural model of implementation. In R.L. Schultz & M.J. Ginzberg (Eds.), Applications of Management Science: Management Science Implementation. Greenwich, CN: JAI Press.
Gist, M.E. & Mitchell, T.R. (1992). "Self-Efficacy: A Theoretical Analysis of its Determinants and Malleability." Academy of Management Review, 17(2), 183-211.
Gist, M.E., Schwoerer, C. & Rosen, B. (1989). Effects of alternative training methods on self-efficacy and performance in computer software training. Journal of Applied Psychology, 74, 884-891.
Gjertsen, Lee Ann (1997). Study: Little Benefit From "Ergonomic" Keyboards. National Underwriter Property & Casualty-Risk & Benefits Management, (January 20), no. 3, 8.
Golembiewski, R. T. & McConkie, M. (1975). The centrality of interpersonal trust in group processes. In C. L. Cooper (Ed.), Theories of Group Processes, London: Wiley, 131-185.
Golumbic, M. R., M. Markovich, and M. Tiomkin (1991). "A Knowledge Representation Language for University Requirements." Decision Support Systems, 7(1), 33-45.
Goodhue, D.L. and Thompson, R.L. (June, 1995). Task-technology fit and individual performance. MIS Quarterly, 213-236.
Goodrich, J. N. (1990). Telecommuting in America. 33(4), 31-37.
Gordon, M.E., Slade, L.A., & Schmitt, N. (1986). The "science of the sophomore" revisited: from conjecture to empiricism. Academy of Management Review, 11(1), 191-207.
Gorry, G. A., and M. S. Scott Morton (1971). "A Framework for Management Information Systems." Sloan Management Review, 13(1), 55-70.
Govindarajulu, C. & Reithel, B.J. (1998). Beyond the information centre: An instrument to measure end user computing support from multiple sources. Information and Management, 33, 241-250.
Grajewski, Barbara, Schnorr, Teresa M., Reefhuis, Jennita, Roeleveld, Nel, Salvan, Alberto, Mueller, Charles A., Conover, David L., and Murray, William E. (1997). Work With Video Display Terminals and the Risk of Reduced Birthweight and Preterm Birth. American Journal of Industrial Medicine, 32, 681-688.
Grantham, C.E., & Vaske, J.J. (1985). Predicting the usage of an advanced communication technology. Behaviour and Information Technology, 4(4), 327-35.
Grasso, P., Parazzini, F., Chatenoud, L., Di Cintio, E., and Benzi, G. (1997). Exposure to Video Display Terminals and Risk of Spontaneous Abortion. American Journal of Industrial Medicine, 32, 403-407.
Green, Paul E. (1978). Analyzing Multivariate Data. Hinsdale, Ill.: Dryden Press.
Greenberg, J. (1987). The college sophomore as guinea pig: setting the record straight. Academy of Management Review, 12(1), 157-159.
Greengard, S. (1994). Making the Virtual Office a Reality. Personnel Journal, September, 66-79.
Grenier, R. & Metes, G. (1995). Going Virtual: Moving your organization into the 21st century. New Jersey: Prentice Hall.
Grensing-Pophal, L. (1997). Employing the best people—from afar. Workforce, 76(3), 30-38.
Griffiths, C. (1995). Terminal Failures. Computer system disasters have cost billions and dinosaur bosses must take the blame. London: Independent on Sunday, December 3rd.
Guarascio-Howard, Linda (1997). Ergonomic Software Can Ease RMI Risk. National Underwriter Property & Casualty-Risk & Benefits Management, (October 6), 101(40), 12 and 24.
Guba, E. G. & Lincoln, Y. S. (1994). Competing Paradigms in Qualitative Research. In N. K. Denzin and Y. S. Lincoln (Eds.), Handbook of Qualitative Research, Sage Publications, CA, 105-117.
Guimaraes, T., Igbaria, M., & Lu, M. (1992). The Determinants of DSS Success: An Integrated Model. Decision Sciences, 23(2), 409-430.
Guimaraes, T. & McKeen, J. D. (1995). Successful Strategies for User Participation in Systems Development. In G. Doukidis, B. Galliers, T. Jelassi, H. Krcmar and F. Land (Eds.), Proceedings of the 3rd European Conference on Information Systems, Athens, Greece, 879-899.
Guimaraes, T., & McKeen, J.D. (1993). User Participation in Information System Development: Moderation in All Things. In D. Avison, J. E. Kendall, & J.I. DeGross (Eds.), Human, Organisational and Social Dimensions of Information Systems Development. North Holland: Elsevier Science Publishers.
Hackathorn, R. D., and P. G. W. Keen (1981). "Organizational Strategies for Personal Computing in Decision Support Systems." MIS Quarterly, 5(3), 21-26.
Hackney, R., Kawalek, J., & Dhillon, G. (1999). Strategic information systems planning: perspectives on the role of the 'end-user' revisited. Journal of End User Computing, 11(2), 3-12.
Hagland, M. (1998). Intensive care: the next level of IT. Health Management Technology, December, 19(13), 18-21, 23, 27.
Hair, J.F., Anderson, R.E., Tatham, R.L., & Black, W.C. (1992). Multivariate data analysis with readings (3rd edition). New York: Macmillan Publishing Company.
Halpin, A.W. (1966). Theory and Research in Administration. New York: Macmillan Company.
Hamilton, D. (March 15, 1996). A mappable feast. CIO, 9(11), 64-66.
Hammer, M. & Champy, J. (1993). Reengineering the Corporation - A Manifesto for Business Revolution. Nicholas Brealey.
Handy, C.B. (1993). Understanding Organisations, 4th edn. London: Penguin.
Handy, C. (1995a). Trust and the Virtual Organization. Harvard Business Review, (May-June), 40-50.
Handy, C. (1995b). Gods of management: The changing work of organizations. New York: Oxford University Press.
Harmon, P., and D. King (1985). Expert Systems: Artificial Intelligence in Business. John Wiley & Sons, Inc., New York, NY.
Harmon, P., and B. Sawyer (1990). Creating Expert Systems for Business and Industry. John Wiley & Sons, Inc., New York, NY.
Harned, M. A. (1993). The saga of rural health care. The West Virginia Medical Journal, 89(55), 54-5.
Harrington, K.V., McElroy, J.C., & Morrow, P.C. (1990). Computer anxiety and computer based training: A laboratory experiment. Journal of Educational Computing Research, 6(3), 343-358.
Harrison, A.W. & Rainer, R.K. (1992). The influence of individual differences on skill in end-user computing. Journal of Management Information Systems, 9(1), 93-111.
Hartman, R. I., Stoner, C. R. & Arora, R. (1992). Developing successful organizational telecommuting arrangements: worker perceptions and managerial prescriptions. SAM Advanced Management Journal, 57, 35-42.
Hatcher, L., Prus, J.S., Englehard, B., & Farmer, T.M. (1991). A measure of academic situational constraints: Out-of-class circumstances that inhibit college student development. Educational and Psychological Measurement, 51, 953-962.
Hayduk, L.A. (1987). Structural Equation Modeling with LISREL. Baltimore: Johns Hopkins University Press.
Hayek, L.M., & Stephens, L. (1989). Factors affecting computer anxiety in high school computer science students. Journal of Computers in Mathematics and Science Teaching, 8, 73-76.
Hayes, R. (1996). Moving IS beyond conflict. The Journal of Systems Management, May.
Heathfield, H., Hudson, P., Kay, S., Klein, L., Mackey, L., Marley, T., Nicholson, L., Peel, V., Roberts, R., Williams, J. and Protti, D. (1997). Research evaluation of ICWS demonstrators. Report for NHSE IMG Integrated Clinical Workstation Programme Board, January, Health Services Management Unit, University of Manchester, Manchester.
Hendrickson, A.R., Massey, P.D., & Cronan, T.P. (1993). On the Test-Retest Reliability of Perceived Usefulness and Perceived Ease of Use Scales. MIS Quarterly, June, 227-230.
Heydt, S. (1999). Helping physicians cope with change. Physician Executive, 25(2), 40-43.
Hirschheim, R. & Newman, M. (1988). Information Systems and User Resistance: Theory and Practice. The Computer Journal, 31(5), 1-11.
Hirschheim, R. A. (1983). Assessing Participative Systems Design: Some Conclusions from an Exploratory Study. Information and Management, 6, 317-328.
Hoadley, E.D. (1988). The effects of color and performance in an information extraction task using varying forms of information presentation. Unpublished doctoral dissertation, Indiana University.
Hochstrasser, B., & Griffiths, C. (1991). Controlling IT Investment. London: Chapman & Hall.
Hoffman, T. (1997). Techno-phobic MDs refuse to say 'Ah!' ER doctors wary of computerized records. Computerworld, Feb 24th, 31(8), 75-76.
Hollon, C. J. and Gemmill, G. R. (1977). Interpersonal trust and personal effectiveness in the work environment. Psychological Reports, 40(2), 454.
Houle, Philip A. (1996). Toward understanding student differences in a computer skills course. Journal of Educational Computing Research, 14(1), 25-48.
Howcroft, D. and Mitev, N.N. (2000). An empirical study of Internet usage and difficulties among medical practice management in the UK. Journal of Internet Research: electronic networking applications and policy, 10(2), 170-181.
Howell, D. (2000). Love & Attachments – Did a Woman Create the "I Love You Virus?" Net Culture, May 11, http://netcultrure.about.com/library/weekly/aa050700a.htm.
Huber, G.P. (1983). Cognitive styles as a basis for MIS and DSS designs: Much ado about nothing? Management Science, 29(5), 567-579.
Hwang, M.I. & Thorn, R.G. (1999). The effects of user engagement on system success: A meta-analytical integration of research findings. Information and Management, 35(4), 229-236.
Hyman, J. & Mason, B. (1995). Managing Employee Involvement and Participation. Sage Publications, London.
Igbaria, M., Iivari, J., & Maragahh, H. (1995). Why do individuals use computer technology? A Finnish case study. Information and Management, 29, 227-238.
Igbaria, M., & Nachman, S. (1990). Correlates of user satisfaction with EUC. Information and Management, 19(2), 73-82.
Igbaria, M. & Guimaraes, T. (1999). Exploring differences in employee turnover intentions and its determinants among telecommuters and non-telecommuters. Journal of Management Information Systems, 16(1), 147-164.
Illingworth, M. M. (1994). Virtual Managers. InformationWeek, June 13, 42-58.
Introna, L. D. (1997). Management, Information and Power. Basingstoke: Macmillan.
Ireland, R.H., James, H.V., Howes, M. and Wilson, A.J. (1997). Design of a summary screen for an ICU patient data management system. Medical & Biological Engineering & Computing, July, 35(4), 397-401.
Ives, B. & Olson, M.H. (1984). User Involvement and MIS Success: A Review of Research. Management Science, 30(5), 586-603.
Ives, B., Hamilton, S. & Davis, G.B. (1980). A Framework for Research in Computer-Based Management Information Systems. Management Science, September, 26(9), 910-934.
Ives, B. (1982). Graphical user interfaces for business information systems. MIS Quarterly, Special Issue, 15-42.
James, B. (1995). 80% of all NHS treatments simply may not work. Mail On Sunday Review, 18th June.
Jarvenpaa, S.L., Dickson, G.W., & DeSanctis, G. (1985). Methodological Issues in Experimental IS Research: Experiences and Recommendations. MIS Quarterly, 9(2), 141-156.
Jawahar, I.M. (in press). The influence of dispositional factors and situational constraints on end user performance: A replication and extension. Journal of End User Computing.
Jawahar, I.M., & Elango, B. (1998). Predictors of performance in software training: Attitudes toward computers versus attitudes toward working with computers. Psychological Reports, 83, 227-233.
Jawahar, I.M., & Elango, B. (2001). The effects of attitudes, goal setting and self-efficacy on end user performance. Journal of End User Computing, 13(2), 40-45.
Jawahar, I. M., Stone, T. H., & Cooper, W. H. (1992). Activating resources in organizations. In R.W. Woodman and W. A. Pasmore (Eds.), Research in Organizational Change and Development, 6, 153-196. JAI Press.
Jay, T.B. (1981). Computerphobia: What to do about it. Educational Technology, 21, 47-48.
Jenner, L. (1994). Are you ready for the virtual workplace? HR Focus, 71(7), 15-16.
Jiang, J.J., Muhanna, W.A., & Klein, G. (2000). User resistance and strategies for promoting acceptance across system types. Information and Management, 37(1), 25-36.
Johnson, N. J. (2001). Telecommuting and virtual offices: Issues & opportunities. Hershey, USA: Idea Group Publishing.
Johnson, Richard A. & Wichern, Dean W. (1982). Applied Multivariate Statistical Analysis. Englewood Cliffs, NJ: Prentice Hall.
Jones, Paul E. & Wall, Robert E. (1989-90). Components of computer anxiety. Journal of Educational Technology Systems, 18(2).
Jöreskog, K.G., & Sörbom, D. (1993). LISREL 8: Structural equation modeling with the SIMPLIS command language. Chicago: Chicago Scientific Software, Inc.
Joshi, K. (1990). An investigation of equity as a determinant of user information satisfaction. Decision Sciences, 21(4), 786-807.
Joshi, K. (1992). A causal path model of the overall user attitudes toward the MIS function - The case of user information satisfaction. Information and Management, 22, 77-88.
J. Larsen, L. Levine, & J. I. DeGross (Eds.), Information systems: Current issues and future changes (pp. 341-358). Laxenburg, Austria.
Jacobs, A. (June 10, 1996). GIS technology makes inroads. Computerworld, 30(24), 71.
Johnson, D. (1993). "A Database Approach to Course Timetabling." The Journal of the Operational Research Society, 44(5), 425-433.
Joyner, E.R. (1989). The development of a metric for assessing the information complexity of time-series business graphs. Unpublished doctoral dissertation, Indiana University.
Kamienska-Zyla, M. and Prync-Skotniczny, K. (1996). Subjective Fatigue Symptoms Among Computer Systems Operators in Poland. Applied Ergonomics, (June), 27(3), 217-220.
Kaplan, B. and Maxwell, J.A. (1994). Qualitative research methods for evaluating computer information systems. In J.G. Anderson, C.E. Aydin and S.J. Jay (Eds.), Evaluating health care information systems: methods and applications, Sage, Thousand Oaks, California, 45-68.
Kaplan, B. (1994). Reducing barriers to physician data entry for computer-based patient records. Topics in Health Information Management, 15(1), 24-34.
Kearney, A.T. (1990). Barriers to the Successful Application of Information Technology. London: DTI & CIMA.
Keefe, J.W. (1979a). School applications of the learning style concept. In Student Learning Styles: Diagnosing and Prescribing Programs. Reston, VA: National Association of Secondary School Principals, 123-132.
Keefe, J.W. (1979b). Learning style: An overview. In Student Learning Styles: Diagnosing and Prescribing Programs. Reston, VA: National Association of Secondary School Principals, 1-17.
Keele, S.W. (1970). Effects of input and output modes on decision time. Journal of Experimental Psychology, 85(2), 157-164.
Keen, J. (1994). Information Management in Health Services. Buckingham: The Open University Press.
Keen, P. G. (1981). Information systems and organizational change. Communications of the ACM, 24(1), 24-33.
Keen, P.G.W. & Bronsema, G.S. (1981). Cognitive style research: a perspective for integration. Proceedings: International Conference on Information Systems, 21-52.
Keil, M. (1995). Pulling the plug: Software project management and the problem of project escalation. MIS Quarterly, 19(4), 421-447.
Kennedy, T.C.S. (1975). Some behavioral factors affecting the training of naïve users of an interactive computer system. International Journal of Man-Machine Studies, 7, 817-834.
Kepczyk, R. H. (1999). Evaluating the Virtual Office. The Ohio CPA Journal, 58(2), 16-17.
Kernan, M. C., & Howard, G. S. (1990). Computer anxiety and computer attitudes: An investigation of construct and predictive validity issues. Educational and Psychological Measurement, 50, 681-690.
Kieras, D. & Polson, P. (1985). "An Approach to the Formal Analysis of User Complexity." International Journal of Man-Machine Studies, 22, 365-394.
Keirsey, David M. (1998). http://www.keirsey.com/cgi-bin/keirsey/newkts.cgi.
King, J.L., Gurbaxani, V., Kraemer, K.L., McFarlan, F.W., Raman, K.S. & Yap, C.S. (1994). Institutional Factors in Information Technology Innovation. Information Systems Research, 5(2), 139-169.
Klein, J.D., Knupfer, N. & Crooks, S.M. (1993). Differences in computer attitudes and performance among re-entry and traditional college students. Journal of Research on Computing in Education, 25(4), 499-505.
Klein, M. M. (1994). The virtue of being a virtual corporation. Best's Review, 96(6), 88-94.
Kling, R. & Iacono, S. (1989). The institutional character of computerized information systems. Office: Technology and People, 5(1), 7-28.
Kling, Rob (1987). Defining the Boundaries of Computing Across Complex Organizations. In Richard J. Boland and Rudy A. Hirschheim (Eds.), Critical Issues in Information Systems Research. Chichester: John Wiley, 307-362.
Kling, Rob (1980). Social Analyses of Computing: Theoretical Perspectives in Recent Empirical Research. Computing Surveys, 12(1), 61-110.
Knights, D. and Murray, F. (1994). Managers divided: organisation politics and information technology management. John Wiley & Sons, London.
Kolb, D.A. (1981). Learning styles and disciplinary differences. In A.W. Chickering and Associates (Eds.), The Modern American College. San Francisco: Jossey-Bass, 232-255.
Kolb, D.A. (1985). Learning-Style Inventory: Self-Scoring Inventory and Interpretation Booklet. Boston: Hay/McBer.
Komito, L. (1998). Paper 'work' and electronic files: defending professional practice. Journal of Information Technology, 13, 235-246.
Konvalina, J., Stephens, L., & Wileman, S. (1983). Identifying factors influencing computer science aptitude and achievement. AEDS Journal, 16(2), 106-112.
Koohang, Alex A. (1989). A study of attitudes toward computers: Anxiety, confidence, liking, and perception of usefulness. Journal of Research on Computing in Education, 137-150.
Kotler, P. (1991). Marketing Management: Analysis, Planning, Implementation and Control. London: Prentice Hall.
Kotter, J.P., & Schlesinger, L.A. (1979). Choosing strategies for change. Harvard Business Review, 57(2), 106-114.
Kozar, K.A. & Mahlum, J.M. (1987). A User Generated Information System: An Innovative Development Approach. MIS Quarterly, 11(2), 163-173.
Kozloff, J. (1985). "Delivering Academic Advising: Who, What, and How?" NACADA Journal, 5(2), 69-75.
Krovi, R. (1993). Identifying the causes of resistance to IS implementation: A change theory perspective. Information & Management, 25, 327-335.
LaBar, Gregg (1997). Ergonomics for the Virtual Office. Managing Office Technology, (October), 22-24.
Land, F. & Hirschheim, R. (1983). Participative Systems Design: Rationale, Tools, and Techniques. Journal of Applied Systems Analysis, 10, 91-107.
Langenberg, C.J. (1996). Implementation of an electronic patient data management system (PDMS) in an intensive care unit. International Journal of Biomedical Computing, July, 42(1-2), 97-101.
Lassila, K.S. & Brancheau, J.C. (1999). "Adoption and Utilization of Commercial Software Packages: Exploring Utilization Equilibria, Transitions, Triggers, and Tracks." Journal of Management Information Systems, 16(2), 63-90.
Latour, B. (1987). Science in action: How to follow scientists and engineers through society. Cambridge, MA: Harvard University Press.
Lau, H-S, and M. G. Kletke (1994). "A Decision Support Software on Bidding for Job Interviews in College Placement Offices." Management Science, 40(7), 842-845.
Lauer, T.W. (1986). The effects of variations in information complexity and form of presentation on performance for an information extraction task. Unpublished doctoral dissertation, Indiana University.
Lawrence, M. & Low, G. (1993). Exploring Individual User Satisfaction Within User-Led Development. MIS Quarterly, 17(4), 195-208.
Le Blanc, L. A., and M. T. Jelassi (1989). "DSS Software Selection: A Multiple Criteria Decision Methodology." Information & Management, 17, 49-65.
Lee, F., Teich, J. M., Spurr, C. D., & Bates, D.W. (1996). Implementation of physician order entry: user satisfaction and self-reported usage patterns. J Am Med Informatics Assoc, 3, 42-55.
Lee, R.S. (1970). Social attitudes and the computer revolution. Public Opinion Quarterly, 34, 53-59.
Lehrer, R. & Littlefield, J. (1993). "Relationships among cognitive components in logo learning transfer." Journal of Educational Psychology, 85(2), 317-330.
Lemos, R. (1999a). What Will Happen in Melissa's Wake? ZDNet News, April 4, http://www.zdnet.com/zdnn/stories/news/0,4586,2235129,00.html.
Lemos, R. (1999b). CIH Computer Virus Toll Tops 540,000. ZDNet News, April 27, http://www.zdnet.com/zdnn/stories/news/0,4586,2248566,00.html.
Leonard-Barton, D. (1995). Well-Springs of Knowledge: Building and Sustaining the Sources of Innovation. Harvard Business School Press, Boston, MA.
Levine, Tamar & Donitsa-Schmidt, Smadar (1998). Computer use, confidence, attitudes, and knowledge: A causal analysis. Computers in Human Behavior, 14(1), 125-146.
Levine, Tamar & Gordon, Claire (1989). Effect of gender and computer experience on attitudes toward computers. Journal of Educational Computing Research, 5(1), 69-88.
Lewis, J. D. & Weigert, A. (1985). Trust as a social reality. Social Forces, 63, 967-985.
Lewis, P. (1994). Information Systems Development. London: Pitman Publishing.
Li, E.Y. (1997). Perceived importance of information systems success factors: A meta analysis of group differences. Information & Management, 32(1), 15-28.
Liberatore, M.J., Titus, G.J., & Dixon, P.W. (1988). The effects of display formats on information systems design. Journal of Management Information Systems, 5(3), 85-99.
Lin, W.T. & Shao, B.B.M. (2000). The relationship between user participation and system success: a simultaneous contingency approach. Information and Management, 37(6), 283-295.
Lincoln, Y. & Guba, E. (1985). Naturalistic Inquiry. Sage Publications, CA.
Lintern, G., Roscoe, S.N., & Segal, L.D. (1990). Transfer of landing skills in beginning flight training. Human Factors, 32(3), 319-327.
Lipnack, J. & Stamps, J. (1997). Virtual Teams: Reaching Across Space, Time, and Organizations with Technology. New York: John Wiley & Sons, Inc.
Lock, C. (1996). What value do computers provide to NHS hospitals? British Medical Journal, 312(7043), 1407-1410.
Locke, E. A., & Latham, G. P. (1990). A theory of goal-setting and task performance. Englewood Cliffs, NJ: Prentice-Hall.
Locke, E. A., Shaw, K. N., Saari, L. M., & Latham, G. P. (1981). Goal-setting and task performance: 1969-1980. Psychological Bulletin, 90, 125-152.
Lord, Mary (1997). Is Your Mouse a Trap? U.S. News & World Report, (March 31), 122(12), 76 and 78.
Lorenzi, N.M., Riley, R.T., Blyth, A.J.C., Southon, G. and Dixon, B.J. (1997). Antecedents of the people and organizational aspects of medical informatics: review of the literature. Journal of American Medical Informatics Association, 4(2), 79-93.
Lorenzi, N.M. and Riley, R.T. (2000). Managing change: An overview. Journal of American Medical Informatics Association, 7(2), 116-124.
Loyd, B.H. & Gressard, C. (1984a). The effects of sex, age and computer experience on computer attitudes. AEDS Journal, 40, 67-77.
Loyd, B.H. & Gressard, C.P. (1984b). Reliability and factorial validity of computer attitude scales. Educational and Psychological Measurement, 44, 501-505.
Loyd, Brenda H. & Gressard, Clarice P. (1986). Gender and amount of computer experience of teachers in staff development programs: Effects on computer attitudes and perceptions of the usefulness of computers. AEDS Journal, 19(4), 302-311.
Lucas, H.C. (1978). Empirical evidence for a descriptive model of implementation. MIS Quarterly, 2, 27-41.
Lucas, H.C. (1981). Implementation: The Key to Successful Information Systems. New York: McGraw Hill Book Company.
Lucas, H. C., Jr. & Baroudi, J. (1994). The role of information technology in organization design. Journal of Management Information Systems, 10(4), 9-23.
Lundberg, G. D. (1998). Medical information on the Internet. The Journal of the American Medical Association, 280.
Lusk, E.J. & Kersnick, M. (1979). Effects of cognitive style and report format on task performance: The MIS design consequences. Management Science, 25, 787-798.
Lyytinen, K. and Hirschheim, R. (1987). Information Systems Failures - a survey and classification of the empirical literature. Oxford Surveys in Information Technology. Oxford University Press.
Lyytinen, K. (1987). Different Perspectives on Information Systems: Problems and Solutions. ACM Computing Surveys, 19, 5-46.
MacKenzie, I.S. & Chang, L. (1999). A performance comparison of two handwriting recognizers. Interacting with Computers, 11, 283-297.
Mackesy, R. (1993). Physician satisfaction with rural hospitals. Hospital & Health Services Administration, 38(3), 375-385.
Mahmood, M.A., Burn, J.M., Gemoets, L.A., & Jacquez, C. (2000). Variables affecting information technology end-user satisfaction: A meta-analysis of the empirical literature. International Journal of Human-Computer Studies, 52, 751-771.
Mahmood, M.A. (1987). Systems Development Models—A Comparative Investigation. MIS Quarterly, 11(3), 293-311.
Marcoulides, George A. (1988). The relationship between computer anxiety and computer achievement. Journal of Educational Computing Research, 4(2), 151-159.
Markus, M. L. (1983). Power, politics and MIS implementation. Communications of the ACM, 26(6), 430-444.
Markus, L. & Robey, D. (1988). Information Technology and Organizational Change: Causal Structure in Theory and Research. Management Science, 34(5), 583-598.
Marsh, H.W. (1985). The Structure of Masculinity/Femininity: An Application of Confirmatory Factor Analysis to Higher-Order Factor Structures and Factorial Invariance. Multivariate Behavioral Research, 20, 427-449.
Marsh, H.W., & Hocevar, D. (1985). Application of Confirmatory Factor Analysis to the Study of Self-Concept: First- and Second-Order Factor Models and their Invariance Across Groups. Psychological Bulletin, 97(3), 562-582.
Marsh, H.W., & Hocevar, D. (1988). A New, More Powerful Approach to Multitrait-Multimethod Analysis: Application of Second-Order Confirmatory Factor Analysis. Journal of Applied Psychology, 73(1), 107-117.
Marshall, C. and Rossman, G. (1989). Designing qualitative research. Sage Publications, Thousand Oaks, CA.
Massoud, S.L. (1991). Computer attitudes and computer knowledge of adult students. Journal of Educational Computing Research, 7(3), 269-291.
Maurer, M.M. (1994). Computer anxiety correlates and what they tell us: A literature review. Computers in Human Behavior, 10(3), 369-376.
Maurer, Matthew M. & Simonson, Michael R. (1993-94). The reduction of computer anxiety: Its relation to relaxation training, previous computer coursework, achievement, and need for cognition. Journal of Research on Computing in Education, 26(2), 205-219.
Mawhinney, Charles H. & Saraswat, Satya Prakash (1991). Personality type, computer anxiety and student performance: An empirical investigation. The Journal of Computer Information Systems, 31(3), 101-103.
Mayer, R.E. (1979). "Can advance organizers influence meaningful learning?" Review of Educational Research, 49, 371-383.
Mayor, T. (1994). No place like home. CIO, 8, 64.
Maynard, R. (1994). The growing appeal of telecommuting. Nation's Business, 82, 61-62.
Mazzoleni, M.C., Baiardi, P., Giorgi, I., Franchi, G., Marconi, R. and Cortesi, M. (1996). Assessing users' satisfaction through perception of usefulness and ease of use in the daily interaction with a hospital information system. In Cimino, J. (Ed.), Proceedings 20th AMIA, 752-6. Hanley & Belfus, Inc., Philadelphia.
McAllister, D. J. (1995). Affect- and cognition-based trust as foundations for interpersonal cooperation in organizations. Academy of Management Journal, 38(1), 24-59.
McCafferty, C. (1996). Securing the NHSnet. British Journal of Healthcare Computing and Information Management, 13(8), 24-26.
McCauley, D. P. & Kuhnert, K. W. (1992). A theoretical review and empirical investigation of employee trust in management. Public Administration Quarterly, 16, 265-284.
McCormick, R. D. (1992). Family Affair. Chief Executive, (76), 30-33.
McCloskey, D. W. (2001). Telecommuting experiences and outcomes: Myths and realities. In N.J. Johnson (Ed.), Telecommuting and Virtual Offices: Issues & Opportunities. Hershey, USA: Idea Group Publishing, 231-246.
McCloskey, D. W. & Igbaria, M. (1998). A Review of the Empirical Research on Telecommuting and Directions for Future Research. In Magid Igbaria and Margaret Tan (Eds.), The Virtual Workplace. Hershey, USA: Idea Group Publishing.
McCrae, R.R., & Costa, P.T. (1987). Validation of the five-factor model of personality across instruments and observers. Journal of Personality and Social Psychology, 52, 81-90.
McHaney, R.W., & Cronan, T.P. (1998). Computer Simulation Success: On the Use of the End-User Computing Satisfaction Instrument. Decision Sciences, 29(2), 525-536.
McHaney, R.W. & Cronan, T.P. (2000). Toward an Empirical Understanding of Computer Simulation Implementation Success. Information & Management, 37, 135-151.
McHaney, R.W. & White, D. (1998). "Discrete Event Simulation Software Selection: An Empirical Framework." Simulation & Gaming, 29(2), 228-250.
McIlroy, D., Bunting, B., Tierney, K., & Gordon, M. (2001). The relation of gender and background experience to self-reported computing anxieties and cognitions. Computers in Human Behavior, 17, 21-33.
McKeen, J. D. (1990). Successful Development Strategies for Business Application Systems. Management Science, 36(1), 76-91.
McKeen, J. D., Guimaraes, T. & Wetherbe, J.C. (1994). The Relationship Between User Participation and User Satisfaction: An Investigation of Four Contingency Factors. MIS Quarterly, 18(4), 427-451.
McKenna, P. (1996). Towards the paperless hospital: implementing the electronic patient record at Edinburgh. Conference on Current Perspectives in Healthcare Computing, Harrogate, March 18th-20th, British Journal of Healthcare Computing Ltd.
McNerney, D. J. (1994). A strategic partnership: Clean Air Act and work-family. HR Focus, 71, 22-23.
McMaster, T., Mumford, E., Swanson, E.B., Warboys, B. and Wastell, D. (Eds.) (1997). Facilitating technology transfer through partnership. Chapman and Hall, London.
Meall, L. (1993). Homework as a growth industry. Accountancy, 111, 53.
Mercury News (1999). Computer Virus Caused Heavy Damage Abroad. Mercury News, April 28, http://www.mercurycenter.com/business/top/049439.htm (accessed 5/13/99).
Metnitz, G.H. and Lenz, K. (1995). Patient data management systems in intensive care: the situation in Europe. Intensive Care Medicine, 21(9), 703-715.
Metnitz, G.H., Laback, P., Popow, C., Laback, O., Lenz, K., and Hiesmayr, M. (1995). Computer assisted analysis in intensive care: the ICDEV project - development of a scientific database system for intensive care. International Journal of Clinical Monitoring and Computing, 12, 147-159.
Metnitz, G.H., Hiesmayr, M., Popow, C. and Lenz, K. (1996). Patient data management systems in intensive care. International Journal of Clinical Monitoring and Computing, 13, 99-102.
Metzger, R. O. & Von Glinow, M. A. (1988). Off-site workers: at home and abroad. California Management Review, 30(3), 101-111.
Miles, M.B., & Huberman, A.M. (1994). Qualitative Data Analysis: An Expanded Sourcebook, 2nd Edn. London: Sage Publications.
Miles, R. E. and Snow, C. C. (1995). The new network firm: A spherical structure built on a human investment philosophy. Organizational Dynamics, 23(4), 4-18.
Miller, J. & Bauer, D. (1981). "Visual Similarity & Discrimination Demands." Journal of Experimental Psychology, 110(1), 39-55.
Miller, J., & Doyle, B. (1987). Measuring the effectiveness of computer-based information systems in the financial services sector. MIS Quarterly, 11(1), 107-124.
Miller, P. (1992). Accounting and objectivity: The invention of calculating selves and calculable spaces. Annals of Scholarship, 9(1/2), 61-86.
Miller, R.A. (1994). Medical diagnostic decision support systems: past, present and future. Journal of the American Medical Informatics Association, 1(1), 8-27.
Miller, T. E. (1996). Segmenting the Internet. American Demographics, 18(7), 48-51.
Mitev, N., & Kerkham, S. (1998, June 4-6). Less haste more speed: organisational and implementation issues of patient data management systems in an intensive care unit. Paper presented at the Proceedings of the 6th European Conference on Information Systems, Aix-en-Provence, France.
Modell, M., Iliffe, S., Austin, A. and Leaning, M.S. (1995). From guidelines to decision support in the management of asthma. In C. Gordon and J.P. Christensen (Eds.), Health telematics for clinical guidelines and protocols, IOS Press, 105-113.
Moore, G.C. & Benbasat, I. (1991). Development of an instrument to measure perceptions of adopting information technology innovation. Information Systems Research, 2(3), 192-222.
Moran, E. T. & Volkwein, J. F. (1992). The cultural approach to the formation of organizational climate. Human Relations, 45, 19-47.
Moran, T. (1981). "An Applied Psychology of the User." ACM Computing Surveys, 13(1), 1-12.
Morris, M. G. & Dillon, A. (1997). How user perceptions influence software use. IEEE Software, 58-65.
Morrison, Donald F. (1967). Multivariate Statistical Methods. New York: McGraw-Hill.
Moore, J. (1995). Electronic prescribing: 'the perfect prescription'. NHS Executive, NHS, London.
Mowshowitz, A. (1994). Virtual organizations: A vision of management in the information age. The Information Society, 10(4), 267-288.
Mukherjee, A. K. (1994). "Heuristic Perturbation of Optimization Results in a DSS for Instructor Scheduling." Decision Support Systems, 11(1), 67-77.
Mumford, E. (1985). Defining System Requirements to meet Business Needs: a Case Study Example. The Computer Journal, 28(2), 97-104.
Mumford, E. (1979). Consensus Systems Design: An Evaluation of this Approach. In N. Szyperski & E. Grochla (Eds.), Design and Implementation of Computer Based Information Systems. Sijthoff and Noordhoff, Groningen, Holland.
Munro, M.C., Huff, S.L., Marcolin, B.L., & Compeau, D.R. (1997). Understanding and measuring user competence. Information and Management, 33, 45-57.
Norusis, M.J./SPSS Inc. (1993). SPSS for Windows: Advanced Statistics, Release 6.0. SPSS Inc.
National Audit Office (1991). Managing Computer Projects in the National Health Service. London: HMSO.
National Audit Office (1996). The NHS Executive: The Hospital Information Support Systems Initiative. London: HMSO.
Nenov, V.I., Read, W. and Mock, D. (1994). Computer applications in the intensive care unit. Neurosurgery Clinics of North America, October, 5(4), 811-827.
Neufeld, D. J. (1997). Individual Consequences of Telecommuting. Unpublished doctoral thesis, The University of Western Ontario, London, Canada.
NHS Executive. (1994). A strategy for NHS-wide networking (E5155). Information Management Group.
NHS Executive. (1995). NHS-wide networking: application requirements specification (H8003). Information Management Group.
NHS Executive. (1996). The use of encryption and related services with the NHSnet: A report for the NHS Executive by Zergo Limited (E5254). Information Management Group.
NHS Executive. (1997a). The Caldicott Committee: Report on the review of patient-identifiable information. http://www1c.btwebworld.com/imt4nhs/general/caldico/caldico1.htm.
NHS Executive. (1997b). This is the IMG: A guide to the Information Management Group of the NHS Executive (B2216). Information Management Group.
NHS Executive. (1998a). IMG: Programmes and Projects Summaries (B2232). Information Management Group.
NHS Executive. (1998b). Information for Health: Executive Summary (A1104). Department of Health.
Nickell, G., & Pinto, J. (1986). The computer attitude scale. Computers in Human Behavior, 2, 301-306.
Norman D.A. (1983). “Some Observations on Mental Models.” In D. Gentner and A.L. Stevens (Eds.), Mental Models. Hillsdale, New Jersey: Lawrence Erlbaum Associates.
North, D.C. (1990). Institutions, Institutional Change and Economic Performance. Cambridge: Cambridge University Press.
Norusis, M.J./SPSS Inc. (1993). SPSS for Windows: Advanced Statistics, Release 6.0. Chicago: SPSS Inc.
Nunnally, J.C., & Bernstein, I.H. (1994). Psychometric Theory (3rd edition). New York: McGraw-Hill.
O’Hara-Devereaux, M., & Johansen, R. (1994). Globalwork: Bridging Distance, Culture, and Time. San Francisco, CA: Jossey-Bass Publishers.
O’Quin, K., Kinsey, T.G., & Beery, D. (1987). Effectiveness of a microcomputer-training workshop for college professionals. Computers in Human Behavior, 3, 85-94.
OASIG (1996). Why do IT projects so often fail? OR Newsletter, 309, 12-16.
Ogletree, Shirley M. & Williams, Sue W. (1990). Sex and sex-typing effects on computer attitudes and aptitude. Sex Roles, 23(11/12), 703-712.
Olfman, L., & Mandviwalla, M. (1994). “Conceptual versus procedural software training for graphical user interfaces: A longitudinal field experiment.” MIS Quarterly, 18(4), 405-426.
Olfman, L., & Bostrom, R. (1991). End-user software training: An experimental comparison of methods to enhance motivation. Journal of Information Systems, 1, 249-266.
Olson, M. H. (1988). Organizational barriers to telework. In W. B. Korte, S. Robinson & W. J. Steinle (Eds.), Telework: Present Situation and Future Development of a New Form of Work. Amsterdam: North-Holland.
Olson, M.H. & Primps, S. B. (1984). Working at Home with Computers: Work and Nonwork Issues. Journal of Social Issues, 40(3), 97-112.
Olson, M. H. (1982). New information technology and organizational culture. MIS Quarterly, 6, 71-99.
Olsten Corporation (1993). Survey of changes in computer literacy requirements for employees, as reported in “Computer skills are more critical, but training lags.” HR Focus, 70(5), 18.
Orlicky, J. A. (1975). Material Requirements Planning: The New Way of Life in Production and Inventory Management. New York, NY: McGraw-Hill.
Orlikowski, W. (1996). Evolving with Notes: Organizational change around groupware technology. In C. U. Ciborra (Ed.), Groupware & Teamwork: Invisible Aid or Technical Hindrance (pp. 23-59). Chichester: Wiley.
Orlikowski, W.J. (1993). CASE Tools as Organizational Change: Investigating Incremental and Radical Changes in Systems Development. MIS Quarterly, 17(3), 309-340.
Orr, Claudia, Allen, David, & Poindexter, Sandra. (2001). The effect of individual differences on computer attitudes: An empirical study. Journal of End User Computing, 13(2), 26-39.
Ostrowski, John W., Gardner, Ella P. & Motawi, Magda H. (1986). Microcomputers in public finance organizations: A survey of uses and trends. Government Finance Review, 23-29.
Overhage J. M., Perkins S., Tierney W. M. & McDonald C. J. (2001) Controlled Trial of Direct Physician Order Entry. Journal of the American Medical Informatics Association, 8, 361-371.
Palvia, P. (1991). On end user computing productivity. Information and Management, 21, 217-224.
Pare G., Elam J.J. and Ward C.G. (1997) Implementation of a patient charting system: challenges encountered and tactics adopted in a burn center, Journal of Medical Systems, February, 21(1), 49-66.
Pare, G., & Elam, J.J. (1997). Using case study research to build theories of IT implementation. In A. S. Lee, J. Liebenau and J. I. DeGross (Eds.), Information Systems and Qualitative Research: Proceedings of the IFIP TC8 WG 8.2 International Conference on Information Systems and Qualitative Research, 31 May-3 June 1997, Philadelphia, Pennsylvania, USA. London: Chapman and Hall.
Park, Sung-Youl & Gamon, Julia. (1996). Designing in-service education for extension personnel: The role of learning styles in computer training programs. Journal of Applied Communications, 80(4), 15-24.
Patton, M. (1990). Qualitative Evaluation and Research Methods (2nd Edition). London: Sage Publications.
Pearlson, K. E., & Saunders, C. S. (2001). There’s no place like home: Managing telecommuting paradoxes. Academy of Management Executive, 15(2), 117-128.
Pedhazur, E.J., & Schmelkin, L.P. (1991). Measurement, Design and Analysis: An Integrated Approach. Hillsdale, New Jersey: Lawrence Erlbaum Associates.
Peel V. (1994) Management-focused health informatics research and education at the University of Manchester, Methods of Information in Medicine, 33, 273-277.
Peel V. (1996) Key health service reforms 1986-91, In Information Management in Health Care, Series of Handbooks, Handbook A, Introductory Themes, Section A4.2, Health Informatics Specialist Group, The Institute of Health Management, Longman Health Management, London.
Peel V., Heathfield H., Hudson P., Kay S., Klein L., Mackay L., Marley T. and Nicholson L. (1997) Considering an electronic patient record (EPR) and clinical work station (CWS) system: twenty critical questions for a hospital board, Pre-publication paper, Health Services Management Unit, University of Manchester, Manchester.
Pentland, B. T. (1989). The learning curve and the forgetting curve: The importance of time and timing in the implementation of technological innovations. Paper presented at the Annual Academy of Management Meetings, Washington, DC.
Peppard, J. (1993). IT Strategy for Business. London: Pitman Publishing.
Peters, L.H., Chassie, M.B., Lindholm, H.R., O’Connor, E.J., & Kline, C.R. (1982). The joint influence of situational constraints and goal setting on performance and affective outcomes. Journal of Management, 8(2), 7-20.
Peters, L. H., & O’Connor, E. J. (1980). Situational constraints and work outcomes: The influence of a frequently overlooked construct. Academy of Management Review, 5, 391-397.
Peters, L. H., O’Connor, E. J., & Rudolf, C. J. (1980). The behavioral and affective consequences of performance-relevant situational variables. Organizational Behavior and Human Performance, 25, 79-96.
Petty, R.E. & Cacioppo, J.T. (1986). Communication and Persuasion: Central and Peripheral Routes to Attitude Change. New York: Springer-Verlag.
Pfeffer, J. (1995). New Directions for Organization Theory. New York, NY: Oxford University Press.
Phelps, N. (1985). Mountain Bell: Program for managers. In Office Workstations in the Home. Washington, DC: National Research Council, National Academy Press.
PICIS Website (1997). Home page. http://www.picis.com/
Pierce, J.L., Rubenfeld, S.A. & Morgan, S. (1991). Employee ownership: A conceptual model of process and effects. Academy of Management Review, 16(1), 121-144.
Pierce, J.L., Newstrom, J.W., Dunham, R.B. & Barber, A. E. (1989). Alternative Work Schedules. Boston: Allyn and Bacon.
Pierpont G.L. and Thilgen D. (1995) Effect of computerised charting on nursing activity in intensive care, Critical Care Medicine, June, 23(6), 1067-1073.
Pinsonneault, A. & Boisvert, M. (2001). The impacts of telecommuting on organizations and individuals: A review of the literature. In N.J. Johnson (Ed.), Telecommuting and Virtual Offices: Issues & Opportunities (pp. 163-185). Hershey, USA: Idea Group Publishing.
Pitty D.L. and Reeves P.I. (1995) Developing decision support systems: a change in emphasis, Computer Methods and Programs in Biomedicine, 48, 35-38.
Polson, P. “The Consequences of Consistent and Inconsistent User Interfaces.” In R. Guindon (Ed.), Cognitive Science and Its Applications in Human-Computer Interaction. Hillsdale, New Jersey: Lawrence Erlbaum Associates, 59-108.
Pope-Davis, Donald B. & Twing, Jon S. (1991). The effects of age, gender, and experience on measures of attitude regarding computers. Computers in Human Behavior, 7, 333-339.
Pope-Davis, Donald B. & Vispoel, Walter P. (1993). How instruction influences attitudes of college men and women towards computers. Computers in Human Behavior, 9, 83-93.
Posch, R. (1994). Maintaining public trust in the virtual organization world. Direct Marketing, 57(1), 76-79.
Pouloudi, A. (1998). Stakeholder Analysis in UK Health Interorganizational Systems: The Case of NHSnet. In K. Andersen (Ed.), EDI and Data Networking in the Public Sector: Governmental Action, Diffusion, and Impacts (pp. 83-107). Boston: Kluwer.
Pouloudi, A. (1999, January 5-8). Aspects of the stakeholder concept and their implications for information systems development. Paper presented at HICSS-32, Wailea, Maui, Hawaii.
Pouloudi, A., & Whitley, E. A. (1997). Stakeholder identification in interorganizational systems: Gaining insights for drug use management systems. European Journal of Information Systems, 6(1), 1-14.
Pouloudi, A., & Whitley, E. A. (2000). Representing human and non-human stakeholders: On speaking with authority. In R. Baskerville, J. Stage & J. I. DeGross (Eds.), Organizational and Social Perspectives on Information Technology (pp. 340-354). Boston: Kluwer.
Protti D.J. and Haskell A.R. (1996) Managing information in hospitals: 60% social, 40% technical? In Proceedings of the IMIA Working Conference on Trends in Hospital Information Systems, edited by C. Ehlers, A. Baker, J. Bryant and W. Hammond, North Holland Publishing, Amsterdam, 45-49.
Protti D.J., Burns, Hill and Peel V. (1996) Critical success factors to introducing a HIS and developing an electronic patient record system: an interim case study of two different sites, Unpublished discussion paper, Health Management Services Unit, University of Manchester, Manchester.
Quintanilla, Carl. (1997). The Leading Workplace Injuries May Surprise You. The Wall Street Journal, (July 15), p. A1.
Rafaeli, A. (1986). Employee attitudes toward working with computers. Journal of Organizational Behavior, 1, 89-106.
Raines, J. P. & Leathers, C. G. (2001). Telecommuting: The new wave of workplace technology will create a flood of change in social institutions. Journal of Economic Issues, 35(2), 307-313.
Rankin, Tom. (1997). Stop the Pain. Cal-OSHA Reporter, (March 10), 24(10).
Rao, S. (1996). The hot zone. Forbes, November 18, http://www.forbes.com/forbes/111896/5812252a.htm (accessed 5/13/99).
Ray, Charles M., Sormunen, Carolee, & Harris, Thomas M. (1999). Men’s and women’s attitudes toward computer technology: A comparison. Office Systems Research Journal, 17(1), 1-8.
Rechichi, Caterina, DeMoja, Carmelo A., and Scullica, Luigi. (1996). Psychology of Computer Use: XXXVI. Visual Discomfort and Different Types of Work at Video Display Terminals. Perceptual and Motor Skills, (June), 83(3), 935-938.
Regan, E.A. & O’Connor, B.N. (1994). End-User Information Systems: Perspectives for Managers and Information Systems Professionals. New York: Macmillan Publishing Company.
Risman, B. J. & Tomaskovic-Devey, D. (1989). The social construction of technology: Microcomputers and the organization of work. Business Horizons, 32(3), 71-75.
Rivard, S., & Huff, S.L. (1988). Factors of success for end user computing. Communications of the ACM, 31(5), 552-561.
Rizzo, J., House, R. & Lirtzman, S. (1970). Role conflict and ambiguity in complex organizations. Administrative Science Quarterly, 15, 150-163.
Roberts, M. (1992). Expanding the role of the direct marketing database. Journal of Direct Marketing, 6(2), 51-60.
Robey, D. (1983). Cognitive style and DSS design: a comment on Huber’s paper. Management Science, 29(5), 580-582.
Robey, D., & Farrow, D. (1982). User involvement in information system development: A conflict model and empirical test. Management Science, 28(1), 73-85.
Robey, D. & Azevedo, A. (1994). The Organizational and Cultural Context of Systems Implementation: Case Experience from Latin America. Accounting, Management and Information Technologies, 4(1), 23-37.
Robinson, S. L. (1996). Trust and breach of the psychological contract. Administrative Science Quarterly, 41(4), 574-599.
Rockart, J.F., & Flannery, L.S. (1983). The management of end user computing. Communications of the ACM, 26(10), 776-784.
Roderick, J. C. & Jelley, H. M. (1991). Managerial perceptions of telecommuting in two large metropolitan cities. Southwest Journal of Business & Economics, 8(1), 35-41.
Rogers, E. M. (1995). Diffusion of Innovations (4th ed.). New York: The Free Press.
Rogers, R. (1996). An NHS infrastructure: the long trek. British Journal of Healthcare Computing and Information Management, 13(7), 18-21.
Rooney, N. G. (2000). Telecommuting: A case study for managing by results. The Ohio CPA Journal, 58(3), 34-39.
Rosen, L.D., Sears, D.C., & Weil, M.M. (1987). Computerphobia. Behavior Research Methods, Instruments, & Computers, 19, 167-179.
Rosen, L.R. & Maguire, P. (1990). Myths and realities of computerphobia: A meta-analysis. Anxiety Research, 3, 175-191.
Rosenberger, R. (2000a). The worldwide Michelangelo virus scare of 1992. Computer Viruses and the False Authority Syndrome, http://www.vmyths.com/fas/fas1.cfm.
Rosenberger, R. (2000b). Is I Love You More Famous than Jesus? May 5, http://www.vmyths.com/rant.cfm?id=124&page=4.
Rosenberger, R. (2000c). Another Poll Embarrasses the Fearmongers (Part 2). May 18, http://www.vmyths.com/rant.cfm?id=132&page=4.
Rosenberger, R. (2000d). Mathematical Atrocity. May 22, http://www.vmyths.com/rant.cfm?id=136&page=4.
Rosenberger, R. and Greenberg, R. (1996). Computer Virus Myths. http://kumite.com/myths/myths (accessed 5/13/99).
Ross, A. (1994). Trust as a moderator of the effect of performance evaluation style on job-related tension: A research note. Accounting, Organizations & Society, 19(7), 629-635.
Roszkowski, M. J., Devlin, S. J., Snelbecker, G. E., Aiken, R. M., & Jacobsohn, H. G. (1988). Validity and temporal stability issues regarding two measures of computer aptitudes and attitudes. Educational and Psychological Measurement, 48, 1029-1035.
Rotter, J. B. (1967). A new scale for the measurement of interpersonal trust. Journal of Personality, 35, 651-665.
Rowlinson, M. (1997). Organizations and Institutions. London, UK: Macmillan Press.
Rumelhart, D.E. (1980). “Schemata: The Building Blocks of Cognition.” In R. Spiro, B. Bruce & W. Brewer (Eds.), Theoretical Issues in Reading Comprehension (pp. 33-58). Hillsdale, New Jersey: Lawrence Erlbaum Associates.
Rumelhart, D. & Norman, D. A. (1981). “Analogical Processes in Learning.” In J. R. Anderson (Ed.), Cognitive Skills and Their Acquisition. Hillsdale, New Jersey: Lawrence Erlbaum Associates.
Rundquist, Kristina. (1997). Sitting Down on the Job: How to Do It Right. Managing Office Technology, (September), 42(9), 36-38.
Sankar, C.S., & Marshall, T.E. (1993). Database design support: An empirical investigation of perception and performance. Journal of Database Management, 4(3), 4-14.
Satzinger, J.W. and Olfman, L. (1997). “User Interface Consistency Across End-User Applications: The Effects on Mental Models.” Journal of Management Information Systems, 14(4), 167-193.
Sauer, C. (1994). Why Information Systems Fail: A Case Study Approach. London: Alfred Waller.
Sauer, C. (1993). Why Information Systems Fail: A Case Study Approach. Henley: Alfred Waller.
Savage, J. A. (1988). California smog fuels telecommuting plans. Computerworld, 22(18), 65-66.
Saving, K. A., and M. C. Keim, “Student and Advisor Perceptions of Academic Advising in Two Midwestern Colleges of Business,” College Student Journal, 1998, 32(4), 511-521.
Schneider, B. (1978). Person-situation selection: A review of some ability-situation interaction research. Personnel Psychology, 31, 281-297.
Schnorr, Teresa M., Grajewski, Barbara A., Hornung, Richard W., Thun, Michael J., Egeland, Grace M., Murray, William E., Conover, David L., and Halperin, William E. (1991). Video Display Terminals and the Risk of Spontaneous Abortion. New England Journal of Medicine, 324, 727-733.
Schriber, T.J. (1987). The Nature and Role of Simulation in the Design of Manufacturing Systems. In J. Retti & K. Wichmann (Eds.), Simulation in CIM and Artificial Intelligence Techniques. San Diego: The Society for Computer Simulation, 5-18.
Schwarzer, R. (Ed.) (1992). Self-Efficacy: Thought Control of Action. Washington: Hemisphere Publishing Corporation.
Scott, C. and Rockwell, S. (1997). The effect of communication, writing, and technology apprehension on likelihood to use new communication technologies. Communication Education, 46(1), 44-62.
Scottish Tourist Board. (1991). Visitor Attractions: A Development Guide. Edinburgh: Scottish Tourist Board.
Scudder, J.N., Herschel, R.T., & Crossland, M.D. (1994). Test of a model linking cognitive motivation, assessment of alternatives, decision quality, and group process satisfaction. Small Group Research, 25(1), 57-82.
Searle, J. (1999). Mind, Language and Society: Philosophy in the Real World. London: Weidenfeld & Nicolson.
Segars, A. H., & Grover, V. (1993). Re-examining perceived ease of use and usefulness: A confirmatory factor analysis. MIS Quarterly, 17(4), 517-525.
Sein, M.K. (1988). Conceptual models in training novice users of computer systems: Effectiveness of abstract versus analogical models and influence of individual differences. Unpublished Ph.D. dissertation, Indiana University.
Sein, M.K., Bostrom, R. and Olfman, L. (1987). “Training end-users to compute: cognitive, motivational and social issues.” INFOR, 25, 236-255.
Sein, M. K. and Bostrom, R. (1989). “The Influence of Individual Differences in Determining the Effectiveness of Conceptual Models in Training Novice Users,” Human-Computer Interaction, 4, 197-229.
Sein, M.K., Bostrom, R., Olfman, L. and Davis, S.A. (1993). Visual Ability as a Predictor of User Learning Success. International Journal of Man-Machine Studies, 39(4), 599-620.
Sein, M.K. and Santhanam, R. (1999). Research Report: Learning from Goal-Directed Error Recovery Strategy. Information Systems Research, 10(3), 276-285.
Senior, B. (1997). Organisational Change. London: Pitman.
Seyal, Afzaal H., Rahim, Md. Mahbubur, & Rahman, Mohd. Noah Abd. (2000). Computer attitudes of non-computing academics: A study of technical colleges in Brunei Darussalam. Information & Management, 37, 169-180.
Sharma, Subhash (1996). Applied Multivariate Techniques. New York: John Wiley and Sons.
Shashaani, Lily (1994). Gender-differences in computer experience and its influence on computer attitudes. Journal of Educational Computing Research, 11(4), 347-367.
Shashaani, Lily (1997). Gender differences in computer attitudes and use among college students. Journal of Educational Computing Research, 16(1), 37-51.
Shayo, C., Guthrie, R., & Igbaria, M. (1999). Exploring the measurement of end user computing success. Journal of End User Computing, 11(1), 5-14.
Sheaff R. and Peel V. (1995) Managing Health Service Information Systems: An Introduction, Open University Press, Buckingham.
Simon, C.W. & Roscoe, S.N. (1984). “Application of a multi-factor approach to transfer of training research,” Human Factors, 26(1), 591-612.
Singley, M.K. & Anderson, J.R. (1989). The transfer of text-editing skills. International Journal of Man-Machine Studies, 22, 403-423.
Sittig, D.F. & Stead, W.W. (1994). Computer-based physician order entry: the state of the art. Journal of the American Medical Informatics Association, 1, 108-123.
Smith, M. R. (1996). Technological determinism in American culture. In M. R. Smith and L. Marx (Eds.), Does Technology Drive History? The Dilemma of Technological Determinism. Cambridge, MA: The MIT Press.
Snell, N. W. (1994). Virtual HR: Meeting new world realities. Compensation & Benefits Review, 26(6), 35-43.
Sokal, A., & Bricmont, J. (1998). Intellectual Impostures: Postmodern Philosophers’ Abuse of Science. London: Profile.
Sotheran M.K. (1996) Management in the 1990s: major findings of the research and their relevance to the NHS, In Information Management in Health Care, Series of Handbooks, Handbook B, Aspects of Informatics, Section B1.1.1, Health Informatics Specialist Groups (HISG), The Institute of Health Services Management, Longman Health Management, London.
Sotoyama, Midori, Jonai, Hiroshi, Saito, Susumu, and Villanueva, Maria Beatriz G. (1996). Analysis of Ocular Surface Area for Comfortable VDT Workstation Layout. Ergonomics, 39(6), 877-884.
Southwick, K. (1997). Online services come out swinging: the prize? Physician loyalty. Medicine on the Net, October 1997.
Sprague, R. H., Jr., and E. D. Carlson, Building Effective Decision Support Systems, Prentice-Hall, Englewood Cliffs, NJ, 1982.
Sprague, R. H., and H. J. Watson, Decision Support for Management, Prentice Hall, Upper Saddle River, NJ, 1996.
Srinivasan, A. (1985). Alternative measures of systems effectiveness: Associations and implications. MIS Quarterly, 9(3), 243-253.
Staggers, N. & Norcio, A. (1993). “Mental Models: Concepts for Human-Computer Interaction Research,” International Journal of Man-Machine Studies, 38, 587-605.
Stajkovic, A.D., & Luthans, F. (1998). Self-efficacy and work-related performance: A meta-analysis. Psychological Bulletin, 124(2), 240-261.
Stake, R. E. (1994). Case studies. In N. K. Denzin and Y. S. Lincoln (Eds.), The Handbook of Qualitative Research (pp. 236-247). CA: Sage Publications.
Staples, D. S. (1997). The Management of Remote Workers: An Information Technology Perspective. Unpublished doctoral thesis, University of Western Ontario, London, Canada.
Steel, R.P., & Mento, A.J. (1986). Impact of situational constraints on subjective and objective criteria of managerial job performance. Organizational Behavior and Human Decision Processes, 37, 254-265.
Stevens, S.S. (1990). Applied Multivariate Statistics. Hillsdale, New Jersey: Lawrence Erlbaum Associates.
Straub, D.W. (1989). Validating Instruments in MIS Research. MIS Quarterly, 13(2), 147-166.
Strauss, A.L., & Corbin, J. (1990). Basics of Qualitative Research: Grounded Theory Procedures and Techniques. Newbury Park, CA: Sage Publications.
Streiner, D. L., & Norman, G. R. (1995). Health Measurement Scales: A Practical Guide to Their Development and Use. New York, NY: Oxford University Press.
Surry, D. (1997). Diffusion theory and instructional technology. Paper presented at the Annual Conference of the Association for Educational Communications and Technology (AECT), Albuquerque, New Mexico.
Swanson, E.B. (1994). Information systems innovation among organizations. Management Science, 40(9), 1069-1092.
Swanson, Naomi G., Galinsky, Traci L., Cole, Libby L., Pan, Christopher S., and Sauter, Steven L. (1997). The Impact of Keyboard Design on Comfort and Productivity in a Text-Entry Task. Applied Ergonomics, (February), 28(1), 9-16.
Swarbrooke, J. (1995). The Development & Management of Visitor Attractions. London: Butterworth-Heinemann.
Switzer, T.R. (1997). Telecommuters, the Workforce of the Twenty-First Century: An Annotated Bibliography. Lanham, Maryland, USA: The Scarecrow Press.
Symons, V.J. (1991). Impacts of Information Systems: Four Perspectives. Information and Software Technology, 33(3), 181-190.
Szajna, B. (1994). Software evaluation and choice: Predictive validation of the technology acceptance instrument. MIS Quarterly, September, 319-324.
Szajna, B., & Mackay, J. M. (1995). Predictors of learning performance in a computer-user training environment: A path-analytic study. International Journal of Human-Computer Interaction, 7(2), 167-185.
Tabachnick, Barbara G. & Linda S. Fidell (1983). Using Multivariate Statistics. New York: Harper & Row.
Tait, P. & Vessey, I. (1988). The Effect of User Involvement on System Success: A Contingency Approach. MIS Quarterly, 12(1), 91-107.
Takeuchi, H. & Schmidt, A.H. (1980). New promise of computer graphics. Harvard Business Review, January/February, 122-131.
Tan, J.K.H. & Benbasat, I. (1990). Processing of graphical information: A decomposition taxonomy to match data extraction tasks and graphical representations. Information Systems Research, 1(4), 416-439.
Tannenbaum, S. I., & Yukl, G. (1992). Training and development in work organizations. Annual Review of Psychology, 43, 399-441.
Tapscott, D. & Caston, A. (1993). Paradigm Shift: The New Promise of Information Technology. New York, NY: McGraw-Hill.
TechWeb (1999). Chernobyl Virus Hits Hard in South Korea. TechWeb, April 27, http://www.techweb.com/wire/story/TWB19990427S0029.
Terborg, J. R. (1981). Interactional psychology and research on human behavior in organizations. Academy of Management Review, 6, 569-576.
The Times (1996). Tourism will boost jobs in the next decade. December.
Thorp J. (1995a) Hospital information support systems, In Information Management in Health Care, Series of Handbooks, Handbook D, Issue 1, Section D4.1.1, The Institute of Health Services Management, Longman Health Management, London.
Thorp J. (1995b) The national HISS programme, In Information Management in Health Care, Series of Handbooks, Handbook D, Issue 1, Section D4.1.2, The Institute of Health Services Management, Longman Health Management, London.
Tippett, P. (2000). Malicious Code and Internet Security. Congressional testimony, May 10, http://www.house.gov/science/tippett_051000.htm.
Torkzadeh, G., & Doll, W.J. (1991). Test-Retest Reliability of the End-User Computing Satisfaction Instrument. Decision Sciences, 22(1), 26-33.
Torkzadeh, Gholamreza & Angulo, Irma E. (1992). The concept and correlates of computer anxiety. Behaviour & Information Technology, 11(2), 99-108.
Treharne R. (1995) Approaches to benefits realization, In Information Management in Health Care, Series of Handbooks, Handbook D, Issue 1, Section D1.3, The Institute of Health Services Management, Longman Health Management, London.
Treister, N. W. (1998). Physician acceptance of new medical information systems: The field of dreams. Physician Executive, 24(3), 20-25.
TruSecure (1999). TruSecure Anti-Virus Policy Guide, Version 3.12 (Dec. 10). TruSecure Corporation, http://www.trusecure.com/html/tspub/whitepaper_index.shtml.
Turban, E., and P. R. Watkins, “Integrating Expert Systems and Decision Support Systems,” MIS Quarterly, 1986, 10(2), 121-136.
Underwood, B.J. (1957). Psychological Research. New York: Appleton-Century-Crofts.
Urquhart C. and Currell R. (1999) Directions for information systems research on the integrated electronic patient record, In Information Systems: The Next Generation, Proceedings of the Fourth UK Academy of Information Systems Conference, University of York, 7-9 April 1999, 634-644.
Urschitz M., Lorenz S., Unterasinger L., Metnitz P., Preyer K. and Popow C. (1998) Three years’ experience with a patient data monitoring system at a neonatal intensive care unit, Journal of Clinical Monitoring and Computing, February, 14(2), 119-125.
U.S. Department of Labor, Occupational Safety and Health Administration (OSHA). (1997). Working Safely With Video Display Terminals (OSHA 3092).
Van Alstyne, M., Brynjolfsson, E., & Madnick, S. (1995). Why not one big database? Principles for data ownership. Decision Support Systems, 15, 267-284.
Verespej, Michael A. (Ed.). (1997). New Monitor Advice. Industry Week, (May 5), 246(9), 18.
Visala, S. (1991). Broadening the Empirical Framework of Information Systems Research. In H. Nissen, H. K. Klein and R. Hirschheim (Eds.), Information Systems Research: Contemporary Approaches and Emergent Traditions, Proceedings of the IFIP TC8/WG 8.2 Working Conference. Elsevier Science Publishers B.V. (North-Holland), 347-364.
Voss, H. (1996). Virtual organizations: The future is now. Strategy & Leadership, July/August, 12-16.
Vroom, V., Grant, L. & Cotton, T. (1969). The consequences of social interaction in group problem-solving. Organizational Behavior and Human Performance, 4, 77-95.
Vroom, V. H. (1964). Work and Motivation. New York: Wiley.
Waern, Y. (1985). “Learning computerized tasks as related to prior task knowledge.” International Journal of Man-Machine Studies, 22, 441-455.
Walsham, G. (1994). Virtual organization: An alternative view. The Information Society, 10(4), 289-292.
Ward, J. and Griffiths, P. (1996). Strategic Planning for Information Systems (2nd edition). Chichester: John Wiley and Sons.
Warden J. (1996) New line of accountability for the NHS, British Medical Journal, 312(7042), 1320.
Warner, L., & Smith, T. (1990). Computer training: Necessity not luxury. Management Accounting, 68(3), 48.
Warr, P., Cook, J. & Wall, T. (1979). Scales for the measurement of some work attitudes and aspects of psychological well-being. Journal of Occupational Psychology, 52, 129-148.
Washburne, J.N. (1927). An experimental study of various graphic, tabular and textual methods of presenting quantitative material. The Journal of Educational Psychology, 18(6), 361-376.
Weir, C., Lincoln, M., Roscoe, D. and Moreshead, G. (1995). Successful implementation of an integrated physician order entry application: a systems perspective. In R.M. Gardner (Ed.), Proceedings of the 19th SCAMC, 790-794. Philadelphia: Hanley & Belfus.
Wells, J. (2001). How Scientific Naming Works. http://www.wildlist.org/naming.html.
Wheaton, B.B., Muthen, B., Alwin, D.F., & Summers, G.F. (1977). Assessing Reliability and Stability in Panel Models. In D.R. Heise (Ed.), Sociological Methodology. San Francisco: Jossey-Bass.
Whitley, Bernard E., Jr. (1997). Gender differences in computer-related attitudes and behavior: A meta-analysis. Computers in Human Behavior, 13(1), 1-22.
Whyte, G., & Bytheway, A. (1996). Factors affecting information systems’ success. International Journal of Service Industry Management, 7(1), 74-93.
WildList Organization International (2001). WildList Index. http://www.wildlist.org/WildList/.
Wickens, C.D. & Andre, A.D. (1990). Proximity compatibility and information display: Effects of color, space, and objectness on information integration. Human Factors, 32(1), 61-77.
Wildstrom, S. (1998, September 28). A computer user’s manifesto. Business Week, 18.
Willcocks L. (1991) Information in public administration and services in the United Kingdom: toward a management era? Information and the Public Sector, 1, 189-211.
Willcocks L. and Fitzgerald G. (1993) Market as opportunity? Case studies in outsourcing information technology and services, Journal of Strategic Information Systems, September, 2(3), 223-242.
Willcocks L., & Margetts H. (1994). Risk assessment and information systems. European Journal of Information Systems, 3(2), 127-138.
Willcox, D. (1995, October 19). Health scare. Computing, 28-29.
Williams, Paul. (1997). California Breaks ‘Ergonomic Ground’. Los Angeles Business Journal, (October), 19(42), 36.
Williams, S. (1994, June 12). Technophobes: Victims of electronic progress. Mobile Register, 9E.
Wilson, J. & Rutherford, A. (1989). “Mental Models: Theory and Application in Human Factors,” Human Factors, 31(6), 617-634.
Wired News (2000). Not in Love With Virus Vendors. Wired News, May 10, http://www.wired.com/news/print/0,1294,36265,00.html.
Witkin, H.A., Lewis, H.B., Hertzman, M., Machover, K., Meissner, P.B., & Wapner, S. (1954). Personality Through Perception. New York: Harper. (Reprinted: Westport, CT: Greenwood Press, 1972.)
Witkin, H.A., Oltman, P.K., Raskin, E., & Karp, S.A. (1971). A Manual for the Embedded Figures Test. Palo Alto, CA: Consulting Psychologists Press.
Wong, E., & Tate, G. (1994). A study of user participation in information systems development. Journal of Information Technology, 9, 51-60.
Wood-Harper, A.T. and Antill, L. (1985). Information Systems Definition: The Multiview Approach. London: Blackwell.
Wood, R. E., & Bandura, A. (1989). Impact of conceptions of ability on self-regulatory mechanisms and complex decision-making. Journal of Personality and Social Psychology, 56, 407-415.
Woodrow, Janice E.J. (1991). A comparison of four computer attitude scales. Journal of Educational Computing Research, 7(2), 165-187.
Woodrow, Janice E.J. (1994). The development of computer-related attitudes of secondary students. Journal of Educational Computing Research, 11(4), 307-338.
Wu, C.-Y., F. Irazusta & J. T. Lancaster (1992). “A Decision Support System for College Selection,” Computers & Industrial Engineering, 23(1-4), 397-400.
Wyatt J. (1991). Computer-based knowledge systems, The Lancet, Dec 7th, 338, 1431-1436.
Yap, C., Soh, C., & Raman, K. (1992). Information systems success factors in small business. OMEGA, 20(5), 597-609.
Yin, R.K. (1994). Case Study Research: Design and Methods (2nd Edn.). London: Sage Publications.
Yin, R. K. (1989). Research design issues in using the case study method to study management information systems. In J. Cash and I. Lawrence (Eds.), The IS Research Challenge: Qualitative Methods, 1. Boston, MA: Harvard Business School, 1-6.
Yoo, K.H. (1985). The effects of question difficulty and information complexity on the extraction of data from an information presentation. Unpublished doctoral dissertation, Indiana University.
Zabrosky, A. W. (2000). The legal reality of virtual offices. Consulting to Management, 11(3), 3-6.
Zimmerman, B. J., Bandura, A., & Martinez-Pons, M. (1992). Self-motivation for academic attainment: The role of self-efficacy beliefs and personal goal setting. American Educational Research Journal, 29(3), 663-676.
Zipf, G. K. (1935). The Psychobiology of Language. Boston: Houghton Mifflin.
Zmud, R.W. (1983). CBIS failure and success. In Information Systems in Organizations. Glenview, IL: Scott, Foresman and Company.
Zmud, R.W. & Moffie, R.P. (1983). The impact of color graphic report formats on decision performance and learning. Proceedings of the International Conference on Information Systems, 179-193.
Zmud, R.W. & Cox, J.F. (1979). The Implementation Process: A Change Approach. MIS Quarterly, 3(2), 35-43.
Zmud, R.W. (1983). Information Systems in Organizations. Glenview, IL: Scott, Foresman and Company.
About the Authors
M. Adam Mahmood is Professor of Computer Information Systems in the Department of Information and Decision Sciences at the University of Texas at El Paso, where he also holds the Ellis and Susan Mayfield Professorship in the College of Business Administration. Dr. Mahmood's research interests center on the utilization of information technology, including electronic commerce, for managerial decision making and for strategic and competitive advantage, on group decision support systems, and on information systems success as it relates to end user satisfaction and usage. On these topics and others, he has published two scholarly books and over 75 technical research papers in some of the leading journals and conference proceedings in the information technology area, including Management Information Systems Quarterly, Decision Sciences, Journal of Management Information Systems, European Journal of Information Systems, INFOR: Canadian Journal of Operational Research and Information Processing, Information and Management, and Journal of End User Computing, among others. Dr. Mahmood's scholarly and service experience includes a number of responsibilities. He is presently serving as Editor-in-Chief of the Journal of End User Computing and has recently served as Guest Editor of the Journal of Management Information Systems. He has also served two one-year terms as President of the Information Resources Management Association. In 1997, then-Governor Bush appointed him to a Texas state board. In 1998 he was recognized by American Men & Women of Science "as being among the most distinguished scientists in the United States and Canada," and in 2000 the International Biographical Centre of Cambridge, England named him one of the 2000 Outstanding Scientists of the 20th Century. In 2001 Governor Perry appointed him to the state board that oversees the Texas Department of Information Resources.
***
David Allen is an Associate Professor of Management at Northern Michigan University, Marquette, Michigan, where he teaches in the quantitative area. His research has focused on risk analysis and cost-benefit analysis, total quality management, productivity in service systems, and statistical regression estimators.
Tom Butler is a senior researcher at the Executive Systems Research Centre and a College Lecturer at University College Cork. His research interests include the information systems development process, CASE, user participation, organizational change around IT, and the implications of IT for competence- and knowledge-based theories of the firm. He is also interested in the application of constructivist philosophy to information systems research.
James D. Campbell is Associate Professor in the Department of Family and Community Medicine at the University of Missouri-Columbia. Trained as a medical sociologist, he studies doctor/patient communication, death and dying, prenatal care, coordination between primary and tertiary care, and collaborative practice between nurse practitioners and family practice physicians. He is currently the editor of the Annals of Behavioral Science and Medical Education.
Stephen L. Chan is an Associate Professor in the Computer Science Department at Hong Kong Baptist University, Kowloon, Hong Kong. He received his Doctor of Science from Washington University, St. Louis. Prior to joining Hong Kong Baptist University, he worked for McDonnell Douglas (now Boeing) for 13 years, primarily in the Operations Analysis Department of McDonnell Aircraft. His research interests relate to information systems development, business process reengineering, technology management, and scheduling expert systems. Through various consultancy projects, he has been involved in medical informatics for some years.
Carol Clark is an Associate Professor of Computer Information Systems at Middle Tennessee State University in Murfreesboro, TN. She holds a Ph.D. from Northwestern University. She is the immediate Past President of the Midwest Business Administration Association and a former President of the Society for the Advancement of Information Systems. Her work has appeared in various journals, including Information Strategy: The Executive's Journal and Internal Auditing.
Steve Clarke received a BSc in Economics from the University of Kingston upon Hull, an MBA from the Putteridge Bury Management Centre, The University of Luton, and a PhD in human-centered approaches to information systems development from Brunel University, all in the United Kingdom. He is Principal Lecturer in Systems and Information Management at the University of Luton. His research interests include social theory and information systems practice, strategic planning for information systems, and the impact of user involvement in information systems development. His major current research focuses on approaches to information systems strategy informed by critical social theory.
Timothy Paul Cronan is Professor of Computer Information and Quantitative Analysis at the University of Arkansas, Fayetteville. Dr. Cronan received the D.B.A. from Louisiana Tech University and is an active member of the Decision Sciences Institute and the Association for Computing Machinery. He has served as Regional Vice President and on the Board of Directors of the Decision Sciences Institute and as President of the Southwest Region of the Institute. In addition, he has served as Associate Editor for MIS Quarterly. His research interests include local area networks, downsizing, expert systems, performance analysis and effectiveness, and end-user computing. His publications have appeared in Decision Sciences, MIS Quarterly, OMEGA: The International Journal of Management Science, The Journal of Management Information Systems, Communications of the ACM, Journal of End User Computing, Database, and Journal of Research on Computing in Education, and Journal of Financial Research, as well as in other journals and the proceedings of various conferences.
Martin D. (Marty) Crossland is Associate Professor of Management Science and Information Systems in the College of Business Administration at Oklahoma State University (Tulsa). He holds a B.S. in Geology from Texas Tech University, an M.B.A. from Oklahoma City University, and a Ph.D. in MIS from Indiana University. His research interests include telecommunications, networking, and geographic information systems.
He has previously published work in MIS Quarterly, Decision Support Systems, Small Group Research,
Technology Studies, and Journal of End User Computing, plus numerous national and international conference proceedings.
Joe Donaldson is Associate Professor in the Department of Educational Leadership and Policy Analysis at the University of Missouri-Columbia. His research interests are in education for the professions, organization and leadership in university continuing education, interorganizational collaboration, and the returning adult student in higher education. He is coauthor of Collaborative Program Planning: Principles, Practices and Strategies and has recently published several articles on the topics of returning adult students and problem-based learning.
B. Elango is an Assistant Professor of Management in the College of Business at Illinois State University. He received his Ph.D. from Baruch College, City University of New York. His research and consulting interests lie at the interface of strategy with technology-innovation management, entrepreneurship, and international business. He has presented papers at numerous conferences, and his work has appeared in many journals.
Brian Fitzgerald is a Senior Researcher at the Executive Systems Research Centre at University College Cork and is currently an associate editor of the Information Systems Journal. He is actively involved in applied research projects in the areas of systems development approaches, foundations of the IS field, and executive information systems. His work in these areas has been published in various books and international journals, including Information Systems Journal, INFOR, Journal of Information Technology, and the International Journal of Information Management.
Kimberly D. Harris is Assistant Professor in the Department of Health Management Systems at Duquesne University. Her research interests lie in the development of health information technology and electronic medical records in rural, underserved areas, distance education, and the situational implementation of various types of medical technology.
R. Wayne Headrick is Director of the Master of Business Administration Program and Professor of Business Computer Systems in the College of Business Administration and Economics at New Mexico State University. He has published in a variety of leading IS/IT journals and conducts research in systems development techniques, the Internet as a teaching tool, and software complexity metrics. He has over 30 years of information technology experience as a programmer, systems development manager, computing services center manager, consultant, and educator.
Richard T. Herschel is Associate Professor of Management and Information Systems in the Erivan K. Haub School of Business at St. Joseph's University in Philadelphia, PA. He holds an M.A.S. in Business Management from The Johns Hopkins University, a B.A. in Journalism and Geography from Ohio Wesleyan University, and a Ph.D. in MIS from Indiana University. He is coauthor of the text Organizational Communication: Empowerment in a Technological Society, published by Houghton Mifflin. His research focuses on organizational communication and knowledge management.
I. M. Jawahar is an Associate Professor of Management in the College of Business at Illinois State University. He received his Ph.D. from Oklahoma State University. Jawahar has
published numerous articles in journals including Personnel Psychology, Journal of Applied Psychology, Academy of Management Review, and Journal of End User Computing. In 1997 he was awarded the Dale Yoder and Herb Heneman Research Award (SHRM Research Award) for his research in the area of performance appraisal. His areas of research include performance appraisal, organizational justice, social issues in management, and behavioral aspects of technology management.
Sharon Kerkham has worked in many fields, from fashion design to health informatics and IT in education. After she graduated from the London College of Fashion and built a successful career in fashion, IT became an ever more important part of the fashion business. This led her to further study and research at Salford University, UK, where she investigated the use of information systems in hospitals. She is currently a consultant in IT training and support for schools in the UK and is particularly interested in how children perceive and use IT.
Louis A. Le Blanc is a professor of business administration at the Campbell School of Business at Berry College in Mount Berry, GA. He received the Ph.D. from Texas A&M University, followed by postdoctoral study at the University of Minnesota and Indiana University. His recent publications have appeared in MIS Quarterly, Decision Sciences, Journal of End User Computing, European Journal of Operational Research, and Group Decision and Negotiation. Dr. Le Blanc teaches courses in the management of information technology, decision support systems, and operations management.
Brian Lehaney received his MSc in Operational Research from the London School of Economics. He has since specialized in simulation and soft systems methodology, which are his current research interests. He is a Principal Research Fellow in Systems and Operations Management at the University of Luton.
Roger McHaney is an Associate Professor of Management Information Systems at Kansas State University. Dr. McHaney holds a Ph.D. in Computer Information Systems and Quantitative Analysis from the University of Arkansas and is affiliated with the Decision Sciences Institute and the Society for Computer Simulation. He has published in Decision Sciences, Information & Management, The International Journal of Production Research, Decision Support Systems, Simulation, The Journal of End User Computing, The Journal of Computer Information Systems, and various other journals.
Nathalie Mitev is currently a visiting professor at Aarhus Business School in Denmark, on special leave from the London School of Economics, UK; over the last 15 years she has been a university lecturer in the UK at City University, Salford University, and the London School of Economics. She holds several French and English postgraduate degrees, including an MBA and a PhD. Her research interests concentrate on information systems and organizational change. She has studied implementation issues in small business, the health sector, the construction industry, and the travel business. She has published in Information Technology and People, The International Journal of Electronic Commerce, Journal of Information Technology, Personnel Review, European Journal of Information Systems, Journal of End-User Computing, and Journal of Internet Research, as well as in the proceedings of the European Conference on Information Systems and the International Conference on Information Systems.
George W. Morgan is a Professor of Computer Information Systems in the College of Business Administration at Southwest Texas State University. His primary research interests are systems analysis, design, and development, with an emphasis on software performance improvement and curriculum development. Prior to entering academia, he was a cost accountant and data processing manager in the manufacturing industry, a public school district financial officer, and a systems engineer in the computer industry.
W. Scott Murray is a graduate student in the Psychology Department at the University of Arkansas at Little Rock. His professional interests are in statistics, research methods, and counseling psychology.
Lorne Olfman is Professor of Information Science at Claremont Graduate University and Chair of the Information Science Department. He has been doing research on end-user training for more than a decade and is currently working on a research grant on this topic for the Advanced Practices Council of the Society for Information Management. Lorne's research interests also include organizational memory systems and human-computer interaction. Lorne is proud to have chaired the dissertation of his co-author.
Claudia Orr is Professor of Information Systems at Northern Michigan University, Marquette, Michigan, where she teaches communications, computer applications, and help desk courses. Her current research focuses on instructional technology, computer skill levels among students and employees, and teaching to the Net Generation.
William C. (Bill) Perkins is Professor and Coordinator of Information Systems in the Kelley School of Business, Indiana University-Bloomington. He received a B.S.C.E. degree (civil engineering) from Rose-Hulman Institute of Technology and M.B.A. and D.B.A. degrees (quantitative business analysis) from Indiana University. Dr. Perkins has published papers in Decision Sciences, Applied Economics, Journal of Political Economy, Journal of Operations Management, Decision Support Systems, Group Decision and Negotiation, and other journals. He has co-authored five books, including Managing Information Technology: What Managers Need to Know (4th edition, Prentice Hall, 2002).
Sandra Poindexter is a Professor of Computer Information Systems at Northern Michigan University, Marquette, Michigan, where she teaches programming and systems analysis and design courses. Her research interests include Internet usage in education, instructional technology, and globalization.
Athanasia (Nancy) Pouloudi is a lecturer in the Department of Information Systems and Computing at Brunel University. She has a Ph.D. in "Stakeholder Analysis for Interorganizational Systems in Healthcare" from the London School of Economics and Political Science, an MSc in "Analysis, Design and Management of Information Systems" from the same university, and a first degree in Informatics from the Athens University of Economics and Business. Her current research interests encompass organizational and social issues in information systems implementation, stakeholder analysis, electronic commerce, and knowledge management. She has more than 20 papers in academic journals and international conferences in these areas. She is a member of the ACM, the Association for
Information Systems (AIS), the UK AIS, and the UK OR Society.
Conway T. Rucks is an associate professor of marketing and chair of the Marketing Department at the University of Arkansas at Little Rock. He earned his DBA from Louisiana Tech University. His recent publications have appeared in Decision Sciences, Accident Analysis and Prevention, the European Journal of Operational Research, and Expert Systems with Applications. Dr. Rucks teaches courses in marketing research, consumer behavior, and marketing management.
Joseph Scudder is Associate Professor of Organizational/Corporate Communication at Northern Illinois University; he earned his M.S. and Ph.D. from Indiana University. His research has examined collaborative technology, communication and technology, and influence strategies, and now focuses on models of influence that apply to mediated contexts, particularly the Internet. He is currently a co-director of the telematics project, which uses technology to overcome the limitations of distance.
Conrad Shayo has worked over the past sixteen years in various capacities as a university professor, consultant, and manager. He holds a Doctor of Philosophy degree and a Master of Science degree in Information Science from Claremont Graduate University, formerly Claremont Graduate School. He also holds an MBA in Management Science from the University of Nairobi, Kenya, and a Bachelor of Commerce degree in Finance from the University of Dar es Salaam, Tanzania. Dr. Shayo's research interests are in the areas of IT assimilation, end-user computing, organizational memory, information strategy, and virtual societies. Currently, Dr. Shayo is an Associate Professor of Information and Decision Sciences at California State University, San Bernardino.
D. Sandy Staples, Ph.D., is an Assistant Professor in the School of Business at Queen's University, Kingston, Canada. His research interests include the enabling role of information systems for virtual work and knowledge management, and assessing the effectiveness of information systems and IS practices. Sandy has published articles in various journals, including Organization Science, Journal of Strategic Information Systems, Journal of Management Information Systems, International Journal of Management Reviews, Business Quarterly, Journal of End-User Computing, and OMEGA, and he currently serves on the Editorial Advisory Board of the Journal of End User Computing.
Gary L. Sullivan is Professor of Marketing and holder of the Betty M. MacGuire Endowed Professorship in Business Administration at the University of Texas, El Paso. He currently serves as Chair of Marketing and Management. Dr. Sullivan received his Ph.D. in Marketing from the University of Florida. His principal research interests are consumer decision making and advertising effectiveness. He has published in a variety of journals in business and the social sciences, including Psychology & Marketing, Journal of the Academy of Marketing Science, Expert Systems with Applications, Journal of Travel Research, Sex Roles, and the Journal of Mental Imagery. Sullivan has presented research at the Association for Consumer Research, the American Marketing Association, the American Academy of Advertising, and other groups.
Ray-Lin Tung received his M.B.A. from the University of Texas at El Paso. He is currently working as a software engineer in Taiwan.
Edgar Whitley is a Senior Lecturer in Information Systems at the London School of Economics and Political Science. He has a BSc (Econ) in Computing and a PhD in Information Systems, both from the LSE. He has taught undergraduate and postgraduate students and managers in the UK and abroad. Edgar was one of the organisers of the First European Conference on Information Systems and is actively involved in the coordination of future ECIS conferences. He has published widely on various information systems issues and is currently editing a special issue of The Information Society on Time and Information Technology. He is one of the programme chairs for the forthcoming IFIP 8.2 conference on organisational discourse and information technology, to be held in Barcelona in December 2002.
Index

Symbols
11 item scale 311
15 item scale 312
5 item scale 312
A
abstract conceptualization (AC) 225
academic advising DSS 266
accurate mental model 96
additional courses 272
advising database 273
age of computer users 215
aloofness 328
American optometric association 84
American telemedicine association 55
ANOVA 105
anti-virus (AV) software products 233
artificial neural networks (ANNs) 335
assured quality 76
automated advising system 268
B
baby carelink project 54
backpropagation 338
bedside monitoring 68
behavioral climate 328
behavioral intention (BI) 4
best practice literature 192
bill of material (BOM) 264
British Medical Association (BMA) 166
business ethics 334
business process reengineering (BPR) 81
C
Canada’s health informatics association 36
Canadian organization for the advancement of computers 37
canonical correlation analysis (CanCor) 226
carpal tunnel syndrome 85
cell data 105
CERT coordination center (CERT/CC) 238
CFI 252
change process 131
Chernobyl (CIH) virus 236
chi-square test 62, 105
chi-squared “goodness of fit” test 332
clinical data 24
CMDS (computer management development services) 264
cognitive style 288
community information systems (CIS) 189
community trusts 189, 193
completed courses 272
comprehensive user participation 147
computer anxiety 211, 212
computer aptitude literacy and interest profile 179
computer attitude 213, 218
computer attitude scale (CAS) 179, 213
computer competency 210
computer experience 220
computer liaison committee (CLC) 142
computer literacy course 217
computer simulation 244
computer virus 233
computer vision syndrome (CVS) 84
computerized 25
computerized charting 25
computerphobia 211
conceptually ordered displays 198
concrete experience (CE) 225
confidential patient information 176
confidentiality issue 171
consideration 328
construct validity 250
contextual environment 130
continuing medical education (CME) 13
continuous computer learning 216
control of videoconferencing 64
core patient information systems 40
core requirement 267
corrective lenses 89
course scheduling logic 266
covariance 105
Cronbach's alpha 250
CSA Labs anti-virus site 238
cyberphobia 211
D
database and image storage 79
database management system (DBMS) 95
database marketing 118, 119
Davis instrument 255
dBase IV 103
debriefing interviews 110
decision accuracy 290
decision quality 293
decision support system (DSS) 40, 244, 263
decision time 290, 293
decision-maker performance 286
deepest layer rule 277
defensive computing 233
departmental systems 40
design cells 294
direct physician order entry (POE) system 74
discriminant analysis 335
discriminant validity 258
disengagement 327
DSS generator 265
due process 173
E
ease-of-use 249
effects matrix 198
elective 268
electromagnetic field exposures 91
electronic media 118
electronic patient record (EPR) 23
eligible course 272
end user computing (EUC) 83, 177
end-user computing satisfaction (EUCS) 249
end-user context 72
end-user software packages 95
enrollment 171
enterprise resource planning (ERP) 264
enterprise resource planning (ERP) software 264
ergonomic keyboards 90
ergonomics programs 88
escalation 161
esprit 328
ethical decision making 335
ethical issues 335
EUCS instrument 257
EUCS measures 249
existing mental models 95
experimental setting 294
explanatory capability 344
F
fault handling centers 155
fault tolerance 77
field dependence 289
figuration 287
focus group interviews 304
focus group sessions 123
focus groups 122
formal software training 96
future methods of operation (FMO) 155
G
generic appointment system (GAS) 144
geographic information system (GIS) 145, 286
group embedded figures test (GEFT) 294
H
Hare virus scare of 1996 236
health information system 38
healthcare environments 53, 56
high-end spreadsheet 263
hindrance 328
hospital information systems 40
hospital information system (HIS) databases 75
human activity 121
human activity system 121
human-centered systems 121
I
I Love You virus 236
ICDEV (intensive care data evaluation system) 24
ICSA annual computer virus prevalence survey 238
image 287
image theory 287
in-vivo codes 197
increased spontaneous abortions 87
individual performance 291
information identification and validation 79
information management group (IMG) 174
information system success 247
information system technology 247
information systems (IS) 116, 325
information systems development 130
information systems innovations 163
information systems projects 159
information technology (IT) 120, 303
innovative information systems 158
institutional DSS 266
intensive care unit (ICU) 21
intéressement 171
intimacy 328
issue of confidentiality 167
IT directorate (ITD) 143
IT implementation 131

J
job climate questionnaire 312
joint application design (JAD) 140, 326

K
knowledge-based heuristics act 265

L
learning performance 103
Likert-type scales 58
linear structural relation (LISREL) analysis 336
love bug 236
lower course number rule 278

M
M-day 235
magnetic resonance imaging (MRI) 81
mail surveys 254
management information systems 179
MANOVA 311
mapping 95
mapping via analogy 95
mapping via training 95
market research 119
marketing database 119
maximum dependency rule 278
media hype 235
medical diagnostic DSS 24
medical informatics 46
medical records 63
medication ordering system 73
medication prescriptions 72
Medilink project 53
mediocre worm/virus 237
Melissa virus 236
mental models 95
meta-analytic study 130
Michelangelo scare of 1992 235
microcomputer-based system 263
Missouri Telemedicine Network (MTN) 3
mobilization 172
monitoring of infants 59
mouse-related disorders 86
multiple discriminant analysis (MDA) 334
muscle fatigue 85
muscle pain 85
musculoskeletal discomfort 85

N
National Health Service (NHS) 20, 189
need-for-cognition (NFC) 289, 290
neonatal care 56
neonatal intensive care unit (NICU) 52
network topology 338
New Data Protection Act 170
NHS trusts 34
NHSnet 158, 166
NICU wards 56
NNFI 252
O
OASIG 120
obligatory passage point 171
ocular surface area (OSA) 85
Office Ergonomics Research Committee 86
on-site development approach 149
order capturing 74
order processing 74
order receiving 74
order sending 74
order system maintenance module 80
ordering module 80
organizational change 131
organizational climate 312
OSHA (Occupational Safety and Health Administration) 86
outcome variables 136

P
paper-and-pen method 73
Paradox 103
participatory design (PD) 140
patient care systems 40
patient data 69
PC-based computer system 265
PDMS 21
PDMS software program 27
Pearson correlations 184
perceived learning mode 221
perceived usefulness 249
perception outcomes 106
physician ordering entry system 72
POISE (procurement of information systems effectively) 30
post-implementation climate 331
PowerPause 91
pre-existing mental model 101
pre-implementation climate 330
prerequisites 268
preterm birth 87
primary learning style 216
problematization 171
production emphasis 328
proprietary databases 41
psychological ownership 207
psychometric testing 6

R
radiation effects 87
rapid application development (RAD) approach 145
reduced birth weight 87
referral letters 81
reflective observation (RO) 225
regression analysis 184
remote employees 309
remote work experience 308
remote work setting 309
repetitive motion injuries (RMIs) 86
rural health care 3
Rural Telemedicine Evaluation Project (RTEP) 3

S
scanning and image processing system (SIPS) 72, 75
ScreenPlay 91
self-administered postal questionnaire 122
self-efficacy 102, 182
simulation language 244
simulator 245
single item approach 248
single sample cross-validation index (ECVI) 252
social learning theory 102
Society for Computer Simulation 254
software training 95
solution scoring rule 294
spatial tasks 285
specific learning style 225
specific major 268
stakeholder perspectives 173
statistical power 110
strategic information systems planning 72
student bidding system 266
subject pool 294
successful systems development 136
synergistic controls 240
system integration of information technologies 80
systems development projects 188, 190
T
tailed tests 105
task complexity 146
task-technology fit (TTF) 291
technical system 167
technological gender gap 214
technology acceptance model (TAM) 3, 6
technophobia 211
telemedicine 2
telework researchers 307
thrust 328
time ordered displays 198
tourism industry 118
traditional approach 134
training task context 97
treatments 104
trigger date 235
TruSecure anti-virus policy guide 240
U
UK National Health Service (NHSnet) 159
user acceptance 157
user information satisfaction 192
user interface 279
user participation 130
user resistance 157
user satisfaction 192
user-centered design 53
V
variance-based studies 130
VDT emission levels 90
video display terminal (VDT) 83
videoconferencing system 53
videoconferencing technology 55
vignette 57
violations 272
virus alerts 238
virus disaster 239
virus hysteria alert 238
virus payload 234
virus phenomenon 235
virus research facilities 234
virus statistics 237
virus threat 233
vision problems 84
visitor attractions 122
visual discomfort (asthenopia) 85
visualization 300
W
wild viruses 235
WildList 237