Solutions and Innovations in Web-Based Technologies for Augmented Learning: Improved Platforms, Tools, and Applications Nikos Karacapilidis University of Patras, Greece
Information science reference Hershey • New York
Director of Editorial Content: Kristin Klinger
Director of Production: Jennifer Neidig
Managing Editor: Jamie Snavely
Assistant Managing Editor: Carole Coulson
Typesetter: Larissa Vinci
Cover Design: Lisa Tosheff
Printed at: Yurchak Printing Inc.
Published in the United States of America by
Information Science Reference (an imprint of IGI Global)
701 E. Chocolate Avenue, Suite 200
Hershey PA 17033
Tel: 717-533-8845
Fax: 717-533-8661
E-mail: [email protected]
Web site: http://www.igi-global.com

and in the United Kingdom by
Information Science Reference (an imprint of IGI Global)
3 Henrietta Street
Covent Garden
London WC2E 8LU
Tel: 44 20 7240 0856
Fax: 44 20 7379 0609
Web site: http://www.eurospanbookstore.com

Copyright © 2009 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher. Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data

Solutions and innovations in web-based technologies for augmented learning : improved platforms, tools, and applications / Nikos Karacapilidis, editor.
p. cm. -- (Advances in web-based learning)
Includes bibliographical references and index.
Summary: "This book covers a wide range of the most current research in the development of innovative web-based learning solutions, specifically facilitating and augmenting learning in diverse contemporary organizational settings"--Provided by publisher.
ISBN 978-1-60566-238-1 (hardcover) -- ISBN 978-1-60566-239-8 (ebook)
1. Education--Computer network resources. 2. Internet in education. 3. Organizational learning--Computer assisted instruction. 4. Distance education. I. Karacapilidis, Nikos.
LB1044.87.S619 2009
371.33'44678--dc22
2008023189

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book set is original material. The views expressed in this book are those of the authors, but not necessarily of the publisher.
If a library purchased a print copy of this publication, please go to http://www.igi-global.com/agreement for information on activating the library's complimentary electronic access to this publication.
Advances in Web-based Learning Series (AWBL) ISSN: 1935-3669
Editor-in-Chief: Nikos Karacapilidis, University of Patras, Greece

Web-Based Education and Pedagogical Technologies: Solutions for Learning Applications
Liliane Esnault, EM Lyon, France
IGI Publishing • copyright 2008 • 364 pp • H/C (ISBN: 978-1-59904-525-2) • US $89.95 (our price)

The rapid development and expansion of Web-based technologies has vast potential implications for the processes of teaching and learning worldwide. Technological advancements of Web-based applications strike at the base of the education spectrum; however, the scope of experimentation and discussion on this topic has continuously been narrow. Web-Based Education and Pedagogical Technologies: Solutions for Learning Applications provides cutting-edge research on such topics as network learning, e-learning, managing Web-based learning and teaching technologies, and building Web-based learning communities. This innovative book provides researchers, practitioners, and decision makers in the field of education with essential, up-to-date research in designing more effective learning systems and scenarios using Web-based technologies.
Solutions and Innovations in Web-Based Technologies for Augmented Learning: Improved Platforms, Tools, and Applications Nikos Karacapilidis, University of Patras, Greece
Information Science Reference • copyright 2009 • 374 pp • H/C (ISBN: 978-1-60566-238-1) • US $195.00 (our price)

The proper exploitation of Web-based technologies towards building responsive environments that motivate, engage, and inspire learners, and which are embedded in the business processes and human resources management systems of organizations, is highly critical. Accordingly, the research field of technology-enhanced learning continues to receive increasing attention. Solutions and Innovations in Web-Based Technologies for Augmented Learning: Improved Platforms, Tools, and Applications provides cutting-edge research on a series of related topics and discusses implications in the modern era’s broad learning concept. Addressing diverse conceptual, social, and technical issues, this book provides professionals, researchers, and practitioners in the field with up-to-date research in developing innovative and more effective learning systems by using Web-based technologies.
The Advances in Web-based Learning (AWBL) Book Series aims at providing in-depth coverage and understanding of diverse issues related to the application of web-based technologies for facilitating and augmenting learning in contemporary organizational settings. The issues covered address the technical, pedagogical, cognitive, social, cultural and managerial perspectives of the Web-based Learning research domain. The Advances in Web-based Learning (AWBL) Book Series endeavors to broaden the overall body of knowledge regarding the above issues, thus assisting researchers, educators and practitioners to devise innovative Web-based Learning solutions. Much attention will also be given to the identification and thorough exploration of good practices in developing, integrating, delivering and evaluating the impact of Web-based Learning solutions. The series intends to supply a stage for emerging research in the critical areas of web-based learning, further expanding the body of comprehensive publications on these topics of global importance.
Hershey • New York
Order online at www.igi-global.com or call 717-533-8845 x100, Mon-Fri 8:30 am - 5:00 pm (EST), or fax 24 hours a day at 717-533-7115
Associate Editor Liliane Esnault, E.M.LYON, France
Editorial Review Board

Agostinho Rosa, Technical University of Lisbon, Portugal
Amaury Daele, University of Fribourg, Switzerland
Amita G. Chin, Virginia Commonwealth University, USA
Andy Koronios, University of South Australia, Australia
Anil Aggarwal, University of Baltimore, USA
Anthony Norcio, University of Maryland at Baltimore County, USA
Antonio Cartelli, University of Cassino, Italy
Brian Corbitt, Deakin University, Australia
Carol Lerch, Daniel Webster College, Nashua, NH, USA
Cesar Alberto Collazos O., University of Cauca, Popayán, Colombia
Cesar Garita, University of Amsterdam, The Netherlands
Chris Zhang, University of Saskatchewan, Canada
Colin McCormack, University College Cork, Ireland
Danièle Herrin, Université de Montpellier, Montpellier, France
David McConnell, Lancaster University, UK
David Taniar, Monash University, Australia
Dimosthenis Anagnostopoulos, Harokopio University of Athens, Greece
Emmanuel Fernandes, Université de Lausanne, Switzerland
Eugenia Ng, Hong Kong Institute of Education, China
Fuhua Oscar Lin, Athabasca University, Canada
George Ghinea, Brunel University, UK
Gord McCalla, University of Saskatchewan, Saskatoon, Canada
Holly Yu, California State University, USA
Ikuo Kitagaki, Hiroshima University, Japan
Jan Frick, Stavanger University College, Norway
Janine Schmidt, McGill University, Canada
John Lim, National University of Singapore, Singapore
Katia Passerini, New Jersey Institute of Technology, USA
Katy Campbell, University of Alberta, Canada
Khaled Wahba, Cairo University, Egypt
Larbi Esmahi, Athabasca University, Canada
Li Yang, Western Michigan University, USA
Mahesh S. Raisinghani, Texas Woman’s University, USA
Mara Nikolaidou, University of Athens, Greece
Maria Manuela Cunha, Polytechnic Institute of Cavado, Portugal
Marisa Ponti, IT Universitetet, Sweden
Martin Crossland, Oklahoma State University, Tulsa, USA
Martin Gaedke, Universität Karlsruhe, Germany
Mehdi Ghods, The Boeing Company, USA
Miguel-Angel Sicilia, University of Alcalá, Spain
Minnie Yi-Min Yen, University of Alaska Anchorage, USA
Moez Limayem, University of Arkansas, USA
Murali Shanker, Kent State University, USA
MV Ramakrishna, Monash University, Australia
Nikos Karacapilidis, University of Patras, Greece
Patrice Sargenti, University of Monaco, Monaco
Patrick van Bommel, University of Nijmegen, The Netherlands
Philippe Koch, IBM Learning Solutions, France
R. Subramaniam, Nanyang Technological University, Singapore
Romain Zeiliger, Gate-CNRS, Ecully, France
Ron Vyhmeister, Adventist International Institute of Advanced Studies, Philippines
Roy Rada, University of Maryland at Baltimore County, USA
Sergio Lujan-Mora, Escuela Politecnica Superior IV, Universidad de Alicante, Spain
Sherry Y. Chen, Brunel University, UK
Sree Nilakanta, Iowa State University, USA
Sue Tickner, University of Glasgow, UK
Tak-ming Law, Institute of Vocational Education, Hong Kong
Tang Changjie, Sichuan University, China
Tanya McGill, Murdoch University, Australia
Terry Ryan, Claremont Graduate University, USA
Timothy Shih, Tamkang University, Taiwan
Vivien Hodgson, University of Lancaster, UK
VP Kochikar, Infosys Technologies, India
Werner Beuschel, University of Applied Science, Brandenburg, Germany
Witold Abramowicz, Poznan University of Economics, Poland
Yair Levy, Nova Southeastern University, USA
Table of Contents
Preface .................................................................................................................................................. xix

Section I
Augmenting Learning

Chapter I
The Role of Learner in an Online Community of Inquiry: Responding to the Challenges of First-Time Online Learners ................................................................................................................. 1
Martha Cleveland-Innes, Athabasca University, Canada
Randy Garrison, The University of Calgary, Canada
Ellen Kinsel, Odyssey Learning Systems, Canada

Chapter II
Students’ Attitudes toward Process and Product Oriented Online Collaborative Learning ................ 15
Xinchun Wang, California State University, Fresno, USA

Chapter III
Cognition, Technology, and Performance: The Role of Course Management Systems ...................... 35
Teresa Lang, Columbus State University, USA
Dianne Hall, Auburn University, USA

Chapter IV
The Role of Organizational, Environmental and Human Factors in E-Learning Diffusion ................ 53
Kholerile L. Gwebu, University of New Hampshire, USA
Jing Wang, Kent State University, USA

Chapter V
Distance Education: Satisfaction and Success ..................................................................................... 71
Wm. Benjamin Martz, Jr., Northern Kentucky University, USA
Morgan Shepherd, University of Colorado at Colorado Springs, USA
Chapter VI
Group Support Systems as Collaborative Learning Technologies: A Meta-Analysis ......................... 79
John Lin, National University of Singapore, Singapore
Yin Ping Yang, National University of Singapore, Singapore
Yingqin Zhong, National University of Singapore, Singapore

Section II
Design, Modeling, and Evaluation Issues

Chapter VII
Knowledge Flow and Learning Design Models towards Lifewide E-Learning Environments ......... 110
M.C. Pettenati, University of Florence, Italy
M.E. Cigognini, University of Florence, Italy

Chapter VIII
An Agent-Based Framework for Personalized E-Learning Services ................................................. 130
Larbi Esmahi, Athabasca University, Canada

Chapter IX
Supporting Evolution of Knowledge Artifacts in Web Based Learning Environments ..................... 142
Dimitris Kotzinos, Institute of Computer Science, FORTH-ICS and Department of Geomatics and Surveying, TEI of Serres, Greece
Giorgos Flouris, Institute of Computer Science, FORTH-ICS, Greece
Yannis Tzitzikas, University of Crete and Institute of Computer Science, FORTH-ICS, Greece
Dimitris Andreou, Institute of Computer Science, FORTH-ICS, Greece
Vassilis Christophides, University of Crete and Institute of Computer Science, FORTH-ICS, Greece

Chapter X
Interface and Features for an Automatic ‘C’ Program Evaluation System ......................................... 168
Amit Kumar Mandal, IIT Kharagpur, India
Chittaranjan Mandal, IIT Kharagpur, India
Chris Read, Kingston University, UK

Chapter XI
Evaluating Computerized Adaptive Testing Systems ......................................................................... 186
Anastasios A. Economides, University of Macedonia, Greece
Chrysostomos Roupas, University of Macedonia, Greece
Chapter XII
Technology Integration Practices within a Socioeconomic Context: Implications for Educational Disparities and Teacher Preparation ................................................................................................... 203
Holim Song, Texas Southern University, USA
Emiel Owens, Texas Southern University, USA
Terry T. Kidd, University of Texas School of Public Health, USA

Chapter XIII
Utilizing Web Tools for Computer-Mediated Communication to Enhance Team-Based Learning ... 218
Elizabeth Avery Gomez, New Jersey Institute of Technology, USA
Dezhi Wu, Southern Utah University, USA
Katia Passerini, New Jersey Institute of Technology, USA
Michael Bieber, New Jersey Institute of Technology, USA

Chapter XIV
Accessible E-Learning: Equal Pedagogical Opportunities for Students with Sensory Limitations ... 233
Rakesh Babu, University of North Carolina at Greensboro, USA
Vishal Midha, University of North Carolina at Greensboro, USA

Section III
Tools and Applications

Chapter XV
Supporting Argumentative Collaboration in Communities of Practice: The CoPe_it! Approach ...... 245
Nikos Karacapilidis, University of Patras and Research Academic Computer Technology Institute, Greece
Manolis Tzagarakis, Research Academic Computer Technology Institute, Greece

Chapter XVI
Personalization Services for Online Collaboration and Learning ...................................................... 258
Christina E. Evangelou, Informatics and Telematics Institute, Greece
Manolis Tzagarakis, Research Academic Computer Technology Institute, Greece
Nikos Karousos, Research Academic Computer Technology Institute, Greece
George Gkotsis, Research Academic Computer Technology Institute, Greece
Dora Nousia, Research Academic Computer Technology Institute, Greece
Chapter XVII
Computer-Aided Personalised System of Instruction for Teaching Mathematics in an Online Learning Environment ........................................................................................................................ 271
Willem-Paul Brinkman, Delft University of Technology, The Netherlands
Andrew Rae, Brunel University, UK
Yogesh Kumar Dwivedi, Swansea University, UK

Chapter XVIII
Social Software for Sustaining Interaction, Collaboration, and Learning in Communities of Practice .......................................................................................................................................... 300
Sandy el Helou, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Denis Gillet, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Christophe Salzmann, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Yassin Rekik, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland

Chapter XIX
Multimedia Authoring for Communities of Teachers ......................................................................... 317
Agnès Guerraz, INRIA Rhône-Alpes, France
Cécile Roisin, INRIA Rhône-Alpes, France
Jan Mikáč, INRIA Rhône-Alpes, France
Romain Deltour, INRIA Rhône-Alpes, France

Compilation of References ................................................................................................................ 334
About the Contributors ..................................................................................................................... 367
Index ................................................................................................................................................... 375
Detailed Table of Contents
Preface .................................................................................................................................................. xix

Section I
Augmenting Learning

Chapter I
The Role of Learner in an Online Community of Inquiry: Responding to the Challenges of First-Time Online Learners ................................................................................................................. 1
Martha Cleveland-Innes, Athabasca University, Canada
Randy Garrison, The University of Calgary, Canada
Ellen Kinsel, Odyssey Learning Systems, Canada

Learners experiencing an online educational community for the first time can explain the adjustment required for participation. Findings from a study of adjustment to online learning environments validate differences found in 3 presences in an online community of inquiry. Using pre- and post-questionnaires, students enrolled in entry-level courses in 2 graduate degree programs at Athabasca University, Canada, describe their adjustment to online learning. Responses were analyzed in relation to the elements of cognitive, social, and teaching presence, defined by Garrison, Anderson, and Archer (2000) as core dimensions of learner role requirements in an online community of inquiry. Five areas of adjustment characterize the move toward competence in online learning: interaction, self-identity, instructor role, course design, and technology. Student comments provide understanding of the experience of first-time online learners, including the challenges, interventions, and resolutions that present themselves as unique incidents. Recommendations for the support and facilitation of adjustment are made.

Chapter II
Students’ Attitudes toward Process and Product Oriented Online Collaborative Learning ................ 15
Xinchun Wang, California State University, Fresno, USA

Although the pedagogical advantages of online interactive learning are well known, much needs to be done in the instructional design of applicable collaborative learning tasks that motivate sustained student participation and interaction. In a previous study based on a Web-based course offered in 2004, Wang
(2007) investigated the factors that promote sustained online collaboration for knowledge building. By providing new data from the same Web-based course offered in 2006 and 2007, this study investigates students’ attitudes toward process- and product-oriented online collaborative learning. The analysis of data from 93 post-course survey questionnaires shows that the overwhelming majority of students have a positive experience with online collaborative learning. Data also suggest that students are more enthusiastic about process-oriented tasks, while their attitudes toward product-oriented collaborative learning tasks are mixed.

Chapter III
Cognition, Technology, and Performance: The Role of Course Management Systems ...................... 35
Teresa Lang, Columbus State University, USA
Dianne Hall, Auburn University, USA

Development and sale of computer-assisted instructional supplements and course management system products are increasing. Textbook sales representatives use this technology to market textbooks, and many colleges and universities encourage the use of such technology. The use of course management systems in education has been equated to the use of enterprise resource planning software by large businesses. Research findings about the pedagogical benefits of computer-assisted instruction and course management systems are inconclusive. This study describes an experiment conducted to determine the benefit to students of using course management systems. The effects of cognition, learning styles, and computer attitude were considered and eliminated to better isolate any differences in performance. Student performance did not improve with the use of the technology.

Chapter IV
The Role of Organizational, Environmental and Human Factors in E-Learning Diffusion ................ 53
Kholerile L. Gwebu, University of New Hampshire, USA
Jing Wang, Kent State University, USA

Improvements in technology have led to innovations in training such as electronic learning (e-learning). E-learning aims to help organizations in their training initiatives by simplifying the training process and cutting costs. It also attempts to help employees in their learning processes by making learning readily accessible. Unfortunately, the diffusion of this innovation has not been as successful as was initially predicted. In this paper we explore the drivers behind the diffusion of e-learning. Apart from the factors investigated by previous research, we believe that one more dimension, human factors, should be taken into account when evaluating the diffusion of a training innovation, since learners are, to a large extent, the central issue of training. In the case of e-learning, we believe that motivation plays a key role in its diffusion.

Chapter V
Distance Education: Satisfaction and Success ..................................................................................... 71
Wm. Benjamin Martz, Jr., Northern Kentucky University, USA
Morgan Shepherd, University of Colorado at Colorado Springs, USA
• Almost 3.5 million students were taking at least 1 online course during the fall 2006 term.
• The 9.7% growth rate for online enrollments far exceeds the 1.5% growth of the overall higher education student population. (Allen and Seaman, 2007)
By 2006, the distance education industry was well beyond $33.6 billion (Merit Education, 2003). As with most markets, one of the keys to taking advantage of this growing market is customer satisfaction. Therefore, the greater the student satisfaction in a distance program, the more likely that program will be successful. This paper identifies 5 key components of satisfaction for distance education programs through a student satisfaction questionnaire and factor analysis. A questionnaire was developed using these variables and administered to 341 distance students. The results revealed 5 constructs for student satisfaction in a distance education program (Martz and Reddy, 2005; Martz and Shepherd, 2007). Using these factors as guidance, this paper extends those findings to provide some operational and administrative implications.

Chapter VI
Group Support Systems as Collaborative Learning Technologies: A Meta-Analysis ......................... 79
John Lin, National University of Singapore, Singapore
Yin Ping Yang, National University of Singapore, Singapore
Yingqin Zhong, National University of Singapore, Singapore

Computer-based systems have been widely applied to support group-related activities such as collaborative learning and training. The various terms accorded to this research stream include virtual teams, e-collaboration, computer-supported collaborative work, distributed work, electronic meetings, and so forth. A notable and well-accepted aspect in the information systems field is group support systems (GSS), the focus of this chapter. The numerous GSS studies have reported findings which may not be altogether consistent; an overall picture that synthesizes the findings accumulated over decades is much needed. This chapter presents a meta-analysis study aimed at gaining a general understanding of GSS effects. We investigate 6 important moderators of group outcomes in GSS experimental research, namely group size, task type, anonymity, time and proximity, level of technology, and the existence of facilitation. The results point to important conclusions about the phenomenon of interest; in particular, their implications vis-à-vis computer-supported collaborative learning technologies and use are discussed and highlighted along each dimension of the studied variables.

Section II
Design, Modeling, and Evaluation Issues

Chapter VII
Knowledge Flow and Learning Design Models towards Lifewide E-Learning Environments ......... 110
M.C. Pettenati, University of Florence, Italy
M.E. Cigognini, University of Florence, Italy
This chapter considers the affordances of social networking theories and tools in building new and effective e-learning practices. We argue that “Connectivism” (social networking applied to learning and knowledge contexts) can lead to a re-conceptualization of learning in which formal, non-formal, and informal learning can be integrated so as to build potentially lifelong learning activities which can be experienced in “personal learning environments”. In order to provide a guide for the design, development, and improvement of e-learning environments, as well as for the related learning activities, we provide a knowledge flow model and the consequent learning design model, highlighting the stages of learning, the enabling conditions, and possible technological tools to be used for the purpose. In the conclusion to the chapter, the derived model is applied in a possible scenario of formal learning in order to show how the learning process can be designed according to the presented theory.

Chapter VIII
An Agent-Based Framework for Personalized E-Learning Services ................................................. 130
Larbi Esmahi, Athabasca University, Canada

This paper provides an overview of personalized e-learning services and related technology, and presents a multi-agent system for delivering adaptive e-learning. We discuss the main issues related to personalization in e-learning: technology advancement and the shift in perception of the learning process, one-size-fits-all versus personalized services, and the adaptation process. The paper also provides an overview of the best-known implemented systems for adaptive e-learning, as well as a detailed description of the architecture and components of the proposed multi-agent framework. Finally, the paper concludes with some comments about the dimensions to consider for implementing personalization.

Chapter IX
Supporting Evolution of Knowledge Artifacts in Web Based Learning Environments ..................... 142
Dimitris Kotzinos, Institute of Computer Science, FORTH-ICS and Department of Geomatics and Surveying, TEI of Serres, Greece
Giorgos Flouris, Institute of Computer Science, FORTH-ICS, Greece
Yannis Tzitzikas, University of Crete and Institute of Computer Science, FORTH-ICS, Greece
Dimitris Andreou, Institute of Computer Science, FORTH-ICS, Greece
Vassilis Christophides, University of Crete and Institute of Computer Science, FORTH-ICS, Greece

The development of collaborative e-learning environments that support the evolution of semantically described knowledge artifacts is a challenging task. In this chapter we elaborate on usage scenarios and requirements for environments grounded on learning theories that stress collaborative knowledge creation activities. Subsequently, we present a comprehensive suite of services, comprising an emerging framework called Semantic Web Knowledge Middleware (SWKM), that enables the collaborative evolution of both domain abstractions and conceptualizations, and data classified using them. The suite includes advanced services for ontology change, comparison and versioning over a common knowledge repository offering persistent storage and validation.
Chapter X
Interface and Features for an Automatic ‘C’ Program Evaluation System ......................................... 168
Amit Kumar Mandal, IIT Kharagpur, India
Chittaranjan Mandal, IIT Kharagpur, India
Chris Read, Kingston University, UK

A system for automatically testing, evaluating, grading, and providing critical feedback for submitted ‘C’ programming assignments has been implemented. The interface and key features of the system are described in detail along with some examples. The system pays proper attention to monitoring a student’s progress and provides complete automation of the evaluation process, with fine-grained analysis. It also provides online support to both instructors and students, and is designed for service-oriented integration with a course management system using Web services.

Chapter XI
Evaluating Computerized Adaptive Testing Systems ......................................................................... 186
Anastasios A. Economides, University of Macedonia, Greece
Chrysostomos Roupas, University of Macedonia, Greece

Many educational organizations are trying to reduce the cost of exams, the workload, delays in scoring, and human errors. Organizations also try to increase the accuracy and efficiency of testing. Recently, most examination organizations have adopted Computerized Adaptive Testing (CAT) as the method for large-scale testing. This chapter investigates the current state of CAT systems and identifies their strengths and weaknesses. It evaluates 10 CAT systems using an evaluation framework of 15 domains categorized into 3 dimensions: Educational, Technical and Economical. The results show that the majority of the CAT systems give priority to security, reliability, and maintainability. However, they do not offer the examinee any advanced support and functionalities. Also, the feedback to the examinee is limited and the presentation of the items is poor.
Recommendations are made in order to enhance the overall quality of a CAT system. For example, alternative multimedia items should be available so that the examinee can choose his or her preferred media type. Feedback could be improved by providing more information to the examinee, or by providing information whenever the examinee wishes.

Chapter XII
Technology Integration Practices within a Socioeconomic Context: Implications for Educational Disparities and Teacher Preparation ................................................................................................... 203
Holim Song, Texas Southern University, USA
Emiel Owens, Texas Southern University, USA
Terry T. Kidd, University of Texas School of Public Health, USA

With the call for curricular and instructional reform, educational institutions have embarked on the process of reforming their educational practices to aid lower-SES students in their quest to obtain a quality education with the integration of technology. The study examined the socioeconomic disparities in teachers’ technology integration in the classroom as it relates to implementing technology interventions to support quality teaching and active student learning. This chapter provides empirical evidence of whether these disparities continue to exist, and their effects on student achievement in the classroom.

Chapter XIII
Utilizing Web Tools for Computer-Mediated Communication to Enhance Team-Based Learning ... 218
Elizabeth Avery Gomez, New Jersey Institute of Technology, USA
Dezhi Wu, Southern Utah University, USA
Katia Passerini, New Jersey Institute of Technology, USA
Michael Bieber, New Jersey Institute of Technology, USA

Team-based learning is an active learning instructional strategy used in the traditional face-to-face classroom. Web-based computer-mediated communication (CMC) tools complement the face-to-face classroom and enable active learning between face-to-face class times. This paper presents the results from pilot assessments of computer-supported team-based learning. The authors utilized pedagogical approaches grounded in collaborative learning techniques, such as team-based learning, and extended these techniques to a web-based environment through the use of computer-mediated communication tools (discussion web-boards). This approach was examined through field studies over the course of two semesters at a US public technological university. The findings indicate that perceptions of the team learning experience, such as perceived motivation, enjoyment, and learning, are higher in such a web-based CMC environment than in traditional face-to-face courses. In addition, our results show that perceived team members’ contributions impact individual learning experiences. Overall, Web-based CMC tools are found to effectively facilitate team interactions and achieve higher-level learning.
Chapter XIV Accessible E-Learning: Equal Pedagogical Opportunities for Students with Sensory Limitations............................................................................................................................. 233 Rakesh Babu, University of North Carolina at Greensboro, USA Vishal Midha, University of North Carolina at Greensboro, USA The transformation of the world into a highly technological place has led to the evolution of learning from the traditional classroom to e-learning, using tools such as course management systems (CMS). By its very nature, e-learning offers a range of advantages over traditional pedagogical methods, including improved physical access. It is particularly useful for people with sensory limitations, as it offers them a level playing field in learning. This study examines the accessibility, usability, and richness of CMS used for e-learning in institutions of higher education. A model is proposed that underscores the influence of accessibility, usability, and richness of the CMS, coupled with learning motivation, on the learning success as perceived by students with sensory limitations. The model is tested by surveying university students with sensory limitations about their views on the course management system used. The results suggest that accessibility and usability of a CMS have a positive influence on the learning success as perceived by students with sensory limitations.
Section III Tools and Applications Chapter XV Supporting Argumentative Collaboration in Communities of Practice: The CoPe_it! Approach........................................................................................................................ 245 Nikos Karacapilidis, University of Patras and Research Academic Computer Technology Institute, Greece Manolis Tzagarakis, Research Academic Computer Technology Institute, Greece Providing the necessary means to support and foster argumentative collaboration is essential for Communities of Practice to achieve their goals. However, current tools are unable to cope with the evolving stages of the collaboration. This is primarily due to the inflexible level of formality they provide. Arguing that a varying level of formality needs to be offered in systems supporting argumentative collaboration, this chapter proposes an incremental formalization approach that has been adopted in the development of CoPe_it!, a Web-based tool that complies with collaborative principles and practices, and provides members of communities engaged in argumentative discussions and decision making processes with the appropriate means to collaborate towards the solution of diverse issues. According to the proposed approach, incremental formalization can be achieved through the consideration of alternative projections of a collaborative workspace. Chapter XVI Personalization Services for Online Collaboration and Learning........................................................ 258 Christina E. 
Evangelou, Informatics and Telematics Institute, Greece Manolis Tzagarakis, Research Academic Computer Technology Institute, Greece Nikos Karousos, Research Academic Computer Technology Institute, Greece George Gkotsis, Research Academic Computer Technology Institute, Greece Dora Nousia, Research Academic Computer Technology Institute, Greece Collaboration tools can be exploited as virtual spaces that satisfy community members’ needs to construct and refine their ideas, opinions, and thoughts in meaningful ways, in order to successfully assist individual and community learning. More specifically, collaboration tools, when properly personalized, can help individuals articulate their personal standpoints in ways that can prove useful for the rest of the community to which they belong. Personalization services, when properly integrated into collaboration tools, can aid the development of learning skills, the interaction with other actors, as well as the growth of learners’ autonomy and self-direction. This work presents a framework of personalization services that has been developed to address the requirements for efficient and effective collaboration between online community members, which can act as a catalyst for individual and community learning.
Chapter XVII Computer-Aided Personalised System of Instruction for Teaching Mathematics in an Online Learning Environment.................................................................................................... 271 Willem-Paul Brinkman, Delft University of Technology, The Netherlands Andrew Rae, Brunel University, UK Yogesh Kumar Dwivedi, Swansea University, UK This chapter presents a case study of a university’s discrete mathematics course with over 170 students who had access to an online learning environment (OLE) that included a variety of online tools, such as videos, self-tests, discussion boards, and lecture notes. The course is based on the ideas of the Personalised System of Instruction (PSI), modified to take advantage of an OLE. Students’ learning is initially examined over a period of two years, and compared with that in a more traditionally taught part of the course. To examine students’ behaviour, learning strategies, attitudes, and performance, both qualitative and quantitative techniques were used in a mixed methodology approach, including in-depth interviews (N=9), controlled laboratory observations (N=8), surveys (N=243), diary studies (N=10), classroom observations, recording of online usage behaviour, and learning assessments. In addition, students’ attitudes and performance in two consecutive years in which PSI was applied to the entire course provide further understanding that is again in favour of PSI in the context of an OLE. This chapter aims to increase understanding of whether PSI, supported by an OLE, could enhance student appreciation and achievement, as the findings suggest. Chapter XVIII Social Software for Sustaining Interaction, Collaboration, and Learning in Communities of Practice...................................................................................................................... 
300 Sandy el Helou, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland Denis Gillet, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland Christophe Salzmann, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland Yassin Rekik, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland The École Polytechnique Fédérale de Lausanne is developing Web 2.0 social software, called eLogbook, designed for sustaining interaction, collaboration, and learning in online communities. This chapter describes the 3A model on which eLogbook is based, as well as the main services that the latter provides. The proposed social software has several innovative features that distinguish it from other classical online collaboration solutions. It offers a high level of flexibility and adaptability so that it can fulfill the requirements of various Communities of Practice. It also provides community members with ubiquitous access and awareness through its different interfaces. Finally, eLogbook strengthens usability and acceptability thanks to its personalization and contextualization mechanisms.
Chapter XIX Multimedia Authoring for Communities of Teachers.......................................................................... 317 Agnès Guerraz, INRIA Rhône-Alpes, France Cécile Roisin, INRIA Rhône-Alpes, France Jan Mikáč, INRIA Rhône-Alpes, France Romain Deltour, INRIA Rhône-Alpes, France One way of providing technological support for communities of teachers is to help participants produce, structure, and share information. As this information becomes more and more multimedia in nature, the challenge is to build multimedia authoring and publishing tools that meet the requirements of the community. In this chapter, the authors analyze these requirements and propose a multimedia authoring model and a generic platform on which specific community-oriented authoring tools can be realized. The main idea is to provide template-based authoring tools while keeping rich composition capabilities and smooth adaptability. The model is based on a component-oriented approach that homogeneously integrates logical, time, and spatial structures. Templates are defined as constraints on these structures. Compilation of References................................................................................................................ 334 About the Contributors..................................................................................................................... 367 Index.................................................................................................................................................... 375
Preface
This volume of the Advances in Web-Based Learning (AWBL) Book Series, entitled “Solutions and Innovations in Web-Based Technologies for Augmented Learning: Improved Platforms, Tools, and Applications”, includes a wide range of the most current research in the development of innovative Web-based learning solutions. It aims at providing in-depth coverage and understanding of issues related to the implementation and application of Web-based technologies for facilitating and augmenting learning in diverse contemporary organizational settings. In our present era, Web-based technologies offer numerous means for facilitating and enhancing learning in various contexts. Their exploitation should be in accordance with a series of technical, pedagogical, cognitive, social, cultural, and managerial parameters. This volume will assist researchers, educators, and professionals in understanding the necessary components of Web-based learning technologies and how best to adopt these elements into their own contexts, whether in classrooms, workgroups, communities, or world-wide organizations. Chapter I, “The Role of Learner in an Online Community of Inquiry: Responding to the Challenges of First-time Online Learners” by Martha Cleveland-Innes, Athabasca University (Canada), Randy Garrison, The University of Calgary (Canada), and Ellen Kinsel, Odyssey Learning Systems (Canada), reports on findings from a study of adjustments to online learning environments. Using pre- and post-questionnaires, students enrolled in entry-level courses in two graduate degree programs at Athabasca University, Canada, described their adjustments to online learning. Responses are analyzed in relation to the elements of cognitive, social, and teaching presence, which have been defined as core dimensions of learner role requirements in an online community of inquiry. 
Student comments provide understanding of the experience of first-time online learners, including the challenges, interventions, and resolutions that present themselves as unique incidents. Recommendations for the support and facilitation of adjustment are also made. Chapter II, “Students’ Attitudes toward Process and Product Oriented Online Collaborative Learning” by Xinchun Wang, California State University at Fresno (USA), focuses on the instructional design of applicable collaborative learning tasks that motivate sustained student participation and interaction. Drawing on data from a Web-based course offered in 2006 and 2007, this chapter investigates students’ attitudes toward process and product oriented online collaborative learning. The analysis of 93 post-course survey questionnaires shows that the overwhelming majority of students have a positive experience with online collaborative learning. The data also suggest that students are more enthusiastic about process oriented tasks, and that their attitudes toward product oriented collaborative learning tasks are mixed. Chapter III, “Cognition, Technology, and Performance: The Role of Course Management Systems” by Teresa Lang, Columbus State University (USA) and Dianne Hall, Auburn University (USA), describes an experiment conducted to determine the benefit to students of using course management systems. The effects of cognition, learning styles, and computer attitude were considered and eliminated to better
isolate any differences in performance. The data collected in this study supported the hypothesis that cognition influences performance; learning style was found not to influence performance. Cognition and learning styles are frequently cited in the literature as influencing performance both with and without technology, so this finding partially supports the literature. Moreover, empirical evidence to support the benefit of course management systems in the learning process was not found in this study. Chapter IV, “The Role of Organizational, Environmental and Human Factors in E-Learning Diffusion” by Kholekile Gwebu, University of New Hampshire (USA) and Jing Wang, Kent State University (USA), explores the factors that influence e-learning diffusion in contemporary corporations. Through the application of concepts from the literature on organizational change, innovation diffusion, and motivation, the authors attempt to assess the factors that influence the diffusion of e-learning in organizations. Among these factors are organizational variables such as Organizational Complexity (specialization, function differentiation, professionalism) and Bureaucratic Control (formalization, centralization, vertical differentiation). Much attention is given to the factor of employee motivation. Chapter V, “Distance Education: Satisfaction and Success” by Benjamin Martz, Northern Kentucky University (USA) and Morgan Shepherd, University of Colorado at Colorado Springs (USA), identifies five key components of satisfaction for distance education programs through a student satisfaction questionnaire and factor analysis. A questionnaire developed using these variables was administered to 341 distance students, and the results revealed five constructs for student satisfaction in a distance education program. Using these factors as guidance, the chapter extends those findings to provide some operational and administrative implications. 
Chapter VI, “Group Support Systems as Collaborative Learning Technologies: A Meta-Analysis” by John Lim, Yin Ping Yang, and Yingqin Zhong, National University of Singapore (Singapore), presents a meta-analysis study aimed at gaining a general understanding of Group Support Systems (GSS) effects. Six important moderators of group outcomes in GSS experimental research are investigated, namely group size, task type, anonymity, time and proximity, level of technology, and the existence of facilitation. The results point to important conclusions about the phenomenon of interest. Their implications with regard to computer-supported collaborative learning technologies and their use are discussed and highlighted along each dimension of the studied variables. Chapter VII, “Knowledge flow and learning design models towards lifewide e-learning environments” by Maria Chiara Pettenati and Elisabetta Cigognini, University of Florence (Italy), considers the affordances of social networking theories and tools in building new and effective e-learning practices. In order to provide a guide for the design, development, and improvement of e-learning environments, as well as for the related learning activities, this chapter proposes a knowledge flow model and a consequent learning design model, highlighting the stages of learning, the enabling conditions, and possible technological tools to be used. The proposed model is applied in a scenario of formal learning. Chapter VIII, “An Agent-Based Framework for Personalized E-Learning Services” by Larbi Esmahi, Athabasca University (Canada), provides an overview of personalized e-learning services and related technology, and presents a multi-agent system for delivering adaptive e-learning. The author discusses the main issues related to personalization in e-learning: technology advancement and the shift in perception of the learning process, one-size-fits-all vs. personalized services, and the adaptation process. 
The chapter also provides an overview of the best-known implemented systems for adaptive e-learning, as well as a detailed description of the architecture and components of the proposed multi-agent framework. Chapter IX, “Supporting Evolution of Knowledge Artifacts in Web-Based Learning Environments” by Dimitris Kotzinos, FORTH-ICS and TEI of Serres (Greece), Giorgos Flouris, FORTH-ICS (Greece), Yannis Tzitzikas, University of Crete and FORTH-ICS (Greece), Dimitris Andreou, FORTH-ICS (Greece), and Vassilis Christophides, University of Crete and FORTH-ICS (Greece), elaborates usage scenarios
and requirements for e-learning environments grounded in learning theories that stress collaborative knowledge creation activities. Subsequently, this chapter presents a comprehensive suite of services, comprising an emerging framework called Semantic Web Knowledge Middleware, that enables the collaborative evolution of both domain abstractions and conceptualizations, and of data classified using them. The proposed suite includes advanced services for ontology change, comparison, and versioning over a common knowledge repository offering persistent storage and validation. Chapter X, “Interface and Features for an Automatic ‘C’ Program Evaluation System” by Amit Kumar Mandal, IIT Kharagpur (India), Chittaranjan Mandal, IIT Kharagpur (India), and Chris Reade, Kingston University (United Kingdom), reports on an implemented system for automatically testing, evaluating, grading, and providing critical feedback for submitted ‘C’ programming assignments. The interface and key features of the system are described in detail, along with some examples. The system pays proper attention to the monitoring of a student’s progress and provides complete automation of the evaluation process. It also provides online support to both instructors and students, and is designed for service-oriented integration with a course management system using Web services. Chapter XI, “Evaluating Computerized Adaptive Testing Systems” by Anastasios Economides and Chrysostomos Roupas, University of Macedonia (Greece), investigates the current state of Computerized Adaptive Testing (CAT) systems and identifies their strengths and weaknesses. More specifically, this chapter evaluates ten CAT systems using an evaluation framework of 15 domains categorized into three dimensions: Educational, Technical, and Economical. The results show that the majority of the CAT systems give priority to security, reliability, and maintainability. 
However, they do not offer the examinee any advanced support and functionalities. Also, the feedback to the examinee is limited and the presentation of the items is poor. Recommendations are given in order to enhance the overall quality of a CAT system. Chapter XII, “Technology Integration Practices within a Socioeconomic Context: Implications for Educational Disparities and Teacher Preparation” by Holim Song, Texas Southern University (USA), Emiel Owens, Texas Southern University (USA), and Terry Kidd, University of Texas (USA), reports on a study performed in order to examine the socioeconomic disparities in teachers’ technology integration in the classroom as they relate to implementing technology interventions to support quality teaching and active student learning. This chapter provides empirical evidence of whether these disparities continue to exist and, if so, of their effects on student achievement in the classroom. Chapter XIII, “Utilizing Web Tools for Computer-Mediated Communication to Enhance Team-Based Learning” by Elizabeth Avery Gomez, New Jersey Institute of Technology (USA), Dezhi Wu, Southern Utah University (USA), Katia Passerini, New Jersey Institute of Technology (USA), and Michael Bieber, New Jersey Institute of Technology (USA), presents the results from pilot assessments of computer-supported team-based learning. The authors utilized pedagogical approaches grounded in collaborative learning techniques, such as team-based learning, and extended these techniques to a Web-based environment through the use of computer-mediated communication (CMC) tools. Their approach was examined through field studies during the course of two semesters at a U.S. public technological university. The findings indicate that Web-based CMC tools effectively facilitate team interactions and achieve higher-level learning. 
Chapter XIV, “Accessible E-Learning: Equal Pedagogical Opportunities for Students with Sensory Limitations” by Rakesh Babu, University of North Carolina at Greensboro (USA), and Vishal Midha, University of North Carolina at Greensboro (USA), examines the accessibility, usability, and richness of course management systems (CMS) used for e-learning in institutions of higher education. A model is proposed that underscores the influence of accessibility, usability, and richness of the CMS, coupled with learning motivation on the learning success as perceived by students with sensory limitations. The
model is tested by surveying university students with sensory limitations about their views on the course management system used. The results suggest that accessibility and usability of a CMS have a positive influence on the learning success as perceived by students with sensory limitations. Chapter XV, “Supporting Argumentative Collaboration in Communities of Practice: The CoPe_it! approach” by Nikos Karacapilidis, University of Patras and Research Academic Computer Technology Institute (Greece), and Manolis Tzagarakis, Research Academic Computer Technology Institute (Greece), argues that a varying level of formality needs to be offered in systems supporting argumentative collaboration. The chapter accordingly proposes an incremental formalization approach that has been adopted in the development of CoPe_it!, a Web-based tool that complies with collaborative principles and practices, and provides members of communities engaged in argumentative discussions and decision making processes with the appropriate means to collaborate towards the solution of diverse issues. Chapter XVI, “Personalization Services for Online Collaboration and Learning” by Christina Evangelou, Informatics and Telematics Institute (Greece), Manolis Tzagarakis, Research Academic Computer Technology Institute (Greece), Nikos Karousos, Research Academic Computer Technology Institute (Greece), George Gkotsis, Research Academic Computer Technology Institute (Greece), and Dora Nousia, Research Academic Computer Technology Institute (Greece), focuses on the integration of personalization services to collaboration support tools, the aim being to advance the development of learning skills, the interaction with other actors, and the growth of the learners’ autonomy and self-direction. 
This chapter presents a framework of personalization services that has been developed to address the requirements for efficient and effective collaboration between online community members, which can act as catalysts for individual and community learning. Chapter XVII, “Computer-Aided Personalised System of Instruction for Teaching Mathematics in an Online Learning Environment” by Willem-Paul Brinkman, Delft University of Technology (The Netherlands), Andrew Rae, Brunel University (United Kingdom), and Yogesh Kumar Dwivedi, Swansea University (United Kingdom), presents a case study of a university’s discrete mathematics course with over 170 students who had access to an online learning environment that included a variety of online tools, such as videos, self-tests, discussion boards, and lecture notes. Students’ learning is initially examined over a period of two years, and compared with that in a more traditionally taught part of the course. To examine students’ behaviour, learning strategies, attitudes, and performance, both qualitative and quantitative techniques were used in a mixed methodology approach, including in-depth interviews, controlled laboratory observations, surveys, diary studies, classroom observations, recording of online usage behaviour, and learning assessments. Chapter XVIII, “Social Software for Sustaining Interaction, Collaboration, and Learning in Communities of Practice” by Sandy El Helou, Denis Gillet, Christophe Salzmann, and Yassin Rekik, École Polytechnique Fédérale de Lausanne – EPFL (Switzerland), presents Web 2.0 social software, called eLogbook, which has been designed for sustaining interaction, collaboration, and learning in online communities. This chapter describes the 3A model on which eLogbook is based, as well as the main services that the latter provides. The proposed social software has several innovative features that distinguish it from other classical online collaboration solutions. 
Among others, it offers a high level of flexibility and adaptability so that it can fulfill the requirements of various Communities of Practice. It provides community members with ubiquitous access and awareness through its different interfaces, and strengthens usability and acceptability thanks to its personalization and contextualization mechanisms. Chapter XIX, “Multimedia Authoring for Communities of Teachers” by Agnès Guerraz, INRIA Rhône-Alpes (France), Cécile Roisin, INRIA Rhône-Alpes (France), Jan Mikáč, INRIA Rhône-Alpes (France), and Romain Deltour, INRIA Rhône-Alpes (France), proposes a multimedia authoring model and a generic platform on which specific community-oriented authoring tools can be realized. The main
idea is to provide template-based authoring tools, while keeping rich composition capabilities and smooth adaptability. The proposed model is based on a component-oriented approach integrating logical, time, and spatial structures, while templates are defined as constraints on these structures. The proper exploitation of Web-based technologies towards building responsive environments that motivate, engage, and inspire learners, and which are embedded in the business processes and human resources management systems of organizations, is highly critical. Accordingly, the research field of technology-enhanced learning receives continuously increasing attention. “Solutions and Innovations in Web-Based Technologies for Augmented Learning: Improved Platforms, Tools and Applications” provides cutting-edge research on a series of related topics and discusses its implications for the modern era’s broad concept of learning. Addressing diverse conceptual, social, and technical issues, this book provides professionals, researchers, and practitioners in the field with up-to-date research on developing innovative and more effective learning systems by using Web-based technologies. Nikos Karacapilidis Editor-in-Chief Advances in Web-based Learning Book Series
Section I
Augmenting Learning
Chapter I
The Role of Learner in an Online Community of Inquiry:
Responding to the Challenges of First-Time Online Learners Martha Cleveland-Innes Athabasca University, Canada Randy Garrison The University of Calgary, Canada Ellen Kinsel Odyssey Learning Systems, Canada
Abstract Learners experiencing an online educational community for the first time can explain the adjustment required for participation. Findings from a study of adjustment to online learning environments validate differences found in 3 presences in an online community of inquiry. Using pre- and post-questionnaires, students enrolled in entry-level courses in 2 graduate degree programs at Athabasca University, Canada, describe their adjustment to online learning. Responses were analyzed in relation to the elements of cognitive, social, and teaching presence, defined by Garrison, Anderson, and Archer (2000) as core dimensions of learner role requirements in an online community of inquiry. Five areas of adjustment characterize the move toward competence in online learning: interaction, self-identity, instructor role, course design, and technology. Student comments provide understanding of the experience of first-time online learners, including the challenges, interventions, and resolutions that present themselves as unique incidents. Recommendations for the support and facilitation of adjustment are made.
Introduction The move to online delivery in post-secondary education institutions has increased exponentially over the last decade. Early concerns were raised about the extent to which students would embrace online education. However, recent evaluation of student enrolment in online courses indicates much willingness to engage; optimistic online enrolment projections are now a reality, and there are indications that growth will continue. “Online enrolments continue to grow at rates faster than for the overall student body, and schools expect the rate of growth to further increase” (Allen & Seaman, 2004, Introduction, 3rd para.). As growth continues, more and more students will experience online education. Students will have to develop the new skills required to be competent online learners, and will modify behaviours from classroom learning to fit the online environment. The details of this adjustment process for learners new to this delivery method are still underexplored; “there is also (sic) a need for better understanding of students’ adaptation to online learning over time” (Wilson et al., 2003). Adaptation to the role of online learner can be understood by looking at the structure of the online pedagogical environment, or community of inquiry (Garrison, Anderson & Archer, 2000), at tenets of role theory (Blau & Goodman, 1995), and at how role change occurs (Turner, 1990). The integration of new behaviours into one’s role repertoire (Kopp, 2000) occurs in a context (Katz & Kahn, 1978) and through an intricate process of role taking, role exploration, and role making (Blau & Goodman, 1995). As the context of teaching and learning in online environments is very different from the long-standing classroom structure, it will act as a catalyst for role adjustment for individual students moving online. This chapter outlines the character of the adjustment made by such students, determined from a study of novice online learners. 
Students responded to open-ended questions before and after (pre and post) their first online experience; responses were coded and categorized according to adjustment to cognitive, social, and teaching presence. Within each of the presences, responses formed a pattern around activities and outcomes in the following thematic areas: interaction, instructor role, self-identity, course design, and technology. In addition, a process of meeting the challenges presented by this new environment is outlined. These data provide understanding of the experience of first-time online learners. Recommendations are made for incorporating this understanding into instructional design and facilitation in order to ease adjustment for learners new to the online environment.
LITERATURE REVIEW Online Community of Inquiry The community of inquiry model, originally proposed by Garrison, Anderson and Archer (2000), served as the conceptual framework around which to study online learning and learner adjustment. The theoretical foundation of this framework is based upon the work of John Dewey (1938). At the core of Dewey’s philosophy are collaboration, free intercourse, and the juxtaposition of the subjective and shared worlds. This is the essence of a community of inquiry. Consistent with his philosophy of pragmatism, Dewey (1933) viewed inquiry as a practical endeavour. Inquiry emerged from practice and shaped practice. Dewey’s work on reflective thinking and inquiry provided the inspiration for operationalizing cognitive presence and purposeful learning in the community of inquiry framework (Garrison & Archer, 2000). The other elements of the community of inquiry model, social presence and teaching presence, were derived from other educational sources, but are consistent with Dewey’s philosophy and the framework of a community of inquiry (Garrison & Anderson, 2003).
The Role of Learner in an Online Community of Inquiry
The community of inquiry framework has attracted considerable attention in higher education research. In particular, it has framed many studies of online learning. This speaks to both the importance of community in higher education and the usefulness of the framework and how its elements are operationalized. Moreover, the structural validity of the framework has been tested and confirmed through factor analysis (Arbaugh & Hwang, 2006; Garrison, Cleveland-Innes & Fung, 2004; Ice, Arbaugh, Diaz, Garrison, Richardson, Shea, & Swan, 2007). A review of the research using the framework and the identification of current research issues is provided by Garrison and Arbaugh (2007). An online community of inquiry, replete with interaction opportunities in several places of 'presence' (Garrison & Anderson, 2003), provides a supportive context for the re-development of the role of learner. The relationship among these dimensions is depicted in Figure 1. These are the core elements in an educational experience and key to understanding role adjustment. Cognitive, social and teaching presences represent the primary dimensions of role in an educational context; each has a character of its own in an online environment. Changes in cognitive, social and teaching presence, as a result of a new context and communication medium, will necessitate role adjustments by learners.

Figure 1.

Cognitive presence is defined "as the extent to which learners are able to construct and confirm meaning through sustained reflection and discourse …" (Garrison, Anderson & Archer, 2001, p. 11). Role adjustment here reflects the nature of the communication medium: spontaneous, verbal communication is supplanted by a reflective, text-based medium. This represents a radical departure from classroom interaction. A more precise and recorded form of communication, the text-based medium has the potential to support deep and meaningful learning outcomes. Social presence is defined as "the ability of participants to identify with the community (e.g., course of study), communicate purposefully in a trusting environment, and develop inter-personal relationships by way of projecting their individual personalities" (Garrison, in press). In addition to the general challenge of asynchronous written communication and its lack of non-verbal cues is the challenge of group identity. An essential characteristic of online learning is open communication and group cohesion. Social presence provides the capacity to communicate and collaborate. This requires that members identify with the group or class (Rogers & Lea, 2005). Since the educational experience is a social transaction, special consideration must be given to the social interactions and climate. Interpersonal and emotional communication should build over time. Social presence represents a major role adjustment in moving from a real-time, face-to-face classroom experience to a virtual community. Teaching presence is defined as "the design, facilitation and direction of cognitive and social processes for the purpose of realizing personally meaningful and educationally worthwhile learning outcomes" (Anderson, Rourke, Garrison, & Archer, 2001). It is what binds all the elements together in a purposeful community of inquiry. The properties of the online community also necessitate significant design changes and role adjustment for the teacher. Teaching presence must recognize and utilize the unique features of the medium, and structure and model appropriate learning activities. This translates into an experience and role that may not be at all familiar to the learner.
Role Adjustment

'Role' is used here as a sociological construct, defined as a collection of behavioral requirements associated with a certain social position in a group, organization or society (Kendall, Murray & Linden, 2000). At its most general level, role expectations are dictated by the social structure. Individuals who engage in the role are guided, through a process of socialization, to appropriate role performance. Socialization then refers to the "process by which people learn the characteristics of their group … (and) the attitudes, values and actions thought appropriate for them" (Kanwar & Swenson, 2000, p. 397). Under conditions of long-standing roles, individuals engage in 'role-taking' behavior, where observation and mimicry of role models allow those new to the role to 'practice' appropriate role behaviors. 'Role making' occurs as individuals construct aspects of the role with their own individual meanings and satisfying behaviors attached. This occurs under social conditions where such individual autonomy is allowed. It also occurs where role models are not readily available, and construction of the role is required. Such is the case for becoming an online learner. Because it is an adjustment from the more generalized role of learner, the responsibilities and requirements of working online are not readily apparent to those new to the role. The transition to, and adjustment in, the role of online learner is part of the current social climate in online learning. While maintaining the usual expectations and privileges attached to the role of learner, online learners add such things as:

• knowledge about, skill with and acceptance of the technology,
• new amounts and modes of communication with instructors, peers and administrators,
• increased levels of learner self-direction, and
• a new 'place' for learning in time (anytime, usually determined by the learner and their life circumstances) and space (anywhere, dependent upon equipment requirements).

An online community of inquiry is a distinct personal and public search for meaning and understanding. New roles are necessitated in an online community by the nature of the communication, which compels students to assume greater responsibility for and control over their learning.
As McLuhan observed, "each form of transport not only carries, but translates and transforms the sender, the receiver and the message" (McLuhan, 1995, p. 90). An asynchronous and collaborative learning community necessitates the adoption of personal responsibility and shared control. This goes to the heart of an online learning community and represents a significant shift from the information transmission of the lecture hall and the passive role of students. Thus, online learning communities demand role adjustments. This brings another need: to understand changes in responsibilities and roles. Differences in the required activities of online learning, in comparison to classroom-based face-to-face learning, result in new, required expectations and behaviors for learners. These new activities cluster into a pattern that is seen as the 'role' of online learner. The term role refers to the expected and generally accepted ways of behaving, acting and interacting (Knuttila, 2002). Taking on a role (e.g. teacher, mother, learner) involves learning what the expected behaviors are through a process of observation and trial-and-error attempts at the role (Collier, 2001). While the adoption and enactment of social roles is a standard, commonplace element of everyday experience, becoming an online learner has a unique characteristic. For many learners, role models for learning the required and expected activities are not present until one is already engaged in an online course (Garrison & Cleveland-Innes, 2003). Role acquisition is part of individuation in the experience of working online. Each online learner engages in the experience of learning online, and the process of role taking and role making occurs concurrently within the learning experience. From the perspective of the individual, learning online requires the development of competencies in the role of 'online learner'.
As a new social role, the pathway to competence will occur over time as the role becomes prevalent and normalized. In this early stage, online communities will contribute to the socialization process for those engaging in this new role. The result is a new role and a new identity for learners.
METHODOLOGY

Sample

Students participating in this study were enrolled in two graduate programs at Athabasca University. Students came from 19 distinct course groups over four terms. Two hundred and seventeen students from core courses normally taken early in each program, courses purposively selected in order to include the greatest number of novice online learners, agreed to participate. Of the 217 students consenting to participate, 150 returned both questionnaires; 33% male and 58% female (9% did not indicate gender). Respondents indicated their age as follows: 20-29 years – 10%, 30-39 years – 24%, 40-49 years – 43%, 50+ years – 23%. All courses were delivered using a combination of print and electronic media and online conferencing. The online conferencing component provided the opportunity for student engagement and group interaction. Required conference participation was used for assessment in some courses while it remained a voluntary activity in others.
Data Collection

This study used an instrument validated by Garrison, Cleveland-Innes and Fung (2004), based on the concepts of the community of inquiry model, to measure the extent of student identification with the behaviours, expectations and requirements of the role of online learner. Identical questionnaires were sent by email during the first two weeks of each term and again during the final two weeks of each term. The questionnaire collected both quantitative and qualitative data. Quantitative data was generated from 28 Likert-type scaled responses to statements derived from the community of inquiry model. After the scaled items, seven open-ended questions about the adjustment to online learning were presented. These questions were written and pre-tested with a select sample of students and faculty familiar with online learning. This qualitative data was analyzed and summarized in relation to the adjustment process to the role of online learner, and is the evidence used for this discussion. Forty-six percent of the participants reported this as their first experience in an online learning environment (n=73). This group is the focus for this paper, as the other respondents would have experienced their primary adjustment to online learning in previous courses. Written, detailed responses were gathered from open-ended questions related to activities and outcomes, becoming part of the online learning community, and the design and facilitation of online learning.
Data Analysis

The constant comparison method was used to code responses to questions; a progression of open, axial and confirmatory coding was employed. Five themes emerged from this process: interaction, instructor role, self-identity, course design, and technology. Definitions of these constructs are outlined below.

Interaction: Respondents identified issues such as quality, quantity, and value of dialogue with classmates and instructors, often comparing it to interaction in the face-to-face learning environment and describing their transition from verbal to written communication.

Instructor Role: Respondents commented on the visibility of the instructor in the conference forums and the quantity, quality and timing of feedback.

Self-identity: Respondents showed evidence of reflection on self-concept, learning style, personal needs, and increasing responsibility and ownership for learning.
Course Design: Respondents commented on the effectiveness of course structure and delivery and the availability of institutional support.

Technology: Respondents pointed out technology issues that may affect participation in the community of inquiry and slow their adjustment to the role of online learner.
FINDINGS

The purpose of this study was to assess the experiences of first-time online learners and their perceptions of the adjustment to online learning. Their responses to the open-ended questions reflect varying aspects of adjustment, clustering around the emergent themes of interaction, instructor role, self-identity, course design, and technology. These themes are explored in relation to cognitive, social and teaching presence in the online environment.

Cognitive Presence. Table 1 provides sample comments from first-time online learners regarding adjustments in cognitive presence in an online community of inquiry, specific to each of the themes (numbers indicate respondent ID codes). Learners voiced concern regarding their adjustment to contributing to online content discussions that lack the visual cues available in face-to-face interaction. Some mentioned their fear of being misunderstood or saying something wrong. First-time online learners also reported an adjustment to assuming more responsibility for their own understanding of the material without direct instruction from the professors. Concern was voiced that without more direction from the instructor, it became necessary to rely on fellow students for interpretation, and this could lead to uncertainty or dissatisfaction with learning outcomes. Several learners commented that their participation in online discussions was greater than in a traditional classroom where they were often shy and reluctant to speak up, while others reported a feeling of intimidation when they perceived that classmates had a greater understanding of the concepts or dominated the forum discussions.

Table 1. Adjustment in cognitive presence

Interaction: "At first, I hesitated in fear of saying something wrong (similar feelings in F2F situations as well). However, after receiving feedback from other colleagues, the online conference engagements became enjoyable and valuable from a learning outcome perspective." (#195)

Instructor Role: "I have found that it is more difficult to be sure that you understand the material in the online learning because there is little discussion with the prof. The prof seems to set up the lecture and then let us talk amongst ourselves with no interaction to let us know if we are on the right track." (#331)

Self-identity: "I feel that I don't have as much to offer as others, either because I have had a more limited scope of learning or life experiences or because I can be intimidated by huge thoughts from bright people." (#250)

Course Design: "Gaining equal participation and a common understanding in group work was a challenge. At the same time, it led to bonding between some group members. Group assignments early in the class helped to get us started quickly." (#335)

Technology: "I like asking questions, but I rarely do on-line. I like clarifying things, but I rarely do on-line. I like to participate in class, but I'm a slow typist, so I rarely do on-line." (#390)

Social Presence. In terms of social presence, first-time online learners expressed a need for time to feel comfortable communicating in a text-only environment and to adjust to expressing emotion and communicating openly in an environment that lacks the visual or other non-textual cues that provide context to communications in a face-to-face setting. Some appreciated opportunities to connect with one or a few other learners in a small-group activity, while others found this difficult to manage, particularly when one group member was dominant in the group. One learner expressed concern that "This mode of learning, however convenient for my schedule, is dehumanizing the learning process for me, and I'm not sure I'm happy with that" (#279). Sample comments on adjustment in social presence in an online community of inquiry are shown in Table 2.

Table 2. Adjustment in social presence

Interaction: "I did notice my emotional, social ability to communicate became easier and I felt more relaxed as the course progressed." (#197)

Instructor Role: "The only aspect (once again) that I found challenging was that I didn't really feel that I got a sense of 'knowing' the instructor, nor did he really get to 'know' me." (#421)

Self-identity: "I find that I am much more open and interactive on-line than I am in person… I am not able to 'hide in the corner' as I could in a live class." (#58)

Course Design: "I found the use of small working groups to be a positive way of getting people to interact with one another, allowing me to project myself as a 'real' person… It may be tougher to do this in the context of the larger class (i.e. those in other working groups.) One does not have the same degree of back and forth 'organizational' communication with these other people. I think I may be less of a three-dimensional person to these other people." (#284)

Technology: "Not ever having learned how to type may also be a factor as I [consider it] to be a handicap the same way someone who has difficulty expressing themselves verbally would." (#37)

Teaching Presence. Table 3 includes sample comments from first-time online learners regarding their adjustment to a changed teaching presence from past experience in face-to-face learning environments. Many indicated that a more visible teaching presence at the very beginning is desired to ease the adjustment from traditional learning environments to the online environment, where the instructor is more of a facilitator and guide. Some

Table 3. Adjustment to teaching presence

Interaction: "Once we were comfortable with his role as more of a guide and facilitator than an omni-present being, we were able to take more ownership for our role in the program and for our own investment in the course." (#407)

Instructor Role: "I'm certain that [the professor] reviewed the discussion threads regularly but he seemed more like the virtual 'fly on the wall' than an active participant." (#204)

Self-identity: "I personally felt that a little more input and guidance from the instructor might have removed some anxiety and stimulated some interaction on my part." (#197)

Course Design: "I think the instructor needs to be a very active participant at the beginning of the course. Everyone seems eager to talk to each other at the beginning (how many times did I log in on the first day to see if there was anything new?), and the instructor should tap into that by starting to focus that energy on the content." (#211)

Technology: "Most emails sent by my instructor disappeared and I did not know what I had to do." (#146)
reported that they adjusted by assuming more responsibility for their own learning outcomes, while others expressed concern that the learners were left to discuss content on their own without assistance from the instructor to let them know if they were on the right track. All three types of presence addressed in the online community of inquiry model are evident in the responses from new online learners. Adjustments were demonstrated in the identification of things that were unexpected or new, and in the response to that newness. Five components of the online environment emerged as themes in the adjustment process, separately for all three presences. During the first analysis, an interesting pattern was noted in the data: in some answers, respondents provided enough detail to demonstrate the process of adjustment. A second analysis was therefore performed, using the pre-identified codes of challenge, intervention and resolution. This coding structure identified any challenges reported in each presence, the intervening action on the part of the student or others, and the result that followed. Constant comparison, with inter-rater confirmation, was employed. Specific challenges and the resolution of these challenges provide another window on the adjustment process. Challenges are identified as those things students find difficult, uncomfortable or in any way problematic in the online learning environment. Interventions are any occurrence that ensued after the challenge, either deliberate or incidental. Resolution refers to what the students describe as happening afterward; in some cases the result was positive, while in other cases the outcome was unsatisfactory. Table 4 provides examples of the adjustment process, by presence.

Table 4. Adjustment process by presence

Cognitive presence

Challenge: At first, I hesitated in fear of saying something wrong …
Intervention: … after receiving feedback from other colleagues …
Result: … online conference engagements became enjoyable and valuable from a learning outcome perspective. (#195)

Challenge: I have found that it is more difficult to be sure that you understand the material online …
Intervention: … the prof seems to set up the lecture and then let us talk amongst ourselves …
Result: … with no interaction to let us know if we are on the right track. (#331)

Challenge: I feel that I don't have as much to offer as others, either because I have had a more limited scope of learning or life experiences or because I can be intimidated by huge thoughts from bright people. (#250)

Challenge: Gaining equal participation and a common understanding in group work was a challenge …
Intervention: … led to bonding between some group members …
Result: (such that) group assignments early in the class helped to get us started quickly. (#335)

Challenge: I like asking questions, I like clarifying things, I like to participate in class, but I'm a slow typist …
Result: … so I rarely do on-line. (#390)

Social presence

Challenge: My emotional, social ability to communicate …
Intervention: … as the course progressed …
Result: … became easier and I felt more relaxed. (#197)

Challenge: The only aspect (once again) that I found challenging was that I didn't really feel that I got a sense of 'knowing' the instructor, nor did he really get to 'know' me. (#421)

Challenge: I am not able to "hide in the corner" as I could in a live class.
Result: … I find that I am much more open and interactive on-line than I am in person. (#58)

Challenge: Small working groups [were] a positive way of getting people to interact with one another …
Result: … allowing me to project myself as a "real" person. (#284)

Challenge: Not ever having learned how to type may also be a factor as I [consider it] to be a handicap the same way someone who has difficulty expressing themselves verbally would. (#37)

Teaching presence

Challenge: Instructor role as more of a guide and facilitator than an omni-present being …
Intervention: … was something we had to get more comfortable with …
Result: Once we were, we were able to take more ownership for our role in the program and for our own investment in the course. (#407)

Challenge: A little more input and guidance from the instructor …
Result: … might have removed some anxiety and stimulated some interaction on my part. (#197)

Challenge: I think the instructor needs to be a very active participant at the beginning of the course.
Result: Everyone seems eager to talk to each other at the beginning (how many times did I log in on the first day to see if there was anything new?), and the instructor should tap into that by starting to focus that energy on the content. (#211)

Challenge: Most emails sent by my instructor disappeared.
Result: I did not know what I had to do. (#146)

DISCUSSION

It is clear that an adjustment is taking place for students engaging in online learning, and that students can articulate the processes of this adjustment when asked. The community of inquiry model provided a valuable heuristic device for organizing the character of this adjustment. In each element of presence, the same five themes identified areas where change is taking place. Evident in student descriptions of their experience is a unique orientation to five thematic areas, which together embody the experience of being an online learner. These five thematic areas do not act in isolation. Consider that technology use occurs within a particular course design and is more or less optimized by the role of the instructor. WebCT, for example, provides the opportunity to chat synchronously; if the course design doesn't require it, or the instructor doesn't provide time to use it, students may or may not experience this technological opportunity. As another example,
interaction can be fostered or hindered by the instructor’s ability to invite learners to participate, the learners’ sense of competence regarding the presentation of ideas in print, the technological possibilities for interaction, and the design of the course in question. In other words, all five themes can be examined separately as they relate to online learning, but must be considered in comprehensive relation to each other if we are to illuminate the student experience online.
Figure 2. Challenges, interventions and results for each theme (interaction, instructor role, self-identity, course design, technology) within cognitive, social and teaching presence

This is also the case as we examine all five themes in each area of presence. What emerges is a multidimensional perspective that must guide thinking as we design online courses to engage and support students as they adjust to, move into and become competent performing in the online community of inquiry. Technology, for example, has a unique role to play in each of cognitive, social and teaching presence. Social presence becomes possible as learners use the technology
to present themselves as individuals through the written word or verbal language. This use of the technology overlaps with, but is distinct from, cognitive presence, where intellectual reasoning is presented as a portion of individual identity. Teaching presence emerges where the technology allows for presentation of material, directions from self to others and the interpretation of material. The importance of teaching presence is demonstrated in differences noted across instructors. Evidence here affirms Garrison and Cleveland-Innes' (2005) premise that "teaching presence must be available, either from the facilitator or the other learners, in order to transition from social to cognitive presence" (p. 16). Without adequate support from the instructor, adjustment occurs without a clear point of reference to expectations. This lack may create a situation that fails to sustain interest and engagement. This supports previous evidence that learners without guidance operate remotely: "without instructor's explicit guidance and 'teaching presence,' students were found to engage primarily in 'serial monologues'" (Pawan, et al., 2003, p. 119). Teaching presence supports sustained, beneficial academic interaction; movement within the presences of the online community; and, for first-time online learners, points of reference regarding expectations in the adjustment to the online environment. Specific challenges and the resolution of these challenges demonstrate places where students are not prepared, and must respond or change to manoeuvre in the online environment. Challenges are any requirement that learners find difficult, uncomfortable or in any way problematic regarding the online learning environment. Interventions ensued after the challenge, deliberately or incidentally. Resolution happens afterwards, in some cases but not all.
Challenges may resolve themselves (#195), require intervention from the instructor (#331) or remain a challenge (#390); the latter, in particular, need attention by instructional designers or instructors.
CONCLUSION AND RECOMMENDATIONS

The adjustment process to the characteristics and requirements of online learning is not merely a matter of comfort or student satisfaction; it has practical and pedagogical implications as well. Much research demonstrates the authenticity of social, cognitive and teaching presence online (see, for example, Meyer, 2003; Shea, Pickett, & Pelz, 2004; and Swan, 2003). These elements are unique to the medium and will require established roles for learners and instructors. Competent online learners are essential to creating community and contributing to higher-order learning activities. "Balancing socio-emotional interaction, building group cohesion and facilitating and modeling respectful critical discourse are essential for productive inquiry" (Garrison, 2006, p. 7). Evidence supports the premise that students experience a dynamic adjustment to the role of online learner, made up of particular ways of behaving, acting and interacting (Knuttila, 2002). As the role of online learner is still undefined, students grapple with its requirements, looking to their own reasoning, other students and the instructor for direction about the right things to do. Adjustments occur in all three areas of presence, and each presence is both constrained and enabled by course design, technology, the instructor, personal self-identity and the interaction within the community. Attention to these online elements in relation to the 'getting up to speed' or adjustment for learners, each time they join an online community, will smooth this move to competence. In order to become present in the important functionalities of an online community of inquiry, adjustment must occur. Without adjustment to competence as an online learner, the learning process may be hindered. Support for students to move to a place of comfort and a sense of competence is of value.

Based on the comments of first-time online learners describing their adjustment to the online community of inquiry in terms of cognitive presence, social presence, and teaching presence, we recommend the following be incorporated into the instructional design and delivery of online courses in order to ease the adjustment to the role of online learner and enhance the elements that contribute to an effective community of inquiry:

• Acknowledge and make explicit the initial adjustment process, and provide a venue to identify challenges, consider interventions and ensure resolution. Professional development opportunities for instructors should focus on techniques for easing learner adjustment to the online learning environment. Instructors should learn to recognize indicators of adjustment and be prepared to suggest appropriate support services if required.
• Provide ample opportunity for those unfamiliar with the technology to gain skills and feel comfortable so they don't feel they are at a disadvantage vis-à-vis their classmates. An orientation conference forum moderated by experienced online learners and facilitators should be available well in advance of the course start date.
• Encourage participants to separate content-related dialogue from socializing. This can be accomplished by providing a café-style conference for those who enjoy chatting, while those who have limited time or prefer not to socialize can focus on content.
• Include greater instructor involvement at the beginning of introductory-level courses, but judiciously thereafter. This will ease the adjustment for first-time online learners as they assume greater responsibility for meeting learning outcomes, gain comfort in contributing to discussions often dominated by more experienced online learners, and become more confident in their new role without the immediate feedback from the instructor that occurs in the face-to-face classroom. Too much online intervention by the instructor can be intimidating and may decrease engagement.
• Request that participants limit the length of their conference postings to one screen, and insist that discussions remain focused on content other than in the online café. Time is an issue for distance learners, and adjusting to the role of online learner includes learning to balance that role with others in adult life, including work, family, and community.
• Provide the opportunity to establish group rules of netiquette. Ask individuals to make explicit their strategies for the use of emoticons and other expressions so everyone has access to the same tools.
• Request, through private email, that learners dominating online dialogue limit their postings to avoid intimidating novice conference participants. At the same time, private email can be used to encourage less active participants to contribute. One effective technique is to include learners' names in conference postings and replies in order to draw them further into the conversation. This is the equivalent of being called on in a face-to-face classroom.
Changed practice implies role adjustment for the instructors as well as the learners. Professional development activities that focus on the affective components of course delivery will enable instructors to ease the adjustment of the learners to online learning as well as increase their own comfort level and effectiveness.
Limitations

Participants in this study were new to online learning, but most were new to graduate study as well. The adjustment to online learning would occur in conjunction with adjustment to graduate study;
this needs to be considered in interpretation of findings. In addition, awareness of the requirements of online learning may have been created by completing the pre-questionnaire. This may have affected the adjustment process and student response to it.
FUTURE RESEARCH

This research program continued with a review of conference transcripts. This demonstrated adjustment to conference behaviour from course commencement to completion. This view of student activity provided general confirmation that what students say happened did happen. Further research will clarify the stages of adjustment for first-time online learners, and similarly for experienced online learners each time they begin a new course. Challenges and appropriate interventions in each of social, cognitive and teaching presence must be made explicit through research such that responses may be recommended. These responses will ultimately identify what must be in place to ensure complete and competent engagement for online students. It is this latter understanding that is a critical conclusion to this work on learner adjustment in online environments.
REFERENCES

Allen, I. E., & Seaman, J. (2004). Entering the mainstream: The quality and extent of online education in the United States, 2003 and 2004. Needham, MA: Sloan-C. Retrieved December 4, 2005, from http://www.sloan-c.org/resources/entering_mainstream.pdf

Anderson, T., Rourke, L., Garrison, D. R., & Archer, W. (2001). Assessing teaching presence in a computer conferencing context. Journal of Asynchronous Learning Networks, 5(2). Retrieved September 2005 from http://www.aln.org/alnweb/journal/jaln-vol5issue2v2.htm

Arbaugh, J. B., & Hwang, A. (2006). Does "teaching presence" exist in online MBA courses? The Internet and Higher Education, 9(1), 9-21.

Blau, J. R., & Goodman, N. (Eds.) (1995). Social roles & social institutions. New Brunswick: Transaction Publishers.

Collier, P. (2001). A differentiated model of role identity acquisition. Symbolic Interaction, 24(2), 217-235.

Dewey, J. (1933). How we think (rev. ed.). Boston: D.C. Heath.

Dewey, J. (1938). Experience and education (7th printing, 1967). New York: Collier.

Garrison, D. R. (2006). Online community of inquiry review: Understanding social, cognitive and teaching presence. Invited paper presented to the Sloan Consortium Asynchronous Learning Network Invitational Workshop, Baltimore, MD, August.

Garrison, D. R., & Anderson, T. (2003). E-learning in the 21st century: A framework for research and practice. London: Routledge/Falmer.

Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues, and future directions. Internet and Higher Education, 10(3), 157-172.

Garrison, D. R., & Archer, W. (2000). A transactional perspective on teaching-learning: A framework for adult and higher education. Oxford, UK: Pergamon.

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. The Internet and Higher Education, 2(2-3), 87-105.

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7-23.

Garrison, D. R., & Cleveland-Innes, M. (2003). Critical factors in student satisfaction and success: Facilitating student role adjustment in online communities of inquiry. In J. Bourne & J. Moore (Eds.), Elements of quality online education: Into the mainstream (Vol. 4 in the Sloan-C Series, pp. 29-38). Needham, MA: Sloan-C.

Garrison, D. R., & Cleveland-Innes, M. (2005). Facilitating cognitive presence in online learning: Interaction is not enough. American Journal of Distance Education, 19(3), 133-148.

Garrison, D. R., Cleveland-Innes, M., & Fung, T. (2004). Student role adjustment in online communities of inquiry: Model and instrument validation. Journal of Asynchronous Learning Networks, 8(2), 61-74. Retrieved December 2004 from http://www.sloan-c.org/publications/jaln/v8n2/pdf/v8n2_garrison.pdf

Ice, P., Arbaugh, B., Diaz, S., Garrison, D. R., Richardson, J., Shea, P., & Swan, K. (2007). Community of inquiry framework: Validation and instrument development. The 13th Annual Sloan-C International Conference on Online Learning, Orlando, November.

Kanwar, M., & Swenson, D. (2000). Canadian sociology. Iowa: Kendall/Hunt Publishing Company.

Katz, D., & Kahn, R. (1978). The social psychology of organizations. New York: Wiley.

Kendall, D., Murray, J., & Linden, R. (2000). Sociology in our times (2nd ed.). Ontario: Nelson Thompson Learning.

Knuttila, M. (2002). Introducing sociology: A critical perspective. Don Mills, Ontario: Oxford University Press.

Kopp, S. F. (2000). The role of self-esteem. LukeNotes, 4(2). Retrieved September 2005 from http://www.sli.org/page80.html

McLuhan, M. (1995). Understanding media: The extensions of man. Cambridge: The MIT Press.

Meyer, K. A. (2003). Face-to-face versus threaded discussions: The role of time and higher-order thinking. Journal of Asynchronous Learning Networks, 7(3), 55-65.

Rogers, P., & Lea, M. (2005). Social presence in distributed group environments: The role of social identity. Behaviour & Information Technology, 24(2), 151-158.

Shea, P., Pickett, A., & Pelz, W. (2004). Enhancing student satisfaction through faculty development: The importance of teaching presence. In Elements of quality online education: Into the mainstream. Needham, MA: SCOLE (ISBN 0-9677741-6-0).

Swan, K. (2003). Developing social presence in online discussions. In S. Naidu (Ed.), Learning and teaching with technology: Principles and practices (pp. 147-164). London: Kogan Page.

Turner, J. (1990). Role change. Annual Review of Sociology, 16, 87-110.

Wilson, D., Varnhagen, S., Krupa, E., Kasprzak, S., Hunting, V., & Taylor, A. (2003). Instructors' adaptation to online graduate education in health promotion: A qualitative study. Journal of Distance Education, 18(2), 1-15.
Chapter II
Students’ Attitudes toward Process and Product Oriented Online Collaborative Learning Xinchun Wang California State University, Fresno, USA
ABSTRACT

Although the pedagogical advantages of online interactive learning are well known, much needs to be done in the instructional design of applicable collaborative learning tasks that motivate sustained student participation and interaction. In a previous study based on a Web-based course offered in 2004, Wang (2007) investigated the factors that promote sustained online collaboration for knowledge building. By providing new data from the same Web-based course offered in 2006 and 2007, this study investigates students' attitudes toward process- and product-oriented online collaborative learning. The analysis of 93 post course survey questionnaires shows that the overwhelming majority of students had a positive experience with online collaborative learning. The data also suggest that students are more enthusiastic about process-oriented tasks, while their attitudes toward product-oriented collaborative learning tasks are mixed.
INTRODUCTION

The pedagogical advantages of student interaction in collaborative construction of knowledge are grounded in the social constructivist perspective of learning. From the social constructivist
perspective, all learning is inherently social in nature. Knowledge is discovered and constructed through negotiation, or collective sense making (Duin & Hansen, 1994; Kern, 1995; Wang & Teles, 1998; Wu, 2003). Pedagogically sound tasks in an online learning environment should, therefore, reflect social learning and facilitate
interactive learning and collaborative construction of knowledge.
Interactive Learning and Factors Influencing Online Collaboration

From a student's perspective, online interaction in learning takes place at two different levels: interaction with the content, including interactive computer software and multimedia systems, and interaction with instructors and between peers (Evans & Gibbons, 2007; Gao & Lehman, 2003). There is evidence that pedagogically well-designed interactive learning tasks actually increase rather than decrease student access to instructors; increase interactions between instructors and students and among students; and increase student involvement with course content as well (Lavooy & Newlin, 2003; Mouza, Kaplan, & Espinet, 2000; Wu, 2003). Interactive learning tasks also promote greater equality of participation (Mouza, Kaplan, & Espinet, 2000) and more extensive opinion giving and exchanges (Sumner & Hostetler, 2002), empower shy students to participate, and promote more student-centered learning (Kern, 1995; Wang & Teles, 1998). At the level of interaction with content, students benefit more from producing explanations than from receiving explanations. Such proactive learning engages students in a higher level of thinking than reactive learning does (Gao & Lehman, 2003; Wu, 2003). Additionally, students who report high levels of collaborative learning in an online course tend to be highly satisfied with their learning, and they also tend to perceive high levels of social presence in the course (So & Brush, 2007). Despite these advantages, research also indicates that online interactive learning and collaboration are not always sustainable, and students' participation in Computer Mediated Communication (CMC) tasks may wane after the assessed tasks that require the postings are completed (Macdonald, 2003). In a survey on
college students' attitudes toward participation in electronic discussions, Williams and Pury (2002, p. 1) found that "contrary to much literature on electronic collaboration suggesting students enjoy online collaboration, our students did not enjoy online discussion regardless of whether the discussion was optional or mandatory." Like any other form of learning, learning collaboratively in an online course is characterized by individual differences. Collaboration, as a process of participating in knowledge communities, is not an equal process for all members of the community (Leinonen, Järvelä, & Lipponen, 2003). Much needs to be done to explore factors that promote sustained student interest in online interactive learning and collaboration. One challenge for developing sustainable online collaborative learning tasks lies in the nature of the CMC system itself. Although CMC supports interaction and collaborative learning, it also has inherent shortcomings. Disadvantages include the time it takes to exchange messages, the increased difficulty of expressing ideas clearly in a context-reduced learning environment, the difficulty of coordinating and clarifying ideas (Sumner & Hostetler, 2002), and the increased time it takes to reach consensus and decisions (Kuhl, 2002; Sumner & Hostetler, 2002) and to produce a final product (Macdonald, 2003). Given all the difficulties students need to overcome in order to collaborate effectively in an interactive learning environment, online instructors need to address these obstacles with careful instructional design and provide support for collaborative learning through appropriate interactive learning tasks. Research has also shown that computer mediated communicative tasks require a more active role of students than traditional instruction in the face-to-face environment does (Wang & Teles, 1998).
Students need to be willing to send a formal written question rather than have a casual conversation with peers or with the instructor in order to have their questions answered (Kuhl, 2002). To communicate effectively with peers
and the instructor, students need to create the context through written messages, which requires the writing skills to identify their problems and express them precisely in order to have their questions answered. Teamwork and negotiation for meaning are necessary skills in CMC that cannot be assumed. Students need to become familiar with the discourse of the discipline and the academic genre of online synchronous and asynchronous forums (Kuhl, 2002; Macdonald, 2003). In addition to online negotiation skills, previous research has identified a number of other factors that influence student participation and interaction in a web-based learning environment. Among others, the assessment of collaborative learning tasks plays a crucial role in ensuring student participation (Kear, 2004; Kear & Heap, 1999; Macdonald, 2003), and it directly influences the level of participation (Wang, 2007). In general, assessed collaborative learning tasks attract student participation at the cost of unassessed tasks. Furthermore, grades for discussion were also positively related to students' perceived learning (Jiang & Ting, 2000). The structure of discussion in CMC is found to be another important factor in determining the amount of participation and the level of interaction and collaboration among peers. Such structure includes the size of the discussion groups, the nature and types of discussion topics (Williams & Pury, 2002), and whether the collaboration emphasizes the process of learning, the end product of such collaboration, or both (Kear, 2004; Kear & Heap, 1999; Macdonald, 2003; Wang, 2007). Research also indicates that student facilitators play an important role in attracting their peers' participation in online group discussions, and the success of such roles is closely related to the depth of the discussion threads, which often lead to six or more rounds of student postings (Hew & Cheung, 2007).
The interaction level between the students and the teacher and among the students was found to be a significant factor
in determining the effectiveness of the teaching method (Offir, Lev, & Bezalel, 2007). In addition, class size and the level of participation in terms of note writing and reading are also found to be related. Data show that large classes are associated with an increase in the number of notes written, a decrease in average note size, and an increase in note scanning rather than reading (Hewitt & Brett, 2007). To summarize, although the pedagogical advantages of online collaborative learning are commonly recognized, such learning is sometimes difficult to sustain due to a number of factors. Among others, online negotiation skills, the direct link between collaborative tasks and assessment, the structure of online discussions such as the nature and types of discussion topics, the size of the group, and the differences between process and product oriented collaborative tasks are some of the factors that influence student participation, interaction, and collaboration.
Process and Product Oriented Online Collaborative Learning

Online collaboration can be either process or product oriented. Forum discussions regarding course contents or related issues are commonly process oriented, as the sharing of ideas helps learners understand the issues without necessarily leading to a final product. Students are assessed individually based on their participation and the quality of their contributions. Alternatively, online interaction and collaboration may lead to a final product such as an essay, a project, or a webpage. There can be two assessment elements to such tasks: a common grade for the group for the overall quality of the collaborative product, and individual grades for the contribution of each individual to the collaborative endeavor (Kear, 2004; Kear & Heap, 1999; Macdonald, 2003). The similarities and differences of process and product oriented online collaborative learning tasks are summarized in Table 1.
Table 1. Similarities and differences between process and product oriented online collaborative learning tasks

Process oriented tasks | Product oriented tasks
Exchange of views to share ideas that may or may not lead to agreements | Exchange of views that are consensus building to reach agreements
No end product | End product: a project, report, etc.
Relatively easy to interact and share views | Difficult to reach agreement by a time line
Individual grade | Common and/or individual grade
For a product oriented collaboration, simply assigning learners to work on a group project does not necessarily mean that they will work collaboratively. Learners tend to use a task specialization approach in which tasks are divided among group members, and they may therefore not look for opportunities to develop mutual engagement, knowledge and skill exchange, and interpersonal communication skills (So & Brush, 2007). The instructional design of product oriented online collaborative learning tasks therefore needs to take measures to ensure real collaboration among peers.
The Study

In a previous study based on a web-based course offered in 2004, Wang (2007) investigated the factors that promote sustained online collaboration. The current study extends the previous study by providing new data from the same web-based course offered in 2006 and 2007. In addition to investigating the factors that promote sustained collaborative learning in a web-based course that uses an asynchronous conferencing system for its main learning tasks (Wang, 2007), this study further investigates students' attitudes toward collaborative learning. In particular, it investigates learners' perspectives on process and product oriented online collaborative learning tasks. Both types of interactive learning activities were implemented as the main learning tasks of the web-based course under study.
The research questions are:

1.	What are students' overall attitudes toward collaborative learning as the main learning tasks in a web-based course?
2.	Are there any differences in student attitudes toward process and product oriented online collaborative learning tasks? If so, what are the factors that influence students' different perspectives toward such tasks?
3.	What pedagogical implications do the findings have?
COURSE INFORMATION AND DATA COLLECTION

Course Information

The course under study is an upper division general education course on Bilingualism and Bilingual Education delivered entirely on Blackboard at a state university in California. In a previous study (Wang, 2007), data were collected from a post course questionnaire in the Spring and Fall 2004 semesters, when this web-based course was first offered. A total of 60 students completed the course: 22 in the Spring Semester class, and 20 and 18 students in the two Fall Semester classes. Fifty-three of the students completed the post course survey questionnaire, and the results were reported in the previous study (Wang, 2007). The current study further investigates students' attitudes toward online collaborative learning with an additional 93 survey questionnaires completed by the students who took the same course during the Fall 2006 and 2007 semesters. The course structure remained unchanged for the most part. Forum discussions on course readings and related issues formed the core interactive learning activities and constituted 45% of the course grade. These were process oriented interactive learning tasks for which individual grades were assigned to each student based on the quantity and quality of their postings in the forums. Small groups of 4-6 people were formed at the beginning of the semester for the weekly asynchronous group forums. During the 16 week semester, a total of 18 discussion forums were completed in each online group. For each forum, the instructor assigned a reading chapter along with comprehension questions and discussion topics to help the students grasp the contents. Students divided the reading questions among themselves in their groups and posted the answers to each question in the first round of postings. They were also required to comment on at least one peer's answers in a second round of postings to carry on the discussions. To ensure participation, strict deadlines for each round of postings were enforced, and each student's answers and comment messages were assessed by the instructor, who assigned up to 3% of the course grade for participation in each discussion forum. The other major collaborative task was a product oriented group project that constituted 12% of the course grade, for which all the students in the same group received a common grade based on the level of collaboration and the quality of the final written report. There was no individual assessment component for the group project. The interdependent grading (a common grade for all members of a group only) was aimed at promoting more collaboration among the peers to produce a true collaborative product with individual contributions.
The group project was closely related
to one of the course themes, on types of bilingual education programs. Each student was required to visit a local school and interview a bilingual teacher to gain first hand information about bilingual education programs implemented in California. Students then shared and synthesized the interview data to produce a group report. They were not required to meet face-to-face for the group project, but they exchanged information in an online forum that was mostly procedural, used to plan, negotiate, reach agreement, and produce the final product. The process of planning and producing the project required negotiation, cooperation, and collaboration among peers to arrive at a consensus and produce a report. Though the forums themselves were not graded, the progress of each group in the online forums was closely monitored by the instructor. For the first two semesters, when the course was offered in 2004 (Wang, 2007), the only deadline imposed was the final deadline for submitting the group project, to ensure the completion of the work. During the Fall 2006 and 2007 semesters, when the course was offered again (two classes in each semester), strict intermittent deadlines for each step (such as the completion of the interview, the posting of interviews in the Group Forums, and the completion of the first draft of the group project) were enforced in addition to the final deadline for the submission of the Group Project. These measures were taken to ensure that "slow" students stayed on course and completed their tasks according to schedule at every step. Other course activities included two individual written assignments (8%) and three online exams (35%) that assessed the learning outcomes of the course readings and group discussions. Table 2 summarizes the course activities and grading.
Data Collection: Post Course Survey Data
At the end of the semester, an online survey was administered in each class to collect information about students’ learning experience and
Table 2. Course activities and grading

Activities | Grading | Description
Weekly group forums | 45% | Structured discussions on course readings
Weekly class forums | 0% | Required postings of moderator's summaries from each weekly group forum (Spring Semester class only)
Group project | 12% | Final product graded interdependently (same grade for each member of the group)
Individual assignments | 8% | No interaction among students required
Three exams | 35% | Online exams on course contents to assess outcome of learning
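Since the graded components in Table 2 sum to 100% (45 + 12 + 8 + 35, with the ungraded class forums contributing 0%), a final grade is simply a weighted average of per-component scores. The following is an illustrative sketch only; the function name, dictionary keys, and sample scores are our own, not part of the course materials:

```python
# Course-grade weights taken from Table 2 (ungraded class forums omitted).
WEIGHTS = {
    "group_forums": 0.45,   # weekly structured discussions
    "group_project": 0.12,  # interdependently graded final product
    "assignments": 0.08,    # two individual written assignments
    "exams": 0.35,          # three online exams
}

def final_grade(scores):
    """Weighted average of component scores given on a 0-100 scale."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Hypothetical student: strong forum participation, weaker exam scores.
grade = final_grade({"group_forums": 90, "group_project": 80,
                     "assignments": 85, "exams": 75})
print(round(grade, 2))  # -> 83.15
```

The heavy forum weighting (45%) is what ties the assessment directly to participation, the design choice the chapter identifies as crucial for sustaining collaboration.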
their attitudes toward the course, in particular their experience with online collaboration in both the weekly conference discussions and the group project. The survey questionnaire, which consisted of 17 multiple choice questions and 4 open-ended questions (see Appendix), was uploaded to the survey area of the course on Blackboard. Students were able to access and complete the survey questionnaire anonymously during the week after the final exam. Blackboard automatically calculated the results of the multiple choice questions as percentages. The transcripts of the survey responses for all three classes were printed out for analysis. Ninety-three students completed the survey questionnaire; the analysis of the survey data is therefore based on these 93 completed questionnaires. These new data were also analyzed alongside the 53 questionnaires reported in the Wang (2007) study, which were based on the same course offered in the Spring and Fall of 2004.
RESULTS

Students' Attitudes toward Collaborative Learning

Table 3 presents students' responses to the question "what are your thoughts about the structure of the course?" Students who took this web-based course overwhelmingly preferred the interactive learning structure of the course to weekly quizzes when given the choice. Although the percentage of students who preferred weekly quizzes based on the readings increased from 8% in 2004 to 24% in 2006 and 2007, the majority still preferred the current course structure. An additional question about the learners' overall experience with this web-based course was included in the survey for the 2006 and 2007 classes, and the results are summarized in Table 4. Ninety-eight percent of the students reported that their experience with this web-based course was "very positive" or "positive". Therefore, even
Table 3. Students' responses to "what are your thoughts about the structure of the course?"

Choices | 2004, N=53 | 2006-2007, N=93
I like the way the course is structured in terms of forum discussions because we learn from each other. | 92.5% | 76%
I prefer weekly quizzes based on the readings rather than answering questions and joining the group discussions. | 7.5% | 24%
Table 4. Students' responses to the question "my experience with this web-based course:" (N=93)

Choices | 2006-2007, N=93
Is very positive | 53%
Is positive | 44%
Is negative | 1%
Is very negative | 1%
though about a quarter of the students would have preferred the less interactive "weekly quizzes" type of learning if given the choice, they still reported that their overall experience with the course was "positive" or "very positive". What factors encouraged students to participate in this form of active and interactive learning throughout the semester? Did the students really think that they learned from building on each other's insights? What were the effects of such learning as reflected by students' responses in the survey data? The survey questionnaire addressed these issues in a number of questions. Table 5 summarizes students' responses regarding the effectiveness of group discussions. Chi square analyses of students' responses to the questions in Table 5, along the scale of strongly agree to strongly disagree, were all significant beyond the 0.0001 level. About 90% of the students agreed or strongly agreed that answering questions and participating in discussions helped them understand the readings better and that online discussion was helpful because they collaborated more and learned more from each other. Additionally, 72% of the students responded that they learned more from online discussions than they would have learned from lectures. Furthermore, 89% of the students responded that group cohesion and mutual trust were an important factor in their group.
Level of Participation

Students' participation in group discussions was not only required but also directly linked to the assessment of their postings in the forums.
Table 5. Students' views about group discussions (N=53); responses are percentages across the agreement scale

Survey Questions | Strongly agree | Agree | Disagree | Strongly disagree | Chi²
My answers to the questions and comments on peers' messages help me to understand the readings better. | 30% | 62% | 8% | 0% | 49.717
My peers' answers/comments helped me understand the readings better. | 32% | 57% | 11% | 0% | 39.453
I learned more from online discussions than I would have learned from lectures. | 25% | 47% | 25% | 2% | 21.792
The online discussion is helpful because we collaborate more and learn from each other more. | 38% | 55% | 6% | 2% | 41.415
The group cohesion and mutual trust is an important factor in our group. | 53% | 36% | 11% | 0% | 36.132
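The Chi² values in Table 5 are consistent with a goodness-of-fit test of the observed response counts against a uniform expectation over the four categories. For the first question, the percentages (30%, 62%, 8%, 0% of N = 53) correspond to counts of roughly 16, 33, 4, and 0, which reproduce the reported 49.717. A minimal Python sketch (the function name is ours, and the uniform-expectation test is our inference about how the statistics were computed):

```python
# Goodness-of-fit chi-square of observed counts against a uniform expectation.
def chi_square_uniform(observed):
    """Chi-square statistic for observed counts vs. equal expected counts."""
    n = sum(observed)
    expected = n / len(observed)  # uniform: N/k responses per category
    return sum((o - expected) ** 2 / expected for o in observed)

# Table 5, first question: 30%, 62%, 8%, 0% of N = 53 -> counts 16, 33, 4, 0
print(round(chi_square_uniform([16, 33, 4, 0]), 3))  # -> 49.717

# Second question: 32%, 57%, 11%, 0% -> counts 17, 30, 6, 0
print(round(chi_square_uniform([17, 30, 6, 0]), 3))  # -> 39.453
```

With 3 degrees of freedom, values this large are indeed significant well beyond the 0.0001 level reported in the text.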
The required postings and their assessment appeared to play an important role in motivating the students to participate in the discussions. Table 6 summarizes students' responses about the level of participation in their group discussions if the postings had not been required and graded. The new data from the 2006 and 2007 classes were more or less the same as the data from when the course was first offered in 2004. Overall, 45%-51% of the students responded that they would post some but not as many messages, 21% said they would post very few, and 6%-8% responded that they would not post any messages at all. Only 21%-28% (an increase of 7% in the new data) responded that they would post the same number of messages. One might argue that the survey data may not reflect the real level of participation in discussions if the postings were not required or assessed, because all the postings in this course were actually required and assessed. A firm claim about the effect of assessment on forum contributions would therefore have to be tested with a treatment group whose postings were assessed, compared with a control group whose postings were optional and unassessed. Nevertheless, students' responses to this survey question still reflect the "if not" situation, because they had just completed the weekly postings for the entire semester and this learning experience would certainly inform their responses. The "if not assessed" situation was thus contrasted against the real situation of "assessed" postings. In addition to the number of postings, the amount of time spent reading peers' messages in an asynchronous forum also reflects the level of participation in collaborative learning. When the course was offered in 2006 and 2007, the survey questionnaire asked an additional question about the reading of peers' messages in the forums, also an important indication of interactive learning. The results are summarized in Table 7. Thirty-seven percent of the students responded that they read all the messages posted by their peers, while 47% read most but not all the messages. Adding these two items together, the overwhelming majority of the students, 84%, read
Table 6. Students' responses to "would you post the same number of messages as you actually did over the semester if these postings were optional, not required or graded?"

Choices | 2004, N=54 | 2006-2007, N=93
Yes, I will post the same number of messages | 21% | 28%
I will post some messages but not as many | 51% | 45%
I will post very few messages | 21% | 21%
I will not post any messages | 8% | 6%
Table 7. Students' responses to "In our group discussions," (N=93)

Choices | 2006-2007, N=93
I read all the messages posted by my peers | 37%
I read most but not all the messages by my peers | 47%
I read some but not all the messages | 10%
I do not read these messages often | 6%
most if not all the postings in their groups. Still, 10% of the students read only some messages, while 6% of them responded that they did not read these messages often. Unlike the required postings by the deadlines in an asynchronous forum, it is relatively more difficult to enforce the reading of each posting in such a learning environment. Therefore, the request to make comments on peers’ messages in a required second round of posting is the only direct measure to enforce the reading. Still, 16% of the students read only some of the messages. It is very likely that these students chose to read only the one or two peers’ postings on which they made comments. Previous studies have examined the relationship between group size and the amount of reading of postings in online asynchronous forums. It is important to point out that group size and the number of postings in a forum may have a direct impact on the number of messages the students actually chose to read. The students’ self-reported amount of reading of messages will be further discussed in the next section.
Group Formation
Table 8 summarizes students’ responses to the question on group formation. The 2004 data were based on the 37 questionnaires from the Fall semester students, as the Spring 2004 semester course did not include this question in the survey. No major differences were observed
in the new data from when the course was offered in 2006 and 2007. There was a slight increase in the percentage of students, from 8% to 15%, who wanted to work with different people in a group every few weeks. Almost the same percentage of students, 30% vs. 31% for the two sets of data, responded that it made no difference to them whether they worked with the same or different people in a group throughout the semester. More than half of the other students responded that they preferred to work with the same people for their group discussions because they knew each other better, and the number was even higher (62%) for the Fall 2004 class. It appears that the group as a community for online learning established deep roots in this course. Except for some course-related general forums in which questions regarding course activities were exchanged, students generally did not have access to the majority of the fellow students in their class. It would not have been surprising if students had expressed a desire to learn about the discussions in other groups through some form of exchange at the class level, or through reshuffling groups. Yet, the survey responses suggest that at least two thirds of the students did not express the need to work outside their fixed small groups. It is important to note that the survey data reflected the students’ views toward working groups that were fixed for the entire semester. If they had actually had the chance to work in different groups in this online course, they might have
Table 8. Students’ responses to “What is your view about group formation?”

Choices | Fall 2004 (N=37) | 2006-2007 (N=93)
I want to work with the same group members the way it is now because we know each other better. | 62% | 53%
I want to work with different people in a group every few weeks because we will learn from other students we never meet. | 8% | 15%
It will not make a difference to me working with the same people or different people in a group. | 30% | 31%
Table 9. The group project about bilingual programs in our local schools (multiple choices, choose all that apply)

Choices | 2004 (N=53) | 2006-2007 (N=93)
Is a good assignment and I learned a lot through doing the project. | 70% | 48%
Makes the course readings more meaningful and more relevant to me. | 68% | 52%
Is a good assignment but takes too much time to complete. | 17% | 20%
Could be an individual assignment focusing on one school rather than a group project that involves more collaboration. | 30% | 24%
Is not very important for this course. | 4% | 2%
different views. To explore the advantages and disadvantages of fixed or dynamic small groups in a web-based course that uses weekly forum discussions, both group types need to be included in the data in future studies.
Process vs. Product Oriented Collaboration

Table 9 presents students’ responses to a question that allowed multiple choices about the group project. In this “choose all that apply” multiple-choice question, the first two choices were aimed at assessing whether the assignment itself was important for the course in the students’ eyes, because the perceived importance of the group project may affect their overall performance, or vice versa. As seen in Table 9, 70% of the students from the 2004 classes felt that the group project was a good assignment and agreed that they learned a lot through doing it. Sixty-eight percent of the students also responded that the project made the course readings more meaningful and relevant. About half of the student population, 48%-52%, in the 2006 and 2007 classes viewed the group project as an important task, a noticeable decline from the previous semesters in 2004. The Fall 2004 semester post-course survey asked an additional question about students’ experience with the group project, and the responses are summarized in Table 10. (This question was not included in the Spring 2004 post-course survey.) New data from the 2006 and 2007 classes were also included in Table 10.
Table 10. Students’ responses to the group project

Choices | Fall 2004 (N=37) | 2006-2007 (N=93)
I prefer individual work leading to a project of my own even though I only have information about one school. | 32% | 27%
I prefer to collaborate with peers the way it is now because it is not a problem with me to collaborate. | 24% | 24%
I prefer to collaborate with others for a group project but I do not like to depend on other people’s schedule because some just do not get their work done on time. | 24% | 24%
Even though it is hard to collaborate for the group project, it is still worth doing it because we learn more about our bilingual programs in different schools through doing it together. | 22% | 25%
As seen from Table 10, the new data from 2006 and 2007 were almost identical to those from the Fall 2004 class, despite the fact that the number of students who perceived this group project as important actually went down in the 2006 and 2007 classes. While 24% of the students preferred to work with peers because they had no problem collaborating, exactly another 24% of them did not like to depend on other people’s schedules because some just did not get the work done on time. It is surprising that the percentages of responses to these two choices were identical for the two sets of data, even though more intermittent deadlines were imposed at different stages of the group project in the 2006 and 2007 classes as new measures to facilitate collaboration among peers. Twenty-five percent of the students in the 2006 and 2007 classes felt it worthwhile to collaborate on the group project despite the fact that it was difficult, only a slight increase of 3% from the Fall 2004 class. Similarly, 27% of the students preferred individual work leading to a project of their own even though they would not accomplish as much, a slight decrease of 5% from the previous data. Compared to the overall positive responses toward collaboration in forum discussions (see Table 3 and Table 4), students’ attitudes toward online collaboration in producing the group project were mixed. These differences were also reflected in some student comments on the group project in the open-ended questions. One student wrote, “I think it’s too inconvenient to try and get a group project together online. I also don’t like having someone’s performance affect my grade. I would rather do the project on my own.” It appears that the end-product type of collaborative task demands more consensus-building collaboration. When students were under time pressure for such intensive interaction and collaboration, they became less enthusiastic about it.
DISCUSSION

Students’ Overall Attitudes Toward Collaborative Learning and Their Level of Participation

The new data based on the 2006 and 2007 classes support the earlier findings about students’ attitudes toward online collaborative learning. Ninety-eight percent of the students reported that they were “positive” or “very positive” about this web-based course. The overall majority of them also stated that they preferred the forum discussions to weekly quizzes as the main learning tasks, if given the choice. As summarized in Wang’s (2007) findings, a number of factors contributed to the sustained small group discussions in this course. One of the factors may be the structure of the forums, which required two rounds of postings. Students not only always had “something to say” in each forum but knew in advance exactly what specific questions they were expected to answer. The written exercises required in the first round of postings kept each individual student accountable for knowing the contents through reading. Therefore, students’ interaction with the course readings, the first level of interaction with the material, was enhanced by producing written answers to be commented on by peers in the group forums. The enthusiasm in group discussions never waned forum after forum because each forum focused on a new reading chapter. Furthermore, the comment messages required students to exchange information by building on each other’s ideas to negotiate meaning and to collaboratively construct knowledge. Such interaction between peers and between students and instructors provided another level of interaction for learning. Students’ positive experience with the semester-long forum discussions was related to the benefits of proactive learning and learning from each other for knowledge construction. While the advantages of online interactive learning
have long been demonstrated in previous studies (Kern, 1995; Lavooy & Newlin, 2003; Mouza, Kaplan, & Espinet, 2000; Sumner & Hostetler, 2002; Wang & Teles, 1998; Wu, 2003), this study provided new data on the use of small group discussion as the core interactive learning task through the application of carefully prepared discussion questions that elicit proactive learning and through peer interaction and collaboration. When online collaborative learning tasks become the main course pedagogy, such interactive learning is likely to be more sustainable and effective. With regard to levels of participation in collaborative learning, previous research indicates that the size of an online learning community affects the level of comfort, which in turn influences the level of participation (Hewitt & Brett, 2007; Williams & Pury, 2002). Hewitt and Brett (2007) reported that large classes were associated with an increase in the number of notes (messages) written, a decrease in average note size, and an increase in note scanning rather than reading. The current findings suggest that students were comfortable with their peers in a group of 4-6 members, and that group cohesion and mutual trust were important elements for their collaborative learning. However, the data also show that such trust and comfort within a smaller group is no guarantee of semester-long sustainable interactive learning in the asynchronous forums. Overall, only 28% of the students responded that they would post the same number of messages in their forums if the postings were not required and graded. Forty-five percent of the students responded that they would post some but not as many messages, and 21% of the students said they would post very few messages. What is more, 8% of the students responded that they would not post any messages at all. These numbers from the new data were almost identical to the previous data.
Taken together, these data support the previous research findings that the assessment of collaborative learning tasks plays a crucial role in ensuring student participation. Macdonald (2003) reported that students actively contributed to the discussions
when the tasks were assessed but that participation in discussions waned when the postings became optional. Grades for discussion were also positively related to students’ perceived learning (Jiang & Ting, 2000). Apparently, optional interactive learning tasks would not have been sustained for the entire semester. Another indication of the level of participation is the amount of reading of messages posted by peers in the group forums. Thirty-seven percent of the students reported that they read all the messages posted by their peers, while 47% of them read most but not all the messages. However, 16% of the students responded that they read only some messages or did not often read the messages posted by their peers. Unlike the required postings by the deadlines in an asynchronous forum, it is relatively more difficult to enforce the reading of each posting in such a learning environment. It is important to note that the size of the group (4-6 students) is relatively small for asynchronous forums, and it was assumed that such a size would generate sufficient responses from each member. On the other hand, the number of messages produced by each member was manageable and easy to keep track of. Future studies need to investigate the level of participation in collaborative learning with different group sizes and different learning tasks. With regard to group formation, the majority of students reported that they preferred working with the same members of the group for the entire semester rather than rotating peers. Obviously, it takes time to establish such mutual trust, even in a small group of 4-6 members. Therefore, it is very likely that the group cohesion and mutual trust came from the semester-long online interaction, cooperation and collaboration. The new data showed a slight increase in the percentage of students who expressed the desire to work with different peers during the semester.
Future studies need to investigate the benefits and disadvantages of dynamic group formations in which students
Students’ Attitudes toward Process and Product Oriented Online Collaborative Learning
are given the chance to work with different online peers during the semester.
Students’ Attitudes Toward Process and Product Oriented Interactive Learning

Very few studies have dealt with the differences between process and product oriented interactive learning tasks and how these differences influence peer interaction and collaboration (Kear, 2004; Kear & Heap, 1999; Macdonald, 2003; Wang, 2007). This web-based course applied both process and product oriented interactive learning tasks that required different types and levels of interaction and collaboration. As discussed earlier, in the weekly group forums, the debate and exchange of ideas focused on the process of learning and did not lead to a final product. In contrast, the group project was a product-driven collaborative task in that the interaction and collaboration among the peers, through sharing and exchange of ideas and negotiation, had to lead to a certain consensus to produce a group report. Survey data suggest that students were more enthusiastic about the process oriented group discussions than the group project. In the previous study based on the same course offered in 2004, Wang (2007) found that, among other things, the main reasons for students’ frustration with the group project were the difficulties in reaching agreement within a time frame, especially in the online environment. The differences in working pace, conflicts of schedules, and, perhaps more importantly, differences in the level of devotion to the collaborative task in the online environment made it more difficult for the peers to reach consensus in the process of doing the group project. The early birds who preferred to start and complete their parts of the work in a timely fashion conflicted with those who procrastinated in getting the work done. As peers in the same group would receive only a common grade for their project, there was pressure for them to
compromise to reach agreement in completing the project. In order to reduce the frustration caused by schedule conflicts between the peers, a few intermittent deadlines were imposed when the course was offered in 2006 and 2007. Students had to meet a deadline at each step (individual interviews, sharing the interview summaries, and drafting the project) to avoid the last-minute rush that usually delayed the progress of the group work. The new data about the group project from the 2006-2007 classes were collected after these new measures were taken. As seen from Table 9 and Table 10, these new measures did not appear to address the challenges the groups faced in completing the projects. In fact, only 24% of the students in the current data (exactly the same 24% as in the previous study) responded that they preferred to collaborate with others for a group project but did not like to depend on other people’s schedules because some just did not get their work done on time. Similarly, 24% of students in the current data (exactly the same percentage as found in the previous study) responded that they preferred to collaborate with peers because collaboration was not a problem for them. Only a slightly larger number of students, from the previous 22% to the current 25%, responded that even though it was hard to collaborate on the group project, it was still worth doing because they learned more about the bilingual programs in different schools through doing it together. There was also a slight decrease in the number of students, from the previous 32% to the current 27%, who stated that they preferred individual work leading to a project of their own even though they would only have information about one school. These data suggest that, overall, students’ attitudes toward the group project were mixed. There was less enthusiasm for the group project than for the weekly forum discussions.
In addition to the differences in working schedules and level of devotion to the group project, the common grade assigned to students for their
group project appeared to be another factor that caused frustration with this collaborative task. Although a common grade can be a useful instructional strategy for implementing end-product driven collaborative tasks and encouraging collaboration, the frustration and stress caused by schedule conflicts and different levels of devotion toward such collaboration call for more careful instructional design of such tasks. Perhaps some form of individual grading in addition to the interdependent grading is necessary to measure each individual student’s efforts and contribution. In fact, Kear and Heap (1999) reported that students expressed a preference for a higher individual grade component when both common and individual grades were assigned for their group project. It is important to balance the level of collaboration among the students and the individual flexibility of online learning. Future studies need to address the pedagogical design of end-product driven collaborative tasks in web-based courses.
CONCLUSION

The new data in the current study further supported the earlier findings (Wang, 2007) about students’ positive attitudes toward collaborative learning as the main learning tasks in a web-based course. The overall majority of students also stated that they preferred the forum discussions to weekly quizzes as the main learning tasks, if given the choice. Among other factors, the structure of the online discussion, group cohesion, the direct link of the interactive learning tasks to assessment, and strictly imposed deadlines are some of the important factors that influence students’ learning experience and level of participation in collaborative learning. The differences between process and product driven interactive learning tasks also have different impacts on student online collaboration. In general, students were more enthusiastic about process oriented than
product driven collaborative tasks. Despite the new measures taken in the form of imposing more intermittent deadlines for the preparation of the group project, a product oriented collaborative task, a substantial number of students still preferred to do an individual project on their own, if given the choice. Many of them expressed concerns about their grades being affected by their peers’ work. It appears that assigning a common grade to all the members of the group may not be the best way of assessing such a product oriented collaborative task. Some individual assessment component might be necessary to reflect the different levels of devotion of the students. Finally, as the current data are based on one web-based course that was mainly a reading course, the findings may not be generalized broadly. Because of this limitation, the current findings may not be directly applicable to other courses that have a different online pedagogical approach. Yet, a few recommendations may be made for designing and implementing similar interactive learning activities to promote sustained and effective online collaboration.

• Although a very good tool for promoting interactive learning and collaboration, online discussion is not always sustainable if not well planned and structured. It is recommended that instructors carefully design each forum discussion around the course contents, with predetermined specific questions that engage students in a high level of thinking through providing written answers to topics on which peer critiques are required.
• To continue to motivate the students, link the assessment with all interactive learning tasks utilizing specific grading scales.
• Impose strict deadlines for each round of postings in each discussion forum.
• Form small groups of 4-6 as learning communities for discussions so the peers will have sufficient input from each other yet still find it easy to keep track of all the postings in each new thread.
• Use process oriented interactive learning tasks to facilitate continuous online interaction and collaboration and yet still give each student sufficient freedom in completing the assessed learning tasks.
• When designing product oriented interactive learning tasks, much care needs to be taken to prepare the students to reach consensus. Give sufficient time for completing such learning assignments. Incorporate both common and individual grades in grading a group project.
REFERENCES

Duin, H., & Hansen, C. (1994). Reading and writing on computer networks as social construction and social interaction. In C. Selfe & S. Hilligoss (Eds.), Literacy and computers: The complications of teaching and learning with technology (pp. 89-112). New York: The Modern Language Association.

Evans, C., & Gibbons, N. J. (2007). The interactivity effect in multimedia learning. Computers & Education, 49, 1147-1160.

Gao, T., & Lehman, J. D. (2003). The effects of different levels of interaction on the achievement and motivational perceptions of college students in a Web-based learning environment. Journal of Interactive Learning Research, 14(4), 367-387.

Hew, K. F., & Cheung, W. S. (2007). Attracting student participation in asynchronous online discussions: A case study of peer facilitation. Computers & Education. doi:10.1016/j.compedu.2007.11.002

Hewitt, J., & Brett, C. (2007). The relationship between class size and online activity patterns in asynchronous computer conferencing environments. Computers & Education, 49, 1258-1271.

Jiang, M., & Ting, E. (2000). A study of factors influencing students’ perceived learning in a Web-based course environment. International Journal of Educational Telecommunications, 6(4), 317-338.

Kear, K. (2004). Peer learning using asynchronous discussion systems in distance education. Open Learning, 19(2), 151-164.

Kear, K., & Heap, N. (1999). Technology-supported group work in distance learning. Active Learning, 10, 21-26.

Kern, R. (1995). Restructuring classroom interaction with networked computers: Effects on quantity and characteristics of language production. The Modern Language Journal, 79, 457-476.

Kuhl, D. (2002). Investigating online learning communities. U.S. Department of Education, Office of Educational Research and Improvement (OERI).

Lavooy, M. J., & Newlin, M. H. (2003). Computer mediated communication: Online instruction and interactivity. Journal of Interactive Learning Research, 14(2), 157-165.

Leinonen, P., Järvelä, S., & Lipponen, L. (2003). Individual students’ interpretations of their contribution to the computer-mediated discussions. Journal of Interactive Learning Research, 14(1), 99-122.

Macdonald, J. (2003). Assessing online collaborative learning: Process and product. Computers & Education, 40, 377-391.

Mouza, C., Kaplan, D., & Espinet, I. (2000). A Web-based model for online collaboration between distance learning and campus students (IR020521). Office of Educational Research and Improvement, U.S. Department of Education.

Offir, B., Lev, Y., & Bezalel, R. (2007). Surface and deep learning processes in distance education: Synchronous versus asynchronous systems. Computers & Education. doi:10.1016/j.compedu.2007.10.009

So, H.-J., & Brush, T. A. (2007). Student perceptions of collaborative learning, social presence and satisfaction in a blended learning environment: Relationships and critical factors. Computers & Education. doi:10.1016/j.compedu.2007.05.009

Sumner, M., & Hostetler, D. (2002). A comparative study of computer conferencing and face-to-face communications in systems design. Journal of Interactive Learning Research, 13(3), 277-291.

Wang, X. (2007). What factors promote sustained online discussions and collaborative learning in a Web-based course? International Journal of Web-Based Learning and Teaching Technologies, 2(1), 17-38.

Wang, X., & Teles, L. (1998). Online collaboration and the role of the instructor in two university credit courses. In T. W. Chan, A. Collins, & J. Lin (Eds.), Global education on the net: Proceedings of the Sixth International Conference on Computers in Education (Vol. 1, pp. 154-161). Beijing and Heidelberg: China Higher Education Press and Springer-Verlag.

Williams, S., & Pury, C. (2002). Student attitudes toward participation in electronic discussions. International Journal of Educational Technology, 3(1), 1-15.

Wu, A. (2003). Supporting electronic discourse: Principles of design from a social constructivist perspective. Journal of Interactive Learning Research, 14(2), 167-184.
APPENDIX: Survey Questionnaire

1. Is this your first web-based (entirely online) course?
a. Yes.
b. No, I already took one entirely online course before this one.
c. No, I took two or more other entirely online courses before this one.
d. I took one or more web-enhanced courses (partially online) before this web-based (entirely online) course.
e. No, I have never taken any web-based or web-enhanced course.

2. This reading course is structured on group discussions with individual and group assignments. What are your thoughts about the structure of the course?
a. I like the way the course is structured in terms of forum discussions because we learn from each other.
b. I prefer weekly quizzes based on the readings rather than answering questions/joining group discussions.
3. Will you post the same number of messages as you actually did over the semester if these postings were optional, not required and graded?
a. Yes, I will post the same number of messages.
b. I will post some messages but not as many.
c. I will post very few messages.
d. I will not post any messages.

4. Please circle one answer for each of the following:
a. In our group forums, my answers to the questions and comments on peers’ messages help me to understand the contents/readings of the course better.
strongly agree    agree    disagree    strongly disagree
b. My peers’ answers/comments helped me to understand the readings better.
strongly agree    agree    disagree    strongly disagree
c. I learned more through online discussions than I would have learned from the lectures.
strongly agree    agree    disagree    strongly disagree
d. The online discussion is helpful because we collaborate more with each other and support each other.
strongly agree    agree    disagree    strongly disagree
e. The group cohesion and mutual trust is an important factor in our group forums.

strongly agree    agree    disagree    strongly disagree

f. I prefer individual work to group work and would have done better if I did not have to collaborate with my peers in my group for discussions.

strongly agree    agree    disagree    strongly disagree
g. I prefer individual work to group work and would have done better if I did not have to collaborate with my peers in the group for the final project.

strongly agree    agree    disagree    strongly disagree
h. The deadlines for the readings and postings in each forum are very important because they help me to complete the readings and the course.

strongly agree    agree    disagree    strongly disagree

i. The overall course contents are interesting and I have learned a lot about bilingualism and bilingual education from taking this course.

strongly agree    agree    disagree    strongly disagree

5. Choose one of the following:
a. I wanted other group members to read our group discussions and I also missed the discussions in other groups.
b. Every group should have summarized their forum discussions each week and posted the summary to a general forum so that interested students could comment on the discussions in other groups.
c. Reading and responding to peers’ messages in our own group discussions is sufficient for me to understand the course contents. It would take too much time to read and respond to summary messages from other groups.
6. What is your view about group formation?
a. I want to work with the same group members the way it is now because we know each other better.
b. I want to work with different people in a group every few weeks because we will learn from other students we never meet.
c. It will not make a difference to me working with the same people or different people in a group.
7. The pace of the course, including readings and postings
a. Is neither too fast nor too slow for me.
b. Is too fast for me because I always try to catch up with the readings.
c. Is too slow for me and we could have read more chapters.
d. Should be OK for a course like this but I found it too fast for me because I work many hours a week and have limited time for course work.

8. Course documents:
a. I printed out all the lecture notes and review guides (or some of them) because they are helpful.
b. I read the lecture notes and the review guides online but did not print them all.
c. I never printed out or read the lecture notes and the review guides because they are not essential for me.
9. The videos on reserve in the music library are used in all other face-to-face sessions of the same course. I found these videos
a. worth seeing because they are informative and very relevant to the course content.
b. relevant to the course content, but it is hard for me to make special trips to the university to watch them all.
c. not relevant to the course content and can be omitted.
10. You took all three exams online this semester. Do you think the online exams should be kept the way they are now, or do you prefer to take these exams in a classroom on a certain date?
a. I prefer online exams the way they are now.
b. I prefer to come to a classroom to write the exams.
c. I have no preference.

11. Exam format:
a. I prefer multiple choice exams.
b. I prefer essay question type of exams.
c. It does not make a difference for me.

12. The group project about bilingual programs in our local schools (circle all the answers that apply to you)
a. Is a good assignment and I learned a lot through doing the project.
b. Makes the course readings more meaningful and more relevant to me.
c. Is a good assignment but takes too much time to complete.
d. Could be an individual assignment focusing on one school rather than a group project that involves more collaboration.
e. Is not very important for this course.
13. For the group project:
a. I prefer individual work leading to a project of my own even though I only have information about one school.
b. I prefer to collaborate with peers the way it is now because it is not a problem for me to collaborate.
c. I prefer to collaborate with others for a group project but I do not like to depend on other people's schedules because some just do not get their work done on time.
d. Even though it is hard to collaborate on the group project, it is still worth doing because we learn more about our bilingual programs in different schools through doing it together.

14. Overall, my experience with this web-based course
a. Is very positive.
b. Is positive.
c. Is negative.
d. Is very negative.
15. Experience with the Blackboard and the online forums (circle all that apply to you):
a. I found it challenging at the beginning but quickly picked it up and like it now.
b. The interface is straightforward and easy to learn, although I was not very experienced with any online courses.
c. It was never a problem for me because I am good at technology.
d. It was a plus because I learned the technology as well as the course contents.

16. If I have the choice in the future,
a. I will take a similar web-based course.
b. I will not choose to take a similar web-based course.
c. It will not make a difference, web-based or face-to-face version.

17. Would you recommend that a friend take this web-based course?
a. Yes
b. No
c. Not sure

18. Please take some time to answer the following questions:
a. Please describe your experience with the forum discussion part of the course (positive, negative, expectations, effect on learning, or anything else you think is relevant).
b. What do you like the most, or dislike the most, about this course?
c. In your opinion, what are the most important elements for a web-based course like this to be successful?
d. To improve the course for future students, what changes do you recommend?
Chapter III
Cognition, Technology, and Performance:
The Role of Course Management Systems

Teresa Lang, Columbus State University, USA
Dianne Hall, Auburn University, USA
ABSTRACT

Development and sale of computer-assisted instructional supplements and course management system products are increasing. Textbook sales representatives use this technology to market textbooks, and many colleges and universities encourage the use of such technology. The use of course management systems in education has been equated to the use of enterprise resource planning software by large businesses. Research findings about the pedagogical benefits of computer-assisted instruction and computer management systems are inconclusive. This study describes an experiment conducted to determine the benefit to students of using course management systems. The effects of cognition, learning styles, and computer attitude were considered and eliminated to better isolate any differences in performance. Student performance did not improve with the use of the technology.
Copyright © 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Advances in technology have become marketing tools in our society. Cell phone providers offer text messaging, photo capability, and Internet connectivity to entice consumers to purchase their service over that of the competition. Businesses offer online bill paying, discounts for Internet orders, and auction-derived pricing opportunities. Publishers of today's college textbooks offer a variety of computer-based supplements and computer-based course management tools to accompany their textbooks. Textbook sales representatives use these technologies as a strategic marketing tool when approaching college faculty about textbook alternatives and adoption. Faculty members are promised easy implementation, and it is implied that the technologies will lead to improved learning for students.

Today's society expects to receive things on demand: purchase software and download it immediately, follow the stock market up to the minute, take a test and get the results immediately. Course management systems make course materials and student performance available continuously, as fast as the instructor can post the information.

The expanded use of the Internet in our society has changed the standard method of transmitting information in education (Aggarwal & Legon, 2006). The academic community uses networks and the Internet for communication and conferencing, and for information access, retrieval, and use. Previous studies confirm that connectivity is beneficial to pedagogy when interaction, discussion, research, or transmission of information is involved; however, there have been few experiments to determine how Internet-enhanced courses affect student learning and understanding (Agarwal & Day, 1998).

Publishers are responding to apparent changes in demand for technology while managing profitability. Colleges are attempting to restructure to balance resources with changing demands, and faculty members are encouraged to learn and incorporate technology into the curriculum with the belief that students will experience improved performance. However, research findings about the pedagogical benefits of computer-assisted instruction and computer management systems are inconclusive (Russell, 2002; Morgan, 2003). This paper describes an experiment to determine whether publisher-provided textbook technology simply provides accessibility or provides true pedagogical benefits to the student.
Course Technologies

Many terms have been used in the literature over the past several years to describe the use of computers and technology in education. Computer-based education (CBE) and computer-based instruction (CBI) describe the use of computers for drill and practice, tutorials, simulations, instructional management, supplementary exercises, programming, database development, writing using word processors, and other applications. Computer-based training (CBT) refers to self-paced tutorials frequently used in industry. Computer-assisted instruction (CAI) refers to drill-and-practice, tutorial, or simulation activities offered either by themselves or as supplements to traditional teacher-directed instruction (Cotten, 1991).

Course management systems (CMS) were developed in the mid-1990s as distance education developed and expanded. Course management systems are also called learning management systems and virtual environments, and may operate on the university's network or use the Internet to operate from the publisher's network (Simonson, 2007). These systems are used in higher education and include applications for course content organization and presentation, communication tools, web pages, and course management functions such as materials and activities.
Figure 1. Example of applications available with WebCT: Syllabus, Quizzes, Practice Material, Content, Links, Gradebook, and Communication Capabilities, all provided by the CMS (on a local server or the publisher's server).
WebCT (or Vista) and Blackboard are the most commonly used proprietary course management systems in higher education today and are described as the academic system equivalent of enterprise resource planning (ERP) systems (Morgan, 2003). There are also open source or free CMSs such as the Sakai Project and Moodle (Simonson, 2007). These course management systems provide a structured format with built-in applications to facilitate faculty adoption of technology into the course curriculum. Textbook publishers develop packages for use in conjunction with these CMS platforms. The package generally contains all of the instructor tools available from the publisher, plus grade book capabilities and communication applications such as chat rooms. Students can access their grades from the CMS, and faculty can develop and use some of their own material in conjunction with these applications. Figure 1 shows the breadth of common components inherent in a course management system.
Cognitive Skills and Learning Styles

The learning process includes three main aspects: cognitive (how knowledge is assimilated), conceptualization (how information is processed), and affective (the influence that motivation, decision-making styles, values, emotional responses, and preferences have on the individual). The combination of these three aspects defines the learning style of an individual (Litzinger & Osif, 1993).
Cognitive styles and learning styles are terms used interchangeably in many situations. Typically, cognitive style refers to more theoretical situations, such as academic research, and learning style refers to practical applications. Measurement of cognitive/learning styles falls somewhere between the measurement of aptitude and the measurement of personality (Liu & Ginther, 1999). There are several cognition and learning style models. The National Association of Secondary School Principals (NASSP) Student Learning Style Task Force developed a detailed outline of the elements of learning style that effectively summarizes the fundamental aspects of the best-known learning style and cognition models (Keefe & Monk, 1990). Table 1 presents a summary of these elements. A learner's ability to concentrate, learn, and remember is affected by how the learner perceives, interacts with, and responds to the learning environment. These are multidimensional processes affected by various cognitive, perceptual, and environmental elements (Keefe, 1985, 1987).
Table 1. Elements of a learning style (adapted from Keefe & Monk, 1990)

Cognitive Styles
- Analytic processing: Able to perceive the elements of a problem as separate from the overall problem. Field independent.
- Spatial: To identify geometric shapes, rotating them in the mind; to construct objects in mental space.
- Discrimination: To find the important elements of a task; to focus attention on required detail and avoid distraction.
- Categorization: To use reasonable criteria for classifying information; to form accurate, complete, and organized categories.
- Sequential Processing: To process information sequentially or verbally; to understand information presented in a linear fashion.
- Simultaneous Processing: To grasp visual-spatial relationships; to sense an overall pattern from the relationships of the component parts.
- Memory: To retain distinct vs. vague images in repeated tasks; to detect and remember subtle changes in information.

Perceptual Response
- Visual: Prefer information input via visual experience.
- Auditory: Prefer information input via auditory experience.
- Kinesthetic: Prefer information input via "hands-on" or manipulative experience.
- Emotive: Prefer information input via emotional experience.

Study/Instructional Preferences
- Persistence: Willingness to work at a task until completed.
- Verbal Risk: Willingness to express opinions, speak out, etc.
- Verbal-Spatial Preference: Prefer verbal or nonverbal activities.
- Study Time: Prefer to study in the early morning, late morning, afternoon, or evening.
- Grouping: Prefer whole class, small group, or pair grouping.
- Posture: Prefer formal or informal study arrangements.
- Mobility: Prefer taking frequent breaks or working until done.
- Sound: Prefer quiet study areas or some background sound.
- Lighting: Prefer bright or lower-lighted study areas.
- Temperature: Prefer studying in a cool or a warm environment.

Cognition

Cognitive styles are information-processing habits, and are both genetic and learned. The measurement of these skills attempts to capture the learner's processing inclinations and the learner's ability to identify problems, visualize solutions, determine what tasks are required to accomplish the solution, and to select, categorize, and mentally manipulate the data required for those tasks (Keefe, 1985).

Ausburn and Ausburn (1978) describe three properties of cognitive styles. The first property is generality and stability across tasks and over time, making individuals resistant to training and change. The second property is relative independence of cognitive styles from traditional measures of general ability. The third property is cognitive styles' relationships with some specific abilities, characteristics, and learning tasks. These researchers assert that cognitive styles have either positive or negative relationships with academic achievement depending on the nature of the learning task (Ausburn & Ausburn, 1978).

Previous research indicates there is a substantial positive relationship between cognitive ability and task performance, and that cognitive ability helps individuals adapt to new situations, prioritize rules and regulations, and deal with unexpected problems (Klausmeier & Loughlin, 1961; Hunter, 1986; Ackerman, 1987). Other researchers assert that cognitive styles have either positive or negative relationships with performance depending on the nature of the learning task (Ausburn & Ausburn, 1978; Hall, Cegielski & Wade, 2006). Thus, we posit:

Hypothesis 1: Cognitive ability influences performance.
Perceptual Response

Perceptual response elements indicate the sensory mode that the learner will routinely employ when taking in information. Visual information is the primary sensory mode for some learners, while other learners rely on auditory information, and others on information acquired through physical activity. Brain function, perceptual skills,
training, and habit come together to determine a learner's perceptual responses. Students strong in analytical thought processing handle information logically and sequentially, and generally achieve higher grades on objective tests. Visual processors process information nonlinearly and holistically, and achieve lower grades than analytical students (O'Boyle, 1986; Sonnier, 1991; Gadzella, 1995). Individual preferences may change over a lifetime, but remain constant over shorter periods of time (Fleming & Bonwell, 2002). According to Dunn and Griggs (1998), everyone has a unique, specific learning style, and instruction should be designed to best accommodate different methods of learning. Learning styles are a multidimensional construct, and a CMS has the capacity to engage the user at different levels, such as visual, aural, reading/writing, or kinesthetic, thereby possibly helping users learn, retain, and enjoy the process (Dunn, 1996). Thus, we posit:

Hypothesis 2: Learning style influences performance.

Cognition and learning styles may affect the student's ability to learn, which will influence how beneficial the CMS is for the student as measured by performance. After controlling for the differences in students' cognitive skills and learning styles, a more accurate evaluation of the effect of CMS is possible, and may better explain the inconclusive prior research findings about technologies in education in general. We posit:

Hypothesis 3: There is a difference in student performance between students in classroom learning environments using CMS and students in classroom learning environments not using CMS after controlling for cognition and learning style.
Computer Attitudes

Prior research suggests that individual attitudes toward computers may affect computer use (Ajzen & Fishbein, 1980; Rainer & Miller, 1996; Al-Khaldi & Al-Jabri, 1998; Chau, 2001; Chang & Lim, 2006). Some research finds that negative attitudes toward computers may also affect individual performance (Eason & Damodaran, 1981). Computer attitudes are pertinent to this study because individuals with a positive attitude toward computers may be more satisfied with computer-assisted instruction and computer management systems than students with negative computer attitudes. Any difference in computer attitude may be reflected in student performance. Some students are comfortable using computers and expect a positive outcome from interaction with a computer-based instructional method, while others are anxious when using computers and do not expect to perform well in a computer environment. Students enrolled in classes using CMS may expect to perform poorly because of their negative computer attitudes. This expectation may then negatively affect their performance because they do not fully utilize the CMS. Thus, we posit:
Hypothesis 4: Computer attitude influences performance.

Hypothesis 5: Students with higher computer attitude will perform better than students with lower computer attitude when enrolled in classroom learning environments using CMS.

This study examines the effect that CMS have on performance. Learning style and cognition are included as control variables in order to separate the effects of CMS from other variables believed to affect performance. Computer attitude is introduced and evaluated as a potentially relevant variable when evaluating CMS results. Combining the above, we posit:

Hypothesis 6: There is a difference in student performance between students in classroom learning environments using CMS and students in classroom learning environments not using CMS after controlling for cognition, learning style, and computer attitude.

Table 2 presents a summary of our hypotheses.
Table 2. Hypotheses tested

Hypothesis 1: Cognitive ability influences performance.
Hypothesis 2: Learning style influences performance.
Hypothesis 3: There is a difference in student performance between students in classroom learning environments using CMS and students in classroom learning environments not using CMS after controlling for cognition and learning style.
Hypothesis 4: Computer attitude influences performance.
Hypothesis 5: Students with higher computer attitude will perform better than students with lower computer attitude when enrolled in classroom learning environments using CMS.
Hypothesis 6: There is a difference in student performance between students in classroom learning environments using CMS and students in classroom learning environments not using CMS after controlling for cognition, learning style, and computer attitude.
METHODOLOGY

In today's technological era, computers are used extensively as educational tools and instructional devices. The success or failure of computer-assisted instruction (CAI) and course management systems (CMS) depends on a number of variables, including the user's computer attitude, cognition, and learning style. This research examines the effect of a CMS on performance after controlling for cognition and learning style. The effect of computer attitude on performance when a CMS is used is also of interest (see Figure 2).

This experiment involves two approximately equal-sized groups of students: a control group using no computer-provided course material, and a treatment group using WebCT (a popular course management system) as provided by Wiley Publishing. Participants consisted of the students enrolled in four different sections of a Principles of Financial Accounting class at a large southeastern university. One instructor taught all four sections. Two sections combined make up the CMS treatment, and two other sections combined make up the control section. Placement of students into the two treatment groups and the two control groups was based on enrollment in particular classes.
Participants

The course management system used in this study is the WebCT package sold by the textbook publisher to accompany the textbook. The publisher provided the software free of charge to the students and faculty member in support of the experiment. This process avoided any bias the CMS treatment group may have had because of increased textbook costs related to the CMS software. The package included self-help multiple-choice quizzes, grade posting capabilities, communication capabilities, and graded homework and graded quiz capabilities. The treatment group submitted all homework using the CMS and took quizzes using the software. The control group had in-class quizzes and submitted homework on paper. Quizzes were timed equally both online and in class. The communication capabilities included four chat rooms and the ability to email the course instructor. The course syllabus and
Figure 2. Extension of the Keefe model including computer attitudes (components: Cognition, General Reasoning, Learning Styles, Perceptual responses of Visual, Aural, Reading/Writing, and Kinesthetic, Computer Attitude, CMS, and Environment).
the course homework schedule were added to the course material available via WebCT. All WebCT capabilities were available to the students during the semester. Although it was available, students did not use the communication capabilities provided with WebCT.
Materials

The non-computer (traditional) students did not use computer aids and served as the control group. All materials available to the computer group were made available to the non-computer students in paper format. The traditional students, like the treatment groups, had email access because email is the official method of communication at this university. All students were required to purchase a note pack that provided notes for the entire semester. The note pack was not made available from WebCT, to avoid potential bias between groups caused by the control group purchasing copies of the note pack while the treatment group could have downloaded theirs. The note pack included outlines of each chapter. A common syllabus was used for the sections, as well as common exams, homework, a project, and quizzes. The same textbook was used for all classes.
Data Collection

Two Internet-accessible surveys were used to collect data for the study. The first survey included demographics, learning styles, and computer attitude; it was completed after the first exam but before mid-semester. The second survey included the cognition measure and questions about usage of the CMS, and was administered during the last two weeks of the semester. The usage questions asked about the students' knowledge of the treatments used in other classes. For example, "Did you share your password to use WebCT with students from other classes that did not use WebCT?" Open-ended
questions were also included. For example, "How could the course be improved?" The responses to these questions were used to evaluate cross-talk between students in different groups; no appreciable evidence exists to suggest that the treatment and control groups were aware of the experimental condition. Ten points were awarded for completing each of the two surveys, and attendance was collected. Students who did not want to participate were allowed to complete additional homework assignments instead of the surveys. Performance was collected for each exam, each quiz, and each homework assignment. The scores collected were raw scores without any curve. For consistency, one graduate student graded all homework and quizzes. The number of times students accessed the CMS was accumulated at the end of the semester. WebCT has a built-in tracking feature to track usage of the CMS. These data were used to confirm which students actually received the treatment.
Instruments

The cognition instrument used in this study is a tested and validated instrument from a Kit of Factor-Referenced Cognitive Tests developed by the Educational Testing Service in Princeton, New Jersey. The general reasoning measure was selected as a general measurement of cognition for this study. This instrument scores the individual's ability to select and organize relevant information in order to solve a problem (Ekstrom, 1999). An example question follows: "If a man earns $5.75 an hour, how many hours should he work each day in order to make an average of $46.00 per day?"

a. subtract
b. divide
c. add
d. multiply
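The instrument's scoring rule, described next, is the number of correct answers minus one-half the number of wrong answers. A minimal sketch in Python (the function name is illustrative, not part of the instrument):

```python
def general_reasoning_score(num_correct: int, num_wrong: int) -> float:
    """Raw score: correct answers minus half the wrong answers.

    Unanswered items neither add nor subtract, so random guessing
    is not rewarded.
    """
    return num_correct - 0.5 * num_wrong

# A respondent with 18 correct and 4 wrong answers:
print(general_reasoning_score(18, 4))  # 16.0
```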
Individuals are scored based on the number of questions they get correct minus one-half the number they get wrong. Higher scores represent higher cognitive ability and lower scores represent lower cognitive ability relative to the sample population. The results were categorized into very low, low, average, high, and very high for comparison. This instrument is well established and previously validated (Ekstrom, French, Harman & Dermen, 1999).

A learning styles instrument developed and copyrighted by Neil Fleming was selected to categorize students by perceptual response (Fleming & Mills, 1992). The instrument (VARK) categorizes students as visual, aural, reading/writing, kinesthetic, or multimodal and has 13 questions with non-mutually exclusive answers, each answer corresponding to one of the four categories. The answers are accumulated by category, and the highest score determines whether the student is visual, aural, reading/writing, or kinesthetic. If the highest score is shared by two or more categories, the student is considered multimodal. Multimodal students may use more than one category equally well.

Concerns about the validity and reliability of learning style measurement instruments have been identified as a problem within cognition/learning style research (Cox & Gall, 1981; Ferrell, 1983; Sewall, 1986; Drummond, 1987; Moran, 1991; James & Blank, 1993). The validity and reliability of the majority of the measurement instruments commonly cited in the literature have been questioned, including the LSI, LSQ, and Embedded Figures Test (Sims, Veres, Watson & Buckner, 1986; Tennant, 1988; Allinson & Hayes, 1990). Despite its lack of published reliability, the VARK is appropriate in this study because the instrument is incorporated into the textbook used in the classes included in the study (Kimmel, Weygandt & Kieso, 2004). The VARK appears at the beginning of the textbook with instructions to the student to use it to determine their learning style. Study strategies specific to the textbook are then outlined for the different learning styles. For example, a student categorized as a visual learner is encouraged to:

a. pay close attention to charts, drawings, and handouts;
b. underline;
c. use different colors; and
d. use symbols, flow charts, and graphs.

Visual students are also encouraged to:

a. recall your "page pictures";
b. draw diagrams where appropriate; and
c. practice turning your visuals back into words.
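The VARK categorization rule (accumulate answers by category; a tie for the highest score yields multimodal) can be sketched as follows; the function name and sample tallies are illustrative:

```python
def vark_category(tallies: dict) -> str:
    """Return the dominant VARK category, or 'multimodal' on a tie.

    `tallies` maps each category (visual, aural, read/write,
    kinesthetic) to the number of answers accumulated for it.
    """
    top = max(tallies.values())
    leaders = [cat for cat, n in tallies.items() if n == top]
    # A highest score shared by two or more categories means multimodal.
    return leaders[0] if len(leaders) == 1 else "multimodal"

print(vark_category({"visual": 7, "aural": 3, "read/write": 2, "kinesthetic": 4}))  # visual
print(vark_category({"visual": 5, "aural": 5, "read/write": 1, "kinesthetic": 2}))  # multimodal
```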
In addition, throughout the textbook, icons appear in the margins to guide students’ study of the material based on their learning style determined using the VARK instrument. The ‘Computer Attitude Scale’ (CAS) is a previously validated instrument that measures computer attitude (Nickell & Pinto, 1986). Harrison and Rainer (1992) found that the computer attitude construct has three components: optimism, pessimism, and intimidation. Subsequent research supported the use of the optimism component as a measure of computer attitude (Chau, 2001). The degree of optimism an individual has toward computers is determined based on responses ranging from strongly agree to strongly disagree on a 7-point Likert-type scale. For example, “The use of computers is enhancing our standard of living.” Higher scores indicate more optimism and lower scores indicate lower optimism.
RESULTS

The student participants in this study were enrolled in Principles of Financial Accounting. Three hundred and thirty-five students completed
both surveys, a 99% participation rate. Therefore, non-response bias is not considered to be a problem. Of the 335 students that completed both surveys, five students did not take the final exam and therefore did not complete the class. The analysis was done on the remaining 330 students after looking for but finding no outliers or other incomplete information. This convenience sample provided a sufficient sample size to test the dependent variable, performance, because there are more observations in each cell than there are dependent variables included in that cell (Hair, Anderson, Tatham & Black, 1998). Most (95%) of the participants were between 19 and 22 years of age, in their sophomore or junior year, and full-time students. There were more males (55%) than females (45%), and 54% of the participants’ majors were undeclared or other business majors.
Data Analysis

The data were analyzed using the SPSS statistical program. Analysis of variance was used because the dependent variable is continuous and the independent variables are categorical and ordinal (Leeper, 2004). Analysis of variance and analysis of covariance procedures are valid only if the dependent variable(s) are normally distributed, the variances are equal for all treatment groups, and the observations are independent. These procedures are robust with regard to these assumptions except in extreme cases. Violation of the normality assumption has little impact with large sample sizes, such as the one used in this study. In any case, no such violations were found, and performance (the dependent variable) is normally distributed. A violation of the equal variance assumption has minimal impact if the groups are of approximately equal size. Groups are considered approximately equal if the largest group size divided by the smallest group size is less than 1.5 (Hair et al., 1998). This study had a group size quotient of 1.2, well within guidelines. Performance had equal variance, and independence of the observations is evaluated as part of each data analysis. Linearity and multicollinearity among the dependent variables were evaluated using scatter plots, a correlation matrix, and variance inflation factors. No problems were identified with linearity or multicollinearity.

The relationships between the demographic variables (age, gender, major, and class) and performance were evaluated. There were no significant differences in performance across the intervals of the demographic variables. For example, there were no significant differences in performance between students majoring in accounting and students majoring in management.

The cognition instrument used in this study is a tested and validated instrument from a Kit of Factor-Referenced Cognitive Tests developed by the Educational Testing Service in Princeton, New Jersey (Ekstrom et al., 1999). The Cronbach alpha for the CMS group was .85, and for the control group was .85. The learning styles (perceptual response) instrument used in this study is the VARK (Fleming & Bonwell, 2002). The Cronbach alpha for the population was computed for visual (.39), aural (.48), read/write (.41), and kinesthetic (.36); the reliabilities for the various multimodal combinations would fall within this range. This level of reliability is common with these types of instruments (James & Blank, 1993). The 'Computer Attitude Scale' (CAS) is a previously validated instrument that measures computer attitude (Nickell & Pinto, 1986). The optimism component has four questions with responses ranging from strongly agree to strongly disagree on a 7-point Likert-type scale. Higher scores indicate more optimism. The SPSS statistical program was used to normalize the scores into five intervals to be consistent with the other measures used in the study. The Cronbach alpha was .80 for the population, .81 for the CMS group, and .78 for the control group.
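Cronbach's alpha, the reliability coefficient reported here for each instrument, can be computed directly from item-level responses: alpha = k/(k-1) * (1 - sum of item variances / variance of the total scores). A minimal pure-Python sketch (the item scores below are invented for illustration, not study data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for `items`, a list of item-score columns
    (one inner list per item, aligned across respondents)."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Total score per respondent across all items.
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Four hypothetical 7-point Likert items answered by five respondents:
items = [[7, 6, 5, 6, 7],
         [6, 6, 4, 5, 7],
         [7, 5, 5, 6, 6],
         [6, 7, 4, 6, 7]]
print(round(cronbach_alpha(items), 2))  # 0.88
```

Because the same variance estimator is used in both the numerator and the denominator, the population/sample distinction cancels out of the ratio.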
Cognition, Technology, and Performance
Hypothesis Testing

Hypothesis 1: Cognitive ability influences performance. An ANOVA was performed to determine the relationship between cognitive ability and performance. The SPSS statistical program was used to normalize the cognition measures into five intervals; interval one designates very low cognitive ability and interval five very high cognitive ability. The analysis revealed that students with higher cognitive ability performed significantly better than students with lower cognitive ability. This finding was expected based on the literature and supports hypothesis 1. Further, the results support using cognitive ability as a covariate in relation to performance in this study to isolate the treatment effect. The findings for hypothesis 1 and the remaining hypotheses are summarized in Table 3 at the end of this section.

Hypothesis 2: Learning style (perceptual response) influences performance. An ANOVA was performed to determine the relationship between learning style and performance. The analysis revealed that performance did not differ significantly across the different learning styles. Hypothesis 2 is not supported: learning style (perceptual response) did not influence performance. Therefore, the inclusion or absence of learning style as a covariate in relation to performance will not change the results.

Hypothesis 3: There is a difference in student performance between students in classroom learning environments using CMS and students in classroom learning environments not using CMS, after controlling for cognition and learning style. An ANCOVA was performed using performance as the dependent variable, treatment as the independent variable, and cognition as a covariate. Cognition was used as a covariate to isolate the treatment effect. The analysis revealed that student performance was not significantly different between students in the CMS group and students in the control group, after controlling for cognition, failing to support this hypothesis.

Hypothesis 4: Computer attitude influences performance. An ANOVA was performed to determine the relationship between computer attitude and performance. The SPSS statistical program was used to normalize the computer attitude scores into five intervals. Analysis revealed that students' performance did not differ significantly across the five intervals of computer attitude. Computer attitude did not significantly influence performance; therefore, hypothesis 4 is not supported.

Hypothesis 5: Students with higher computer attitude (more optimism) will perform better than students with lower computer attitude (less optimism) when enrolled in classroom learning environments using CMS. An ANCOVA was performed using performance as the dependent variable, computer attitude as the independent variable, and cognition as a covariate. Cognition was used as a covariate to eliminate the effect of this variable and isolate the effect that computer attitude has on performance. The analysis revealed that students with higher computer attitudes (more optimism) did not perform significantly better than students with lower computer attitudes (less optimism). This finding is consistent with the results from hypothesis 4 and does not support hypothesis 5. Students with higher computer attitude (more optimism) do not appear to perform better than students with lower computer attitude (less optimism) when using CMS.

Hypothesis 6: There is a difference in student performance between students in classroom learning environments using CMS and students in classroom learning environments not using CMS, after controlling for cognition, learning style, and computer attitude. An ANCOVA was performed using performance as the dependent variable, treatment as the independent variable, and cognition and computer attitude as covariates. Cognition and computer attitude were used
Table 3. Summary of results

1. Cognitive ability influences performance. (df = 4, F = 4.689, p = .001)
2. Learning style influences performance. (df = 4, F = .137, p = .969)
3. There is a difference in student performance between students in classroom learning environments using CMS and students in classroom learning environments not using CMS after controlling for cognition and learning style. (df = 2, F = 1.083, p = .299)
4. Computer attitude influences performance. (df = 4, F = .387, p = .818)
5. Students with higher computer attitude will perform better than students with lower computer attitude when enrolled in classroom learning environments using CMS. (df = 4, F = .183, p = .947)
6. There is a difference in student performance between students in classroom learning environments using CMS and students in classroom learning environments not using CMS after controlling for cognition, learning style, and computer attitude. (df = 3, F = 1.054, p = .305)
as covariates to eliminate the effect of these variables and further isolate treatment effects. The analysis revealed that student performance was not significantly different between treatments after controlling for cognition and computer attitude. Hypothesis 6 is not supported.
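The ANCOVA comparisons above reduce to an F-test between two nested linear models: one with the covariate plus the treatment indicator, and one with the covariate alone. A minimal sketch with synthetic data (not the study's data; the study ran ANCOVA in SPSS):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 120
cognition = rng.normal(0, 1, n)                   # covariate
treatment = rng.integers(0, 2, n).astype(float)   # 0 = control, 1 = CMS
# Performance depends on cognition; no true treatment effect is simulated
performance = 70 + 5 * cognition + rng.normal(0, 8, n)

def rss(X, y):
    """Residual sum of squares of an ordinary least squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

ones = np.ones(n)
X_full = np.column_stack([ones, cognition, treatment])  # covariate + treatment
X_red = np.column_stack([ones, cognition])              # covariate only

rss_full, rss_red = rss(X_full, performance), rss(X_red, performance)
df1 = X_full.shape[1] - X_red.shape[1]  # parameters added by treatment (1)
df2 = n - X_full.shape[1]               # residual df of the full model
F = ((rss_red - rss_full) / df1) / (rss_full / df2)
p = stats.f.sf(F, df1, df2)
print(f"F({df1}, {df2}) = {F:.3f}, p = {p:.3f}")
```

A non-significant F, as in Hypotheses 3 and 6, means the treatment indicator explains no performance variance beyond what the covariate already accounts for.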
Discussion

The results from hypothesis 1 testing confirm there is a relationship between cognition and performance. This finding further confirms prior literature and supports the position taken in this study that this variable must be controlled for in order to isolate the impact CMS may have on performance (Holliday, 1976; Winn, 1982; Keefe, 1987). Based on the results from testing hypothesis 2, learning styles do not influence performance. The lower Cronbach alphas for the learning styles instrument, although not uncommon for this type of instrument, may have hampered the detection of this relationship in the analysis.
Results of hypothesis 3 testing indicate no difference in students' performance after controlling for cognition and learning style; the hypothesis is not supported. Hypotheses 4 and 5 do not indicate a relationship between performance and computer attitude regardless of the testing environment. Hypothesis 6, which analyzed differences in performance between CMS and non-CMS classrooms after controlling for potential influencers, is also not supported. Course management system software, while perhaps facilitating classroom management, does not appear to improve student performance regardless of the individual's cognitive level, learning style, or computer attitude. If performance is not improved by the addition of course management technology to a course, what other factors might drive the adoption decision? First, colleges and universities across the nation are confronting difficult conditions, including higher student costs, lower state funding, and increased diversity of the student body. Research
related to these problems suggests fundamentally restructuring the teaching and learning process through the integration of information technologies (Burke, 1994; Doucette, 1994; Guskin, 1994; Issroff & Scanlon, 2002; Morgan, 2003). College and university faculty and administrators hope to use technologies such as CMS to balance decreased funding with increasing student demands. In this respect, CMS may be able to provide a foundation that allows for more efficient teaching of larger numbers of students while reducing the costs associated with traditional paper-based notes, handouts, and exams. Second, many textbook sales are promoted with technology in mind. As a result, when adopting a new textbook today, college faculty members must not only review and select among the numerous textbooks generally available, but must also be knowledgeable enough about the technology to select, implement, and teach students how to use the technology products they choose or that, in many cases, come prepackaged with the book. The technology is often used to promote new textbooks with the stated or implied notion that it will improve faculty efficiency and/or student performance. Third, some faculty may adopt CMS technology under the misunderstanding that it is also CAI technology. Most CMS products, however, are not designed to provide this specialized information delivery without substantial work and innovation by the instructor. It may be, though, that CAI technology, when embedded within CMS technology, will lead to improved student performance. Thus far, research on CAI technology generally supports this assertion. For instance, past research has found that CAI typically produces greater levels of learning when combined with traditional instruction than does traditional instruction alone; however, much of this research was conducted in K-12 settings (Luyben, Hipworth & Pappas, 2003).
Computer-assisted instruction has been found to improve college students’ attitudes toward the topic they are
studying as well as improve student achievement. Instructional time required to teach students using CAI was significantly lower than the instructional time required using conventional instruction, and retention of material learned improved when CAI was used (Kulik, Kulik & Cohen, 1980; Kulik, Kulik & Shwalb, 1986). Nontraditional adult learners have demonstrated improved performance, decreased instructional time, better retention, and improved attitude about the topic under study when CAI was used; however, the size of the effect was related to the type of CAI (Kulik et al., 1986). Distance learning, for example, is a primary user of computer applications such as CAI and CMS. The majority of studies comparing traditional face-to-face instruction to distance learning find no significant difference in student satisfaction, learning, or performance (Russell, 2002; Bonds-Raacke, 2006). Prior research advocates the need for studies to determine the connection between cognition, student learning, technology, and general education before adopting electronic media (Kaput, 1992; Morgan, 2003). Technologies like CMS provide more timely feedback, individualized pace and focus of learning, interactive exercises, access to up-to-date information, and the opportunity for drill and practice (Roy & Elfner, 2002), but they do not provide the process of reflection on and interaction with material that usually occurs with CAI. It is this interaction that creates understanding (Brown, Bransford, Ferrara & Campione, 1983; Farnham-Diggory, 1990). However, if use of CMS technology reduces an instructor's time spent on course management, then perhaps more time can be spent creating a more cohesive and supportive learning environment. Study/instructional preferences are indicators of the learning environment in which a learner feels most comfortable.
These preferences include physical parameters such as noise level, light level, and room temperature, as well as the time of day during which the learner learns best. Social aspects are also included in this category, such as the learner's willingness to speak in public and the size of the learning group in which the learner feels most comfortable. The learner's gender, mental and physical health, and habits form the learner's study/instructional preferences (Keefe & Monk, 1990). Method of presentation of instructional material and cognitive style have a significant influence on learning performance among secondary and college level students. Colleges and universities typically present information in lecture format, emphasize reading, and require evidence of learning in written form. This format is best suited to read/write and aural individuals. Imager (visual) students do best when pictorial presentations are used in educational material (Holliday, 1976; Winn, 1982). Verbal thinking is generally emphasized in research and intelligence testing, with masculinity related to visual thinking and femininity related to verbal thinking (Smith, 1964). Imagers perform better in text-plus-picture conditions, and verbalizers are better in text-plus-text conditions where material is presented in text form only. Imagers use diagrams to illustrate their answers more often than verbalizers do (Riding & Buckle, 1990; Riding & Cheema, 1991). It has been estimated that 20%-30% of American students are auditory, approximately 40% are visual, and the remainder are some combination (Bissell, White & Zivin, 1971; Dunn & Dunn, 1979). Lessons may be learned from this and related research to establish classrooms that are comfortable learning environments. There is a common belief in higher education that students drive faculty and institutions to use technology; however, in a recent study at a large university, faculty reported that students actually discouraged them from using CMS because the students had difficulty gaining access to the CMS and were uncomfortable with technology in general (Morgan, 2003).
Faculty members reported they were not confident about the reliability of CMS and were doubtful about the extent to which CMS use improves pedagogy.
A study on the effects of CMS found that a paperless disbursement of information was possible. The textbook, syllabus, grade book, and assignments were available through a CMS twenty-four hours a day, seven days a week. The researchers posited that the continuous availability and nontraditional format would improve student performance. They were disappointed to find that student performance did not improve and, in fact, several students either did not access the site or did not read the instructions carefully enough to fulfill course requirements. Many students did not look at or follow the web page links to access resources available on the site (Foreman & Widmayer, 2000). Motivation is an important variable in learning and academic performance (Lang & Hall, 2006). The students did not demonstrate the motivation to use the materials provided. The students in our research appear to have followed this same pattern, accessing the CMS only as required by the course despite content being available for their voluntary use. It is possible that the CMS may have had a greater effect on performance if students had been more proactive in its use. Reliability and boundaries of the system were a problem in this experiment. The servers, or access to the servers, were unavailable a few times. This occurred once when homework was due, and was confirmed with the publisher. Students were given an extension of time to complete the homework. This was frustrating for the students because of their concern about possibly losing points and wasting time. It was frustrating for the professor because of the technical difficulty of extending the submission deadline after the original due date had passed. Subsequently, students reported the homework was not available, but this could not be confirmed with the publisher. Further, homework solutions required specific formats, such as decimals and capitalization, which further frustrated students.
These additional frustrations experienced by the students using WebCT may have influenced their
willingness to use the technology and thereby their performance. In all, this and other research indicates that the verdict on the effectiveness of CMS in improving student performance is still not clear. While research on CAI suggests otherwise, this research shows that CMS alone is not effective. Given that most students prefer the classroom experience to purely computer-assisted learning, we assert that course management technology by itself may not improve performance, but that features inherent in CAI technology may be a beneficial extension of both CMS and this research. By combining the opportunity to interact and work with course material through technology with other desirable classroom environmental factors (class size, time, temperature, etc.), CMS may prove beneficial both to instructor effectiveness and student performance.
CONCLUSION

The purpose of the experiment was to evaluate whether student performance improved when CMS was used, after controlling for cognition, learning style, and computer attitude. Computer attitude was introduced as a variable that may influence the learning process and, therefore, the relationship between CMS and performance. The data collected in this study supported the hypothesis that cognition influences performance. Learning style was found not to influence performance. Cognition and learning styles are frequently cited in the literature as influencing performance both with and without technology, so this finding partially supports the literature. The hypothesis that CMS influences performance was not supported. Cognition and learning styles were believed to influence performance based on prior literature. The question addressed in this study was whether or not CMS would influence performance after controlling for cognition and learning style.
The standard belief in higher education is that CMS is beneficial to the learning process. While the empirical evidence to support such benefit is not found in this study, there may be potential to derive efficiencies from course management systems that indirectly improve the education process. If the use of these systems frees faculty from mundane tasks so that they can further develop the curriculum, this could indirectly improve the performance and satisfaction of students. The training required to integrate CMS and the course development issues faculty encounter in using the technology may cause them to rethink approaches used in the classroom, thus improving the learning process. The learning process may also improve through more student/faculty interaction using communication capabilities, faster performance feedback, and the potential to track student effort by monitoring the number and duration of computer accesses. By considering the effects of cognition and technology acceptance in conjunction with CMS, instructors may be better able to develop courses and instructional techniques that improve the ultimate outcome for students.
REFERENCES

Ackerman, P. L. (1987). Individual differences in skill learning: An integration of psychometric and information processing perspectives. Psychological Bulletin, 102, 3-27.

Agarwal, R., & Day, A. E. (1998). The impact of the Internet on economic education. Journal of Economic Education, Spring, 99-110.

Aggarwal, A. K., & Legon, R. (2006). Case study Web-based education diffusion. International Journal of Web-Based Learning and Teaching Technologies, 1(1), 49-72.

Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. Prentice Hall, Englewood Cliffs, NJ.
Al-Khaldi, M. A., & Al-Jabri, I. M. (1998). The relationship of attitudes to computer utilization: New evidence from a developing nation. Computer in Human Behavior, 14(1), 23-42.
Drummond, R. J. (1987). Review of Learning Styles Inventory, in Test Critiques, Sweetland, D. J. K. R. C. (Ed.) Test Corporation of America, Kansas City, Missouri, 308-312.
Allinson, C. W., & Hayes, J. (1990). The validity of the Learning Styles Questionnaire. Psychological Reports, 67, 859-866.
Dunn, R. (1996). How to implement and supervise a learning style program, Association for Supervision and Curriculum Development, Alexandria, VA.
Ausburn, L. J., & Ausburn, F. B. (1978). Cognitive styles: Some information and implications for instructional design. Educational Communication and Technology, 26(4), 337-354.

Bissell, J., White, S., & Zivin, G. (1971). Sensory modalities in children's learning, in Psychology and educational practice, Lesser, G. S. (Ed.), Scott, Foresman, & Company, Glenview, IL.

Brown, A. L., Bransford, J. D., Ferrara, R. A., & Campione, J. C. (1983). Learning, remembering, and understanding, in Handbook of Child Psychology: Cognitive Development, Wiley, 77-166.

Burke, J. (1994). Education's new challenge and choice: Instructional technology--Old byway or superhighway? Leadership Abstracts, 7(10), 22-39.

Chau, P. Y. K. (2001). Influence of computer attitude and self-efficacy on IT usage behavior. Journal of End User Computing, 13(1), 26-33.

Chang, K. T., & Lim, J. (2006). The role of interface elements in Web-mediated interaction and group learning: Theoretical and empirical analysis. International Journal of Web-Based Learning and Teaching Technologies, 1(1), 1-28.

Cotten, K. (1991, 8/31/01). Computer Assisted Instruction. Retrieved June 11, 2003, from http://www.nwrel.org/scpd/sirs/5/cu10.html

Cox, P. W., & Gall, B. G. (1981). Field dependence-independence and psychological differentiation, Educational Testing Service, Princeton.

Doucette, D. (1994). Transforming teaching and learning using information technology. Community College Journal, 65(2), 18-24.
Dunn, R., & Griggs, S. (1998). Learning styles: Quiet revolution in American secondary schools, National Association of Secondary School Principals, Reston, VA.

Dunn, R. S., & Dunn, K. J. (1979). Learning styles/teaching styles: Should they? Can they be matched? Educational Leadership, 36, 238-244.

Eason, K. D., & Damodaran, I. (1981). The needs of the commercial users, in Computer Skills and the User Interface, Atly, M. J. C. J. I. (Ed.), Academic Press, New York, NY.

Ekstrom, R. B., French, J. W., Harman, H. H., & Dermen, D. (1999). Manual for Kit of Factor-Referenced Cognitive Tests 1976, Educational Testing Service, Princeton, NJ.

Farnham-Diggory, S. (1990). "Schooling": The developing child, Harvard University Press, Cambridge, MA.

Ferrell, B. G. (1983). A factor analytic comparison of four learning-style instruments. Journal of Educational Psychology, 75, 33-39.

Fleming, N. D., & Bonwell, C. C. (2002). VARK pack Version 4.1.

Fleming, N. D., & Mills, C. (1992). Helping students understand how they learn, Magma Publications, Madison, Wisconsin.

Foreman, J., & Widmayer, S. (2000). How online course management systems affect the course. Journal of Interactive Instruction Development, Fall, 16-19.
Gadzella, B. M. (1995). Differences in academic achievement as a function of scores on hemisphericity. Perceptual and Motor Skills, 81, 153-154.
Keefe, J. W. (1985). Assessment of learning style variables: The NASSP task force model. Theory Into Practice 24, 138-144.
Guskin, A. E. (1994). Reducing student costs & enhancing student learning, Part II: Restructuring the role of faculty. Change, 26(5), 16-25.
Keefe, J. W. (1987). Learning style theory and practice, National Association of Secondary School Principals, Reston, VA.
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate Data Analysis, Prentice-Hall, Inc., Upper Saddle River, New Jersey.
Keefe, J. W., & Monk, J. S. (1990). Learning style profile examiner’s manual, National Association of Secondary School Principals, Reston, VA.
Hall, D. J., Cegielski, C. G., & Wade, J. N. (2006). Theoretical value belief, cognitive ability, and personality as predictors of student performance in object-oriented programming environments. Decision Sciences Journal of Innovative Education, 4(2), 237-257.

Harrison, A. W., & Rainer, R. K. J. (1992). The influence of individual differences on skill in end-user computing. Journal of Management Information Systems, 9(1), 93-111.

Holliday, W. G. (1976). Teaching verbal chains using flow diagrams and text. AV Communications Review, 24(1), 63-78.

Hunter, J. E. (1986). Cognitive ability, cognitive aptitude, job knowledge, and job performance. Journal of Vocational Behavior, 29, 340-362.

Issroff, K., & Scanlon, E. (2002). Using technology in higher education: An activity theory perspective. Journal of Computer Assisted Learning, 18, 77-83.

James, W. B., & Blank, W. E. (1993). Review and critique of available learning-style instruments for adults. New Directions for Adult and Continuing Education, 39(Fall).

Kaput, J. (1992). Technology and mathematics education. In Handbook of research on mathematics teaching and learning (D. A. Grouws, Ed., pp. 515-556). New York, NY: Macmillan Publishing Company.
Kimmel, P. D., Weygandt, J. J., & Kieso, D. E. (2004). Financial accounting: Tools for business decision making, John Wiley & Sons, Inc., New York, NY.

Klausmeier, H. J., & Loughlin, L. J. (1961). Behaviors during problem solving among children of low, average, and high intelligence. Journal of Educational Psychology, 52, 148-152.

Kulik, C.-L. C., Kulik, J. A., & Shwalb, B. J. (1986). The effectiveness of computer-based adult education: A meta-analysis. Journal of Educational Computing Research, 2(2), 235-252.

Kulik, J. A., Kulik, C.-L. C., & Cohen, P. A. (1980). Effectiveness of computer-based college teaching: A meta-analysis of findings. Review of Educational Research, 50(4), 525-544.

Lang, T. K., & Hall, D. (2006). Academic motivation profile in business classes. Academic Exchange Quarterly, Fall 2005, 145-151.

Leeper, J. D. (2004). Choosing the correct statistical test. Retrieved February 27, 2004.

Litzinger, M. E., & Osif, B. (1993). Accommodating diverse learning styles: Designing instruction for electronic information sources, in What is good instruction now? Library instruction for the 90s, Shitaro, L. (Ed.), Pierian Press, Ann Arbor, MI.

Liu, Y., & Ginther, D. (Fall 1999). Cognitive styles and distance education. Online Journal of Distance Learning Administration, II(III).
Luyben, P. D., Hipworth, K., & Pappas, T. (2003). Effects of CAI on academic performance and attitudes of college students. Teaching of Psychology, 30(2), 154-158.
Roy, M. H., & Elfner, E. (2002). Analyzing student satisfaction with instructional technology techniques. Industrial and Commercial Training, 34(7), 272-277.
Moran, A. (1991). What can learning styles research learn from cognitive psychology? Educational Psychology, 11(3/4), 239-246.
Russell, T. (2002). The “No Significant Difference Phenomenon” Website. Retrieved September 14, 2003, from http://teleeducation.nb.ca/nosignificantdifference
Morgan, G. (2003). Faculty use of course management systems, University of Wisconsin System, Boulder, CO.

Nickell, G. S., & Pinto, J. N. (1986). The computer attitude scale. Computers in Human Behavior, 2, 301-306.

O'Boyle, M. W. (1986). Hemispheric laterality as a basis for learning: What we know and don't know, in Cognitive classroom learning: Understanding, thinking, and problem solving, Andre, G. D. P. T. (Ed.), Academic Press, New York, 21-48.

Bonds-Raacke, J. (2006). Students' attitudes towards introduction of course Web site. Journal of Instructional Psychology, 33(4), 251-255.

Rainer, R. K. J., & Miller, M. D. (1996). An assessment of the psychometric properties of the computer attitude scale. Computers in Human Behavior, 12(1), 93-105.

Riding, & Buckle, C. F. (1990). Learning styles and training performance, Training Agency, Sheffield.

Riding, & Cheema (1991). Cognitive styles -- An overview and integration. Educational Psychology, 11(3), 193-215.
Sewall, T. J. (1986). The measurement of learning styles: A critique of four assessment tools. ERIC Document 261 247.

Simonson, M. (2007). Course management systems. The Quarterly Review of Distance Education, 8(1), xii-ix.

Sims, R. R., Veres, J. G., Watson, P., & Buckner, K. E. (1986). The reliability and classification stability of the Learning Styles Inventory. Educational and Psychological Measurement, 753-760.

Smith, I. M. (1964). Spatial ability, Knapp, San Diego, CA.

Sonnier, I. L. (1991). Hemisphericity: A key to understanding individual differences among teachers and learners. Journal of Instructional Psychology, 18(1), 17-22.

Tennant, M. (1988). Psychology and adult learning, Routledge, London.

Winn, W. D. (1982). The role of diagrammatic representation in learning sequence, identification, and classification as a function of verbal and spatial ability. Journal of Research in Science Teaching, 19(1), 79-89.
Chapter IV
The Role of Organizational, Environmental and Human Factors in E-Learning Diffusion Kholekile L. Gwebu University of New Hampshire, USA Jing Wang Kent State University, USA
ABSTRACT

Improvements in technology have led to innovations in training such as electronic learning (e-learning). E-learning aims to help organizations in their training initiatives by simplifying the training process and cutting costs. It also attempts to help employees in their learning processes by making learning readily accessible. Unfortunately, the diffusion of this innovation has not been as successful as was initially predicted. In this paper we explore the drivers behind the diffusion of e-learning. Apart from the factors investigated by previous research, we believe that one more dimension, human factors, should be taken into account when evaluating the diffusion of a training innovation, since learners are, to a large extent, the central issue of training. In the case of e-learning, we believe that motivation plays a key role in its diffusion.
INTRODUCTION

With the rapid improvement in technology and the growing demand for a knowledge-based labor force, the demand for e-learning has grown considerably over the past few years. E-learning has provided organizations and employees with
tremendous advantages over traditional training (Li & Lau, 2006). It transcends the limitation of time and space and has been reported to provide companies with time and cost saving benefits in the long run (Li & Lau, 2006; Ong, Lai, & Yishun, 2004; Zhang, 2004). In a recent report, Deloitte and Touche (2002) spell out some of the major
advantages of e-learning, including increased volume of training, geographic distribution, and reusability of content. E-learning gives organizations the ability to simultaneously train a larger percentage of employees than does traditional classroom-based training, as employees can be trained anytime from anywhere. Moreover, it permits large dispersed organizations to train all their employees with homogeneous content. This is extremely useful for organizations that want to ensure that employees gain standardized skills and knowledge. Furthermore, it has the power to bring people together for collaborative learning (Zhang & Nunamaker, 2003). Along with its unique advantages, the improvements in technology have also facilitated the adoption and implementation of e-learning. The wide accessibility of the internet, increased bandwidth, better delivery platforms, and the growing selection of high-quality e-learning products have all added to the feasibility and attractiveness of e-learning (McCrea, Gay, & Bacon, 2000). The strategic importance of e-learning is real, and many companies have been investing heavily in this education sector (Huynh, Umesh, & Valacich, 2003). In fact, 95% of the respondents of an American Society for Training and Development survey conducted in 2003 indicated that they had used some form of e-learning in their organizations (Renée, Barbara, & Eduardo, 2005). However, many e-learning initiatives are not living up to initial expectations. According to a study done by the Silicon Valley World Internet Center on corporate e-learning (Duggan & Barich, 2001), out of 44 respondents, only 21% indicated a very high level of executive confidence in e-learning; 58% regarded top management confidence as moderate, 15% as unknown, and 6% as very low. Additionally, a number of studies have suggested that a large number of e-learning initiatives fail (Hamid, 2001).
Such findings have spurred research that attempts to identify the factors contributing to the success of e-learning.
One research stream has primarily focused on the effect of technology on the success of e-learning. Researchers have indicated that text-based e-learning systems tend to make learners disengaged and have proposed the use of different multimedia systems in e-learning (Merchant, Kreie, & Cronan, 2001; Zhang, 2004). Multimedia-integrated prototype systems have also been developed and tested to demonstrate the important role of technology in e-learning (Sampson, Karagiannidis, & Cardinali, 2002; Zhang, 2004). Although such studies have improved our understanding of the alignment between different technologies and e-learners, they remain hampered by one major limitation: they adopt a technologically deterministic view and postulate direct links between technology and e-learning success. By its very nature, such an approach propagates technological materialism and amplifies technology specifics. Human action, interpretation, and organizational and environmental contexts play little role in this stream of research. Hence, this approach provides relatively little detail about the organizational contexts and human action that shape the observed e-learning outcome. Such a materialistic view diminishes the importance of human agency, organizational structures, and complex social environments, and falls short in explaining why identical e-learning technologies succeed in some organizations but fail in others. Hence, an adequate understanding of the factors that facilitate the success of corporate e-learning requires a more balanced view which does not privilege technology over human agency and the social context (Bruckman, 2002). Another research stream has challenged the technologically deterministic view and has focused on the social aspects of e-learning.
A number of studies have examined the way in which organizational culture (Harreld, 1998; Nurmi, 1999), the trainers (Chute, Thompson, & Hancock, 1999; Wagner & Reddy, 1987), and differences in individual training styles (Clariana, 1997; Cohen, 1997) influence e-learning initiatives. Although this body of literature has the potential to offer a richer understanding of the role of social forces in e-learning, it tends to employ a uni-dimensional approach, isolating the influence of human agency from the influence of organizational structures on e-learning. Following the second stream of research, this study challenges the deterministic view of technology and focuses on the social factors facilitating the success of e-learning. However, it differs from prior work in that it postulates that both human agency (the e-learners) and the organizational context play an important role in the outcome of corporate e-learning initiatives. Focusing on only one dimension provides only part of the story. The goal of this study is therefore to develop an integrated conceptual model that explains the influence of both organizational and human factors on the successful diffusion of e-learning in contemporary corporations. While various organizational and individual factors could be explored, we limit the scope of our conceptualization to two organizational factors and one individual factor: organizational complexity, bureaucratic control, and motivation. Our proposed conceptual model emerges at the crossroads of four areas of inquiry: e-learning, motivation, the innovation diffusion literature, and technology-related change. Research on technology-related organizational change provides an overview of the limitations of the technologically deterministic perspective in explaining the outcome of e-learning initiatives. The e-learning, motivation, and innovation diffusion literature helps us conceptualize the relationship between the identified organizational and individual factors and the success of e-learning. The rest of the paper is organized as follows. The next section defines the major terms used throughout the paper in order to eliminate ambiguity that may arise from differences in terminology.
This is largely due to the vast number of definitions that have emerged from different fields of study over the years. A literature review on e-learning, innovation, and motivation is then conducted, followed by a set of propositions. In the methodology section of the paper, we describe how the pilot study was conducted. Subsequently, we present our findings and discuss the results of the study. Finally, conclusions are drawn from the analysis, the limitations of this study are discussed, and areas of future research are proposed.
DEFINITIONS

E-Learning

Over the years the term “e-learning” has been used and interpreted in many different ways in the literature. Some authors use it to refer to the use of any form of electronic learning tool, such as radio, television, or computers, to deliver learning materials. Urdan and Weggen (2000) describe e-learning as “the delivery of content via all electronic media, including the internet, intranets, extranets, satellite broadcast, audio/videotape, interactive TV and CD-ROM.” They also use the term synonymously with technology-based learning. In this paper we use the term e-learning to refer to two forms of contemporary learning: Web-based and computer-based learning. Web-based learning is learning conducted via the Internet, an intranet, an extranet, or a combination of the three. Computer-based learning, on the other hand, includes only learning that utilizes CD-ROM or other training technology on a stand-alone personal computer.
Diffusion

Rogers (1995) described the diffusion of innovations as “the process by which an innovation is communicated through certain channels over time among members of a social system.” Based on this definition, the major players in any innovation diffusion process are the participants,
also referred to by Rogers as adopters. This has led many researchers and practitioners to use the number of adopters as a measure of successful diffusion. However, research in information systems and organizational change has long suggested that adoption differs from the real usage of a technology. Using only the number of adopters as the measure of e-learning diffusion success neglects one important principle of e-learning: its goal is to enable organizations to enhance their effectiveness and competitive position (Phillips, 1997). Therefore, rather than considering only the number of adopters as the indicator of success, we use a perceptual measure, namely managers’ perception of how successful the company’s e-learning initiative is, as the indicator of the success of e-learning diffusion.
LITERATURE REVIEW

The literature is examined in three distinct sections: (1) e-learning, (2) innovation theories, and (3) motivation theories. The purpose is first to investigate the research that has been done in these three fields and then to explore how innovation, motivation, and e-learning interact in an organizational context.
E-Learning

With its increasing popularity and strategic importance, e-learning has received ample attention from both practitioners and scholars. In industry, companies have invested, and continue to invest, heavily in deploying various technologies, including learning management systems, learning content management systems, and reusable learning objects. In academia, a considerable number of studies have investigated the effects of different technologies on e-learning outcomes. For example, some studies have examined the effects of text-based systems versus multimedia systems on e-learning success (Merchant et al.,
2001; Zhang, 2004). Different prototypes and architectures have also been developed in order to improve the outcome of e-learning (Sampson et al., 2002; Zhang, 2004). Although technology plays an important role in e-learning, one should not presume that if a learning system is built, learners will quickly begin to use it. Studies focusing primarily on the materialistic attributes of e-learning systems assume technology to be an objective, external, and independent force that has a relatively deterministic impact on e-learning outcomes. Such studies yield seemingly universal claims such as “the introduction of good e-learning technology will lead to more successful diffusion of e-learning.” But as management in many organizations has discovered, the availability of sophisticated e-learning technologies does not guarantee success (Servage, 2005), and a large number of organizational e-learning endeavors have failed in the past (Hamid, 2001). A major concern with the deterministic view of technology is that it downplays the role of human action, interpretation, and organizational and environmental contexts in e-learning and provides little insight into how e-learning success is shaped by such social forces. Such a view is insufficient to explain why identical e-learning technologies yield divergent outcomes in different organizations. Researchers in information technology and organizational studies have long pointed out that an identical technology can be enacted differently in different organizational contexts (Boudreau & Robey, 2005; Orlikowski, 1992, 1993). They adopt a less deterministic view and propose that both human agency and the social context within which a technology operates play an important role in the outcome of the technology. Similar arguments can be made in the case of e-learning. Adopting this less deterministic view, a number of studies have focused on the social aspects of e-learning.
For instance, Harreld (1998) pointed out that it is prohibitive for management to impose new technology and management process changes in
an organization whose culture is not ready to embrace the changes. Other researchers shift their focus from the organizational aspect to the human aspect of e-learning. They address this issue by examining the trainer’s role or individual training styles. They accentuate that, to ensure the successful implementation of e-learning, trainers should assume roles in e-training different from those they typically assume in classroom-based training. Typical additional roles include instructional designer, instructional developer, materials supporter, technology supporter, facility supporter, and distance-site facilitator (Abernathy, 1998; Chute et al., 1999). Another school of researchers attempts to identify the role of individual learning styles in the effectiveness of e-learning. For example, Clariana (1997) used Kolb, Rubin, and McIntyre’s (1979) Learning Style Inventory (LSI) to study training styles in Computer Assisted Learning (CAL). He found that learning style dimensions shifted after a certain period of exposure to CAL, and that the degree of the shift varied with a learner’s ability and the length and extent of exposure to CAL. However, another study, by Cohen (1997), did not reveal a learning style shift after one year of exposure to CAL. Other studies have also produced mixed results. Gunawardena and Boverie (1992) found no significant correlation between learning style and how students interact with media and methods of instruction, but a correlation did exist between learning style and students’ satisfaction. Larsen (1992), by contrast, concluded from her study that both effectiveness and satisfaction are independent of students’ learning styles.
While this stream of research has shed valuable light on some of the social factors that may contribute to the successful implementation of e-learning in organizations, its limitation is that it focuses on only a single dimension at a time, isolating the role of organizational factors from that of the human agents. This one-dimensional approach fails to reflect the complexity of reality and may also partly explain the inconsistent results. This paper proposes an integrated model that incorporates several dimensions (organizational structure, environmental factors, and human factors, motivation in particular) and provides an in-depth theoretical discussion of how those dimensions work together to influence the diffusion of e-learning.
CONCEPTUAL FOUNDATION

To identify organizational factors that may contribute to the successful diffusion of e-learning, we draw on the rich stream of research on the diffusion of innovation. As we limit the scope of our conceptualization to one human factor, namely motivation, the literature on motivation serves as the foundation for our theoretical discussion of the relationship between individual motivation and e-learning success.
Innovation

The innovation literature is broad and diverse. According to Wolfe (1994), during the five years preceding his study on organizational innovation, 351 dissertations and 1,299 journal articles addressing organizational innovation were written. The most basic questions at the heart of research on innovation adoption and diffusion are questions such as “What organizational structures and management processes facilitate or inhibit innovation?” and “Why are some organizations more innovative than others?” (Damanpour, 1991; Fichman, 2001, 2004). Researchers have also attempted to answer questions such as “Why do certain innovations diffuse successfully in certain organizations and not in others?” (Chen, 1983; Fichman, 2001, 2004). E-learning can be conceptualized as an innovation in learning, and the rich innovation literature furnishes conceptual tools that aid in the identification of key organizational factors that may contribute to the success of e-learning diffusion.
Organizational Structural Theories

The extant literature on innovation is replete with hypotheses, models, and theories that seek to identify organizational structural factors that facilitate the diffusion of innovations. These theories can all be grouped under the umbrella term organizational structural theories. Structural theories have evolved over the years from uni-dimensional theories to middle-range theories. As explained by Damanpour and Gopalakrishnan (1998), uni-dimensional theories use organizational structural variables such as vertical differentiation (the number of levels in an organizational hierarchy) to explain innovation adoption and diffusion. For instance, some researchers posit that vertical differentiation is negatively associated with innovation adoption and diffusion because it increases the number of links in communication channels, thereby inhibiting the flow of innovative ideas (Damanpour & Gopalakrishnan, 1998; Hull & Hage, 1982). Other variables commonly considered in these theories are functional differentiation (the number of different functional units in an organization), specialization (different areas of expertise in an organization), professionalism (professional knowledge, including employees’ education and experience), formalization (the degree to which rules and procedures are followed in an organization), and centralization (whether decision making is centralized or distributed) (Damanpour, 1987). One limitation of uni-dimensional theories is that they neglect other dimensions and do not reflect the complexity of the real world (Damanpour & Gopalakrishnan, 1998). Researchers have also criticized them for the inconsistencies between their predictions and the results gathered from empirical studies (Downs & Mohr, 1976). This limitation of uni-dimensional theories led to the development of middle-range theories. Middle-range structural theories incorporate additional contingency factors to explain the contradictory findings of prior work. Examples
of middle-range structural theories include the dual-core theory, which focuses on the differentiation between types of innovation (Daft, 1978; Damanpour, 1991); the theory of innovation radicalness, which examines incremental versus radical innovations (Dewar & Dutton, 1986); and the ambidextrous theory of innovation, which focuses on the various stages of the innovation process (Damanpour & Gopalakrishnan, 1998; Zmud, 1982). These theories postulate that the relationship between the organizational structural variables studied by uni-dimensional theories and the diffusion of innovation is contingent on the type of innovation, the radicalness of the innovation, or the stage of the innovation process. In the following section, we examine only the ambidextrous theory in greater detail. This is because, depending on the organization using it, e-learning could be categorized as either an administrative or a technical innovation, and as either a radical or an incremental innovation. This relativity in categorization makes the findings of the dual-core theory and the radicalness theory of little value to our study. The ambidextrous theory, on the other hand, is of more interest to us, since there is no such relativity regarding the two stages of e-learning innovation. The ambidextrous theory divides organizational innovation into two distinct stages, the initiation stage and the implementation stage (Duncan, 1976). The initiation stage consists of all the activities pertaining to problem perception, information gathering, attitude formation, and resource development leading to the decision to adopt (Rogers, 1995). This is synonymous with the term adoption as we use it in this paper. The implementation stage is composed of all events and actions relating to change in both the innovation and the organization (Duncan, 1976; Rogers, 1995). This stage also incorporates the diffusion of the innovation.
The ambidextrous theory explores the relationship between organizational structural variables and the two stages of the adoption process. Based
on its findings, high organizational complexity (specialization, functional differentiation, and professionalism) and low bureaucratic control (formalization, centralization, and vertical differentiation) facilitate the adoption of an innovation, while low complexity and high bureaucratic control facilitate the diffusion of innovations (Damanpour & Gopalakrishnan, 1998). Uni-dimensional theories suggest that organizational complexity is positively related to innovation adoption and diffusion, whereas the degree of bureaucratic control in an organization is negatively associated with innovation adoption and diffusion (Damanpour & Gopalakrishnan, 1998). When applied to e-learning, this would mean that organizations with high organizational complexity should expect e-learning to be adopted and diffused, while in organizations with a high level of bureaucratic control, adoption and diffusion are unlikely. The ambidextrous theory adds one more dimension (the stage of innovation) to the uni-dimensional theories. Besides pointing out that organizational complexity has a positive effect, and bureaucratic control a negative effect, on both the adoption and the diffusion of innovation, the ambidextrous theory also examines the correlation between structural variables and the two stages of innovation. According to the theory, the correlation between organizational complexity and the initiation of innovation is high, while the correlation between organizational complexity and diffusion is low. Conversely, the correlation between organizational bureaucracy and the initiation of innovation is low, while the correlation between bureaucracy and the diffusion of innovation is high (Damanpour, 1991). As seen in the above discussion, uni-dimensional as well as middle-range structural theories of innovation usually aim to specify the organizational structural characteristics that lead to the adoption and diffusion of innovation. However, most results from empirical studies using structural
theories tend to conflict (Damanpour, 1991). The problem with these theories is that they do not take into account other variables such as the business environment and human factors when attempting to explain how innovations diffuse into an organization.
Other Dimensions

Environmental Dimension

To explain the variance among the findings of empirical studies conducted using middle-range theories, innovation scholars have attempted to develop more sophisticated and comprehensive models that take into consideration multiple dimensions of innovation. For example, Damanpour and Gopalakrishnan (1998) added the environmental context to their framework to investigate how organizational structures and environmental factors influence the adoption of innovation. They focus on the dynamism of the environment and further classify environmental dynamism into two components: environmental stability and environmental predictability. These two components yield four combinations of environmental characteristics: stable and predictable, stable and unpredictable, unstable and predictable, and unstable and unpredictable. They argue that organizations in environments that are stable and predictable tend to adopt few innovations, and to do so slowly, in contrast to organizations in environments that are unstable and unpredictable. Additionally, organizations in unstable and unpredictable environments tend to have cultivated a culture of innovation. Employees in such organizations are continuously encouraged to make use of new innovations and to be creative, and the organizational structure, culture, and administrative systems reinforce this. Thus, when it comes to innovations such as e-learning, one would expect that in organizations operating in an environment characterized by instability and unpredictability, most members would at some point have experimented with e-learning packages. However, the sustained use of this innovation largely depends on the human factors discussed below.
Human Factors

Although organizational structural variables and the environment provide important insights into the factors that may influence the adoption, and in some cases the diffusion, of innovations, they cannot fully explain why certain innovations, such as e-learning, tend to diffuse in some organizations and not in others. We believe that when analyzing the diffusion of innovations that involve the transformation of human resources, such as e-learning, human factors need to be taken into consideration. This argument is consistent with recent developments in information systems research and organizational change. While earlier work in these two fields tended to privilege technology over human agency, there is an increasing tendency to emphasize the importance of human agents in enacting technologies (Boudreau & Robey, 2005; Orlikowski, 1992, 1993). This human agency position suggests that individuals do not passively accept and use technology. Instead, they actively enact technologies in different ways: they can use a technology minimally or maximally, or improvise with it in ways that are hard to anticipate. Hence, it stands to reason that human factors should be incorporated in order to gain a complete understanding of e-learning success. In particular, we focus on one human factor, motivation, which has been suggested as one of the key drivers of users’ technology acceptance and subsequent usage behavior.
Motivation

Motivation theories seek to identify and explain the factors that influence human behavior, particularly the way in which individuals react to the actions of others around them and to the stimuli in their environment (Wilkinson, Orth, & Benfari, 1986). Over the years several general theories of motivation have emerged, the most discussed being Maslow’s Hierarchy of Needs Theory, Alderfer’s ERG Theory, Herzberg’s Two-Factor Theory, and McClelland’s Learned Needs Theory. Motivation theories can be grouped into three broad categories: content theories, process theories, and reinforcement theories. Content theories advocate that individuals are motivated to fulfill needs. As stated by Knoop (1994), content theories focus on identifying values conducive to, but not necessarily causal to, job satisfaction. Maslow’s Hierarchy of Needs Theory, Alderfer’s ERG Theory, and Herzberg’s Two-Factor Theory all fall under this category. Process theories, on the other hand, are mainly concerned with explaining the manner in which people think and behave to get what they want; theories in this category include Equity Theory and Expectancy Theory. Reinforcement theories are concerned with the effects of rewards upon motivated behavior; McClelland’s Acquired Needs Theory is an example of such a theory. Although these theories differ in how they define motivation, they all agree that if people are not motivated, they will not engage in a given behavior.
Drivers of Motivation

Drawing on motivation theory, we identify two broad drivers of human behavior: intrinsic motivation and extrinsic motivation. Davis, Bagozzi, and Warshaw (1992) found intrinsic and extrinsic motivation to be key drivers of behavioral intention to use a technology. Vallerand (1997) notes that intrinsic motivation refers to the pleasure and inherent satisfaction derived from a specific activity. Intrinsic rewards are those that come from the work itself, for instance, the feeling of accomplishment and success one experiences in performing a task one enjoys. Both the nature of the task and the compatibility of the person with the task primarily influence intrinsic rewards. Venkatesh (1999) states that research in psychology suggests that intrinsic motivation during training leads to favorable outcomes. However, intrinsic motivation is largely determined by the individual, so organizations have limited influence over the outcomes it drives. Extrinsic motivation refers to performing a specific behavior in order to achieve a specific goal (Deci & Ryan, 1987). Extrinsic rewards are not necessarily related to the task itself, but they have strong motivational effects. Examples include pay, benefits, and recognition programs, and they are influenced primarily by the organization. Organizations may use extrinsic rewards to motivate employees to perform various tasks and activities.
E-Learning and Motivation

People in organizations differ from one another in their motivation to learn and to participate in training programs (Ukens, 2001). Motivational theorists have not agreed on whether people are primarily motivated by intrinsic or extrinsic factors. Bruno and Osterloh (2002) argue that intrinsic and extrinsic motivation are interlinked and that companies therefore cannot opt for one or the other in isolation. It is thus imperative that both be considered when attempting to assess whether employees are motivated to perform certain tasks, in this case e-learning. Research suggests that motivation in general (whether intrinsic or extrinsic) is an important factor driving perceptions and behavior, even in a training context (Bruno & Osterloh, 2002; Pierce & Delbecq, 1977). Compared to traditional classroom-based learning initiatives, corporate e-learning initiatives are particularly susceptible to high dropout rates because participation is typically voluntary and often goes unsupervised. Oftentimes, companies require that e-learning courses be taken outside the work environment, during one’s own time. However, studies have shown that most employees not only prefer to take e-courses during work hours but also prefer to take them in the workplace (ASTD, 2006). If employees are not self-motivated to work outside the work environment because they cannot perceive any meaningful benefit from such work, they are likely to drop out of e-learning programs. Previous research has found that individuals tend to persist at activities that are intrinsically motivating (Rieber, 1991). Rosenberg (2001) argues that employees will only embrace learning when they perceive the direct relevance and benefit of the learning program for themselves (extrinsic motivation) and when they sense support from the firm. It is therefore imperative that firms understand the factors that motivate employees (such as firm support in terms of time and sponsorship) to engage in e-learning in order to ensure its successful diffusion into the organization. When applied to the context of e-learning adoption, the theories of motivation are consistent with other technology adoption theories. The Technology Acceptance Model (TAM) proposed by Davis (1989) suggests that when users are presented with a new technology, the factors that influence their decision on whether or not to use it include its perceived usefulness, that is, the degree to which the innovation would enhance the employee’s job performance, and its perceived ease of use, that is, the degree to which an employee believes that using the innovation would be free of effort. Perceived enjoyment, the extent to which an activity is perceived as being enjoyable in its own right (Davis et al., 1992), has also been found to significantly influence perceived ease of use (Hwang & Yi, 2003).
Perceived ease of use of a system and perceived usefulness of the system are both extrinsic motivators, whereas the enjoyment derived from using the system is an intrinsic motivator (Igbaria & Livari, 1995). Several factors will therefore influence an employee’s perception of a particular technology, which in turn will influence actual usage of that technology. Together, these factors contribute to an employee’s motivation to make use of a particular technology. To this point, we have seen from the literature that organizational structural variables (Daft, 1978; Nord & Tucker, 1987; Zmud, 1982) and the environment in which an organization operates (Damanpour & Gopalakrishnan, 1998) are key drivers of the diffusion of innovations in general. In the case of e-learning, however, we believe that human factors, such as the motivation of people within an organization, also play a critical role in whether e-learning will diffuse. If the majority of the employees of an organization do not feel motivated to engage in e-learning, the e-learning initiative of that organization will not diffuse. This leads to our first proposition:

P1. The diffusion of learning innovations such as e-learning in organizations is not only a function of organizational and environmental factors, but also a function of human factors such as motivation.

Furthermore, we believe that human factors such as motivation may in fact moderate the relationship between organizational and environmental variables and the diffusion of e-learning. We therefore propose the following:
P2. Human factors such as motivation will moderate the relationship between organizational factors and e-learning diffusion.

P3. Human factors such as motivation will moderate the relationship between environmental factors and e-learning diffusion.
PROPOSED FRAMEWORK

Our proposed framework stems from the above discussion. According to our argument, human factors such as motivation, organizational variables, and the environment all influence e-learning diffusion. Figure 1 summarizes the proposed relationship between motivation, organizational variables, and the environment. The function in Figure 1 suggests that motivation is a key factor in e-learning diffusion. If employee motivation is low or nonexistent (zero), then e-learning diffusion will not occur (it will also be zero). However, if the organizational structural or environmental variables alone are poor or even zero, diffusion may still occur if motivation is high. Figure 2 graphically illustrates the propositions presented above. If motivation to engage in e-learning is high among employees and the organizational structural and environmental variables are positive, then successful e-learning diffusion is highly likely. If motivation is positive but the structural and environmental variables are negative, then diffusion is still likely. However, if motivation is low and the structural and environmental variables are positive, diffusion is unlikely, and if motivation is low and the structural
Figure 1. The relationship between e-learning diffusion, motivation, organizational structural variables, and the operating environment:

E-learning Diffusion = f [Motivation × (Organizational Variables + Environment)]
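Read multiplicatively, the function in Figure 1 implies that zero motivation yields zero diffusion no matter how favorable the other variables are. The following sketch renders that reading in code; the 0-to-1 scales, the function name, and the simple averaging of the organizational and environmental scores are illustrative assumptions, not part of the proposed model itself.

```python
def diffusion_likelihood(motivation: float,
                         organizational: float,
                         environment: float) -> float:
    """Toy rendering of Figure 1: diffusion = f[motivation x (org + env)].

    All inputs are assumed to be normalized to the 0..1 range; the
    multiplicative form captures the claim that zero motivation yields
    zero diffusion regardless of the other variables.
    """
    return motivation * (organizational + environment) / 2.0

# Zero motivation nullifies favorable structural/environmental conditions.
assert diffusion_likelihood(0.0, 1.0, 1.0) == 0.0
# High motivation can partly compensate for weak structural support.
assert diffusion_likelihood(0.9, 0.2, 0.2) > diffusion_likelihood(0.1, 1.0, 1.0)
```

The multiplicative (rather than additive) role of motivation is the only property the sketch is meant to convey.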
Figure 2. The likelihood of e-learning diffusion as a function of employee motivation (vertical axis) and structural variables and the operating environment (horizontal axis). Quadrant 1 (high motivation, negative structural and environmental variables): diffusion likely. Quadrant 4 (high motivation, positive variables): diffusion highly likely. Quadrant 2 (low motivation, negative variables): diffusion highly unlikely. Quadrant 3 (low motivation, positive variables): diffusion unlikely.
and environmental variables are negative, then e-learning diffusion will be highly unlikely. Therefore, the desirable state for organizations engaged in, or wishing to engage in, e-learning initiatives is the top right-hand quadrant, where employee motivation is high and structural and environmental variables are positive. If an organization is in the top left-hand section (1) of the diagram, it needs to focus on structural variables in order to move toward 4. Activities that build organizational complexity (specialization, functional differentiation, and professionalism) and reduce bureaucratic control (formalization, centralization, and vertical differentiation) are necessary to push such organizations to 4. If an organization is in the bottom right-hand quadrant (3), it needs to focus on activities which cultivate employee motivation in order to move to 4. These and other activities are summarized in Figure 3 and include providing flexible learning hours for employees, involving top management, providing relevant content, attaching financial and non-financial incentives to completing e-learning modules or programs, providing easy-to-use technology, and making the technology readily accessible to employees. Finally, organizations in the bottom left-hand quadrant (2) need to improve both the structural variables and employee motivation in order to move to 4. However, trying to improve both simultaneously may in many cases prove too much of a challenge. An organization may therefore wish to focus first on improving employee motivation, which will move it to 1, and thereafter on techniques for improving structural elements in order to move to 4.
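The quadrant logic of Figure 2 can be sketched directly (a hypothetical encoding in which positive values mean high motivation or favorable structural/environmental conditions):

```python
def quadrant(motivation: float, structure_env: float) -> str:
    """Classify an organization into one of the four quadrants of Figure 2.

    Positive scores are assumed to mean 'high' motivation or 'favorable'
    structural/environmental conditions (our own encoding, for illustration).
    """
    if motivation > 0 and structure_env > 0:
        return "4: highly likely"   # the desirable state
    if motivation > 0:
        return "1: likely"          # focus on structural variables to reach 4
    if structure_env > 0:
        return "3: unlikely"        # focus on cultivating motivation to reach 4
    return "2: highly unlikely"     # improve motivation first, then structure

print(quadrant(0.8, 0.6))   # 4: highly likely
print(quadrant(0.8, -0.4))  # 1: likely
```

Note that the classification is asymmetric: positive motivation alone still yields "likely," while positive structure alone does not, mirroring the framework's emphasis on motivation.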
PILOT STUDY

Although the purpose of the study was to develop rather than empirically test the conceptual framework presented above, we had the opportunity to conduct a pilot study to assess some of the stated conceptual propositions. Of interest to us was the role of motivation (a human factor) in e-learning diffusion in corporate environments. Data for this study were collected from four corporations based in Northeast Ohio, from both employees and management.
Instrumentation

The instruments used included two questionnaires: one specifically for management and the other specifically for employees.
Figure 3. Activities that improve employee motivation to engage in e-learning

Corporate Support
- Top management participation and involvement
- Flexible learning hours
- Relevant content
- Incentives
  - Monetary: cash bonuses, base salary increase, stock options
  - Non-monetary: promotion to a higher-profile position, recognition, accolades

Technology
- Easy to use
- Enjoyable to use
- Readily accessible
- Technical support
The questionnaire for managers was developed by the researchers based on the literature review on e-learning, innovation theories, and motivation theories. The questionnaire for employees was based on an instrument formed by Wherry and South (1977) and the literature review. The questions in both questionnaires regarding employee motivation, organization structure, and effectiveness of e-learning were formulated to be answered on a five-point Likert scale with 1 = strongly disagree, 2 = disagree, 3 = neutral, 4 = agree, and 5 = strongly agree.
The Management Questionnaire

Considering that management would have better knowledge of aggregate information about the company, we designed a questionnaire specifically for management. The aim of the management questionnaire was to assess organizational structural variables such as Organizational Complexity (specialization, functional differentiation, professionalism) and Bureaucratic Control (formalization, centralization, vertical differentiation). Based on the literature review on innovation theories, six questions were designed to measure the organizational structural variables. Although the main thrust of the management questionnaire was to determine the various structural variables in the surveyed organizations, we also added a second dimension to assess the effectiveness of e-learning diffusion and management's perceptions about the benefits of e-learning. A question was also included to assess the environment in which the organizations currently operate; it asked about the level of competition perceived by management. The final part of the management questionnaire sought general personal information about the manager, such as age, gender, and educational background.
The Employee Questionnaire

The employee questionnaire aimed to address the issue of employee motivation to engage in e-learning. The questionnaire was based on an
instrument formed by Wherry and South (1977) to evaluate employee motivation. The instrument evaluates eight broad categories of intrinsic and extrinsic motivation, namely: (1) responsibility, (2) challenge vs. boredom, (3) high level of activity, (4) goal orientation, (5) social recognition, (6) being appreciated, (7) being judged reliable, and (8) immediate gratification. The first four items in the above list cover intrinsic motivation while the last four cover extrinsic motivation (Wherry & South, 1977). Seventy sample questions are given in the Wherry and South (1977) instrument, which may be used to evaluate the eight broad categories of intrinsic and extrinsic motivation. We adapted some of these questions to an e-learning context. Our questionnaire was divided into two distinct sections. A five-point Likert scale was used in the first section to assess questions about intrinsic and extrinsic motivation, while the second section comprised general questions such as employee age, gender, years of working experience, and level of education.
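One plausible way to aggregate such 1–5 Likert responses into a motivation score on the [-1, 1] range reported in Table 2 is a linear rescaling of the item mean (the paper does not specify its aggregation; this is our assumption):

```python
def motivation_score(likert_responses: list[int]) -> float:
    """Rescale the mean of 1-5 Likert responses to [-1, 1].

    A hypothetical aggregation: a mean of 1 maps to -1, a neutral mean of 3
    maps to 0, and a mean of 5 maps to +1.
    """
    mean = sum(likert_responses) / len(likert_responses)
    return (mean - 3) / 2

print(round(motivation_score([4, 4, 5, 3, 4]), 2))  # 0.5
```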
FINDINGS

Demographic data for the respondents are summarized in Table 1. Table 2 outlines some of the findings of the pilot study regarding the independent variables motivation, environment,
organizational structure, and the dependent variable, successful diffusion of e-learning. The second column in Table 2, Motivation (x1), shows the mean motivation level of employees to engage in e-learning. On average, employees in all companies seem motivated to engage in e-learning, but the degree of motivation varies from company to company. The next column, Environment (x2), shows the environment in which each of the firms operates. Recall from the literature that an organization's environment can be stable or unstable and predictable or unpredictable. If an organization is in an unstable and unpredictable environment, a score of +1 was assigned. If an organization is in an unstable and predictable or a stable and unpredictable environment, the environment is considered neutral and a score of 0 was assigned. Finally, if an organization is in a stable and predictable environment, a score of -1 was assigned. On average, all the companies appear to be operating in an unstable and unpredictable environment. Next, organizational complexity and bureaucratic control variables were considered. Two companies (1 and 4) were not complex in structure, while company 3 was complex and company 2 was somewhere in the middle. Previous research (Damanpour, 1991; Hull & Hage, 1982) suggests that there is a negative relationship between bureaucratic
Table 1. Employee demographics

Gender:          Male 65%, Female 35%
Industry:        Service 43%, Manufacturing 57%
Education Level: Bachelor Degree 81%, No Bachelor Degree 19%
Table 2. Descriptive statistics

Company | Motivation (x1) | Environment (x2) | Organization Structure (x3) | Diffusion (x4)
1       | 0.46            | 1                | C = -0.33, B = 0.00         | 0.5
2       | 0.54            | 1                | C = 0.00, B = -0.67         | 0.75
3       | 0.51            | 1                | C = 0.67, B = 0.67          | 0.5
4       | 0.54            | 1                | C = 1.00, B = 0.33          | 1
Range   | -1 ≤ x1 ≤ 1     | -1 ≤ x2 ≤ 1      | -1 ≤ C ≤ 1, -1 ≤ B ≤ 1      | 0 ≤ x4 ≤ 1
control and innovation; therefore a score of -1 was given to companies that displayed high levels of bureaucratic control and a score of 1 was given to those which displayed low levels of bureaucratic control. Half the companies appear to have some level of bureaucratic control. Finally, successful diffusion of e-learning was assessed. The diffusion of e-learning could be either successful or unsuccessful. If fully successful, a score of 1 was assigned, while if unsuccessful a score of 0 was assigned. Companies scoring 0.5 and below were considered unsuccessful in their e-learning diffusion initiative, while those scoring above 0.5 were considered successful. Companies 2 and 4 appear to be successful, while companies 1 and 3 appear to be unsuccessful in the diffusion of their e-learning initiatives.
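The scoring rules described above can be sketched as follows (the function names and boolean encoding are our own, chosen for illustration):

```python
def environment_score(stable: bool, predictable: bool) -> int:
    """+1 for unstable & unpredictable, -1 for stable & predictable, else 0."""
    if not stable and not predictable:
        return 1
    if stable and predictable:
        return -1
    return 0  # mixed environments were treated as neutral

def diffusion_successful(x4: float) -> bool:
    """Diffusion scores above 0.5 were treated as successful."""
    return x4 > 0.5

# All four companies scored +1 on environment in Table 2.
assert environment_score(stable=False, predictable=False) == 1

# Companies 2 (x4 = 0.75) and 4 (x4 = 1) count as successful; 1 and 3 (x4 = 0.5) do not.
assert diffusion_successful(0.75) and not diffusion_successful(0.5)
```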
DISCUSSION

The pilot study yielded some interesting results. Although a small convenience sample was used, which prevents the research from making any probability statements, the descriptive statistics offered some interesting insights into the variables examined in the study. Upon examination of the data it is evident that all
companies reported that they have been operating in an unstable and unpredictable environment, so it is difficult to determine the impact of the environment on the successful diffusion of e-learning. No pattern is evident in the relationship between organizational variables and the successful diffusion of e-learning; however, due to the small sample size it is difficult to draw generalized conclusions from this. A very interesting finding from the pilot, however, is that there seem to be differences in employees' motivation levels to engage in e-learning. Companies 1 and 3 had the lowest employee motivation levels and also the lowest levels of e-learning diffusion, while companies 2 and 4 had the highest levels of employee motivation and the highest levels of e-learning diffusion.
CONCLUSIONS AND DIRECTIONS FOR FUTURE RESEARCH

The purpose of this study was to conceptually explore the factors that influence e-learning diffusion in contemporary corporations. We treated e-learning as an innovation in learning and applied concepts from the literature on organizational change, innovation diffusion, and motivation to
assess the factors that influence the diffusion of e-learning in organizations. Among these factors are organizational variables such as Organizational Complexity (specialization, functional differentiation, professionalism) and Bureaucratic Control (formalization, centralization, vertical differentiation). Previous literature has also looked into the effect of the environment, so we factored this into the analysis as well. However, in the case of e-learning we believe that human factors may play a crucial role in determining whether or not an innovation such as e-learning will successfully diffuse into an organization. We considered just one human factor (employee motivation) and suggested not only that motivation may contribute towards the successful diffusion of e-learning, but that without employee motivation e-learning initiatives in an organization could fail. Armed with this framework, future researchers could empirically explore the relationships between these three factors. We conducted a pilot study to gain insights into the relationships between employee motivation, organizational variables, and the environment and found that in organizations with lower levels of employee motivation the diffusion of e-learning was not as successful as in organizations with higher levels of employee motivation. Clearly the pilot study has several limitations which future research could address. The first major limitation was the small sample size. For the results to be generalized it is necessary to substantially increase the sample size. A second limitation is the sampling procedure used: a convenience sample of companies was used when, ideally, a random sample should have been used. Without a random sample we cannot and do not make any probability statements and are obliged to report only descriptive statistics.
Despite the above limitations of the pilot study, the main contribution of this paper, the conceptual framework, offers researchers the opportunity
to investigate a myriad of research questions related to the successful diffusion of e-learning by considering e-learning as an innovation in learning and factoring in human factors. In this paper the human factor considered was motivation. Future researchers may wish to look into the role of other human factors. For example, they may consider issues such as the impact of the level of technological expertise among employees on e-learning diffusion. This approach of considering organizational, environmental, and human factors together, rather than in isolation as some previous studies have, has the potential to advance knowledge of the factors which contribute to the successful diffusion of e-learning.
REFERENCES

Abernathy, D. J. (1998). The WWW of distance learning: Who does what and where? Training and Development, 52(1), 29-30.

ASTD. (2006). State of the industry report (Online). Available at www.astd.org

Boudreau, M.-C., & Robey, D. (2005). Enacting integrated information technology: A human agency perspective. Organization Science, 16(1), 3-18.

Bruckman, A. (2002). The future of e-learning communities. Communications of the ACM, 45(4), 60-63.

Bruno, S. F., & Osterloh, M. (2002). Successful management by motivation: Balancing intrinsic and extrinsic incentives. Berlin Heidelberg: Springer-Verlag.

Chen, E. K. Y. (1983). Multinational corporations and technology diffusion in Hong Kong manufacturing. Applied Economics, 15(3), 309-312.

Chute, A. G., Thompson, M. M., & Hancock, B. W. (1999). The McGraw-Hill handbook of distance learning. New York: McGraw-Hill.
Clariana, R. B. (1997). Considering learning style in computer-assisted learning. British Journal of Education Technology, 28(1), 66-68. Cohen, V. L. (1997). Learning styles in a technology-rich environment. Journal of Research on Computing in Education, 29(4), 339-350. Daft, R. L. (1978). A dual-core model of organizational innovation. Academy of Management Journal, 21(2), 193-210. Damanpour, F. (1987). The adoption of technological, administrative, and ancillary innovations: Impact of organizational factors. Journal of Management, 13(4), 675. Damanpour, F. (1991). Organizational innovation: A meta-analysis of effects of determinants and moderators. Academy of Management Journal, 34(3), 555-590.
Dewar, R. D., & Dutton, J. E. (1986). The adoption of radical and incremental innovations: An empirical analysis. Management Science, 32(11), 1422-1433.

Downs, G. W., & Mohr, L. B. (1976). Conceptual issues in the study of innovation. Administrative Science Quarterly, 21, 700-714.

Duggan, S., & Barich, S. (2001). The knowledge economy and corporate e-learning. CA: The Silicon Valley World Internet Centre.

Duncan, R. B. (1976). The ambidextrous organization: Designing dual structures for innovation. In R. H. Kilmann, L. R. Pondy & D. P. Slevin (Eds.), The management of organization: Strategy and implementation (pp. 167-188). New York: North-Holland.

Fichman, R. G. (2001). The role of aggregation in the measurement of IT-related organizational innovation. MIS Quarterly, 25(4), 427-455.
Damanpour, F., & Gopalakrishnan, S. (1998). Theories of organizational structure and innovation adoption: The role of environmental change. Journal of Engineering and Technology Management, 15(1), 1-24.
Fichman, R. G. (2004). Going beyond the dominant paradigm for information technology innovation research: Emerging concepts and methods. Journal of the Association for Information Systems, 5(8), 314-355.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 318-340.
Gunawardena, C. N., & Boverie, P. E. (1992). Impact of learning styles on instructional design for distance education. Paper presented at the World Conference of the International Council of Distance Education, Bangkok, Thailand.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1992). Extrinsic and intrinsic motivation to use computers in the workplace. Journal of Applied Social Psychology, 22(14), 1111-1132.

Deci, E. L., & Ryan, R. M. (1987). The support of autonomy and the control of behavior. Journal of Personality and Social Psychology, 53(6), 1024-1037.

Deloitte Research. (2002). From e-learning to enterprise learning, becoming a strategic organization. Retrieved from www.dc.com/research
Hamid, A. A. (2001). E-learning: Is it the "e" or the learning that matters? The Internet and Higher Education, 4(3-4), 311-316.

Harreld, J. B. (1998). Building faster, smarter organizations. In D. Tapscott, A. Lowy & N. Klym (Eds.), Blueprint to the digital economy: Creating wealth in the era of e-business. New York: McGraw-Hill.

Hull, F., & Hage, J. (1982). Organizing for innovation: Beyond Burns and Stalker's organic type. Sociology, 16(4), 564-577.
Huynh, M. Q., Umesh, U. N., & Valacich, J. S. (2003). E-learning as an emerging entrepreneurial enterprise in universities and firms. Communications of the Association for Information Systems, 12, 48-68.

Hwang, Y., & Yi, M. Y. (2003). Predicting the use of web-based information systems: Self-efficacy, enjoyment, learning goal orientation, and the technology acceptance model. International Journal of Human-Computer Studies, 59(4), 431-449.

Igbaria, M., & Iivari, J. (1995). The effects of self-efficacy on computer usage. OMEGA International Journal of Management Science, 23(6), 587-605.

Knoop, R. (1994). Work values and job satisfaction. Journal of Psychology, 128(6), 683.

Kolb, D. A., Rubin, I. M., & McIntyre, J. M. (1979). Organizational psychology: An experiential approach (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.

Larsen, R. E. (1992). Relationship of learning style to the effectiveness and acceptance of interactive video instruction. Journal of Computer-Based Instruction, 19(1), 17-21.

Li, F. W. B., & Lau, R. W. H. (2006). On-demand e-learning content delivery over the internet. International Journal of Distance Education Technologies, 4(1), 46-55.

McCrea, F. K., Gay, R., & Bacon, R. (2000). Riding the big waves: A white paper on the B2B e-learning industry (pp. 1-51). San Francisco: Thomas Weisel Partners.

Merchant, S., Kreie, J., & Cronan, T. (2001). Training end users: Assessing the effectiveness of multimedia CBT. Journal of Computer Information Systems, 41(3), 20-25.

Nord, W. R., & Tucker, S. (1987). Implementing routine and radical innovation. Lexington, MA: Lexington Books.
Nurmi, R. (1999). Knowledge-intensive firms. In J. W. Cortada & J. A. Woods (Eds.), The knowledge management yearbook. Boston: Butterworth-Heinemann.

Ong, C.-S., Lai, J., & Yishun, W. (2004). Factors affecting engineers' acceptance of asynchronous e-learning systems in high-tech companies. Information & Management, 41(6), 795-804.

Orlikowski, W. J. (1992). The duality of technology: Rethinking the concept of technology in organizations. Organization Science, 3(3), 398-427.

Orlikowski, W. J. (1993). CASE tools as organizational change: Investigating incremental and radical changes in systems development. MIS Quarterly, 17(3), 309-340.

Phillips, J. J. (1997). Handbook of training evaluation and measurement methods (3rd ed.). Houston, TX: Gulf Publishing.

Pierce, J. L., & Delbecq, A. (1977). Organization structure, individual attitudes and innovation. Academy of Management Review, 2(1), 27.

DeRouin, R. E., Fritzsche, B. A., & Salas, E. (2005). E-learning in organizations. Journal of Management, 31(6), 920-940.

Rieber, L. R. (1991). Animation, incidental learning, and continuing motivation. Journal of Educational Psychology, 83(3), 318.

Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: The Free Press.

Rosenberg, M. (2001). E-learning: Strategies for delivering knowledge in the digital age. Toronto: McGraw-Hill.

Sampson, D., Karagiannidis, C., & Cardinali, F. (2002). An architecture for web-based e-learning promoting re-usable adaptive educational e-content. Educational Technology and Society, 5(4), 27-36.
Servage, L. (2005). Strategizing for workplace elearning: Some critical considerations. The Journal of Workplace Learning, 17(5-6), 304-317.
Wherry, R. J., & South, J. C. (1977). A Worker Motivation Scale. Personnel Psychology, 30(4), 613-636.
Ukens, L. (2001). What smart trainers should know: The secrets of success from the world’s foremost experts. San Francisco: John Wiley & Sons, Inc.
Wilkinson, H. E., Orth, C. D., & Benfari, R. C. (1986). Motivation theories: An integrated operational model. SAM Advanced Management Journal, 51(4), 24.
Urdan, T. A., & Weggen, C. C. (2000). Corporate e-learning: Exploring a new frontier. New York: Hambrecht & Co.
Wolfe, R. A. (1994). Organizational innovation: review, critique and suggested research. Journal of Management Studies, 31(3), 405-431.
Vallerand, R. J. (1997). Toward a hierarchical model of intrinsic and extrinsic motivation. Advances in Experimental Social Psychology, 27, 271-360.
Zhang, D. (2004). Virtual mentor and the lab system—Toward building an interactive, personalized, and intelligent e-learning environment. Journal of Computer Information Systems, 44(3), 35-43.
Venkatesh, V. (1999). Creation of favorable user perceptions: Exploring the role of intrinsic motivation. MIS Quarterly, 23(2), 239-260. Wagner, E. D., & Reddy, N. L. (1987). Design considerations in selecting teleconferencing for instruction. The American Journal of Distance Education, 1(3), 49-56.
Zhang, D., & Nunamaker, J. F. (2003). Powering e-learning in the new millennium: An overview of e-learning and enabling technology. Information Systems Frontiers, 5(2), 207-218. Zmud, R. W. (1982). Diffusion of modern software practices: Influence of centralization and formalization. Management Science, 28(12), 1421-1431.
Chapter V
Distance Education:
Satisfaction and Success Wm. Benjamin Martz, Jr. Northern Kentucky University, USA Morgan Shepherd University of Colorado at Colorado Springs, USA
ABSTRACT

Almost 3.5 million students were taking at least one online course during the fall 2006 term. The 9.7% growth rate for online enrollments far exceeds the 1.5% growth of the overall higher education student population. (Allen and Seaman, 2007)

By 2006, the distance education industry was well beyond $33.6 billion (Merit Education, 2003). As with most markets, one of the keys to taking advantage of this growing market is customer satisfaction: the greater the student satisfaction in a distance program, the more likely that program will be successful. This paper identifies five key components of satisfaction for distance education programs through a student satisfaction questionnaire and factor analysis. A questionnaire was developed using these variables and administered to 341 distance students. The results revealed five constructs for student satisfaction in a distance education program (Martz and Reddy, 2005; Martz and Shepherd, 2007). Using these factors as guidance, this paper extends those findings to provide some operational and administrative implications.
Copyright © 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION – WHY DO WE CARE ABOUT SATISFACTION

The education industry is being transformed by the ever-growing presence of distance education. The industry has many segments and ancillary components, including higher education, corporate training, and IT infrastructure. The distance education industry has several market drivers that educators, colleges, and businesses must take a serious look at to successfully implement distance education programs and courses. Howell et al. (2003) posit that those interested in distance education must be well informed about the trends within the industry. One overriding concern is that the interest in distance education is driving a huge investment without an understanding of what will make a distance education program successful. The first market driver is the significant market potential. International Data Corporation sets the worldwide value of the e-learning market at $23 billion for 2004. Over 90% of all colleges were expected to offer some form of online courses by 2004 (Institute of Higher Education Policy, 2000). Over 3.5 million students took at least one online course in fall 2006 (Allen and Seaman, 2007). The corporate segment of this industry is growing substantially as well. Corporations envision online training warehouses that will save large amounts of training dollars. Estimates put this training warehouse market segment at $11 billion in 2003 (Kariya, 2003). At the same time, major corporations are expanding their corporate university/distance learning programs. AACSB International (1999) reports that there are more than 1,600 corporate universities with their eyes on a distance learning component and further predicts there will be more corporate than traditional universities by 2010. The corporate market growth is being driven by the profit potential. Corporate managers and college administrators envision significant cost reduction (Traupel, 2004) and significantly higher
demand (Sausne, 2003) for training/education, both of which can create higher profits. For example, elective classes that do not have enough students enrolled in on-campus classes may pick up enough distance students to make teaching the course more feasible (Creahan and Hoge, 1998). The college or school’s mission or charter represents another driver to implement distance education programs. As most educational institutions serve a geographical region, either by charter or mission, a distance-learning program may be a practical method to help satisfy this strategic mission (Creahan and Hoge, 1998). The distance education model can be seen as a way to improve profits and improve the ability to provide education to students who may have trouble coming to a certain geographic location. A second major driver is the dramatic change in career expectations by both employers and employees. These changes are being seen in the concept of life-long learning (Howell, et al. 2003). Today, employees are not expected to stay in the same job for long periods of time. Careers today encompass jobs in multiple industries, combinations of part-time work in multiple jobs, telecommuting, leaving and re-entering the full-time work force and switching jobs more often than in the past. Today’s employee readily envisions the need to maintain a level of knowledge current with the career demands (Boyatzis and Kram, 1999). In contrast to the market drivers, the “commercialization” of education also raises concerns about the basic process of learning (Noble, 1999). By definition, distance education changes the basic paradigm of the education environment. In turn, this means that students will probably respond differently to this environment than they do to the traditional classroom. Some of the responses will be intended and some will be unintended. For example, what are some inhibitors to learning or fundamental problems caused by using a distance learning environment? 
One problem identified is the lower student retention found in distance programs. Students
seem to find it easier to "drop out" of a class. Carr (2000) reports a 50% dropout rate for online courses. The willingness and ease with which students drop online classes is a real concern for the higher education segment, but may be even more problematic for the corporate segment. Corporations that schedule training and then anticipate a certain level of trained employees by the end of the distance class will court disaster if dropout rates are higher than expected. One explanation for the high dropout rate is the lack of social interaction found in the distance environment. Haythornthwaite et al. (2000) looked at how social cues such as text without voice, voice without body language, class attendance without seating arrangements, and students signing in without attending internet class impacted students "fading back." They found that the likelihood of students "fading back" is greater in distance learning classes than in traditional face-to-face classes. Other researchers, such as Hogan and Kwiatkowski (1998) and Hearn and Scott (1998), argue that the emotional aspects of teaching methods have been ignored in the distance environment and that, before adopting technology for distance teaching, education must find a way to supplement the social context of learning. Finally, Kirkman et al. (2002) suggested that trust might be a significant characteristic in the relationship between the student and the instructor in a learning environment. Their concern is that the distance environment may adversely impact this trust component. Another body of research that can be useful in understanding the distance education environment comes from researchers studying how people react when introduced to technology. Poole and DeSanctis (1990) suggested a model called Adaptive Structuration Theory (AST).
The fundamental premise of the model is that the technology under study is the limiting factor or the constraint for communication and that the users of the technology figure out alternative ways to communicate over the channel (technology). A good example
here is how a sender of email may use combinations of keyboard characters or emoticons (e.g., :) for a sarcastic smile, ;) for a wink, :o for an exclamation of surprise) to communicate more about their emotion on a subject to the receiver. Ultimately, whether from an educational perspective or from a corporate perspective, the key to realizing the potential of distance education is trading off the benefits against the concerns to produce a quality product. Not all distance education programs have been a success (NEA, 2002). Peters and Waterman (1982) popularized the notion that customer satisfaction defines a quality product and that a quality product, in turn, leads to a successful company. With these perspectives in mind, we suggest that customer satisfaction is an important measure of quality for distance education programs. Therefore, one of the key leading indicators of a program's success will be the satisfaction of one of its key stakeholders – its students. It is this leading indicator that we wish to explore further.
METHODOLOGY – HOW WE COLLECTED DATA

The distance program used in this study is the Distance MBA offered by the College of Business at the University of Colorado at Colorado Springs (UCCS-DMBA). The UCCS-DMBA is one of the largest online, AACSB-accredited MBA programs in the world. The majority of these students are employed full-time. The program has been in existence since Fall 1996 and has over 179 graduates. In Fall 2002, the program served 206 students from 39 states and 12 countries. The program offers an AACSB-accredited MBA and its curriculum parallels the on-campus MBA curriculum. Close to 33% of the enrolled students are female. The oldest student enrolled is 60 years old and the youngest is 22. The average age of all students enrolled is 35. Over 25 Ph.D.-qualified instructors participate in
developing and delivering the distance program annually. Recently, the news magazine US News and World Report (2001) classified the program as one of the top twenty-six distance education programs. Each year, the program administration surveys the students as part of an ongoing assessment process to maintain accreditation. To help gather the data we needed, a 49-question questionnaire was developed asking about the concerns identified in the literature discussed earlier. Five-point
Likert questions (Strongly Disagree; Disagree; Neutral; Agree; Strongly Agree) were used to have students rate their experience with courses in the DMBA program. The questionnaire was sent to 341 students enrolled in the DMBA program. Data gathered from each subject included the subject’s grade, gender, number of courses taken, student status, amount of time expected to spend in the reference course, and the amount of time actually spent in the reference course (Martz et al., 2004).
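The analysis of these responses rests on Pearson correlation between each Likert item and an overall-satisfaction question; a self-contained sketch using toy data (not the study's actual responses):

```python
import statistics

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson correlation between one questionnaire item and overall satisfaction."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Toy Likert data (1-5): one item's responses versus overall satisfaction.
item = [4, 5, 3, 4, 2, 5, 3, 4]
satisfaction = [4, 5, 3, 5, 2, 4, 3, 4]
print(round(pearson_r(item, satisfaction), 3))
```

Items whose correlation with satisfaction is significant survive this screening; in the study, twenty-two of the forty-nine questions did.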
Table 1. Questions that correlate significantly to satisfaction

ID | Question Statement | Correlation Coef. | Sign.
16 | I was satisfied with the content of the course | .605 | .000
17 | The tests were fair assessments of my knowledge | .473 | .000
18 | I would take another distance course with this professor | .755 | .000
19 | I would take another distance course | .398 | .000
20 | The course workload was fair | .467 | .000
21 | The amount of interaction with the professor and other students was what I expected. | .710 | .000
22 | The course used groups to help with learning | .495 | .000
23 | I would like to have had more interaction with the professor. | -.508 | .000
26 | The course content was valuable to me personally | .439 | .000
28 | Grading was fair | .735 | .000
30 | Often I felt "lost" in the distance class | -.394 | .000
31 | The class instructions were explicit | .452 | .000
33 | Feedback from the instructor was timely | .592 | .000
34 | I received personalized feedback from the instructor | .499 | .000
36 | I would have learned more if I had taken this class on-campus (as opposed to online) | -.400 | .000
37 | This course made me think critically about the issues covered. | .423 | .000
38 | I think technology (email, web, discussion forums) was utilized effectively in this class | .559 | .000
39 | I felt that I could customize my learning more in the distance format. | .254 | .001
42 | The course content was valuable to me professionally | .442 | .000
43 | I missed the interaction of a "live," traditional classroom | -.341 | .002
46 | Overall, the program is a good value (quality/cost) | .258 | .017
RESULTS – WHAT MAKES A DIFFERENCE

Responses to one question, "Overall, I was satisfied with the course," were used as each subject's measure of general satisfaction. The data set was loaded into SPSS for analysis. Table 1 details the twenty-two questions that proved significantly correlated with reported satisfaction. These twenty-two questions identify characteristics to attend to in order to improve student satisfaction. With such a large number of variables impacting "satisfaction," a more detailed analysis was needed. Kerlinger's (1986) discussion of factor analysis as a research technique "to explore variable areas in order to identify the factors presumably underlying the variables" (p. 590) matches our need well. An SPSS factor analysis was performed using Principal Component Analysis with Varimax rotation on those questions that had proven significantly correlated with satisfaction (Table 1). Using the values created from the factor analysis, each question was allocated to a categorical factor. The components created from these questions clustered around five categories that explain 66.9% of the variance in satisfaction. How well the questions fit into each category was assessed statistically: all reliability coefficients (Cronbach's alpha) are above .70 and all eigenvalues are above 1.00, indicating an acceptable level for a viable factor (Kline, 1993; Nunnally, 1978). The bottom line is this: if we improve these characteristics, we improve satisfaction. In summary, twenty-two variables from the questionnaire proved significantly correlated with satisfaction. An exploratory factor analysis (Tucker & MacCallum, 1997) grouped those twenty-two variables into five constructs that we labeled: Interaction with the Professor; Fairness; Content of the Course; Classroom Interaction; and Learning Value. Each construct is summarized below.
Interaction with the Professor: The first construct combines the questions Q18, Q21, Q33, and Q34. The ideas permeating these questions center on feedback and interaction with the professor. The results show that feedback that is timely and personalized helped raise the satisfaction ratings.

Fairness: Q17, Q20, Q28, and Q31 create the second construct. These questions deal explicitly with the fairness of the tests, the course workload and grading, and the explicitness of the course instructions. All of these questions deal implicitly with setting the expectations for the course: the better the expectations are set, the higher the satisfaction.

Course Content: The questions Q16, Q26, Q39, and Q42 center on course content, looking at content from both a personal and a professional level. As one would hope, "good" course content influences the basic satisfaction with that course.

Classroom Interaction: Questions Q23, Q30, Q36, and Q43 all deal with interaction, and all have negative loading values. This is interesting because the statements in these questions were testing many of the concerns expressed about the "commercialization" of distance education; the data suggest those inhibitors are not significant. A negative correlation means that the students who were more satisfied with their online experience: (1) did not miss the interactions of a traditional classroom (Q43); (2) did not want more interactions (Q23); (3) did not think they would have learned more if they had taken the course in a traditional, on-campus environment (Q36); and (4) did not report feeling "lost" in the class (Q30).

Learning Value: The last factor (Q19, Q22, Q37, Q38, Q46) looks at learning value. Questions ask the subject directly to rate the value of the course and whether or not the subject would
take another distance course. Other questions ask for ratings on "learning" in groups and thinking "critically." Again, as one would anticipate, the more value a student perceives in the course, the more likely they are to be satisfied.
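The analysis pipeline described above (screening items by correlation with overall satisfaction, counting components via the eigenvalue criterion, and checking reliability with Cronbach's alpha) can be sketched in a few lines of Python. The data below are synthetic and the names are invented; this is an illustration of the technique, not a reproduction of the study's SPSS run.

```python
# Sketch: correlation screening, Kaiser criterion, Cronbach's alpha.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_students = 120

# A latent "satisfaction" drives several 5-point Likert items plus noise.
latent = rng.normal(size=n_students)

def likert(signal, noise=1.0):
    raw = signal + rng.normal(scale=noise, size=n_students)
    return np.clip(np.round(2.5 + raw), 1, 5)  # map onto the 1-5 scale

satisfaction = likert(latent)
items = np.column_stack([likert(latent) for _ in range(4)] +
                        [likert(np.zeros(n_students))])  # last item unrelated

# Step 1: keep only items significantly correlated with satisfaction.
kept = [j for j in range(items.shape[1])
        if pearsonr(items[:, j], satisfaction)[1] < .05]

# Step 2: Kaiser criterion - eigenvalues of the correlation matrix above 1
# suggest the number of viable factors.
corr = np.corrcoef(items[:, kept], rowvar=False)
n_factors = int(np.sum(np.linalg.eigvalsh(corr) > 1.0))

# Step 3: Cronbach's alpha for the retained item set.
def cronbach_alpha(x):
    k = x.shape[1]
    return k / (k - 1) * (1 - x.var(axis=0, ddof=1).sum()
                          / x.sum(axis=1).var(ddof=1))

alpha = cronbach_alpha(items[:, kept])
print(len(kept), n_factors, round(alpha, 2))
```

The study's actual analysis also applied a Varimax rotation to the extracted components before allocating questions to factors; that step is omitted here for brevity.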
MANAGING TO THE RESULTS

As mentioned earlier, organizations and colleges use distance education courses not only to provide an alternative delivery method for students but also to generate revenue. As the number of distance courses and programs increases at an exponential rate, the need to enhance a program's success, both academically and financially, takes prominence, and competition among distance programs for students will grow. The results of this study suggest a set of operational recommendations that can impact online program success (Table 2). The results indicate that timely and personalized feedback from professors produces a higher level of student satisfaction. This suggests that distance education administrators should work closely with their faculty and offer
ways to enrich teacher-student relationships. Paradoxically, a faculty member needs to use technology to add a personal touch to the virtual classroom. For example, faculty should be encouraged to increase their use of electronic discussion forums, respond to email within 24 to 48 hours, and keep students up to date with the latest details related to the course. Good course content and explicit instructions increase student satisfaction in the virtual classroom, so it may well be that these characteristics set and manage the expectations of the distance student. This result suggests that faculty should maintain complete websites with syllabi and detailed instructions. In turn, distance education administrators should focus on providing faculty with technical support appropriate for distance learning, such as good website design, instructional designer support, test design, and user interaction techniques. It seems that students' notions of the value they receive from a distance course combine learning and technology. Technology in this case refers not only to the actual software and hardware features of the delivery platform but also to how well technology is adapted to the best practices
Table 2. Suggestions to help increase online program success

1. Have instructors use a 24-48 hour turnaround for email
2. Have instructors use a 1-week turnaround for graded assignments
3. Provide weekly "keeping in touch" communications
4. Provide clear expectations of workload
5. Provide explicit grading policies
6. Explicitly separate technical and pedagogical issues
7. Have policies in place that deal effectively with technical problems
8. Provide detailed, unambiguous instructions for coursework submission
9. Provide faculty with instructional design support
10. Do not force student interaction without good pedagogical rationale
11. Do not force technological interaction without good pedagogical purpose
12. Collect regular student and faculty feedback for continuous improvement
of teaching. The negative correlation implies that if technology is available but not used, it lowers satisfaction. For the program administrator, this suggests adopting distance platforms that are customizable at the course level with respect to which technological options are displayed.
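The course-level customization recommended here can be implemented with simple per-course feature flags, so a course only displays the tools its pedagogy actually uses. The sketch below is hypothetical; the class and tool names are invented and stand in for whatever options a real platform would expose.

```python
# Hypothetical per-course feature flags for a distance-learning platform.
from dataclasses import dataclass

@dataclass
class CourseTools:
    # Invented tool names for illustration only.
    discussion_forum: bool = True
    live_chat: bool = False
    group_workspaces: bool = False
    anonymous_feedback: bool = True

    def visible_tools(self):
        # Only tools the instructor switched on are shown to students.
        return [name for name, on in vars(self).items() if on]

# An instructor enables chat but keeps group workspaces hidden.
mba_course = CourseTools(live_chat=True)
print(mba_course.visible_tools())
```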
CONCLUSION

The goal of this study was to identify potential indicators of satisfaction with distance education. Background literature surrounding the traditional-versus-virtual-classroom debate helped develop a 49-question questionnaire, which was administered to MBA students in a top-tier distance education program. A factor analysis extracted five basic constructs correlated with satisfaction: Professor Interaction, Fairness, Course Content, Classroom Interaction, and Technology Use & Value. Using these categories, several recommendations for implementing and managing a distance program were provided. Since this study was first published, the authors have run multiple distance courses and collected experiential data to confirm the potential of the suggestions. The communication and explicit instructions seem to have the desired impact; in fact, the instantiations of these – detailed syllabi, explicit assignment instructions, and consistent (one-week) turnaround on graded assignments – have all been mentioned in student reviews of the courses.
REFERENCES

Allen, I. E., & Seaman, J. (2007). Online nation. Sloan Consortium and Babson Survey Research Group.

AACSB (1999). Corporate universities emerge as pioneers in market-driven education. Newsline, Fall 1999.

Boyatzis, R. E., & Kram, K. E. (1999). Reconstructing management education as lifelong learning. Selections, 16(1), 17-27.

Carr, S. (2000). As distance education comes of age, the challenge is keeping students. Chronicle of Higher Education, February 11.

Creahan, T. A., & Hoge, B. (1998). Distance learning: Paradigm shift or pedagogical drift? Presentation at the Fifth EDINEB Conference, September 1998, Cleveland, Ohio.

Gustafsson, A., Ekdahl, F., Falk, K., & Johnson, M. (2000). Linking customer satisfaction to product design: A key to success for Volvo. Quality Management Journal, 7(1), 27-38.

Haythornthwaite, C., Kazmer, M. M., Robins, J., & Shoemaker, S. (2000). Community development among distance learners. Journal of Computer-Mediated Communication, 6(1).

Hearn, G., & Scott, D. (1998). Students staying home. Futures, 30(7), 731-737.

Hogan, D., & Kwiatkowski, R. (1998). Emotional aspects of large group teaching. Human Relations, 51(11), 1403-1417.

Howell, S. L., Williams, P. B., & Lindsay, N. K. (2003). Thirty-two trends affecting distance education: An informed foundation for strategic planning. Online Journal of Distance Learning Administration, 6(3). http://www.westga.edu/~distance/ojdla/fall63/howell63.html (last accessed January 26, 2008)

Institute for Higher Education Policy. (2000). Quality on the line: Benchmarks for success in Internet distance education. Washington, D.C.

Kariya, S. (2003). Online education expands and evolves. IEEE Spectrum, 40(5), 49-51.

Kerlinger, F. N. (1986). Foundations of behavioral research (3rd ed.). Holt, Rinehart & Winston.
Kirkman, B. L., Rosen, B., Gibson, C. B., Tesluk, P. E., & McPherson, S. (2002). Five challenges to virtual team success: Lessons from Sabre, Inc. The Academy of Management Executive, 16(3).

Kline, P. (1993). The handbook of psychological testing. London: Routledge.

Lifelong learning trends: A profile of continuing higher education (7th ed.). (2002, April). University Continuing Education Association.

Martz, W. B., Jr., & Reddy, V. (2005). Five factors for operational success in distance education. In C. Howard (Ed.), Encyclopedia of Online Learning and Technology.

Martz, W. B., Reddy, V., & Sangermano, K. (2004). Assessing the impact of Internet testing: Lower perceived performance. In C. Howard, K. Schenk, & R. Discenza (Eds.), Distance Learning and University Effectiveness: Changing Educational Paradigms for Online Learning. Hershey, PA: Idea Group.

Martz, W. B., & Shepherd, M. M. (2007). Managing distance education for success. International Journal of Web-Based Learning and Teaching Technologies, 2(2), 50-59.

Merit Education (2003). Six higher education mega trends: What they mean for distance learning. http://www.meriteducation.com/sixmega-trends-higher-education-continued.html (accessed January 29, 2005)

NEA (2002). The promise and the reality of distance education. NEA Higher Education Research Center, 8(3).

Noble, D. F. (1999). Digital diploma mills. Retrieved November 28, 2002, from http://www.firstmonday.dk/issues/issue3_1/noble/index.html
Nunnally, J. (1978). Psychometric theory. New York: McGraw-Hill.

Peters, T. J., & Waterman, R. H., Jr. (1982). In search of excellence. New York: Harper and Row.

Poole, M. S., & DeSanctis, G. (1990). Understanding the use of group decision support systems: The theory of adaptive structuration. In J. Fulk & C. Steinfeld (Eds.), Organizations and Communication Technology (pp. 173-193). Newbury Park, CA: Sage Publications.

Russell, T. (2001). http://www.nosignificantdifference.org/ (accessed January 26, 2008)

Sausner, R. (2003). Thinking about going virtual? Better bone up on the for-profits to see what you're up against. University Business, July. http://universitybusiness.com/page.cfm?p=311 (accessed January 29, 2005)

Statsoft (2002). http://www.statsoftinc.com/textbook/stfacan.html (accessed November 30, 2002)

Svetcov, D. (2000). The virtual classroom vs. the real one. Forbes, 50-52.

Traupel, L. (2004). Redefining distance to market your company. http://www.theallineed.com/ad-online-business-3/online-business-028.htm (accessed January 29, 2005)

Tucker, L. F., & MacCallum, R. (1997). Unpublished manuscript available at http://quantrm2.psy.ohio-state.edu/maccallum/book/ch6.pdf (accessed November 30, 2002)

U.S. News and World Report (2001). Best online graduate programs, October.
Chapter VI
Group Support Systems as Collaborative Learning Technologies: A Meta-Analysis
John Lim National University of Singapore, Singapore Yin Ping Yang National University of Singapore, Singapore Yingqin Zhong National University of Singapore, Singapore
ABSTRACT

Computer-based systems have been widely applied to support group-related activities such as collaborative learning and training. The various terms accorded to this research stream include virtual teams, e-collaboration, computer-supported collaborative work, distributed work, electronic meetings, and so forth. A notable and well-accepted aspect in the information systems field is group support systems (GSS), the focus of this chapter. The numerous GSS studies have reported findings which may not be altogether consistent; an overall picture that synthesizes the findings accumulated over decades is much wanted. This chapter presents a meta-analysis study aimed at gaining a general understanding of GSS effects. We investigate six important moderators of group outcomes in GSS experimental research, namely group size, task type, anonymity, time and proximity, level of technology, and the existence of facilitation. The results point to important conclusions about the phenomenon of interest; in particular, their implications vis-à-vis computer-supported collaborative learning technologies and use are discussed and highlighted along each dimension of the studied variables.

Copyright © 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

Team-based group work and collaboration are an integral part of education and learning environments. With the advance of information and communication technologies, there has been growing potential for utilizing computerized systems to support idea generation, project assignments, and instant communication among IT-age students and educators. The phenomenon has attracted interest from the field of education as well as Information Systems (IS). In this connection, an emerging area in the instructional technology field called computer-supported collaborative learning (CSCL) has focused on ways to support group learning using different forms of technology; some examples are electronic discussion environments, distance learning systems, and intelligent agents (e.g., Koschmann, 1996; Strijbos et al., 2003; Ready et al., 2004; Bosco, 2007). Group Support Systems (GSS) research has accumulated a substantial body of knowledge on the effects of computer-based systems in supporting group work on a variety of tasks such as idea generation and decision making. Based on the successful use of GSS technology to support groups in non-academic settings, researchers have begun to explore ways to apply GSS technology in the classroom to support and enhance group-based learning (Tyran & Shepherd, 2001). GSS are used in classroom settings or distance learning groups to support and structure group communication and learning activities (e.g., Leidner & Jarvenpaa, 1995; Sawyer et al., 2001; Alavi et al., 2002; Gill, 2006). While much work has been done examining the impacts of GSS on group outcomes, the findings are not altogether consistent. Several early meta-analyses exist (e.g., Benbasat & Lim, 1993; McLeod, 1992; Shaw, 1998); other reviews rely on tabular methods, which are unavoidably less rigorous (Fjermestad & Hiltz, 1999). Tyran and Shepherd (2001) presented a GSS
research framework for analyzing the impact of collaborative technology on group learning, referring to an earlier framework concerning electronic meeting systems, group processes, and outcomes (Pinsonneault & Kraemer, 1990). Nevertheless, because that framework is built on face-to-face or "same time, same place" research studies (Leidner & Jarvenpaa, 1995), it is somewhat limited in its applicability to group work or learning in other forms such as distributed work or web-based distance learning. Dennis and Wixom (2002) examined five moderators (task, GSS tools, type of group, group size, and facilitation) and their potential effects on GSS use. The necessity has been noted to look deeper than and beyond "the overall effects of GSS use" (Dennis & Wixom, 2002, p. 236). A pertinent question is under what conditions collaborative technology use improves group performance, because moderators influence the specific effects of GSS (Dennis & Wixom, 2002; Beauclair, 1989). The current study looks into how key moderators individually and jointly influence important group work outcomes, using a meta-analytic technique to derive meaningful conclusions backed by quantitative analysis and to provide insights useful to both the CSCL and GSS areas. Specifically, our primary interest concerns the use of GSS technology and research in the learning environment. Correspondingly, the chapter focuses on six important moderators pertinent to both organizational and educational contexts: group size, task type, anonymity, time and proximity, level of technology, and the existence of facilitation. A research model is constructed and hypotheses are developed concerning the impacts of GSS. Next, we present a meta-analysis of thirty-three quantitative experimental studies to gain a synthesized view of GSS effectiveness. The subsequent sections dwell on the results and discussion relating to each of the outcome variables.
We conclude the paper by pointing out the relevance to, and implications for, computer-
supported collaborative learning research. As well, future research avenues are identified.
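The core computation behind such a meta-analysis is inverse-variance pooling of standardized effect sizes across studies. The sketch below uses invented study values, not the thirty-three studies analyzed in this chapter; it illustrates only the mechanics of fixed-effect pooling.

```python
# Fixed-effect meta-analysis sketch: pool standardized mean differences.
import math

# Each entry: (standardized mean difference d, total sample size n).
# These numbers are invented for illustration.
studies = [(0.45, 40), (0.20, 60), (0.60, 30), (-0.10, 50), (0.35, 80)]

def var_d(d, n):
    # Approximate sampling variance of d, assuming two equal arms of n/2.
    n1 = n2 = n / 2
    return (n1 + n2) / (n1 * n2) + d * d / (2 * (n1 + n2))

# Inverse-variance weights: more precise studies count for more.
weights = [1.0 / var_d(d, n) for d, n in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
z = pooled / se  # z-test of the pooled effect against zero

print(f"pooled d={pooled:.3f}, se={se:.3f}, z={z:.2f}")
```

A random-effects model would additionally estimate between-study variance before weighting; the fixed-effect version above is the simplest form of the synthesis.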
THEORETICAL FRAMEWORK

Learning and GSS Technology

Collaborative learning, as compared to individual learning, is helpful and important for understanding and exploring the process of learning. Embedded within the definition of cooperative learning is an enormous diversity of cooperative approaches. These may be as informal as short meetings to simply discuss and share information, or formal approaches in which structure is imposed with specific ways of forming teams. Students may work together on projects or other creative activities involving specific content. Growing interest in supporting the needs of collaborative learning, along with concurrent improvements in GSS, has led to the emergence of a research area in the instructional technology field, CSCL; research in CSCL centers on the interaction of computer-supported learning systems and GSS by integrating collaborative learning and Information Technology (IT) (O'Malley, 1995). The rising importance of teamwork and group decision making has triggered vast research efforts since the 1960s, when the GSS concept was first raised by Doug Engelbart (Wagner et al., 1993). GSS are defined as computer-based systems that support group decision making through the integration of communication, information processing, and computer-based group structuring and support (DeSanctis & Gallupe, 1987; Huber, 1984; Poole & DeSanctis, 1990). The CSCL research domain encompasses benefits derived from GSS applications that support group-oriented methods of instruction, including networked discussion environments and distance learning systems. Desktop conferencing, videoconferencing, co-authoring features and applications, electronic
mail and bulletin boards, meeting support systems, voice applications, workflow systems, and group calendars are key examples of groupware (Grudin, 1991). In addition to a set of common features, different applications provide users with different tools and functions. Many IS researchers have used GSS in the classroom and in experiments to enhance the learning experience (Kwok et al., 2002), whereas other works in IS and related fields have developed asynchronous learning networks (ALNs) (Coppola et al., 2002). These systems enable affective learning objectives related to interactive communication and teamwork to be achieved, on top of meeting the traditional cognitive learning objectives. GSS were originally designed to support discussion and decision making in the commercial/business sector, but in the last few years there has been a surge of interest in their usage to support collaborative learning (Khalifa & Kwok, 1999; Vogel et al., 2001). DeSanctis and Gallupe's (1987) two-by-two framework for GSS has been applied to understanding IT usage in learning environments. Although this framework enables us to classify learning settings based on the dimensions of space and time, it does little to improve our understanding of the technologies required to support the learning objectives in different settings. Sharda et al. (2004) propose extending DeSanctis and Gallupe's framework by adding a third dimension, learning objectives achieved (cognitive and affective in the classroom form vs. cognitive, affective, and psychomotor in the lab). Tyran and Shepherd (2001) have drawn on the GSS and education literatures to develop a research framework that may be used to analyze the impacts of collaborative technology on learning. This framework is an extension of Pinsonneault and Kraemer's (1990) framework for electronic meeting systems research. Table 1 illustrates how GSS features can support groups in collaborative learning (Tyran & Shepherd, 2001).
Table 1. GSS features and their facilitation of group communication

Anonymity
Feature description: Supports group members in inputting information to the group anonymously, by means of an electronic communication channel.
Potential benefit: May help to reduce evaluation apprehension by allowing group members to submit their ideas without having to speak up in front of the rest of the group.

Parallel Communication
Feature description: Allows all group members to communicate at the same time, implemented in a GSS by means of an electronic communication channel.
Potential benefit: May help to reduce domination of a group by one or more members, since parallel communication allows more than one person to express ideas at a time. In larger groups, the feature may also reduce problems associated with limited airtime, since all group members can submit information concurrently without having to wait for others to finish speaking. This feature can support more complex communication than is possible in groups without the aid of GSS (Bandy & Young, 2002).

Process Structure
Feature description: Supports the process techniques or rules that guide the content, pattern, or timing of communication, and provides structure to a group process by establishing an approach the group may follow to perform a group activity. Implemented in a GSS by means of one or more group-oriented software tools that support group activity.
Potential benefit: May help to reduce coordination problems by keeping the group focused on the task or agenda. For example, an idea generation activity may be structured by using an electronic discussion system with predefined categories (Bandy & Young, 2002). The process structured by this feature contributes to effective learning (Kwok & Khalifa, 1998).
Effects of Important Moderators

Fjermestad (1998) attempts to integrate various GSS research models into a comprehensive model. He included four major categories of variables: contextual or independent variables, intervening variables, group adaptation processes, and outcomes or dependent variables. However, the variables included in this framework are overwhelming in number; some may not be important in practice, and some are difficult to manipulate. Therefore, for the interest of this chapter, we selected the potential moderators from this framework cautiously. The basic criteria for choosing the moderators required that they be practically important and have a clearly defined measurement. We also needed a theoretical basis for each moderator suggesting that it might affect group activities. Accordingly, we chose to consider six moderators
from the four categories defined by Fjermestad (1998): group size, task type, anonymity, time and proximity, level of technology, and the existence of facilitation. Group performance and satisfaction serve as the two categories of outcome variables, consistent with previous studies (e.g., Sambamurthy & Poole, 1992; Tan et al., 1994). We defined these two outcomes according to Benbasat and Lim (1993) and Dennis and Kinney (1998) in the following way: (1) performance includes decision quality, number of ideas, and time to reach decision; (2) satisfaction includes process satisfaction and decision satisfaction. The rationales for studying group performance in collaborative learning are explained in the following. Group collaborative technologies have been found to help increase teacher-student interaction and to make learning more student-centered (Hiltz, 1995). Collaborative technologies
may potentially eliminate geographical barriers while providing increased convenience, flexibility, currency of materials, knowledge retention, individualized learning, and feedback (Kiser, 1999). In practice, more and more professionals have to collaborate, and it is an important goal for any educational institution to improve students' performance in collaborative situations. With the group evaluation approach, one may verify whether the performance of a specific group has increased, or assess whether group members have developed some generic ability to collaborate which they could reuse in other groups (Dillenbourg, 1999). An important potential benefit of the CSCL environment is the support of diverse learning styles (Wang et al., 2001). Hiltz and Turoff (1993) found a strong tendency toward more equal participation, and that more opinions tended to be asked for and offered.
Group Size

Research on the effects of group size in non-GSS-supported group work has a long history. The general consensus is that as group size increases, effectiveness increases because more individuals can contribute knowledge and skills (Hare, 1981; Thomas & Fink, 1963). However, once a certain optimal size is reached, differences in participation become more pronounced, a few members come to dominate the group meeting (Shaw, 1981; Diehl & Stroebe, 1987), and effectiveness and member satisfaction decrease (Hare, 1981). The optimum size for groups without GSS support is suggested to be no more than six in the perception of managers (James, 1951; Rice, 1973). Empirical research has drawn similar conclusions, suggesting the optimum size to be five (Hackman, 1970; Hare, 1981; Shaw, 1981). In the case of GSS-supported group meetings, the optimum size of the group is unknown. Two studies have found no differences in the effects
due to group size (Watson et al., 1988; Zigurs et al., 1988); both studies used groups of three and four. In addition, GSS have the potential to reduce the barriers to communication that increase with group size; thus we expect the performance of groups – and larger groups in particular – to be improved (DeSanctis & Gallupe, 1987). Therefore, we believe that the optimum size for GSS groups will be larger than for non-GSS groups. Previous non-GSS research suggests that performance does not increase as group size becomes larger (Dennis et al., 1990). GSS research has found that larger groups benefit more from GSS use than smaller groups (Dennis, 1991; Dennis et al., 1990; Dennis & Valacich, 1991; Gallupe et al., 1992; Nunamaker et al., 1988; Valacich et al., 1994). Theories suggest that GSS can reduce the following process losses, which are common to non-GSS groups: limited air time, production blocking, evaluation apprehension, free riding, and cognitive inertia (Dennis et al., 1990).

H1a: Large groups will generate more ideas than small groups.

H1b: Large groups will have higher decision quality than small groups.

H1c: Large groups will have a shorter time to reach decision than small groups.

Previous research has found decreased user satisfaction in larger groups in the context of non-GSS-supported group meetings, due to decreased performance, increased evaluation apprehension, and a lack of equality of participation (Shaw, 1981; Diehl & Stroebe, 1987). In the GSS-supported meeting environment, we expect increased performance, decreased evaluation apprehension, and increased equality of participation. Therefore, user satisfaction will be higher for larger groups in a GSS-supported environment.
H1d: Large groups will have higher process satisfaction than small groups.

H1e: Large groups will have higher decision satisfaction than small groups.
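Moderator hypotheses such as H1a-H1e are typically tested in a meta-analysis by pooling effects within each subgroup (e.g., small-group studies vs. large-group studies) and comparing the subgroup means. A minimal sketch with invented numbers, not data from the studies synthesized here:

```python
# Subgroup (moderator) comparison sketch: is the pooled effect larger
# for large-group studies than for small-group studies?
import math

def pool(effects):
    # effects: list of (d, variance); returns the inverse-variance
    # pooled d and its variance.
    w = [1.0 / v for _, v in effects]
    d = sum(wi * di for (di, _), wi in zip(effects, w)) / sum(w)
    return d, 1.0 / sum(w)

# Invented (effect size, variance) pairs for each subgroup.
small_group_studies = [(0.10, 0.05), (0.20, 0.04), (0.05, 0.06)]
large_group_studies = [(0.50, 0.05), (0.40, 0.07), (0.65, 0.06)]

d_small, v_small = pool(small_group_studies)
d_large, v_large = pool(large_group_studies)

# z-test on the difference between subgroup means (the Q-between
# statistic with 1 df equals z squared).
z = (d_large - d_small) / math.sqrt(v_small + v_large)
print(f"small d={d_small:.2f}, large d={d_large:.2f}, z={z:.2f}")
```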
Task Type

The reason for studying task type as a moderator is that groups in an organization engage in a wide variety of tasks, necessitating investigation of the differential effects of various task types (Hackman, 1968). According to McGrath (1984), tasks can be divided into four quadrants (generating, choosing, negotiating, and executing). The first two types are relevant to the GSS context and are commonly studied in GSS experimental studies. Likewise, the current work focuses on idea generation tasks and decision-making tasks, as they are considered the most common forms of collaborative learning tasks (Ready et al., 2004). Idea generation tasks require participants to work together to generate ideas or plans; the results are best obtained through the active contribution of each participant, and the number and quality of the ideas matter most. For decision-making tasks, by contrast, members have to reach consensus on a decision that is superior to the other alternatives, so decision quality matters most for this type of task. Because the performance measures for these two task types differ, we cannot compare performance outcomes across task types. Since idea generation tasks are additive tasks that do not require reaching a consensus, as decision-making tasks do, the level of conflict will be lower for idea generation tasks. Shaw (1998) compared GSS groups performing the two task types and found that groups performing idea generation tasks had higher satisfaction. Therefore, we have the following hypothesis:
H2: Process satisfaction will be higher for idea generation tasks than for decision-making tasks.
Anonymity

One important feature that GSS can provide to group meetings, compared to non-GSS meetings, is anonymity. Participants can enter text at their own terminals without revealing their identities. The anonymity provided in GSS sessions has been hailed as the primary way in which GSS help groups overcome process losses (the difference between a group's potential and actual performance; Kerr & Bruun, 1981). Anonymity can facilitate group processes by moderating those who dominate group discussions (decision by minority rule), those who hold a high position in the group (decision by authority), and those who rely on nonverbal cues to get their point across. Wilson and Jessup (1995) propose that anonymous GSS groups should generate more ideas during a meeting, because low-status group members can contribute ideas more freely and openly. One benefit of anonymity is that it may reduce the pressure to conform to the group throughout the process and minimize evaluation apprehension. These process gains are often tempered by an increase in free riding, because it becomes more difficult to determine when someone is free riding (Albanese & VanFleet, 1985). Other benefits, such as more objective evaluation and the creation of a low-threat environment, are also attributed to anonymity and can ultimately result in improved decision quality (Nunamaker et al., 1993). By removing the identity of individuals, anonymity eliminates minority influence, and the anonymous discussion tends to be more open, honest, and free-wheeling (Nunamaker et al., 1993).

H3a: Anonymous groups will generate more ideas than identified groups.
H3b: Anonymous groups will have higher decision quality than identified groups.
H3c: Anonymous groups will take a shorter time to reach decision than identified groups.

Rao and Monk (1999) stated that if participants make a decision anonymously, the need for external justification does not exist, and the participants' level of commitment to the group decision will be lower than that of identified participants, who do require external justification. Anonymous participants only need to maintain an internal sense of competence. This difference in the desire to appear externally competent results in a higher level of commitment when participants are identified. Thus the level of anonymity provided in a problem-solving environment (either GSS or face-to-face) will negatively impact user satisfaction with the group outcome. Anonymity also increases objective evaluation: contributions are judged on their merits rather than on their source, and criticism is perceived as being directed at the idea, not the contributor (Nunamaker et al., 1993). Therefore, it is hypothesized that increased anonymity should improve user attitudes because of the depersonalization of the comments and, ultimately, of the critiques of the comments.

H3d: Anonymous groups are less satisfied with the decision than identified groups.
H3e: Anonymous groups are more satisfied with the process than identified groups.
Time and Proximity

Meeting environments can be divided into four categories based on the combinations of two dimensions, time and proximity (DeSanctis & Gallupe, 1987): synchronous and co-located, synchronous and remote, asynchronous
and co-located, and asynchronous and remote. We classify the synchronous and co-located meeting environment as the face-to-face environment and the other three as the Virtual Team environment. CSCL can be grouped in a similar fashion, e.g., "same time, same place" learning (Leidner & Jarvenpaa, 1995), the virtual classroom (Hiltz & Turoff, 1993), distance learning (Verdejo, 1993), and telelearning (Alavi et al., 1995). Previous experimental studies mainly focus on synchronous and co-located meetings, that is, face-to-face meetings. However, today's global economy requires many organizations to coordinate work across a variety of intra- and inter-organizational boundaries (Carmel, 2006; Armstrong & Cole, 1995; Lipnack & Stamps, 1997). Virtual Teams allow organizations to improve efficiency and productivity, procure expert knowledge from internal and external sources, and transfer 'best practice' information nearly instantaneously (Huber, 1990). Therefore, the GSS effect in the Virtual Team environment attracted our attention. GSS that support Virtual Teams have a distinctive characteristic: store-and-forward capability across time and space. This characteristic allows members to attend to information at any time at which they can turn their attention to group problems; furthermore, it liberates message senders from having to wait for other members to finish (Watt et al., 2002), preventing "production blocking" (the tendency to hold back or forget information while waiting for a live speaking turn; see Connolly et al., 1990). Production blocking is a very common problem in face-to-face group meetings, as parallel input is often lacking in such meetings. Therefore, we have the following hypotheses:

H4a: Virtual Teams will generate more ideas than face-to-face teams.
H4b: Virtual Teams will have higher decision quality than face-to-face teams.
H4c: Virtual Teams will take a shorter time to reach decision than face-to-face teams.

Existing research suggests that team members' satisfaction may depend on the type of communication technology being used, and that the richness of the communication medium may reduce many of the problems associated with Virtual Team interaction (Daft & Lengel, 1986; Dennis & Kinney, 1998). Face-to-face communication provides a richer medium for group communication, since participants can use gestures, expressions, and voice to communicate with each other. Therefore, we expect that the satisfaction level for Virtual Teams will be lower.

H4d: Virtual Teams will have lower process satisfaction than face-to-face teams.
H4e: Virtual Teams will have lower decision satisfaction than face-to-face teams.
Level of Technology

DeSanctis and Gallupe (1987) identified three levels of GSS design to characterize the degree of technological sophistication of a system. Previous studies focused on the effects of level-1 and level-2 GSS, since level-3 GSS were not mature enough and such tools were rarely available on the market. Therefore, we choose to compare the effects of level-1 and level-2 GSS. Level-1 GSS attempt to remove common communication barriers through technical features such as anonymous input of ideas and preferences, a large screen for simultaneous display of ideas and preferences, electronic message exchange among members, and compilation and display of members' assessments. Level-2 GSS provide decision modeling and group techniques aimed at reducing the uncertainty and "noise" in the group process. The former primarily supports communication activities (such as entering ideas
and simple rating, ranking, and voting), while the latter supports both communication and consensus activities (providing structured decision techniques). An important enhancement in level-2 GSS is the support for both idea generation and synthesis. Existing theories indicate that the differences between the two systems lead to different meeting outcomes in the following ways. Level-1 GSS provide support primarily for communication and do not impose a strict structure on the meeting, and thus do not hinder participants' creativity as level-2 GSS may (Sambamurthy & DeSanctis, 1990). As a result, we propose that level-1 GSS will provide better support for idea generation. However, level-1 GSS often lack support for reaching group consensus. Level-2 GSS help groups reach agreement on a smaller number of members' expectations, generate more valid and important assumptions, and achieve superior decision quality in a shorter time (Sambamurthy & DeSanctis, 1990). Level-2 GSS provide structure for groups to manage both communication and consensus activities. Groups using them may perceive that they have compared all ideas and know the differences and similarities among these ideas; as a result, they may perceive their decision as better than others' (Sambamurthy & DeSanctis, 1990). They are also more satisfied with the consensus-reaching process, since the support for reaching consensus helps them manage conflicts more easily (Sambamurthy & DeSanctis, 1990).

H5a: Groups using level-1 GSS will generate more ideas than those using level-2 GSS.
H5b: Groups using level-2 GSS will have higher decision quality than those using level-1 GSS.
H5c: Groups using level-2 GSS will take a shorter time to reach decision than those using level-1 GSS.
H5d: Groups using level-2 GSS will have higher process satisfaction than those using level-1 GSS. H5e: Groups using level-2 GSS will have higher decision satisfaction than those using level-1 GSS.
The Existence of Facilitation

Facilitation is external assistance to a group meeting that remains neutral as to the content of the discussion. A facilitator brings his or her own expertise regarding effective procedures and techniques to the meeting, and elicits, selects, and modifies structures drawn from that expertise, from the available GSS tools, or from the members (Anson et al., 1995). It is generally believed that facilitation can improve meeting outcomes (Nunamaker et al., 1987). Adaptive Structuration Theory (AST) (Poole & DeSanctis, 1987, 1989; DeSanctis & Poole, 1991; Gopal et al., 1992) highlights an additional, special role of group interaction: a means to appropriate technology-based and non-technology structures that guide further group interaction. Structures are meant to organize and direct the group behavior process. Appropriation is the "fashion in which a group uses, adapts, and produces a structure" (Poole & DeSanctis, 1989). When well-designed and relevant structures are successfully appropriated, group interaction improves, which in turn contributes to higher performance. Poole and DeSanctis (1989) suggest three dimensions that affect the appropriation of structures: faithfulness, attitudes, and level of consensus. A facilitator can help the group successfully appropriate structures by providing guidance that encourages faithfulness, as well as positive attitudes and consensus over the structures' use (Anson et al., 1995). Poole (1991) argues that freely interacting groups often do not effectively apply procedures unless assisted. Therefore, although the leaders and members can provide structures and support,
the process may not be as effective as it is in the presence of facilitators.

H6a: Groups with facilitators will generate more ideas than those without.
H6b: Decision quality will be higher for groups with facilitators than for those without.
H6c: Groups with facilitators will reach decision in a shorter time than those without.

Relationships among group members may require mediation by an external source because of conflicts or power differences (Ackermann, 1996; Pinsonneault & Kramer, 1989). A facilitator can apply his or her expertise and experience in conflict management to mediate the group process effectively; therefore, we expect improved process satisfaction among group members. Facilitators are often experienced and well trained. Group members with facilitation support tend to think they have followed the guidance and instruction of a well-trained, experienced, yet neutral outsider, and may therefore feel they have reached the decision under the guidance of an expert.

H6d: Groups with facilitators are more satisfied with the process than those without.
H6e: Groups with facilitators are more satisfied with decisions than those without.
The Research Framework

Figure 1 presents our research framework, which encompasses the hypotheses raised earlier.
The Meta-Analysis

Over almost three decades of GSS research, researchers have conducted plenty of both quantitative
Figure 1. The research framework: the availability of GSS affects group work outcomes (performance: decision quality, number of ideas, time to reach decision; satisfaction: process satisfaction, decision satisfaction), moderated by group size, task type, anonymity, time and proximity, level of technology, and facilitation.
and qualitative studies. Quantitative studies are mainly experimental studies in which students are the main subjects and the researcher has control over the experimental settings. Researchers collect quantitative data from experts' judgments of the meeting outcomes and through post-session questionnaires. Meta-analysis is a set of statistical procedures designed to accumulate experimental results across independent studies in the literature that address a related set of research questions. It deals with secondary data that are quantitative in nature. Unlike traditional research methods, meta-analysis uses the summary statistics from individual studies as its data points. A key assumption of this analysis is that each study provides a differing estimate of the underlying relationship within the population; by accumulating results across studies, one can gain a more accurate representation of the population relationship than is provided by the individual study estimators. The typical steps of a meta-analysis include selection of studies and outcome measures, followed by the analytic procedure and hypotheses testing. We adopt the meta-analysis approach developed and detailed by Hunter and Schmidt (1990). Essentially, the meta-analysis procedure produces a mean effect size and a standard deviation across all studies. Positive
effect sizes indicate that the mean effect of GSS use across all included studies was to increase the outcome measure (e.g., the use of GSS will increase decision quality). The following sections outline our research procedures.
Selecting Studies and Outcome Measures

The first step is to select appropriate studies that investigate the performance of both a control group and a treatment group. We selected studies that compare GSS-supported groups with non-GSS-supported groups, with statistical results, on the five outcome variables: decision quality, number of ideas, time to reach decision, process satisfaction, and decision satisfaction. We searched major databases such as ProQuest and various journals for relevant studies, and also included the proceedings of the Hawaii International Conference on System Sciences. The result includes studies from the early 1980s to the present. Some studies include comparisons on the selected moderators; for example, Gallupe et al. (1992) compared the effect of GSS on groups of different sizes, so that study alone yielded five data points. In total, we obtained thirty-three studies (Appendix A) with sixty-two data points (Appendix B).
In the included studies, the outcome variables are measured as follows. Decision quality was defined by most researchers as correctness (for intellective tasks with correct answers) or "goodness" (for decision-making tasks without correct answers). Number of ideas was defined as the number of ideas generated for idea generation tasks or the number of alternatives generated for decision-making tasks. Time to reach decision was measured as the time taken by the group to reach consensus on a particular decision. Process satisfaction and decision satisfaction are measured through post-experimental surveys or interviews. Some variables have different measurements, so we adjusted them to ensure reliability during the analysis.
Analytic Procedure

In short, this procedure computes an average effect size (d) for a given dependent variable across an entire set of studies, corrected for sampling error and unreliability. The individual study statistics are first converted to d for later accumulation. Hunter and Schmidt (1990) present the conversion equation:

d = (X_e - X_c) / S_p

where X_e is the experimental group mean, X_c is the control group mean, and S_p is the pooled standard deviation:

S_p = sqrt( [ (N_e - 1) S_e^2 + (N_c - 1) S_c^2 ] / (N_e + N_c - 2) )

The net effect of this transformation is that the differences are standardized to a common metric across all studies, which in turn means that effect sizes may be statistically combined and evaluated.

After converting the data to a common statistic, reliability information was collected. Since not all of the studies used to calculate d provide reliability information, we included all studies that report the reliability of the studied moderators and outcome variables, and then calculated the mean reliability for each moderator and outcome variable, to be used later in correcting for unreliability.

The next step is to eliminate the bias caused by sampling error, the random variation in study results due to sample size: smaller samples tend to vary more widely from the true relationship within the population than larger ones. Accordingly, weighting the effect size of each study by its sample size provides a more accurate approximation of the relationship within the population, unaffected by sample size. The sample-weighted mean d is:

d_bar = Sum_i [ N_i * d_i ] / Sum_i N_i

where N_i is the number of subjects in study i and d_i is the effect size for that study. The sample-weighted variance of d is:

S_d^2 = Sum_i [ N_i * (d_i - d_bar)^2 ] / Sum_i N_i
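As a concrete illustration, the conversion to d can be sketched in Python; the group statistics below are hypothetical and not drawn from any of the included studies:

```python
from math import sqrt

def pooled_sd(n_e, s_e, n_c, s_c):
    """Pooled standard deviation S_p of the experimental and control groups."""
    return sqrt(((n_e - 1) * s_e**2 + (n_c - 1) * s_c**2) / (n_e + n_c - 2))

def effect_size_d(mean_e, mean_c, n_e, s_e, n_c, s_c):
    """Standardized mean difference d = (X_e - X_c) / S_p."""
    return (mean_e - mean_c) / pooled_sd(n_e, s_e, n_c, s_c)

# Hypothetical study: GSS group rated 7.4 (SD 1.2, n = 20) on decision
# quality vs. a non-GSS control rated 6.8 (SD 1.0, n = 20).
d = effect_size_d(7.4, 6.8, 20, 1.2, 20, 1.0)
print(round(d, 3))  # a positive d means GSS use increased the outcome
```

Once each study statistic is expressed as a d on this common metric, the studies can be accumulated as described next.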
While the sample-weighted mean effect size is not affected by sampling error, its variance is greatly inflated by it. A two-stage procedure is used to correct the variance of the sample-weighted mean. The first stage calculates the sampling error variance:

S_e^2 = K * (1 - d_bar^2)^2 / Sum_i N_i

where K is the number of studies in the analysis.
To estimate the biased population variance, the sampling error variance is subtracted from the sample-weighted variance:

S_BP^2 = S_d^2 - S_e^2

So far the meta-analysis technique has corrected for one source of error, sampling error. There is another form of error: measurement error. Measurement error, or test unreliability, is assessed using the two reliabilities that apply to a study, r_xx and r_yy. Hunter and Schmidt (1990) present a method of correcting the sample-weighted mean effect size using a distribution of reliability estimates. Any study that assesses the pertinent reliability estimates can be used to construct the reliability distribution; we constructed ours using all available sources. This distribution has the mean:

r_xx_bar = Sum r_xx / K

where r_xx is the reliability reported by an individual study and K is the number of reliability studies, and the variance:

S_xx = Sum (r_xx - r_xx_bar)^2 / K

The mean reliability and variance for the dependent variable (r_yy_bar and S_yy) use the same formulas. Given these statistics, the relationship within the population can be estimated. The sample-weighted mean of d is corrected for measurement error using:

d_P = d_bar / ( sqrt(r_xx_bar) * sqrt(r_yy_bar) )

and the corrected variance is:

S_P^2 = [ S_BP^2 - d_P^2 * (r_xx_bar * S_yy + r_yy_bar * S_xx) ] / (r_xx_bar * r_yy_bar)

By now we have the estimates of the mean and standard deviation for the population, d_P and S_P. A positive mean effect size indicates that the mean effect of GSS use across all included studies was to increase the outcome variable.

Hypotheses Testing

To test our hypotheses, we used a three-step approach developed by Hunter and Schmidt (1990). First, the studies are divided into two sets according to different criteria on the moderators. Second, a meta-analysis is performed separately on the studies within each set to produce the relevant statistics for each set. Third, t-tests are used to compare the mean effect sizes between the two sets to see whether they differ significantly. For H1 (group size), we split the data set into small (five or fewer members) and large (more than five members) partitions; we chose the split point of five because the optimum group size for non-GSS-supported groups is argued to be five (Shaw, 1981). For H2 (task type), we split the data set into idea generation tasks and decision-making tasks. For H3 (anonymity), we split the data set into studies in which members remain anonymous and studies in which members are identified. For H4 (time and proximity), we split the data set into studies in which members communicate face-to-face in real time and studies in which members communicate remotely, asynchronously, or both. For H5 (level of technology), we split the data set into level-1 GSS support and level-2 GSS support. For H6 (the existence of facilitation), we split the data set into studies with facilitator support and studies without.
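The accumulation and correction steps described in the analytic procedure can be combined into a short sketch. The study values below are hypothetical, and for brevity the reliability-distribution variances S_xx and S_yy are taken as zero, so only the mean reliabilities enter the correction:

```python
from math import sqrt

def meta_analyze(studies, rel_xx=1.0, rel_yy=0.926):
    """Hunter-Schmidt style accumulation of effect sizes.

    studies: list of (N_i, d_i) pairs.
    Returns the corrected mean effect size d_P and its standard deviation S_P
    (the reliability-distribution variances S_xx and S_yy are assumed 0 here).
    """
    n_total = sum(n for n, _ in studies)
    d_bar = sum(n * d for n, d in studies) / n_total                 # sample-weighted mean
    s_d2 = sum(n * (d - d_bar) ** 2 for n, d in studies) / n_total   # weighted variance
    k = len(studies)
    s_e2 = k * (1 - d_bar**2) ** 2 / n_total                         # sampling error variance
    s_bp2 = max(s_d2 - s_e2, 0.0)                                    # biased population variance
    d_p = d_bar / (sqrt(rel_xx) * sqrt(rel_yy))                      # correct for unreliability
    s_p2 = s_bp2 / (rel_xx * rel_yy)                                 # with S_xx = S_yy = 0
    return d_p, sqrt(s_p2)

# Hypothetical data points (N_i, d_i):
d_p, s_p = meta_analyze([(40, 0.50), (24, 0.10), (60, 0.35)])
print(round(d_p, 3), round(s_p, 3))
```

In the hypotheses tests, this computation would be run separately on each moderator partition before comparing the two corrected means with a t-test.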
Results

The results of the meta-analysis are summarized in Tables 2 through 6, arranged according to the different outcome variables. The set of moderators that may have an impact on a specific outcome variable is included in the respective table. The first column, "Number of Data Points," indicates the number of studies partitioned into each set. Rxx refers to the reliability of the moderator and Ryy to the reliability of the outcome variable. The column labeled "Uncorrected d" represents the mean effect size after correcting for sampling error but before correcting for unreliability; the "Corrected d" column represents the mean effect size after correcting for both sampling error and unreliability. "Significantly Different?" indicates whether the difference between the "Corrected d" values for the two partitions of each moderator is significant, and the "Hypothesis supported?" column indicates whether each hypothesized relationship is supported (Yes), not supported (No), or there was not enough data to report (N.A.).
Decision Quality

Table 2 shows the moderator effects on decision quality. The first major column gives the number of data points in the meta-analysis for the respective moderator; for example, 29 data points describe the level of technology, 17 of which used level-1 GSS. The second column is the reliability of the moderator; since all the moderators included in the study have a definite, single measurement, their reliabilities are 1 by definition. The third column is the reliability of the outcome variable, decision quality. As mentioned above, this number is the mean value across not only the studies included in the meta-analysis but all studies that measured decision quality, as there are different measurements for this outcome variable, such as the "goodness" and "correctness" of the decision outcome. The fourth column is
Table 2. Moderator effects on decision quality

Moderator / Partition    Number of Data Points   Rxx   Ryy     Uncorrected d   Corrected d   Significantly Different?   Hypothesis supported?
Group Size               28                      1     0.926                                 No                         No
  Small                  21                                    0.177           0.191
  Large                  7                                     1.169           1.263
Anonymity                33                      1     0.926                                 Yes                        No
  Identified             21                                    0.328           0.354**
  Anonymous              12                                    -0.073          -0.079
Time and Proximity       32                      1     0.926                                 Yes                        Yes
  Face-to-Face           24                                    0.002           0.002
  Virtual Team           8                                     0.604           0.652**
Level of Technology      29                      1     0.926                                 Yes (0.05)                 Yes
  Level-1                17                                    -0.03           -0.031
  Level-2                12                                    0.491           0.531*
Facilitation             33                      1     0.926                                 Yes                        Yes
  Without                26                                    0.348           0.377
  With                   7                                     0.702           0.758**
* p < 0.05; ** p < 0.01

We focused on the first five components corresponding to the research model constructs (Table 1) because one of the research constructs (individual preparedness) yielded ambiguous results and because the additional variance explained by the additional factor did not substantially alter the model (it contributed the lowest incremental cumulative variance). In addition, we did not test the trust/communication construct, which may have an impact on the overall model. Limitations of this approach are introduced at the end of this section.
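For illustration, the component-retention step of a principal component analysis (the Kaiser "eigenvalue greater than 1" rule) can be sketched as follows. The correlation matrix here is hypothetical, and the eigenvalues are computed with a basic cyclic Jacobi rotation rather than a statistics package:

```python
from math import atan2, cos, sin

def jacobi_eigenvalues(a, sweeps=50):
    """Eigenvalues of a symmetric matrix via cyclic Jacobi rotations."""
    a = [row[:] for row in a]
    n = len(a)
    for _ in range(sweeps):
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(a[p][q]) < 1e-12:
                    continue
                theta = 0.5 * atan2(2 * a[p][q], a[q][q] - a[p][p])
                c, s = cos(theta), sin(theta)
                for k in range(n):  # A <- A G (rotate columns p and q)
                    akp, akq = a[k][p], a[k][q]
                    a[k][p] = c * akp - s * akq
                    a[k][q] = s * akp + c * akq
                for k in range(n):  # A <- G^T A (rotate rows p and q)
                    apk, aqk = a[p][k], a[q][k]
                    a[p][k] = c * apk - s * aqk
                    a[q][k] = s * apk + c * aqk
    return sorted((a[i][i] for i in range(n)), reverse=True)

# Hypothetical correlation matrix for three survey items:
corr = [[1.0, 0.8, 0.8],
        [0.8, 1.0, 0.8],
        [0.8, 0.8, 1.0]]
eigs = jacobi_eigenvalues(corr)
retained = [e for e in eigs if e > 1]  # Kaiser criterion: eigenvalue > 1
print([round(e, 2) for e in eigs], len(retained))
```

With three items that all correlate at 0.8, a single dominant component carries most of the variance, so the Kaiser rule retains one component.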
Table 2 shows the bivariate correlations among variables and provides intriguing results. In
particular, it shows that perceived enjoyment and motivation are highly correlated with the dependent variable, perceived learning (R > 0.63 for both constructs). It also shows that perceived team members' contribution correlates positively with the learning, enjoyment, and motivation constructs. However, the bivariate correlation analysis shows that individual preparedness is not significantly related to the other variables, and the relationship, if any, is negative. Based on these bivariate correlations, we find that hypotheses 1a, 1b, and 1c are not supported. The other hypotheses are supported at the p = 0.01 level, with the correlation between team contributions and motivation significant at the p = 0.05 level (see Figure 4). These results suggest that how individuals value their team members' contributions correlates significantly with their perceived enjoyment and learning quality from computer-supported TBL (Gomez et al., 2006), addressing the first research question on how team interactions positively impact the whole computer-supported TBL experience. In addition, individual opinions of team members' contributions also have a positive impact on perceived motivation, although the Pearson R value is not high (R = 0.28). This might be caused by other factors in the computer-supported TBL experience that could decrease students' motivation; for instance, a more dominant team leader's control might affect other members' motivation. Surprisingly, individual preparedness does not impact perception of the team-based learning experience: the correlations between individual preparedness and the other variables are not significant, and their values are negative. There might be a few reasons. First, as indicated earlier, the two ambiguous question items for the "individual preparedness" construct (i.e., an ambiguous statement and a negative factor loading) have a negative impact on the research framework. The individual preparedness
Utilizing Web Tools for Computer-Mediated Communication
Table 2. Bivariate correlation analysis

                                                  (1)       (2)       (3)       (4)      (5)
(1) Perceived Learning from TBL                   1
(2) Perceived Enjoyment from TBL                  0.635**   1
(3) Perceived Motivation from TBL                 0.637**   0.708**   1
(4) Perceived Team Member's Value/
    Contribution from TBL                         0.437**   0.518**   0.288*    1
(5) Individual Preparedness from TBL              -0.171    -0.174    -0.188    -0.026   1

** Correlation is significant at the 0.01 level (two-tailed); * at the 0.05 level (two-tailed)
tasks associated with studying differences may not be self-evident to the student, increasing the ambiguity of the questions. Moreover, our observations indicate the importance of individual preparedness across the entire iterative nature of the module, whereas the current individual preparedness construct was driven by the importance of the readiness assessment test process. The bivariate correlation analysis does not show any significant relationships between "individual preparedness" and the other constructs in the framework. This is evidently one of the limitations of this research. For future research, we plan to refine the "individual preparedness" construct with question items targeted at the individual preparation process rather than at studying specifically for the tests. Second, there might be an interaction effect of the experimental conditions: the computer-supported TBL process design itself might also impact the results. The team readiness assurance test (tRAT) is the same test as the individual readiness assurance test (iRAT). Although the overall team scores are better than the individual test scores, the test repetition may explain the decrease in students' motivation and enjoyment, and why they did not perceive more value from their respective team members. We also speculate that the test questions may not have lent themselves well to interesting team discussions, leaving the
tRATs uninteresting. Alternatively, this may simply indicate that many students found the TBL process valuable even when they did not prepare in the manner the instructor expected. These results show that our second research question, on the role of individual preparedness, needs further investigation and analysis spanning the iterative cycle of each module (preparation, test, activity) rather than focusing only on the readiness assessment test. There are a number of limitations to this study. First and foremost, we found that some of the constructs could be better specified. Our factor analysis displayed the possible existence of another significant factor, which should be further analyzed in future research. This additional factor was associated with an ambiguous question in the individual preparedness construct. Looking at the articulation of the questions in that construct, we identified an instrumentation bias (one item in the construct was weak), which may explain our concerns with that specific construct. In addition, because some of the constructs and the extension of this grounded team-based learning approach to a Web environment are novel, we should supplement our conclusions with an in-depth analysis of qualitative factors (observations, open-ended questions, and content analysis of discussion boards) that may help better understand the
Figure 4. Computer-supported team-based learning bivariate correlation results. Supported: H2a (R = 0.28*), H2b (R = 0.51**), H2c (R = 0.43**), H3a (R = 0.70**), H3b (R = 0.63**), and H4 (R = 0.63**); not supported: H1a, H1b, H1c. Constructs: perceived individual preparedness, perceived team member's value or contributions, perceived motivation, perceived enjoyment, and perceived learning from TBL. * p < 0.05; ** p < 0.01.
learning outcomes more objectively in a way that complements the perceived learning measures. Adjusting the activities layout in the Web-based environment through our phased approach, together with the nature of the individual preparedness activities, indicates the complexity of this construct. Our qualitative analysis indicates that we should span beyond the individual preparedness question items related to the readiness assessment test to also include question items that measure perceived preparation of the materials related to team activities.
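The bivariate (Pearson) correlations reported in Table 2 can be reproduced with a short function like the following; the score vectors here are hypothetical stand-ins for the Likert-scale survey responses:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two score vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical Likert-scale responses for two constructs:
learning  = [4, 5, 3, 4, 5, 2, 4, 5]
enjoyment = [4, 5, 3, 3, 5, 2, 4, 4]
print(round(pearson_r(learning, enjoyment), 3))
```

A statistics package such as scipy.stats.pearsonr returns the same coefficient together with the two-tailed p-value used for the significance markers in Table 2.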
SUMMARY AND FUTURE RESEARCH

Computer-supported team-based learning provides a powerful instructional experience and reduces some of the disadvantages many instructors and students have found with traditional small-group work. Blending the benefits of the face-to-face classroom with computer-mediated communication extends the learning process between the weekly face-to-face sessions, keeping the team learning progress and group dynamics growing. Our research places emphasis on key variables that affect learning in computer-supported team-based learning. Computer-supported team-based learning is still a relatively new pedagogical approach, and to the best of our knowledge, this is the first study blending computer-mediated communications with the iterative team-based learning modular approach proposed by Michaelsen et al. (2002). The use of Web-based CMC learning techniques places emphasis on individual and team learning outcomes. The surveys indicate a high perception of learning, motivation, and enjoyment. These findings deem computer-supported team-based learning an approach worthy of further investigation, both in the face-to-face classroom and for online learning. The emphasis of future research will be on team assessments and group cohesion in a purely Web-based learning environment. The findings around the team activities will allow for additional adjustments in the team-based learning process before it is introduced in a completely online learning mode. Blending the face-to-face class with computer-mediated communications provides a means to gauge the asynchronous learning network process. Future studies will extend the analysis of the computer-supported team-based learning model and research framework using structural equation modeling (SEM) and trust, communication, and team leadership factors. Further review of individual and team preparedness is also needed. The progressive nature of the readiness exam process and team activities should ensure individual preparation. Because of the novelty
Utilizing Web Tools for Computer-Mediated Communication
of the preparedness and team contributions constructs, we will also implement content analysis of team activities posted on WebBoard to support the evaluation of individual preparedness for each module. Adding qualitative data and observations will enhance our understanding of the constructs. Actual grades and peer evaluation results will also support the measurement of task completion levels. Team-based learning presents a promising technique employing small teams that actively engage students in learning. We look forward to the day when instructors can effectively use computer-supported team-based learning as a standard approach in both face-to-face and online classrooms.
ACKNOWLEDGMENT

We gratefully acknowledge partial funding support for this research by the United Parcel Service Foundation, the New Jersey Center for Pervasive Information Technology, the New Jersey Commission on Science and Technology, the National Science Foundation under grants IIS-0135531, DUE-0226075, and DUE-0434581, and the Institute for Museum and Library Services under grant LG-02-04-0002-04. An earlier version of this paper was presented at the IRMA 2006 International Conference in Washington, DC, May 2006.
REFERENCES

Bloom, B. S., Englehart, M. D., Furst, E. J., Hill, W. H., & Krathwohl, D. R. (1956). A taxonomy of educational objectives: Handbook I. The cognitive domain. New York: McKay.

Coppola, N., Hiltz, S. R., & Rotter, N. (2004). Building trust in virtual teams. IEEE Transactions on Professional Communication, 47(2), 95-104.

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1992). Extrinsic and intrinsic motivation to use computers in the workplace. Journal of Applied Social Psychology, 22, 1111-1132.

Gomez, E. A., & Bieber, M. (2005). Towards active team-based learning: An instructional strategy. In Proceedings of the Eleventh Americas Conference on Information Systems (AMCIS) (pp. 728-734). Omaha, NE.

Gomez, E. A., Wu, D., Passerini, K., & Bieber, M. (2006, April 19-21). Computer-supported learning strategies: An implementation and assessment framework for team-based learning. Paper presented at the ISOneWorld Conference, Las Vegas, NV.

Gomez, E. A., Wu, D., Passerini, K., & Bieber, M. (2006). Introducing computer-supported team-based learning: Preliminary outcomes and learning impacts. In M. Khosrow-Pour (Ed.), Emerging trends and challenges in information technology management: Proceedings of the 2006 Information Resources Management Association Conference (pp. 603-606). Hershey, PA: Idea Group Publishing.

Isabella, L., & Waddock, S. (1994). Top management team certainty: Environmental assessments, teamwork, and performance implications. Journal of Management, Winter.

Johnson, D., & Johnson, R. (1999). What makes cooperative learning work. Japan Association for Language Teaching, 23-36.

Johnson, D. W., Johnson, R. T., & Smith, K. A. (1991). Cooperative learning: Increasing college faculty instructional productivity. ASHE-ERIC Higher Education Report No.

Leidner, D., & Jarvenpaa, S. L. (1995). The use of information technology to enhance management school education: A theoretical view. MIS Quarterly, September, 265-291.

Leidner, D., & Fuller, M. (1996). Improving student processing and assimilation of conceptual information: GSS-supported collaborative learning vs. individual constructive learning. In Proceedings of the 29th Hawaii International Conference on System Sciences (HICSS-29), Big Island, Hawaii (pp. 293-302).

Malhotra, Y., & Galletta, D. (2003, January 6-9). Role of commitment and motivation in knowledge management systems implementation: Theory, conceptualization, and measurement of antecedents of success. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences (pp. 1-10). IEEE.

Michaelsen, L., Fink, D., & Knight, A. (2002). Team-based learning: A transformative use of small groups in college teaching. Sterling, VA: Stylus Publishing.

Passerini, K., & Granger, M. J. (2000). Information technology-based instructional strategies. Journal of Informatics Education & Research, 2(3).

Phillips, G. M., & Santoro, G. M. (1989). Teaching group discussions via computer-mediated communication. Communication Education, 39, 151-161.

Schlechter, T. M. (1990). The relative instructional efficiency of small group computer-based training. Journal of Educational Computing Research, 6(3), 329-341.

Shen, J., Cheng, K., Bieber, M., & Hiltz, S. R. (2004). Traditional in-class examination vs. collaborative online examination in asynchronous learning networks: Field evaluation results. In Proceedings of AMCIS 2004.

Thorndike, E. L. (1932). Fundamentals of learning. New York: Teachers College Press.

Wu, D., & Hiltz, S. R. (2004). Predicting learning from asynchronous online discussions. Journal of Asynchronous Learning Networks (JALN), 8(2), 139-152.

Wu, D., Bieber, M., Hiltz, S. R., & Han, H. (2004, January). Constructivist learning with participatory examinations. In Proceedings of the 37th Hawaii International Conference on System Sciences (HICSS-37), Big Island, HI.
Chapter XIV
Accessible E-Learning:
Equal Pedagogical Opportunities for Students with Sensory Limitations Rakesh Babu University of North Carolina at Greensboro, USA Vishal Midha University of North Carolina at Greensboro, USA
ABSTRACT

The transformation of the world into a highly technological place has led to the evolution of learning from the traditional classroom to e-learning, using tools such as course management systems (CMS). By its very nature, e-learning offers a range of advantages over traditional pedagogical methods, including the removal of physical access barriers. It is particularly useful for people with sensory limitations, as it offers them a level playing field for learning. This study examines the accessibility, usability, and richness of CMSs used for e-learning in institutions of higher education. A model is proposed that underscores the influence of the accessibility, usability, and richness of the CMS, coupled with learning motivation, on learning success as perceived by students with sensory limitations. The model is tested by surveying university students with sensory limitations about their views on the course management system used. The results suggest that the accessibility and usability of a CMS have a positive influence on learning success as perceived by students with sensory limitations.
Copyright © 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

The transformation of the world into a highly technological place has led to the evolution of learning from the traditional classroom method to e-learning, where students learn in "invisible classrooms" (Phillips, 1998). E-learning is the acquisition and use of knowledge primarily distributed and facilitated electronically, using networks and computers. It can take the form of courses as well as modules and smaller learning objects delivered through course management systems (CMS). A large number of colleges and universities have implemented CMSs for online education, be it for hybrid learning (employing both e-learning and classroom learning) or virtual classroom learning (Alavi, Wheeler, & Valacich, 1995). CMSs like Blackboard, Moodle, WebCT, and others enable instructors who lack Web design skills, or the time to build entire sites from scratch, to quickly and easily place materials online. E-learning, by its very nature, offers a range of advantages over traditional methods. Unlike traditional learning, it does not involve physical access issues. It has opened the doors for many individuals who are restricted in their pursuit of higher education by constraints of access, time, distance, and so forth. It is particularly applicable for people with sensory limitations. Sensory limitations, involving vision, hearing, mobility, or cognitive impairments, can pose significant impediments for affected individuals in carrying out day-to-day activities. They can hamper the use of computers in the traditional mode. To address this, hardware and software called assistive technologies have been designed to enable such individuals to use computers and related application programs in an alternative mode (Brunet et al., 2005; Englefield, Paddison, Tibbits, & Damani, 2005; Weir, 2005). By the year 2000, 55 million Americans had some form of sensory limitation that hindered their use of the Web (Waldrop, 2000).
At the global level, 750 million people suffer from physical/sensory limitations (WHO, 2002). A number of Web resources contain features restricting these people from accessing their content, partially or completely. These resources should be accessible to all, irrespective of sensory abilities. This is even more important in an educational setting: how can students learn from course contents presented in inaccessible formats? Accessible e-learning takes into account the special needs of learners with disabilities and provides a learning experience equivalent to that of non-disabled learners. Two U.S. federal laws deal with issues related to individuals with physical/sensory limitations, including online education. The first, the Americans with Disabilities Act (ADA), emphasizes equal learning access for all, including learning materials on the Web (ADA Accessibility Guidelines, 2000). The second, Section 508 of the Rehabilitation Act, requires federal agencies to make IT accessible to persons with disabilities (Rehabilitation Act, 1998). In addition, the W3C (World Wide Web Consortium) sets technical specifications and standards for the Web (W3C/WAI, 1997). These guidelines apply to all Web-based applications, including CMSs. CMSs create complex environments that present numerous accessibility issues. Utilizing accessible tools for content creation does not automatically imply the accessibility of the content itself. Both the infrastructure and the content must be accessible for e-learning courses to be accessible, a responsibility shared equally by instructors and software developers. A concerted effort by academia is necessary to highlight the issue of CMS accessibility to developers, instructors, and institutions in general. Traditional methods of education have often failed people with sensory limitations. Providing these members of society with the facility to learn new skills during their working lives is therefore imperative. One way to do this is via e-learning in higher education.
The e-learning experience of such individuals can be facilitated through the use of CMSs that are accessible. The usability, or user friendliness (Stewart, Narendra, & Schmetzke, 2005), and richness (Webster & Hackley, 1997) of a CMS may also be favorable to learning. This study takes a holistic approach to online learning for individuals with sensory limitations, proposing a model that depicts the accessibility, usability, and richness of a CMS, together with learning motivation, as the independent variables, and learning success as the dependent variable. The research question is stated as: Does the use of an accessible, usable, and rich CMS enhance learning effectiveness for students with sensory limitations? This question is explored via a survey method, using Blackboard as a typical example of a CMS. The subjects include university students with sensory limitations.
LITERATURE REVIEW

Accessibility

In the literal sense, accessibility is defined as approachability or the ease of being dealt with (http://wordnet.princeton.edu/perl/webwn). In the context of e-learning, accessibility refers to the attribute of a CMS that enables users to understand and fully interact with its contents with the aid of assistive technologies (Thatcher et al., 2002). Extensive research has been conducted in a number of domains that relate to the subject matter of this paper. Programming requirements and guidelines for building accessible applications are published by a number of organizations. The Trace Center at the University of Wisconsin maintains a list of well-written, representative publications at its Web site (http://trace.wisc.edu). IBM's approach to capturing and enumerating such guidelines can be found at http://www-3.ibm.com/able/guidelines/index.html. Jakob Nielsen (2001) has written guidelines specifically for Web-based applications. The insights captured
in these guidelines are critical to the successful delivery of an accessible application (Brunet et al., 2005). Accessible design is intended to enable universal access to interactive systems, regardless of user impairments and preferred client technology. Such design supports the specific needs of distinct groups challenged by impairments related to vision, hearing, motor skills, and cognitive abilities. Universal usability will be met when affordable, useful, and usable technology accommodates the vast majority of the global population: this entails addressing challenges of technology variety, user diversity, and gaps in user knowledge in ways only beginning to be acknowledged by educational, corporate, and government agencies (Shneiderman, 2000). A diverse range of challenges must be addressed in design for accessibility. For example, distinct design responses are necessary to support blind users, as compared to the support required for users with other visual impairments such as limited vision or tunnel vision. In the former case, a key concern is providing appropriate encoding of content for screen readers. A screen reader is a software program that reads the contents of the screen aloud to a user and is usually used by blind and visually impaired people. These screen readers cannot read text that is part of an image. In the case of visual impairments, the emphasis is on a range of techniques, such as appropriate typography, sensitivity to the diminished context associated with use of a screen magnifier, and support for user-defined font sizes. Additional design responses are required to facilitate input by users with limited dexterity, mobility, hearing impairments, and learning difficulties. Section 508 of the U.S. Rehabilitation Act defines assistive technology as a “device or software that substitutes for or enhances the function of some impaired ability.” In this sense, design for accessibility is a special case of design for usability. 
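The screen-reader concern above (text embedded in images is invisible to a screen reader) can be made concrete with a small audit sketch. The `AltTextAuditor` class, the sample markup, and the file names are our own illustrative assumptions, not part of any cited guideline or CMS; the sketch simply flags `<img>` elements without a non-empty `alt` attribute, the kind of check an instructor or developer might run over course pages:

```python
from html.parser import HTMLParser

class AltTextAuditor(HTMLParser):
    """Collect <img> tags that lack a non-empty alt attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []  # src values a screen reader cannot describe

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):  # alt absent or empty
                self.missing.append(attr_map.get("src", "(no src)"))

auditor = AltTextAuditor()
auditor.feed('<p>Lecture 1</p>'
             '<img src="fig1.png" alt="Bar chart of exam scores">'
             '<img src="fig2.png">')
print(auditor.missing)  # → ['fig2.png']
```

Such a check covers only one WCAG concern (text alternatives); it does not replace a full accessibility review with assistive technology.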
A critical factor in design for accessibility is design for assistive technology such as screen readers, adapted keyboards, specialized pointing devices, and built-in support within browsers and operating-system user interfaces. However, more challenging issues must be resolved earlier in the design process. In initial design, for example, so-called vast-and-fast menus support recognition over recall for sighted users, but for users of screen readers this design pattern imposes an unacceptable burden on working memory. Likewise, during detailed design, visual designers may need to make trade-offs between contemporary typographic aesthetics, which value compact, low-contrast typography, and the needs of readers with low vision (Englefield et al., 2005). In her field study, Lori Weir (2005) discusses how to design courses for physically challenged students. She suggests that while developing course materials, instructors must be aware of: the perception of tasks and information by the cognitively impaired; the comprehension of content without sound by the hearing impaired; reading without vision by the visually impaired; and instant messaging by the mobility impaired. This could result in design changes having universal impact. The development of online courses or course materials and discussions of learning styles must include the needs of disabled students and the assistive technology used. Soliciting feedback from students and spending time with them as they learn are simple yet powerful ways to gain an awareness of the accessibility of course materials. Weir (2005) suggests some instructional strategies and course design guidelines in this regard.
Usability

The Disability Rights Commission (2003) presented a report that highlights the importance of Web site usability for people with disabilities. According to this report, "45% of the 585 accessibility and usability problems were not a violation of any Web Accessibility Initiative's Web Content Accessibility Guidelines (WAI WCAG) Checkpoint and could therefore have been present on any WAI-conformant site regardless of rating". This point illustrates a limitation of the WAI WCAG. It should be self-evident that quality e-learning Web resources should be usable, not just accessible. However, due to an excessive focus on accessibility, owing to fear of litigation, usability fails to draw similar levels of attention. Web site usability has received attention in the human-computer interaction (HCI) literature as well as in Web-specific usability research. Usability is an engineering approach that identifies a set of principles and common practices in systems design (Nielsen, 1993). Usability includes consistency and the ease of getting the Web site to do what the user intends it to do, clarity of interaction, ease of reading, arrangement of information, speed, and layout. Appropriate design of user interfaces includes organization, presentation, and interactivity (Shneiderman, 1998). Prior research also provides insights into the activities supported by Web sites. A key capability of the Internet is its capacity to support greater interactivity for users. Gathering information over the Web is facilitated by the interactivity of the Web site (Jarvenpaa & Todd, 1997).
Media Richness

Another set of characteristics that users consider when responding to Web resources is richness, which refers to the medium's relative ability to convey messages. Media have been classified as face-to-face, telephone, documents (Daft & Lengel, 1986), e-mail (Markus, 1994), and computer-mediated and video communication (Dennis & Kinney, 1998). The specific influence of a given medium is often dependent on the task being performed (Daft & Lengel, 1986). Finding information of high quality within the computer-mediated context is an important element (Hoffman & Novak, 1996; Dickson, 2000).
In the e-learning context, richness refers to the variety of modes of information presentation, such as video, audio, and text, offered by the CMS. The richer the CMS, the greater its reach to users with disabilities.
Learning Motivation

Learning motivation is an influential factor in learning outcome, especially in the e-learning context, where no instructors or peer classmates are physically present to motivate the learner in an online education program. In their investigation of the role of team learning in an online MBA program offered by the University of Ohio, Huang, Luce, and Lu (2005) examined the correlations among virtual team learning, learning motivation, and learning outcome (satisfaction). One of their research questions was: Is learning motivation associated with students' perceived learning outcome in an online MBA program? They reported that learning motivation was significantly correlated with learning outcome (Huang et al., 2005).

MODEL AND HYPOTHESES

Following from the literature analysis, this study attempts to adopt a holistic approach to accessible e-learning. The idea is to highlight the importance of usability and richness, besides accessibility, of CMSs towards learning success. It also recognizes the significance of motivation to the learning outcome. Our proposed model is depicted in Figure 1. The independent constructs—accessibility, usability, richness, and learning motivation—are shown to influence the dependent construct, perceived success of learning, a measure of learning effectiveness (Picciano, 2002). For this study, learning success is operationalized via comprehension, performance, and participation. These attributes depend on the perceptions of individuals. Hence, we have adopted the term perceived success of learning from the Huang et al. (2005) study.

Figure 1. Research model: accessibility (H1), usability—navigability and organization (H2), media richness—responsiveness and interactivity (H3), and learning motivation (H4) are hypothesized to influence the perceived success of learning.

Students with physical/sensory limitations can use a system only if it is designed with accessibility in mind. E-learning imparted using CMSs will be successful for disabled students only if the CMS is accessible. Hence, we propose that the accessibility of a CMS will lead to learnability, comprehension, participation, and performance. Our first hypothesis may be stated as follows:

H1: The perceived success of learning of physically challenged students is positively correlated to the accessibility of the CMS used.

Usability, which reflects layout design and clarity, is equally important for using a system. The higher the usability of a system, the higher its ease of use (Bevan, Kirakowski, & Maissel, 1991). Students can learn better and faster when the CMS is hassle free. Hence, we propose that usability leads to improved learning. Our second hypothesis may be stated as follows:

H2: The perceived success of learning of physically challenged students is positively correlated to the usability of the CMS used.

The richness of a system reflects the variety in the modes of information presentation and the speed of obtaining information (Dennis & Valacich, 1999). As students with different limitations may prefer different modes, a rich CMS will support the needs of various disabilities. The richness of the CMS will impact comprehension and learning, leading to the third hypothesis:

H3: The perceived success of learning of physically challenged students is positively correlated to the richness of the CMS used.

Learning motivation reflects the desire to learn despite challenges. It enables an individual to persistently advance and set goals. As per Curry's (1991) theoretical model of learning style components, while designing educational programs that lead to successful learning, one must consider the constructs of motivation maintenance. The central idea is that a student's success in any learning situation requires positive motivation on the part of the student, which then leads to an appropriate degree of task engagement. This is particularly important for students with impairments. Hence, we propose the fourth hypothesis:

H4: The perceived success of learning of physically challenged students is positively correlated to their learning motivation.
METHODOLOGY

Owing to the nature of the topic, the literature review had to be very extensive. The sources ranged from academic and practitioner journals to U.S. and U.K. government Web sites and agencies promoting accessible e-learning. Academic journals from a wide range of disciplines, such as information systems, management, organizational behavior, education, computer science, sociology, and psychology, were reviewed. Data were collected via a survey. Considering the novelty of this topic, and the small number of physically challenged students pursuing higher education, this work represents a preliminary study intended to test the validity of the model and the instrument. This also explains why a convenience sample was used for the study.
Study Settings and Subjects

The subjects of this study were individuals with limited sensory abilities who used a computer exclusively with an assistive technology (e.g., a screen reader for the sight impaired) and were enrolled in a course partly or fully delivered via a CMS. A CMS may be used for posting lecture notes, completing assignments, taking exams, communicating, viewing grades, and more. These individuals were approached via the Office of Disability Services at each institution. These offices are primarily responsible for providing support to students with sensory limitations. A total of 32 students
agreed to participate in this survey, including 22 sight-impaired and 10 hearing-impaired students.

Operationalization

The survey instrument had to be designed to collect data to support the four hypotheses. An extensive literature search was conducted to trace validated instruments containing items used to operationalize the various constructs involved in this study. For ACCESSIBILITY, the items from the Brunet et al. (2005) study, designed to capture the perspective of visually impaired users on accessibility and success, were found relevant. For usability, media richness, and perceived success, the Palmer (2002) study on users' perceptions of e-commerce Web sites was found to be germane to the e-learning context; hence, its items were slightly adapted for fit. For items on LEARNING MOTIVATION, the Huang et al. (2005) study, concerned with virtual learning in an MBA program, proved a valuable source of validated items. The consolidated instrument comprised 22 items for the various constructs.

RESULTS

The mean cumulative score of ACCESSIBILITY of the CMS was 5.14. Overall accessibility received a score of 5.9; ease of uploading and downloading with an assistive technology received 5.18; ease of communicating received 5.48. The data supported the hypothesis that accessibility is positively correlated to learning success. The mean cumulative score of USABILITY of the CMS was 5.05. Overall navigability received

DATA ANALYSIS

Data analysis began with descriptive statistics (Table 1) and reliability tests of the measures. All Cronbach alphas were above the acceptable value of .70 (Nunnally, 1967). For convergent and discriminant validity, factor analysis was performed in addition to scale reliability. The factor loadings (Table 2) of most items were satisfactory (>0.7). No items were dropped, owing to the small sample size. The data were subjected to multiple regression (Table 3), using the mean of each construct, indicating a significant regression model (F=7.220, p=0.000) and adjusted R2=0.445. Support for H1 and H2, and lack of support for H3 and H4, were established (Table 4). Items A1 through A5 correspond to accessibility, U1 through U6 to usability, MR1 through MR5 to media richness, LM1 through LM3 to learning motivation, and PS1 through PS3 to perceived success of learning.

Table 1. Descriptive statistics and internal reliability (covariance matrix demonstrating internal reliability)

                     Acc.    Usab.   MR      LM      PS      Cronbach's alpha
Accessibility        1.0                                     0.915
Usability            0.2693  1.0                             0.879
Media Richness       0.5149  0.3148  1.0                     0.906
Learning Motivation  0.4430  0.4432  0.6333  1.0             0.810
Perceived Success    0.8329  0.6455  0.7927  0.6551  1.0     0.833
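For readers who wish to verify internal-reliability figures like the Cronbach alphas in Table 1, the coefficient can be computed directly from raw item responses using the standard formula alpha = (k/(k-1)) * (1 - sum of item variances / variance of the summed scale). The sketch below is a generic illustration with a made-up toy data set, not the study's actual instrument data:

```python
from statistics import variance

def cronbach_alpha(scores):
    """scores: one list per respondent, each containing k item ratings."""
    k = len(scores[0])
    item_vars = [variance(col) for col in zip(*scores)]    # per-item variance
    total_var = variance([sum(row) for row in scores])     # variance of summed scale
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Perfectly consistent items yield the maximum alpha of 1.0:
print(round(cronbach_alpha([[1, 1], [2, 2], [3, 3], [4, 4]]), 3))  # → 1.0
```

With real survey data, values above the .70 threshold cited from Nunnally (1967) would be read as acceptable scale reliability.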
Table 2. Factor loadings (each item's loading on its own factor)

Item  Loading   Item  Loading   Item  Loading   Item  Loading   Item  Loading
A1    0.873     U1    0.811     MR1   0.703     LM1   0.749     PS1   0.809
A2    0.859     U2    0.748     MR2   0.726     LM2   0.720     PS2   0.603
A3    0.760     U3    0.675     MR3   0.685     LM3   0.694     PS3   0.661
A4    0.788     U4    0.844     MR4   0.843
A5    0.810     U5    0.843     MR5   0.791
                                MR6   0.785
Table 3. Regression analysis results (analysis of variance)

            SS      DF   Mean square   F-Ratio   P-value
Regression  9.044    4   2.261         7.220     0.000
Residual    8.456   27   0.313
Total       17.50   31
Table 4. Does data analysis support the hypotheses? (Each hypothesis states that the perceived success of learning of physically challenged students is positively correlated to the factor named.)

H1 (accessibility of the CMS):   coefficient 0.392, p = 0.020 — supported (YES)
H2 (usability of the CMS):       coefficient 0.328, p = 0.037 — supported (YES)
H3 (richness of the CMS):        coefficient 0.260, p = 0.169 — not supported (NO)
H4 (learning motivation):        coefficient 0.044, p = 0.815 — not supported (NO)
a mean score of 5.2; layout received 5.06; making the CMS do what was needed received a score of 5 or higher. Hence, the CMS was overall quite usable. The data supported the hypothesis that usability is positively correlated to learning success. The mean cumulative score of richness was 3.6, with low levels of customizability and feedback, and no provision for FAQs. The data did not support the hypothesis that richness is positively correlated to learning success. The mean cumulative score of LEARNING MOTIVATION was 5.48. The item on interest in learning through a CMS received a score of 5.8; the performance level of the respondents received 5.5; the effort put into the course received 5.1. Despite the high scores, the variance in the data failed to support the hypothesis that learning motivation is positively correlated to learning success. PERCEIVED SUCCESS of learning was operationalized by three items. The item on the comfort level of online discussion received a mean score of 6.01, while that for the level of performance in online exams received 4.65, and the preference for online learning over traditional learning received 5.44.

DISCUSSION

As is evident from the results, the accessibility and usability of the CMS influenced the perceived learning success of students with disability. However, its richness was not found to be significant for the learning outcomes. This may be attributed to the fact that the CMS under study did not offer media rich enough for learning. It was also observed that the students were more comfortable learning and participating through a CMS. This could be attributed to the fact that e-learning in general, and a CMS in particular, offers a medium that presents information simultaneously in both visual and textual modes. A lecture slide delivered over Blackboard facilitates universal comprehension, as students with vision impairment can read it using assistive technologies, while those with hearing impairment can read it visually. Both these groups would have had access to only part of the lecture in a traditional class, as the instructor cannot speak aloud each word that he/she writes on the board, or vice versa. Hence, an accessible CMS represents an excellent realization of the slogan "Equal Opportunity for All."

LIMITATIONS

This study represents a preliminary investigation of those features of a CMS that influence the learning outcome of students with disabilities at the college or university level. The study is limited in two aspects: sample size and choice of CMS. The sample size of thirty-two is good for a pilot, but not for a full-scale study. The CMS under study was limited to Blackboard. In the future, a full-scale survey is planned that would involve students with disability from multiple universities across the U.S. That would also provide an opportunity to examine a wide range of CMSs.

CONCLUSION

E-learning, by its very nature, offers a range of advantages over traditional methods. The tools of e-learning—course management systems—when designed for accessibility and usability, offer a level playing field for learning, irrespective of physical or sensory abilities. Despite the regulations and guidelines on accessibility and usability, there are numerous instances of institutions ignoring the special needs of students with physical/sensory limitations. Access issues in e-learning may arise from two sources: course materials and the medium of delivery. As a starting point, colleges and universities must ensure that their CMSs, the medium for e-learning, are accessible, usable, and rich, irrespective of abilities. This will guarantee better pedagogical opportunities for the physically challenged, resulting in favorable learning outcomes.
FUTURE OUTLOOK

There are some fruitful avenues for future research based on this study. One study could involve a comparative examination of various CMSs for accessibility, usability, and richness. Based on the findings, a taxonomy of good CMS design, pertaining to the three constructs, could be developed to serve as a set of checkpoints for institutions considering CMS adoption. A second study could examine the effect of learning motivation in e-learning at different levels of education, after controlling for the other factors. This would give a clearer perspective on the impact of motivation on the learning success of students with disability. A third study could involve developing a framework of e-learning success that considers different forms of disabilities as well as learning styles. An implication of offering e-learning via such a medium is its potential to offset the decrease in enrollment in institutions of higher education by encouraging students with disabilities, a group that has had a higher dropout rate owing to access issues, to pursue higher education. A second implication could be an increased contribution of individuals with disabilities to the pool of skilled labor.
Section III
Tools and Applications
Chapter XV
Supporting Argumentative Collaboration in Communities of Practice: The CoPe_it! Approach
Nikos Karacapilidis University of Patras and Research Academic Computer Technology Institute, Greece Manolis Tzagarakis Research Academic Computer Technology Institute, Greece
ABSTRACT

Providing the necessary means to support and foster argumentative collaboration is essential for Communities of Practice to achieve their goals. However, current tools are unable to cope with the evolving stages of the collaboration. This is primarily due to the inflexible level of formality they provide. Arguing that a varying level of formality needs to be offered in systems supporting argumentative collaboration, this chapter proposes an incremental formalization approach that has been adopted in the development of CoPe_it!, a Web-based tool that complies with collaborative principles and practices, and provides members of communities engaged in argumentative discussions and decision making processes with the appropriate means to collaborate towards the solution of diverse issues. According to the proposed approach, incremental formalization can be achieved through the consideration of alternative projections of a collaborative workspace.
Copyright © 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

Designing software systems that can adequately address users' needs to express, share, interpret and reason about knowledge during a session of argumentative collaboration has been a major research and development activity for more than twenty years (de Moor and Aakhus, 2006). Designing, building, and experimenting with specialized argumentation and decision rationale support systems has resulted in a series of argument visualization approaches. Technologies supporting argumentative collaboration usually provide the means for discussion structuring, sharing of documents, and user administration. They support argumentative collaboration at various levels and have been tested with diverse user groups and in diverse contexts. Furthermore, they aim at exploring argumentation as a means to establish a common ground between diverse stakeholders, to understand positions on issues, to surface assumptions and criteria, and to collectively construct consensus (Jonassen and Carr, 2000). When using these technologies through a software system supporting argumentative collaboration, users have to follow a specific formalism. More specifically, their interaction is regulated by procedures that prescribe and, at the same time, constrain their work. This may refer both to the system-supported actions a user may perform (types of discourse or collaboration acts) and to the system-supported types of argumentative collaboration objects (e.g., one has to strictly characterize an object as an idea or a position). In many cases, users also have to fine-tune, align, amend or even fully change their usual way of collaborating in order to exploit the system's features and functionalities. While the above are necessary for making the system interpret and reason about human actions (and the associated resources), and thus for offering advanced computational services, there is much evidence that sophisticated approaches
and techniques often resulted to failures (Shipman and McCall, 1994). A number of reasons are responsible for the abovementioned failures. Some reasons originate from the consequences of adopting a specific tool to support argumentative collaboration. In many cases, there is a considerable amount of time and effort required by users to get acquainted with the system. Moreover, introducing a new system introduces burdens that disrupt the users’ usual workflow (Fischer et al., 1991). The very nature of argumentative collaboration, as it is carried out within Communities of Practice (CoPs) poses additional issues. In particular, argumentative collaboration within CoPs is not a linear process from problem statement to decision making, but rather an iterative process that exhibits a series of stages, each one associated with different objectives. In this regard, the problems faced by CoPs can be characterized as “wicked problems” [ref Dialogue Mapping]. Current tools are unable to support this evolving nature of collaboration; their use in such settings results to the “error prone and difficult to correct when done wrong” character and the prematurely imposing structure (Halasz, 1988). The rigid nature of their level of formality contributes decisively to this situation. As a consequence, we argue that a varying level of formality should be considered. This variation may either be imposed by the nature of the task at hand (e.g. decision making, joint deliberation, persuasion, inquiry, negotiation, conflict resolution), the particular context of the collaboration (e.g. legal reasoning, medical decision making, public policy), or the group of people who collaborate each time (i.e. how comfortable people feel with the use of a certain technology or formalism). 
The above considerations advocate an incremental formalization approach, which has been adopted in the development of CoPe_it!, a web-based tool that is able to support argumentative collaboration at various levels of formality (http://copeit.cti.gr). CoPe_it! complies with collaborative principles
and practices, and provides members of communities engaged in argumentative discussions and decision making processes with the appropriate means to collaborate towards the solution of diverse issues. Representative settings where the tool would be useful include medical collaboration towards a decision about the appropriate treatment of a patient, public policy making involving a wide community, collaboration among students in the context of their project work, organization-wide collaboration for the consideration and elaboration of the organization's objectives, web-based collaboration to enhance individual and group learning on an issue of common interest, etc. According to the proposed approach, incremental formalization can be achieved through the consideration of alternative projections (i.e., particular representations) of a collaborative workspace, as well as through mechanisms supporting the switch from one projection to another. This chapter focuses on the presentation of this approach. More specifically, Section 2 comments on a series of background issues related to reasoning and visualization, as well as on the related work that motivated our approach. Section 3 presents our overall approach, illustrates two representative examples of different formality levels, and sketches the procedure of switching among alternative projections of a particular workspace. Section 4 discusses advantages and limitations of the proposed approach. Finally, Section 5 concludes our work and outlines future work directions.
MOTIVATION

The representation and facilitation of argumentative discourses held in diverse collaborative settings has been a subject of research interest for quite a long time. Many software systems have been developed so far, based on alternative models of argumentation structuring, aiming to capture the key issues and ideas during meetings
and create a shared understanding by placing all messages, documents and reference material for a project on a "whiteboard". More recent approaches pay particular attention to the visualization of argumentation in various collaborative settings. As widely argued, visualization of argumentation can facilitate problem solving in many ways, such as in explicating and sharing representations among the actors, in maintaining focus on the overall process, as well as in maintaining consistency and in increasing plausibility and accuracy (Kirschner et al., 2003). Generally speaking, existing approaches provide a cognitive argumentation environment that stimulates reflection and discussion among participants (a comprehensive consideration of such approaches can be found in (Karacapilidis et al., 2005)). However, they receive criticism regarding their ability to clearly display each collaboration instance to all parties involved (usability and ease-of-use issues), as well as regarding the structure used for the representation of collaboration. In most cases, they merely provide threaded discussion forums, where messages are linked passively. This usually leads to an unsorted collection of vaguely associated positions, which is extremely difficult to exploit in future collaboration settings. As argued in (van Gelder, 2003), "packages in the current generation of argument visualization software are fairly basic, and still have numerous usability problems". Equally important, they do not integrate any reasoning mechanisms to (semi-)automate the underlying decision making processes required in a collaboration setting. Admittedly, there is a lack of consensus seeking abilities and decision-making methods. In addition, the assumptions that contemporary approaches make with respect to the nature of collaboration fail to capture its dynamic aspects. In particular, current tools assume that argumentative collaboration proceeds linearly from problem statement to decision making.
However, the processes in which CoPs engage when attempting to address problems using argumentative collaboration have an iterative nature that exhibits various stages with different objectives (Wenger, 1998). For example, at the beginning of the collaboration the objectives may simply be to collect and share knowledge items and to exploit legacy resources. Collaboration may then proceed to a stage where the interrelation of the collected resources constitutes the primary activity. Next, a semi-formal aggregation and specialization of the resources may take priority, along with the semantic annotation of existing items. The stage of exploiting the resulting knowledge patterns and the formal argumentation and reasoning may delineate the end of the collaboration. During collaboration sessions in CoPs, the aforementioned stages cannot be considered hierarchical, as objectives may change at any point in time. Such aspects of the collaboration bring the problems that CoPs attempt to address into the realm of so-called "wicked problems" (Conklin, 2006). Taking the above into account, we claim that an integrated consideration of visualization and reasoning is needed in an argumentative collaboration context. Such an integrated consideration should be in line with incremental formalization principles. More specifically, the above integration should efficiently and effectively address problems related to formality (Shipman and Marshall, 1994). As discussed in (Shipman and McCall, 1994), "users want systems be more of an active aid to their work - to do more for them; yet they already resist the low level of formalization required for passive hypertext". Existing work on incremental formalization argues that problems related to formality have to be solved by approaches that (i) do not necessarily require formalization to be done at the time of input of information, and (ii) support (not automate) formalization by the appropriate software.
At the same time, the abovementioned integrated consideration should also be in line with the information triage process (Marshall and Shipman, 1997), i.e., the process of sorting through numerous relevant materials and organizing them to meet the task at hand. During such a process, users must effortlessly scan, locate, browse, update and structure knowledge resources that may be incomplete, while the resulting structures may be subject to rapid and numerous changes.
THE CoPe_it! APPROACH

The research method adopted for the development of the proposed solution follows the Design Science paradigm, which has been extensively used in information systems research (Hevner et al., 2004). Following this paradigm, our main contribution lies in the development of a web-based tool for supporting argumentative collaboration and the underlying creation, leveraging and utilization of the relevant knowledge. Generally speaking, our approach allows for distributed (synchronous or asynchronous) collaboration and aims at aiding the involved parties by providing them with a series of argumentation, decision making and knowledge management features. Moreover, it exploits and builds on the issues and concepts discussed in the previous section.
Challenges in Argumentative Collaboration

A series of interviews with members of diverse CoPs (from the engineering, management and education domains) has been performed in order to identify the major issues they face during their argumentative collaboration practices. These issues actually constitute a set of challenges for our approach, in that the proposed collaboration model and infrastructure must provide the necessary means to appropriately address them. These issues are:

• Management of information overload: This is primarily due to the extensive and uncontrolled exchange of comments, documents and, in general, any type of information/knowledge resource that occurs in the settings under consideration. For instance, such a situation may appear during the exchange of ideas, positions and arguments; individuals usually have to spend much effort to keep track of and conceptualize the current state of the collaboration. Information overload situations may ultimately harm a community's objectives, requiring users to spend much time on information filtering and comprehension of the overall collaboration status.
• Diversity of collaboration modes, as far as the protocols followed and the tools used are concerned: Interviews indicated that the evolution of the collaboration proceeds incrementally; ideas, comments, or any other type of collaboration object are exchanged and elaborated, and new knowledge emerges slowly. When a community's members collaboratively organize information, enforced formality may require specifying their knowledge before it is fully formed. Such emergence cannot be attained when the collaborative environment enforces a formal model (i.e., predefined information units and relationships) from the beginning. On the other hand, formalization is required in order to ensure the environment's capability to support and aid the collaboration efforts. In particular, the abilities to support decision making, estimation of the present state, or summary reports benefit greatly from formal representations of the information units and relationships.
• Expression of tacit knowledge: A community of people is actually an environment where tacit knowledge (i.e., knowledge that the members do not know they possess, or knowledge that members cannot express with the means provided) predominantly exists. Such knowledge must be able to be efficiently and effectively represented.
• Integration and sharing of diverse information and knowledge: Many resources required during a collaborative session have either been used in previous sessions or reside outside the members' working environment. Moreover, outcomes of past collaboration activities should be able to be reused as a resource in subsequent collaborative sessions.
• Decision making support: Many communities require support to reach a decision. This means that their environment (i.e., the tool used) needs to interpret the information types and relationships in order to proactively suggest trends or even calculate the outcome of a collaborative session (e.g., as is the case in voting systems).
Towards an Argumentative Collaboration Framework

To address the above issues, our approach builds on a conceptual framework where the formality and the level of knowledge structure during argumentative collaboration are not considered as predefined and rigid properties of the tool, but rather as adaptable aspects that can be modified to meet the needs of the tasks at hand. By the term formality, we refer to all the rules enforced by the system, with which all discourse actions of users must comply. By allowing formality to vary within the collaboration space, incremental formalization, i.e., a stepwise and controlled evolution from a mere collection of individual ideas and resources to contextualized and interrelated knowledge artifacts, can be achieved. In the proposed collaboration model, projections constitute the "vehicle" that permits incremental formalization of argumentative collaboration (see Figure 1). A projection can be defined as a particular representation of the collaboration space, in which there exists a consistent set of abstractions able to solve a particular organizational problem during argumentative collaboration. With
Figure 1. Alternative projections of a collaboration space
the term abstraction, we refer to the particular discourse types, relationships and actions that are available in a particular projection, with which a particular problem can be represented, expressed and, ultimately, solved. Each projection of the collaboration space provides the necessary mechanisms to support a particular level of formality. More specifically, the more informal a projection is, the more ease-of-use is implied; at the same time, the actions that users may perform are intuitive and not time consuming (e.g., dragging and dropping a document onto a shared collaboration space). Informality is associated with generic types of actions and resources, as well as implicit relationships between them. However, the overall context is human (and not system) interpretable. On the other hand, the more formal a projection is, the less ease-of-use it offers (users may have to go through training or read long manuals in order to comprehend and get familiar with sophisticated system features); the permitted actions are fewer, less intuitive and more time
consuming. Formality is associated with fixed types of actions, as well as explicit relationships between them. The overall context in this case is both human and system interpretable. An informal projection also aims at supporting information triage. It is the informal nature of this projection that permits such an ordinary and unconditioned evolution of knowledge structures. While such a way of dealing with knowledge resources is conceptually close to the practices that humans follow in their everyday environment (e.g., their desk), it is inconvenient in situations where support for advanced decision making processes must be provided. Such capabilities require knowledge resources and structuring facilities with fixed semantics, which should be understandable and interpretable not only by the users but also by the tool. Hence, decision making processes can be better supported in environments that exhibit a high level of formality. The formal projections of the collaboration space serve such needs.
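The contrast between the two formality levels can be sketched in code. The following is a minimal, hypothetical illustration (not CoPe_it!'s actual data model): an informal projection accepts free-form element types, while a formal projection enforces a fixed, system-interpretable vocabulary.

```python
from dataclasses import dataclass, field
from typing import List

# Fixed vocabulary assumed for the formal projection (illustrative)
FORMAL_TYPES = {"issue", "alternative", "position", "preference"}

@dataclass
class Item:
    label: str
    kind: str = "note"   # free-form in an informal projection

@dataclass
class Projection:
    formal: bool
    items: List[Item] = field(default_factory=list)

    def add(self, item: Item) -> None:
        # A formal projection enforces its fixed vocabulary; an informal
        # one accepts anything, leaving interpretation to the humans.
        if self.formal and item.kind not in FORMAL_TYPES:
            raise ValueError(f"type {item.kind!r} not allowed in a formal projection")
        self.items.append(item)

informal = Projection(formal=False)
informal.add(Item("scanned article", kind="resource"))  # accepted as-is

formal = Projection(formal=True)
formal.add(Item("Which treatment?", kind="issue"))      # accepted
```

Attempting to add the untyped resource to the formal projection would raise an error, which is precisely the trade-off described above: machine interpretability in exchange for constrained user actions.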
CoPe_it! in a Collaboration Session

To better illustrate our approach, this subsection presents two alternative (already implemented) projections of a particular collaborative session (the session concerns the most appropriate treatment for a patient with breast cancer). The first one is fully informal and complies with the abovementioned information triage principles, while the second one builds on an IBIS-like formalism (Conklin and Begeman, 1989) and supports group decision making. As mentioned above, the aim of an informal projection of the collaboration space is to provide users with the means to structure and organize information units easily, and in a way that conveys
semantics to users. Generally speaking, informal projections may support an unbounded number of discourse element types (e.g., comment, idea, note, resource). Moreover, users may create any relationship among discourse elements (there are no fixed relationship types); hence, relationship types may express agreement, disagreement, support, request for refinement, contradiction, etc. Informal projections may also provide abstraction mechanisms that allow the creation of new abstractions out of existing ones. Abstraction mechanisms include:

• Annotation and metadata: The ability to annotate instances of various discourse elements and to add (or modify) metadata.
• Aggregation: The ability to group a set of instances of discourse elements so that they can be handled as a single conceptual entity. This may lead to the creation of additional informal sub-projections, where a set of discourse elements can be considered separately, but still in relation to the context of a particular collaboration.
• Generalization/Specialization: The ability to create semantically coarser or more detailed discourse types. Generalization/specialization may not lead to additional informal projections, but may help users manage information pollution of the collaboration space, leading to IS-A hierarchies.
• Patterns: The ability to specify instances of interconnections between discourse elements (of the same or a different type) as templates acting as placeholders that can be reused within the discussion.

Figure 2. Instance of an informal projection
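The aggregation mechanism described above can be sketched as a simple composite structure. The class names below are illustrative assumptions, not CoPe_it!'s actual implementation: a group of discourse elements is wrapped so it can be handled as a single conceptual entity, while its members remain reachable as a sub-projection.

```python
class Element:
    """A discourse element in an informal projection."""
    def __init__(self, label: str):
        self.label = label

class Aggregate(Element):
    """A group of elements handled as a single conceptual entity;
    its members form an informal sub-projection."""
    def __init__(self, label: str, members):
        super().__init__(label)
        self.members = list(members)

    def flatten(self):
        # Recursively expand nested aggregates back into atomic elements
        out = []
        for m in self.members:
            out.extend(m.flatten() if isinstance(m, Aggregate) else [m])
        return out

# An aggregate like the "solution" rectangles of Figure 2 (illustrative data)
solution = Aggregate("solution-2",
                     [Element("lumpectomy"),
                      Element("On tumor sizes positions")])
```

Because an `Aggregate` is itself an `Element`, aggregates can nest, which mirrors the sub-projection behavior described in the bullet above.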
Figure 2 presents an example of an informal projection of the collaboration session considered. Medical doctors discuss the case of a particular patient, aiming to reach a decision on the most appropriate treatment. Since the process of gathering and discussing the available treatment options is initially unstructured, highly dynamic and thus evolving rapidly, the informal space provides the most appropriate environment to support collaboration at this stage. The aim is to bring the session to a point where the main trends crystallize, thus enabling the switch to a formal projection (upon the participants' wish). In the example of Figure 2, three approaches to the patient's treatment, proposed by three different users, have so far been elaborated, namely "modified radical mastectomy", "lumpectomy" and "radiation". Each proposed treatment is visible on the collaboration space as an "idea". Participants may use relationships to relate resources (documents, links, etc.), comments and ideas to the proposed treatments. The semantics of these relationships are user-defined. Visual
cues may be used to make the semantics of a relationship more explicit, if desired. For instance, a red arrow indicates comments and resources that express objection to a treatment, while green arrows express approval of a treatment. Note that the resource entitled "On tumor sizes positions" seems to argue against the solution of "lumpectomy" while, at the same time, arguing in favor of "modified radical mastectomy". This is due to the information contained in it (some "chunks" advocate or avert from a particular solution; this is further exploited in a formal projection). Other visual cues supported in this projection may bear additional semantics (e.g., the thickness of an edge may express how strongly a resource/idea objects to or approves a treatment). Informal projections also provide mechanisms that help aggregate aspects of collaboration activities. For example, the colored rectangles labeled "solution-1", "solution-2" and "solution-3" help participants visualize what the current alternatives are. Although, at this projection instance, these rectangles are simply visual conveniences, they play an important role during the switch to formal projections, enabling the implementation of abstraction mechanisms. While an informal projection of the collaboration space aids the exploitation of information by users, a formal projection aims mainly at the exploitation of information by the machine. As noted above, formal projections provide a fixed set of discourse element and relationship types, with predetermined, system-interpretable semantics. More specifically, the formal projection presented in Figure 3 is based on the approach followed in the development of Hermes (Karacapilidis and Papadias, 2001). Beyond providing a workspace that triggers group reflection and captures organizational memory, this projection provides a structured language for argumentative discourse and a mechanism for the evaluation of alternatives.
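The user-defined relationships and visual cues of the informal projection can be sketched as follows. This is a hypothetical model (the field names and example data are assumptions, not CoPe_it!'s API): free-form labels carry the human-interpretable semantics, while attributes such as color and thickness encode stance and strength.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Relationship:
    source: str            # e.g. a resource or comment
    target: str            # e.g. a proposed treatment ("idea")
    label: str = ""        # free-form, user-defined semantics
    color: str = "gray"    # convention: "red" = objection, "green" = approval
    thickness: int = 1     # how strongly the source objects or approves

# One resource may relate differently to two ideas at once,
# as with "On tumor sizes positions" in Figure 2:
rels = [
    Relationship("On tumor sizes positions", "lumpectomy",
                 label="argues against", color="red", thickness=3),
    Relationship("On tumor sizes positions", "modified radical mastectomy",
                 label="argues in favor", color="green", thickness=2),
]

def objections(idea: str, rels: List[Relationship]) -> List[Relationship]:
    # A purely visual convention that a formal projection could
    # later exploit as machine-interpretable semantics
    return [r for r in rels if r.target == idea and r.color == "red"]
```

Note that nothing here is enforced by the system; the conventions only become binding once the session switches to a formal projection.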
Figure 3. Instance of a formal projection

Taking into account the input provided by users, this projection constructs an illustrative discourse-based knowledge graph that is composed of the ideas expressed so far, as well as their supporting documents. Moreover, through the integrated decision support mechanisms, participants are continuously informed about the status of each discourse item asserted so far and can reflect further on them according to their beliefs and interests in the outcome of the discussion. In addition, the particular projection aids group sense-making and mutual understanding through the collaborative identification and evaluation of diverse opinions. The discourse elements allowed in this projection are "issues", "alternatives", "positions", and "preferences". Issues correspond to problems to be solved, decisions to be made, or goals to be achieved. They are brought up by users and are open to dispute (the root entity of a discourse-based knowledge graph has to be an issue). For each issue, users may propose alternatives (i.e., solutions to the problem under consideration) that correspond to potential choices. Nested issues, in cases where some alternatives need to be grouped together, are also allowed. Positions are asserted in order to support the selection of a specific course of action (alternative), or to avert the users' interest from it by expressing an objection. A position may also refer to another (previously asserted) position, thus arguing in favor of or against it. Finally, preferences provide individuals with a qualitative way to weigh reasons for and against the selection of a certain course of action. A preference is a tuple of the form [position, relation, position], where the relation can be "more important than", "of equal importance to", or "less important than". The use of preferences results in the assignment of various levels of importance to the alternatives at hand. Like the other discourse elements, they are subject to further argumentative discourse. The above four types of elements enable users to contribute their knowledge on the particular problem or need (by entering issues, alternatives and positions) and also to express their relevant values, interests and expectations (by entering positions and preferences). Moreover, the system continuously processes the elements entered by the users (by triggering its reasoning mechanisms each time a new element is entered in the graph), thus helping users become aware of the elements for which there is (or there is not) sufficient (positive or negative) evidence, and accordingly conduct the discussion in order to reach consensus. Further to the argumentation-based structuring of a collaborative session, this projection integrates a reasoning mechanism that determines the status of each discourse entry, the ultimate aim being to keep users aware of the discourse outcome. More specifically, the alternatives, positions and preferences of a graph have an activation label (it can be "active" or "inactive") indicating their current status (inactive entries appear in a red italics font). This label is calculated according to the argumentation underneath and the type of evidence specified for them ("burden of proof"). Activation in our system is a recursive procedure; a change of the activation label of an element is propagated upwards in the discussion graph. Depending on the status of positions and preferences, the mechanism goes through a scoring procedure for the alternatives of the issue (for a detailed description of the system's reasoning mechanisms, see Karacapilidis and Papadias, 2001). At each discussion instance, the system informs users about the most prominent (according to the underlying argumentation) alternative solution. In the instance shown in Figure 3, "modified radical mastectomy" is the best justified solution so far. However, this may change depending on future argumentation. In other words, each time an alternative is affected during the discussion, the issue it belongs to is updated, since another alternative solution may be indicated by the system.
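The recursive activation and scoring mechanism described above can be illustrated with a deliberately simplified sketch. This is not the actual Hermes reasoning mechanism (see Karacapilidis and Papadias, 2001, for that); preferences and burden-of-proof settings are omitted, the attack graph is assumed acyclic, and all names are illustrative.

```python
class Position:
    """Supports (pro=True) or objects to (pro=False) its parent element;
    other positions may in turn attack this one."""
    def __init__(self, label: str, pro: bool = True):
        self.label, self.pro = label, pro
        self.attackers = []   # counter-positions (assumed acyclic)

    def active(self) -> bool:
        # Recursive status computation: a position is defeated when at
        # least one of its attackers is itself active.
        return not any(a.active() for a in self.attackers)

class Alternative:
    def __init__(self, label: str):
        self.label, self.positions = label, []

    def score(self) -> int:
        # Count active supporting positions against active objections
        return sum(1 if p.pro else -1
                   for p in self.positions if p.active())

def prominent(alternatives):
    # The tool continuously reports the best justified alternative so far
    return max(alternatives, key=lambda a: a.score())

# Illustrative session state, loosely following Figure 3
mastectomy = Alternative("modified radical mastectomy")
lumpectomy = Alternative("lumpectomy")
mastectomy.positions.append(Position("tumor size favors mastectomy"))
lumpectomy.positions.append(Position("less invasive"))
lumpectomy.positions.append(Position("tumor too large", pro=False))
```

With this state, `prominent(...)` would report the mastectomy alternative; attacking the objection "tumor too large" with a new active position would flip lumpectomy's score upward, showing how a single new element propagates through the graph and can change the indicated solution.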
Changing the Formality Level

The projections discussed above could individually serve the needs of a particular community (for a specific context). However, they should also be considered, and exploited, jointly, in that a switch from one to the other can better facilitate the argumentative collaboration process. Adopting an incremental formalization approach, a formal projection can be considered as a filtered and machine-interpretable view of an informal one. Our approach supports cases where argumentative collaboration starts through the informal projection (see Section 3.3.1), where instances of any discourse element and relationship type can be created by any participant. Such collaboration may start from an empty collaboration space or may continue elaborating an informal view of a past collaboration session (existing resources and relationships between them can thus be reused). At some point in the collaboration, an increase of the formality level can be decided (e.g. by an individual user or the session's facilitator), thus switching to the formal projection (see Section 3.3.2), where discourse and relationship type instances will be transformed, filtered out, or kept as-is. These transformations are determined by the associated visualization and reasoning model of the formal projection (consequently, parts of this process can be fully automated and others only semi-automated). For instance, referring to the projections discussed above, the colored rectangles shown in Figure 2 will be transformed into the alternatives of Figure 3 (each alternative expressing a related idea that existed in Figure 2). Moreover, provided that a particular resource appearing in the informal view has been appropriately annotated, the formal projection is able to exploit extracts ("chunks") of it and structure them accordingly. Such extracts appear as atomic objects in the formal projection. For instance, consider the multiple
arguments in favor of and against the alternatives of Figure 3; these resulted from the appropriate annotation of the resources appearing in Figure 2. One may also consider a particular argumentative collaboration case where a decrease of formality is desirable. For instance, while collaboration proceeds through a formal projection, some discourse elements may need to be further justified, refined and elucidated. It is at this point that the collaboration session could switch to a more informal view in order to provide participants with the appropriate environment to better shape their thoughts (before possibly switching back to the formal projection). Note that there may exist more than one informal projection related to a particular formal view (depending on the type of the discourse element to be elaborated). Switching from a formal to an informal projection is thus also supported by our approach. In addition to the above, our approach permits users to create one or more private spaces, where they can organize and elaborate the resources of a collaboration space according to their own understanding (and at their own pace). Although private in nature, such spaces can be shared with peers. Moreover, each projection is associated with a set of tools that best suit its purposes. These tools enable the population, manipulation and evolution of the discourse element types allowed in the particular projection. There can be tools allowing the reuse of information residing in legacy systems, tools permitting the authoring of multimedia content, annotation tools, as well as communication and management tools.
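The increase of the formality level described above can be illustrated as a projection-specific mapping over element types: each informal element is transformed into a formal discourse element, filtered out, or kept as-is. All names and the mapping table below are assumptions for illustration, not the actual system model:

```python
# Illustrative sketch of raising the formality level of a collaboration
# space (assumed names; not the actual implementation).

# The formal projection's model decides the fate of each element type:
# transform it into a formal discourse element, filter it out, or keep it.
FORMAL_MODEL = {
    "idea": ("transform", "alternative"),   # colored rectangles -> alternatives
    "note": ("transform", "position"),      # annotated chunks -> pro/con positions
    "comment": ("filter", None),            # informal chatter is filtered out
    "document": ("keep", None),             # resources survive unchanged
}

def raise_formality(informal_elements, model=FORMAL_MODEL):
    """Map (kind, payload) pairs of an informal view to the formal one."""
    formal = []
    filtered = []   # elements excluded from the machine-interpretable view
    for kind, payload in informal_elements:
        action, target = model.get(kind, ("keep", None))
        if action == "transform":
            formal.append((target, payload))
        elif action == "keep":
            formal.append((kind, payload))
        else:  # "filter"
            filtered.append((kind, payload))
    return formal, filtered
```

The reverse switch (decreasing formality) would amount to re-attaching the filtered elements and relaxing the element typing, which is why the informal view can serve as the richer "source" representation.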
Implementation Issues

A web-based prototype version of CoPe_it!, supporting various levels of formality using projections such as the ones described above, has been implemented. The prototype makes use of Web 2.0 technologies, such as AJAX (Asynchronous JavaScript and XML), to deliver the functionalities of the different projections to end users. Based on these technologies, concurrent and synchronous collaboration in every projection is provided. Individual collaboration sessions are stored in XML format. There is at least one XML schema for each formality level (i.e. projection) that encodes and implements the constraints and rules active in it. More formal levels are manifested as stricter XML schemas, where types and relationships are fewer and more explicit than in cases of less formal levels. A database stores all XML documents, making XML querying facilities available. To enable asynchronous collaboration within workspaces, CoPe_it! has adopted the Comet application architecture (see http://en.wikipedia.org/wiki/Comet_(programming)). Although the literature has yet to show the feasibility of using Comet in the context of argumentative collaboration over the Web, our initial experiences with Comet are promising, as they show that it can be reliably used in such data- and action-intensive environments. Nevertheless, problems and concerns have been identified, for example efficiency issues in the propagation of events, which can however be addressed by making use of event caching mechanisms.
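The idea that stricter formality levels admit fewer, more explicit element types can be illustrated with a simplified validity check over a stored session. The real prototype uses XML Schema documents; the allowed-type sets and function names below are assumptions for illustration:

```python
# Illustrative sketch: formality levels as increasingly strict schemas
# (a simplified stand-in for the prototype's XML schemas; names assumed).
import xml.etree.ElementTree as ET

# Allowed discourse element types per projection; the formal projection
# admits fewer, more explicit types than the informal one.
SCHEMAS = {
    "informal": {"item", "note", "idea", "comment", "link"},
    "formal": {"issue", "alternative", "position", "preference"},
}

def validate_session(xml_text, level):
    """Check that every discourse element in a stored collaboration
    session conforms to the schema of its formality level."""
    allowed = SCHEMAS[level]
    root = ET.fromstring(xml_text)
    offending = [
        el.tag for el in root.iter()
        if el.tag != "session" and el.tag not in allowed
    ]
    return offending == [], offending

session = """<session>
  <issue>
    <alternative>modified radical mastectomy</alternative>
    <position>supported by the biopsy results</position>
  </issue>
</session>"""
```

Storing every session as an XML document validated this way is what lets a single database serve all projections while still enforcing each level's constraints.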
Discussion

Referring to Shipman and Marshall (1994), we first draw remarks concerning the advantages and limitations of the proposed approach with respect to issues such as cognitive overhead, tacit knowledge, premature structure, and situational differences. Regarding the first issue, we argue that our approach mirrors working practices with which users are well acquainted (they are part of their ordinary tasks), thus exhibiting low "barriers to entry". Moreover, it reduces the overhead of entering information by allowing a user-friendly reuse
of existing documents (mechanisms for reusing existing knowledge sources, such as e-mail messages and entries or topics of web-based forums, as well as multimedia documents, such as images, video and audio, have also been integrated). In addition, our approach is able to defer the formalization of information until later in the task. This may be achieved through the use of appropriate annotation and ontology management tools. In any case, however, users may avoid using such (usually sophisticated) tools, thus losing the benefits of a more formal representation of the asserted knowledge resources. A remedy could be to have such processing performed by experienced users. It should also be noted that, due to the collaborative approach supported, the total overhead associated with formalizing information can be divided among users. Regarding the management of tacit knowledge, we argue that the alternative projections offered, as well as the mechanisms for switching among them, may enhance its acquisition, capture and representation. Limitations certainly exist; nevertheless, our approach promotes active participation in knowledge sharing activities which, in turn, enhances knowledge flow. Reuse of past collaboration spaces also contributes to bringing previously tacit knowledge to consciousness. Our approach does not impose (or even advocate) premature structure; participants may select the projection they want to work with, as well as the tasks they want to perform when working in this projection (e.g. a document can be tagged or labelled whenever a participant wants; moreover, this need not be done in one attempt). Finally, considering situational differences, we argue that our approach is generic enough to address diverse collaboration paradigms. This is achieved through the proposed projection-oriented approach (each projection having its own structure and rationale), as well
as the mechanisms for switching projections (such mechanisms incorporate the rationale of the structures' evolution). As mentioned above, the proposed approach is the result of action research studies for improving argumentative collaboration. It has already been introduced in diverse educational and organizational settings for a series of pilot applications. Preliminary results show that it fully covers the user requirements analyzed in Section 3.1. It also stimulates interaction and makes users more accountable for their contributions, while aiding them to conceive, document and analyze the overall argumentative collaboration context in a holistic manner. In addition, these results show that the learning effort for the proposed tool is not prohibitive, even for users who are not highly adept in the use of IT tools; in most cases, an introduction of less than an hour was sufficient to acquaint users with the tool's features and functionalities.
Conclusion

This chapter has described an innovative approach that provides the means for addressing the issues related to the formality needed in argumentative collaboration support systems. This approach aims at contributing to the field of social software by supporting argumentative interaction between people and groups, enabling social feedback, and facilitating the building and maintenance of social networks. Future work directions include the extensive evaluation of the corresponding system in diverse contexts and collaboration paradigms, which is expected to shape our thinking towards the development of additional projections, as well as experimentation with and integration of additional visualization cues, aiming at further facilitating and augmenting the information triage process.
Acknowledgment

Research carried out in the context of this work has been partially funded by the EU PALETTE (Pedagogically Sustained Adaptive Learning through the Exploitation of Tacit and Explicit Knowledge) Integrated Project (IST FP6-2004, Contract Number 028038).
References

Conklin, J., & Begeman, M. (1989). gIBIS: A tool for all reasons. Journal of the American Society for Information Science, 40(3), 200-213.

Conklin, J. (2006). Dialogue mapping: Building shared understanding of wicked problems. New York: John Wiley & Sons.

de Moor, A., & Aakhus, M. (2006). Argumentation support: From technologies to tools. Communications of the ACM, 49(3), 93-98.

Fischer, G., Lemke, A.C., McCall, R., & Morch, A. (1991). Making argumentation serve design. Human Computer Interaction, 6(3-4), 393-419.

Halasz, F. (1988). Reflections on NoteCards: Seven issues for the next generation of hypermedia systems. Communications of the ACM, 31(7), 836-852.

Hevner, A.R., March, S.T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75-105.

Jonassen, D.H., & Carr, C.S. (2000). Mindtools: Affording multiple representations for learning. In S.P. Lajoie (Ed.), Computers as cognitive tools II: No more walls: Theory change, paradigm shifts and their influence on the use of computers for instructional purposes (pp. 165-196). Mahwah, NJ: Erlbaum.

Karacapilidis, N., & Papadias, D. (2001). Computer supported argumentation and collaborative decision making: The HERMES system. Information Systems, 26(4), 259-277.

Karacapilidis, N., Loukis, E., & Dimopoulos, S. (2005). Computer-supported G2G collaboration for public policy and decision making. Journal of Enterprise Information Management, 18(5), 602-624.

Kirschner, P., Buckingham Shum, S., & Carr, C. (2003). Visualizing argumentation: Software tools for collaborative and educational sense-making. London, UK: Springer Verlag.

Marshall, C., & Shipman, F. (1997). Spatial hypertext and the practice of information triage. In Proceedings of ACM Hypertext '97, Southampton, UK. Available online at http://www.csdl.tamu.edu/~shipman/papers/ht97viki.pdf

Shipman, F.M., & McCall, R. (1994). Supporting knowledge-base evolution with incremental formalization. In Proceedings of the CHI '94 Conference (pp. 285-291), April 24-28, 1994, Boston, MA.

Shipman, F.M., & Marshall, C.C. (1994). Formality considered harmful: Issues, experiences, emerging themes, and directions. Technical Report ISTL-CSA-94-08-02, Xerox Palo Alto Research Center, Palo Alto, CA.

van Gelder, T. (2003). Enhancing deliberation through computer supported argument visualization. In P. Kirschner, S. Buckingham Shum, & C. Carr (Eds.), Visualizing argumentation: Software tools for collaborative and educational sense-making (pp. 97-115). London: Springer Verlag.

Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge: Cambridge University Press.
Chapter XVI
Personalization Services for Online Collaboration and Learning

Christina E. Evangelou, Informatics and Telematics Institute, Greece
Manolis Tzagarakis, Research Academic Computer Technology Institute, Greece
Nikos Karousos, Research Academic Computer Technology Institute, Greece
George Gkotsis, Research Academic Computer Technology Institute, Greece
Dora Nousia, Research Academic Computer Technology Institute, Greece
Abstract

Collaboration tools can be exploited as virtual spaces that satisfy community members' needs to construct and refine their ideas, opinions, and thoughts in meaningful ways, in order to successfully assist individual and community learning. More specifically, collaboration tools, when properly personalized, can aid individuals in articulating their personal standpoints in a way that can prove useful for the rest of the community to which they belong. Personalization services, when properly integrated into collaboration tools, can aid the development of learning skills, the interaction with other actors, as well as the growth of the learners' autonomy and self-direction. This work presents a framework of personalization services developed to address the requirements for efficient and effective collaboration between online community members, which can act as a catalyst for individual and community learning.
Copyright © 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Personalization Services
Introduction

Computer Supported Collaborative Work (CSCW) has long been a subject of interest for various disciplines and research fields. CSCW systems are collaborative environments that support dispersed working groups so as to improve quality and productivity (Eseryel et al., 2002). Varying from stand-alone applications to web-based solutions for the provision of communication, cooperation and coordination services, software tools supporting collaborative work, commonly referred to as groupware, provide individuals and organizations with support for group cooperation and task orientation, especially in distributed or networked settings (Ackerman et al., 2008). Such technologies have enhanced collaboration, affecting people's everyday working and learning practices. Furthermore, one of the core aims of the CSCW discipline has always been to assist individuals and organisations in sharing knowledge, whenever it is required and wherever it is located (Lipnack and Stamps, 1997). Nevertheless, research findings on the usage of collaboration tools show that supporting group members in expressing personal ideas and opinions, and providing them with adequate means for the articulation and sharing of their knowledge, is an extremely complicated and difficult task (Olson & Olson, 2000). Furthermore, it is generally acknowledged that traditional software approaches supporting collaboration are no longer sufficient for contemporary communication and collaboration needs (Moor & Aakhus, 2006). This work concerns the design of personalized web-based tools that enable collaborative work, with emphasis on aspects such as the sharing of knowledge and, consequently, learning. We envisage collaboration tools that can promote learning and encourage creative, parallel and lateral thinking during collaboration. Towards this, we argue that personalized services can be of great value, as they enable the provision of services tailored to an individual's (or, when applicable, a community's) skills, needs and preferences. Thus, we first performed a comprehensive literature and practice survey of related issues regarding Communities of Practice, collaboration and learning. Then, we developed a generic Learner Profile model to formalize CoP members as human actors in settings where learning takes place. The Learner Profile presented in this chapter contributes to the proper user modelling required for the development of virtual environments for collaboration. The remainder of this chapter is structured as follows. Section 2 discusses issues related to online collaboration and learning. Section 3 provides an overview of user modelling issues and presents the Learner Profile model of our approach. Section 4 provides information about the acquisition of the data required for the population of the proposed Learner Profile. Section 5 presents the proposed set of personalized collaboration services towards learning and their relation to the proposed Learner Profile. Section 6 concludes with final remarks and future work directions.
Online Collaboration and Learning

The Internet is an artefact that emerged from people's need to communicate and share content, and it enables various kinds of web-based collaboration and virtual teamwork. Early online communities were mostly formed through the use of mailing lists or bulletin boards. Today, the availability of social software applications has resulted in the phenomenal growth of user embodiment in virtual spaces and the constant emergence of online communities (Anderson, 2007). Social software, i.e. software that supports group communications, is perceived as a particular type of software concerned with the augmentation of human social and/or collaborative abilities. As clearly stated in (Boulos & Wheeler, 2007), the increased user
contribution leads to the growth of "collective intelligence" and reusable dynamic content. It is a fact that most communities are formed around some kind of web-based interface that allows them to exchange their personal ideas, opinions and beliefs, such as blogs or multimedia content sharing applications. In this way, collaboration is facilitated by online communication spaces where individuals can develop a sense of belonging, usually through interacting with other users on topics of common interest. Also, as stated in (Baxter, 2007), in a focused community it is the member-generated content that adds stickiness to a site, encouraging people to stay, participate and revisit. As organizations start to acknowledge the significance of online communities in helping them meet their business needs and objectives, new efforts to better facilitate the processes of collaborative learning in these communities are constantly emerging (Quan-Haase, 2005). In this vein, online communities are considered as "knowledge networks", meaning institutionalized, informal networks of professionals managing domains of knowledge (Ardichvili et al., 2003). Knowledge sharing and exchange is an ongoing process among community members. Knowledge sharing networks, established for intra- and inter-organizational communities, act as a forum of reciprocal knowledge sharing among knowledge workers. An especially prized type of community, the so-called Community of Practice (CoP), is formed by groups of people who share an interest in a domain of human endeavour and engage in a process of collective learning (Wenger, 1998). It is this very process of knowledge sharing that results in collective learning and creates bonds between members, since such communities are formed by groups of people who are willing to share and further elaborate on their knowledge, insights and experiences (Wenger & Snyder, 2000).
Being tied to and performed through practice, learning is considered of premium value by practitioners for
improving their real working practices (Steeples & Goodyear, 1999). Above and beyond learning situated in explicitly defined contexts, modern learning theories strongly support the value of communities and collaborative work as effective settings for learning (Hoadley & Kilner, 2005). Thus, collaboration is considered an essential element for effective learning, since it enables learners to better develop their points of view and refine their knowledge. On the other hand, learning is a major part of online communities' activities; one of the most significant roles undertaken by almost all community members is that of a learner. Much of the work of finding, interpreting and connecting relevant pieces of information, negotiating meanings and eliciting knowledge in conversations with others, creating new ideas and using them to come up with a final product, happens in the head of a knowledge worker or as part of communication or doing work (Efimova, 2004). Situated learning in particular, i.e. learning that normally occurs as a function of an activity, context and culture, is closely related to the social interactions in the community context. More specifically, situated learning implies the exchange of a series of problem interpretations, interests, objectives, priorities and constraints, which may express alternative, fuzzily defined, or conflicting views. On the other hand, collaborative learning refers to processes, methodologies and environments where professionals engage in a common task and where individuals depend on and are accountable to each other. When speaking about collaborative learning, we espouse Wenger's perspective of learning as a social phenomenon in the context of our lived experience of participation in the world (Wenger, 1998).
In this regard, an especially valued activity involves information exchanges in which information is constructed through addition, explanation, evaluation, transformation or summarising (Gray, 2004; Maudet & Moore, 1999).
User Modeling toward Learning
User Modelling Systems

During the past two decades, various user-adaptive application systems have been developed, influenced by the findings of user modelling research. It is a fact, though, that in most of these early approaches, user modelling functionality was an integral part of the user-adaptive application. In more recent approaches, several systems were designed to provide personalized content to users. These systems focused on the design of user profiles, where information about the user's preferences and interests was stored. On the other hand, and due to the rise of the Internet, during the past decade several research efforts have produced user modelling servers, developed with the aim of operating in a web environment. Personis (Kay et al., 2002), for instance, is a user model server focusing on issues like privacy, control and self-exploration. Personis is targeted at adaptive hypermedia applications and is influenced by the um toolkit, which follows a component-based architecture. Another related approach is Doppelgänger (Orwant, 2005), a user modelling system that monitors user actions and detects patterns within these actions. The architecture of the system is based on the server-client paradigm. The client is implemented through sensors which gather the required information and forward it to the server for further analysis. Following this phase, applications gain access to the information produced by the server. Nevertheless, these systems do not adequately model the user, since they only store the user profile, focusing solely on the presentation layer. In order for such systems to be more efficient in providing personalized support to users, design should keep the various user roles at the epicentre of its interest, so as to result in a user model which supports user activities and has the ability to adapt and self-improve during its lifetime.

Modelling Users as Learners

User models are an essential part of every adaptive system. In the following, we discuss design issues and present the proposed learner profile model of our approach. The specification of this model is oriented towards the development of the personalized services appropriate for learners and/or CoPs. Research findings about learner modelling show that, due to the complexity of human actors and the diversity of learning contexts, the development of a commonly accepted learner profile is a highly complex task (Dolog & Schäfer, 2005). For instance, the learner model proposed in (Chen & Mizoguchi, 1999) depicts a learner as a concept hierarchy, but it does not refer to issues such as the learning object, or the learners' interactions with their environment and other people. However, it provides both interesting information about a learner's cognitive characteristics and a representation of knowledge assessment issues. Another related approach, the "PAPI Learner" conceptual model, comprises preferences, performance, portfolio, and other types of information (PAPI, 2000). Yet, this model is too generic, as its primary aim is to be portable in order to fit a wide range of applications, and it does not provide any information about the dynamic aspects of a learner's profile. The IMS Learner Information Package specification (IMS LIP, 2001) is a useful collection of information that addresses the interoperability of internet-based Learner Information Systems with other systems that support the Internet learning environment.
A Proposed Learner's Profile

The primary design aims of our approach in modelling users as learners were to achieve extensibility and adaptability of the user profile, as well as the ability to exchange user information between the
proposed personalized collaboration services and third party services. In this context, the proposed learner profile comprises both computational and non-computational information. Computational information comprises items such as the name, contact details, education and training of users, as well as information about the community they belong to. Non-computational information is calculated by processing the users' individual behaviour during their participation in system activities. This type of information comprises fields that can be defined at run-time, whenever a requirement for a new kind of user information arises. As regards the source of the information stored in the user model, this may derive from the user, the tool, or third party applications. More specifically, fields that are filled in by users constitute the user-derived information (e.g. login name, password, address, etc.). In contrast, fields that are calculated and filled in by the tool are machine-derived information (e.g. level of participation, average response time, etc.). Furthermore, some fields can be filled in both by the user and by the machine (preferences, resources, etc.). In addition, there can be fields that are calculated by external or third party tools (or applications). Although user- and machine-derived information can be easily gathered, third party tools have to be aware of the user profile
and of the means of communicating with the tool in order to interchange user profile data. For this purpose, the user profile template is made available to third party requestors through an XML schema definition via web services. In the storage layer, user records are stored in a relational database and manipulated through SQL queries. After careful consideration of the above, we developed a generic Learner Profile (see Figure 1) that can be employed for the representation of both individuals and communities as learners (Vidou et al., 2006). The proposed model can be employed for developing customized services for both individual and group learners. More specifically, the proposed Learner Profile consists of two types of information, namely static information and dynamic information, in compliance with the computational and non-computational data presented above. Static information is considered domain independent in our approach. The Learner Profile dynamic information elements were chosen to reflect an individual's behaviour during their participation in a specific CoP's collaboration activities. Thus, all four dynamic elements, i.e. preferences, relations, competences and experience, are implicitly or explicitly defined through the learner's interaction with a tool supporting collaboration. Preferences regarding the use of resources and
Figure 1. The proposed learner profile: static information (individual information and community information; domain independent) and dynamic information (preferences, relations, competences and experience; domain specific)
services provided by the tool, as well as relations among individuals, CoPs and learning items (e.g. an argument, URL, or document), can reveal the learners' different personality types and learning styles. Competences refer to cognitive characteristics such as creativity, reciprocity and social skills. Experience reflects learners' familiarity and know-how regarding a specific domain. It should be noted that all dynamic elements of the proposed Learner Profile can be of assistance towards learning. Nevertheless, the domain of the issue under consideration is a decisive factor. Thus, the dynamic aspects of a learner's profile are treated as domain specific in our approach.
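The structure of Figure 1 can be sketched as a pair of record types, one for the domain-independent static part and one for the domain-specific dynamic part. Field names are illustrative assumptions; the actual profile template is defined by an XML schema:

```python
# Illustrative sketch of the Learner Profile structure (field names
# assumed for illustration; the actual profile is defined by an XML schema).
from dataclasses import dataclass, field

@dataclass
class StaticInfo:                      # domain independent; user-provided
    name: str
    contact: str
    education: str
    community: str                     # the CoP the learner belongs to

@dataclass
class DynamicInfo:                     # domain specific; derived from behaviour
    preferences: dict = field(default_factory=dict)   # e.g. favoured resources
    relations: dict = field(default_factory=dict)     # links to peers, CoPs, items
    competences: dict = field(default_factory=dict)   # e.g. {"creativity": 0.7}
    experience: dict = field(default_factory=dict)    # per-domain familiarity

@dataclass
class LearnerProfile:
    static: StaticInfo
    dynamic: DynamicInfo = field(default_factory=DynamicInfo)
```

Keeping the dynamic part keyed by domain reflects the point above: the same learner may be an expert in one domain and a novice in another.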
Acquiring Learner Profile Data

In order to enable the operation of personalized collaboration services, the Learner Profile has to be populated with the appropriate data. Such data can be acquired in two ways: explicitly, from the users' stated preferences, and implicitly, based on the users' behaviour while using the system. Static information of the Learner Profile is explicitly provided by the user as a required initialization step of the registration procedure. While such information is usually provided when registering
to the system, users should be able to edit this set of profile information at any time. Such explicit data acquisition constitutes a subjective way of profiling, since it depends on the statements made by the user (e.g. experience level, competences, etc.). Their subjective nature may influence personalization services in an unpredictable way (e.g. suggesting to a novice user a document that requires advanced domain knowledge because the user misjudged their experience or competence level). To cope with such issues, we are currently in the process of designing methods that assess explicitly stated profile data based on the users' behaviour. We refer to these as implicit or behaviour-based data acquisition. In general, the aim of implicit or behaviour-based data acquisition is to assess the experience, domains and competences of an individual user based on their behaviour. Implicit data acquisition utilizes the users' actions and interactions and attempts to extract information that permits assessing or augmenting the user profile data. A special part of the system's architecture is usually dedicated to supporting implicit data acquisition and interpretation. It consists of a number of modules, each of which is responsible for a particular task (see Figure 2). More specifically, the User Action Tracking module is responsible for observing user actions and recording them in
Figure 2. Data acquisition and interpretation structure: a User Action Tracking Module feeds an Actions/Event Store; an Interpretation Engine, configured from a Rule Store, processes the stored events and updates the system database
a specific repository of the infrastructure called the Action and Event Store. The Action and Event Store maintains only those actions and events that are useful for implicit user action analysis, and does not interpret them in any way. The analysis and interpretation of the gathered data, as well as the triggering of the appropriate computations (i.e. system reactions), is the main responsibility of the Action Interpretation Engine. The Action Interpretation Engine analyses the information available in the Action and Event Store and triggers computations that either update the user profile accordingly or execute a particular action. The rule-based interpretation engine can be configured using rules that are also stored within the infrastructure. A rule essentially specifies under which circumstances (i.e. the events and actions of a particular user in the store) an action is triggered. The rule-based nature of the interpretation engine makes the engine itself extensible, so that additional cases of implicit data acquisition and interpretation can be supported. Based on the explicit or implicit data, explicit or implicit adaptation mechanisms can be supported within the collaboration tool. Explicit adaptation mechanisms refer to approaches where the tool adapts its services based on the explicitly stated characteristics or preferences of the user. Users are usually aware of explicit adaptations, since they themselves triggered the initiation and presence of the respective services. On the other hand, implicit adaptation mechanisms refer to approaches that adapt the system's services to the user based on the user's actions within it. Such mechanisms work in the background; users are thus usually unaware of the origin of these services, since they did not explicitly initiate their activation and, consequently, do not perceive their operation.
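The interplay of the Action and Event Store and the rule-based Action Interpretation Engine can be sketched as follows; the class names and the (predicate, reaction) rule representation are assumptions for illustration:

```python
# Illustrative sketch of the action tracking / interpretation pipeline
# (assumed names; a stand-in for the actual infrastructure).
from collections import namedtuple

Event = namedtuple("Event", "user action obj timestamp")

class ActionEventStore:
    """Records user actions and events without interpreting them."""
    def __init__(self):
        self.events = []

    def record(self, event):
        self.events.append(event)

class InterpretationEngine:
    """Runs configurable rules over the store; each rule inspects a
    user's events and may trigger a profile update or other reaction."""
    def __init__(self, store):
        self.store = store
        self.rules = []            # list of (predicate, reaction) pairs

    def add_rule(self, predicate, reaction):
        self.rules.append((predicate, reaction))

    def interpret(self, user, profile):
        user_events = [e for e in self.store.events if e.user == user]
        for predicate, reaction in self.rules:
            if predicate(user_events):
                reaction(profile)
```

Because the rules live outside the engine, new cases of implicit data acquisition can be added (or overly strict rules removed) without touching the engine itself, which is the extensibility property claimed above.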
Implicit personalization mechanisms are automatically triggered by the system utilizing implicit or behaviour-based data in the proposed learner profile. In order to enable the foreseen functionalities (such as dynamic update of user information,
adaptation of the tool according to the user's needs, etc.), the most important of the users' actions should be tracked. As regards the User Action Tracking Mechanism, the recorded data about user actions contain information about who performed the action, when, what type of action was executed, and what objects were affected by it. In this way, the system can give valuable feedback to other mechanisms, enabling them to examine and calculate dynamic user characteristics. Moreover, a variety of statistical reports that cover both the overall and the specific views of the system's usage should also be produced. Furthermore, a rule-based approach has been chosen to facilitate the incorporation of new rules once they are identified, or the modification of existing ones if they prove to be too restrictive or even harmful. More specifically, we propose the development of a set of rules that deal with resource access: access to resources is logged, and a number of rules operate on the logged data to attach additional information to resources and/or user profiles. These can be based on the frequency of access, as well as on the competence and experience levels of users (e.g. for a document that is frequently accessed by novice users, the document's metadata should be augmented with elements that mirror this fact, so that the document can be recommended to any novice user entering a discussion). A second set of rules, observing discussion contribution, could control how user behaviour in the context of discussions affects the users' competence and experience (e.g. users that actively and frequently participate can be assigned a high experience level). Another useful indicator associated with the proposed learner profile is reasoning about how a particular user's competence level changes over time. This may provide useful insights into the learning capabilities of that user and into the usefulness of the system.
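The resource-access rule given as an example above (frequently accessed documents flagged for recommendation to novices) might be expressed along these lines. The log layout, field names and the threshold of three accesses are assumptions made for illustration.

```python
from collections import Counter

# Each logged access: (user, competence_level, document_id)
access_log = [
    ("u1", "novice", "docA"), ("u2", "novice", "docA"),
    ("u3", "novice", "docA"), ("u4", "expert", "docB"),
]

def novice_recommendations(log, threshold=3):
    """Documents frequently accessed by novice users get a metadata flag
    so they can be recommended to any novice user entering a discussion."""
    counts = Counter(doc for _, level, doc in log if level == "novice")
    return {doc: {"recommend_to_novices": True}
            for doc, n in counts.items() if n >= threshold}

metadata = novice_recommendations(access_log)
# metadata == {"docA": {"recommend_to_novices": True}}
```

Operating on the log rather than on live events means the rule can be re-run whenever the threshold proves too restrictive, matching the text's point about modifying rules after observation.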
The Proposed Framework of Personalized Services for Collaboration

The establishment of a learner profile now gives the opportunity, at the system level, to design and implement new services, or to augment existing ones, in collaboration tools, with the aim of explicitly supporting the role of learners that users assume during a collaboration. Personalization services are among the crucial sets of services in collaboration environments that can benefit greatly from the existence of the learner profile. In general, such services are important for collaboration environments since users vary greatly in terms of knowledge, training, experience, personality and cognitive style; personalization services thus permit the system to adapt various aspects to the needs of each individual user. Currently, however, personalization services in collaboration tools usually take into account only general user-defined preferences, neglecting aspects related to the role of learner that users of such tools have. In this regard, the learner profile provides the necessary framework to fill this gap in the personalization services of collaboration environments. In the following, we present a set of services that can be employed to enhance software tools supporting collaboration towards learning. The proposed set of services has resulted from a thorough investigation of the related literature, existing case studies that consider diverse aspects of learning within communities, as well as a transversal analysis of a set of interviews with real CoP members engaged in various domains of practice. While the services presented in the next paragraphs can be found in existing collaboration environments and do not introduce a radically new set of functionalities, our emphasis is on how they can be recast and augmented in light of the proposed learner profile.
Awareness Services

According to the findings of our research, CoPs' members consider awareness services to be amongst the most valued ones for collaboration tools. As defined by Dourish and Bellotti (1992), the term awareness denotes "an understanding of the activities of others, which provides a context for one's own activity", and over the years a number of different awareness types have emerged. These include informal awareness, presence awareness, task and social awareness, and group, historical and workspace awareness. Presence and participation awareness provides information about CoP members, on-line members, as well as the discourse moves of individual CoP members. Users will be able to see which user is online, how the workspace was changed by a particular member, etc. Social awareness provides information on how members are related to other members in the CoP; it includes statistics about how, and how many times, members within a CoP communicate with each other, as well as social networks representing the community. Based on the data populated in the Learner Profile, personalized services can provide the proper set of notification actions for the provision of helpful personalized information about system events to CoP members. For instance, a collaboration tool could alert users about the entrance of another user to the system, or about new content inserted into the system. In order to enable the personalization of awareness services, terms such as "related" or "interesting", which define a relation between the user and the content, should be determined either by the user himself or automatically by the system through the manipulation of some characteristics from the user profile. Furthermore, awareness services can play an important role in assisting the familiarization of new learners with the system. By both informing the CoP moderator about the entrance of a new member and proposing some
starting guidelines to the newcomer, this service can assist in learning how to participate within a CoP. On the other hand, awareness services can provide moderators with an activity monitoring service that helps them better understand and manage the whole CoP's procedures. That, in turn, contributes to the process of learning the CoP moderator's role. Awareness services can also be of use towards the self-evaluation of a community member's participation, providing her/him with valuable feedback about her/his overall contribution to the community and assisting her/him in collaborative learning as well as in self-reflection. Using statistical reports populated according to the Learner Profile, such services can measure the level of the member's contribution to the collaboration procedure. More specifically, this kind of service can provide reports about the actual usage of the resources posted by a member, the citations of these resources, or the actual impact of posts on the overall process. In this way, one can be aware of the overall impression that other members have of one's participation.
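A minimal version of the self-evaluation report described above (usage and citations of a member's posted resources) could be computed from the tracked events as follows; the event fields and event types are illustrative assumptions rather than the system's actual schema.

```python
def contribution_report(events, member):
    """Summarize a member's impact: how often the resources they posted
    were accessed or cited by other members."""
    posted = {e["obj"] for e in events
              if e["user"] == member and e["type"] == "post"}
    accesses = sum(1 for e in events
                   if e["type"] == "access" and e["obj"] in posted
                   and e["user"] != member)
    citations = sum(1 for e in events
                    if e["type"] == "cite" and e["obj"] in posted)
    return {"resources": len(posted), "accesses": accesses, "citations": citations}

events = [
    {"user": "ann", "type": "post", "obj": "r1"},
    {"user": "bob", "type": "access", "obj": "r1"},
    {"user": "carl", "type": "cite", "obj": "r1"},
]
report = contribution_report(events, "ann")
# report == {"resources": 1, "accesses": 1, "citations": 1}
```

Excluding the member's own accesses keeps the report focused on the impression other members have of the contribution, as the text intends.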
Presentation Services: Ranking, Filtering and Classifying

Presentation is another service that, when personalized in collaboration tools, can facilitate learning activities, especially for autonomous learners. The aim of personalizing presentation services is to adapt the way the same set of resources is rendered to each individual user. While the set of resources may be the same across users, their learning profile will determine how the resources are presented to each individual one. As regards searching, for instance, a Learner's Profile can provide useful information to rank search results according to a number of factors, such as the learner's preferences, or even his competence and experience level. In this context, the system will be able to adapt to an individual user's needs. Moreover, the information about the
user's domains of interest will provide additional information with which a search can be better contextualized, thus leading to more relevant results. Furthermore, reasoning mechanisms could be employed to provide the necessary filtering features for capturing and reusing the knowledge shared in past collaboration activities. Consequently, content filtering and recommendation services can further support learning. For instance, some of the documents attached to posted positions that contribute to the strengthening of an argument should be suggested for viewing to users according to their Learner Profile. Furthermore, a document library could recommend documents that are related to a specific learner (e.g. an experienced learner's recommendations or popular documents). Thus, members will be able to extend their knowledge through explicit learning of associated content. Services for classifying other learners according to their domain of expertise can also assist learning in the community. Such services enable community members to request suggestions from, find and communicate with their co-workers in a knowledgeable way. Furthermore, if they coincide with a community's norms and wishes, such services could also be used for the assignment of weights to a member's arguments. In addition, services that keep track of the members' activity contribute to the procedure of learning by example, in which a member can learn by watching another member's practice in collaborative activities.
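Profile-based ranking of search results, as described above, can be sketched with a simple scoring rule; the score weights, profile fields and result fields are assumptions chosen for illustration, not a prescribed formula.

```python
def rank_results(results, profile):
    """Order search results by base relevance, boosted when a result's
    domain matches the learner's interests and when its difficulty fits
    the learner's competence level."""
    def score(r):
        s = r["relevance"]
        if r["domain"] in profile["interests"]:
            s += 0.5   # domain-of-interest boost (assumed weight)
        if r["difficulty"] == profile["competence"]:
            s += 0.25  # competence-fit boost (assumed weight)
        return s
    return sorted(results, key=score, reverse=True)

profile = {"interests": {"logic"}, "competence": "novice"}
results = [
    {"id": 1, "relevance": 0.6, "domain": "stats", "difficulty": "expert"},
    {"id": 2, "relevance": 0.4, "domain": "logic", "difficulty": "novice"},
]
ranked = rank_results(results, profile)
# result 2 (0.4 + 0.5 + 0.25 = 1.15) now outranks result 1 (0.6)
```

The same scoring function could feed the filtering and recommendation services: results scoring below a threshold are suppressed, while those above it are pushed to the learner.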
Visualization

It has been widely argued that visualization of the collaboration conducted by a group of people working towards solving a common problem can facilitate the overall process in many ways, such as explicating and sharing individual representations of the problem, maintaining focus on the overall process, as well as maintaining consistency and increasing
plausibility and accuracy (Kirschner et al., 2003; Evangelou et al., 2006). Personalized representation of the associated processes, such as the process of discoursing or knowledge sharing, is an essential feature for tools providing effective environments for learning. Furthermore, personalized visualization of context should provide learners with a working environment that fits their preferred visualization style. System personalization includes alterations in colours, fonts and text effects, enabling and disabling pieces of information in the working panel, predefinition of system responses to user actions, etc. In this direction, taxonomies and classification schemes should be employed wherever possible, as a means of "guiding" users' cognition. In any case, it should be noted that there is no panacea for the design of user-friendly interfaces; the related practices should be interpreted, refined, and exploited according to the needs of the different types of learners involved in the particular environment. Appropriate navigation and help tools should also be provided for users of diverse expertise. Adaptive User Interfaces (AUIs) should adapt themselves to the learner by reasoning about the user based on his Learner Profile.
Trust Building Services

Privacy policies and access control services are a critical requirement for the deployment of all the above services, as well as for the building of trust between CoP members and the software application. These should be provided in order to satisfy learners'/users' need to know what information about them is recorded, for what purposes, how long this information will be kept, and whether it is revealed to other people. Furthermore, security assurance while establishing connections between users and services, or while accessing stored information, should be taken into consideration as well. Towards this end, two major techniques are broadly used to prevent unauthorized access to data, namely anonymity and
encryption. Anonymity severs the relation between a particular user and the information about him, while information encryption protects the exchanged personal data. In our approach, we employed the Platform for Privacy Preferences Project (P3P), a W3C recommendation that supports the description of privacy policies in a standardized XML-based form, which can be automatically retrieved and interpreted by the user's client (Cranor et al., 2002).
Implementation Issues

In line with current trends in developing web-based tools, and for reasons such as the reusability of components and the agility of services, our approach builds on top of a service-oriented environment. In order to exploit the advantages enabled by the Service Oriented Architecture (SOA) design paradigm, the proposed set of services should be based on a web service architecture, so as to enable the reusability of the implemented modules as well as integration or interoperation with other services (from external systems). An overall design for the enhancement of tools supporting collaboration with personalized functionality towards learning is depicted in Figure 3. In this approach, we sketch a generic architecture in which a Learner Profile Service is the basis for the storage and provision of each learner's characteristics to a set of proposed services that contribute to the system's personalization. In order to support extensibility, the learner profile service can be dynamically augmented with new learner characteristics at run-time. Furthermore, targeting the openness of the service, it can provide the learner profile schema in the form of an XML Schema Definition (XSD) to service requestors. Considering the set of proposed services as non-exhaustive, our approach is open to the addition of new personalized services and can use the Simple Object Access Protocol (SOAP) for both internal and external communication.
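In such a service-oriented design, the Learner Profile Service must serialize profile data into XML for exchange with service requestors (e.g. inside a SOAP message body). A minimal serialization sketch is shown below; the element and attribute names are illustrative assumptions, not the actual PALETTE learner profile schema.

```python
import xml.etree.ElementTree as ET

def profile_to_xml(profile: dict) -> str:
    """Serialize a learner profile to XML for exchange with service
    requestors. Arbitrary characteristics can be added at run-time,
    reflecting the extensibility of the profile service."""
    root = ET.Element("learnerProfile", user=profile["user"])
    for key, value in profile["characteristics"].items():
        child = ET.SubElement(root, "characteristic", name=key)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_doc = profile_to_xml({
    "user": "alice",
    "characteristics": {"competence": "novice", "domain": "logic"},
})
```

Representing characteristics as generic name/value elements, rather than fixed fields, is one way to allow the profile to be dynamically augmented without changing the service interface.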
Conclusion

Collaboration is considered an essential element of effective learning, since it enables learners to better develop their points of view and refine their knowledge. Our aim being to facilitate online communities' members as learners, we argue that collaboration tools should provide personalization features and functionalities in order to fit the specific individual and community learning requirements. In this chapter, we investigate collaboration and learning within such communities. Based on our findings, we propose a framework of services supporting personalization that, when embedded in collaboration tools, can act as a catalyst for individual and community learning. The proposed set of services has been derived after careful consideration of a generic learner profile, developed to formalize human actors in online settings where learning takes place. As a result of our work, we have concluded that personalized collaboration and learning services for online communities should strive for:

1. Transmission services that make tacit or explicit knowledge exploitable.
2. Exploitable storage services for both kinds of information.
3. Training, personal support or programs that help in difficult moments, such as decision making, handling complex information, designing a workflow, etc.
4. Enhancing the awareness of the collaboration's pattern.
In this chapter we presented a set of services enhancing CoPs' interactions and collaborative work, based on a generic Learner Profile model. Our approach concerns an alternative form of online learning with different forms of interaction, and a new way of promoting community building. Its purpose is to aid researchers and developers in the development of personalized collaboration systems, i.e. tools that adapt their structure and
services to the individual user's characteristics and social behaviour. Our main goal being to support individual and community learning, the proposed set of services is based on personalized features and functionalities. We argue that it can further support learning, as well as the achievement of learning objectives, as it can assist communities' members in the development of learning skills, such as interaction with other actors and the growth of their autonomy and self-direction. Nevertheless, in order to be creatively adopted in CoPs' everyday practices, the proposed services must fit into the specific culture, norms and incentive schemes of the community. Moreover, the identification of community members' individual characteristics, as well as of the culture, norms and incentive schemes of the community, should be appropriately handled. Our future work directions concern the appropriate handling of these issues, as well as the full development of the set of personalization services and its evaluation in diverse online communities.
Acknowledgment

Research carried out in the context of this chapter has been partially funded by the EU PALETTE (Pedagogically Sustained Adaptive Learning through the Exploitation of Tacit and Explicit Knowledge) Integrated Project (IST FP6-2004, Contract Number 028038).
References

Ackerman, M.S., Halverson, C.A., Erickson, Th., & Kellogg, W.A. (Eds.) (2008). Resources, co-evolution and artifacts: Theory in CSCW. Springer Series: Computer Supported Cooperative Work.

Anderson, P. (2007). What is Web 2.0? Ideas, technologies and implications for education. JISC TechWatch report. Available at: http://www.jisc.
ac.uk/whatwedo/services/services_techwatch/techwatch/techwatch_ic_reports2005_published.aspx (Last accessed on 18th February 2008)

Ardichvili, A., Page, V., & Wentling, T. (2003). Motivation and barriers to participation in online knowledge-sharing communities of practice. Journal of Knowledge Management, 7(1), 64-77.

Baxter, H. (2007). An introduction to online communities. Retrieved on 13/07/2007 from http://www.providersedge.com/docs/km_articles/An_Introduction_to_Online_Communities.pdf.

Boulos, M., & Wheeler, S. (2007). The emerging Web 2.0 social software: An enabling suite of sociable technologies in health and health care education. Health Information and Libraries Journal, 24, 2-23.

Chen, W., & Mizoguchi, R. (1999). Communication content ontology for learner model agent in multi-agent architecture. In Proc. AIED99 Workshop on Ontologies for Intelligent Educational Systems. Available on-line: http://www.ei.sanken.osaka-u.ac.jp/aied99/a-papers/W-Chen.pdf.

Cranor, L., Langheinrich, M., Marchiori, M., Presler-Marshall, M., & Reagle, J. (2002). The platform for privacy preferences 1.0 (P3P1.0) specification. World Wide Web Consortium (W3C). http://www.w3.org/TR/P3P/.

Dolog, P., & Schäfer, M. (2005). Learner modelling on the Semantic Web. Workshop on Personalisation on the Semantic Web (PerSWeb05), July 24-30, Edinburgh, UK.

Dourish, P., & Bellotti, V. (1992). Awareness and coordination in shared workspaces. In Proceedings of the 1992 ACM Conference on Computer-Supported Cooperative Work, Toronto, Ontario, Canada, November 1-4, 1992.

Eseryel, D., Ganesan, R., & Edmonds, G.S. (2002). Review of computer-supported collaborative work
systems. Educational Technology & Society, 5(2), 130-136.

Evangelou, C.E., Karacapilidis, N., & Tzagarakis, M. (2006). On the development of knowledge management services for collaborative decision making. Journal of Computers, 1(6), 19-28.

Gephart, M., Marsick, V., Van Buren, M., & Spiro, M. (1996, December). Learning organizations come alive. Training & Development, 50(12), 34-45.

Gray, B. (2004). Informal learning in an online community of practice. Journal of Distance Education, 19(1), 20-35.

Hoadley, C.M., & Kilner, P.G. (2005). Using technology to transform communities of practice into knowledge-building communities. SIGGROUP Bulletin, 25(1), 31-40.

IMS LIP (2001). IMS learner information package specification. The Global Learning Consortium. Available on-line: http://www.imsglobal.org/profiles/index.html

Kay, J., Kummerfeld, B., & Lauder, P. (2002). Personis: A server for user modelling. In Proceedings of the 2nd International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems (AH'2002), 201-212.

Kirschner, P., Buckingham-Shum, S., & Carr, C. (2003). Visualizing argumentation: Software tools for collaborative and educational sense-making. London: Springer Verlag.

Lipnack, J., & Stamps, J. (1997). Virtual teams. New York: John Wiley and Sons, Inc.

Marsick, V.J., & Watkins, K.E. (1999). Facilitating learning organizations: Making learning count. Aldershot, U.K. and Brookfield, VT: Gower.

Maudet, N., & Moore, D.J. (1999). Dialogue games for computer supported collaborative argumentation. In Proceedings of the 1st Workshop
on Computer Supported Collaborative Argumentation (CSCA99).

Moor, A., & Aakhus, M. (2006, March). Argumentation support: From technologies to tools. Communications of the ACM, 49(3), 93-98.

Olson, G.M., & Olson, J.S. (2000). Distance matters. Human-Computer Interaction, 15, 139-178.

Orwant, J. (1995). Heterogeneous learning in the Doppelgänger user modeling system. User Modeling and User-Adapted Interaction, 4(2), 107-130. Available online: ftp://ftp.media.mit.edu/pub/orwant/doppelganger/learning.ps.gz (Last accessed June 21st, 2005).

PAPI (2000). Draft standard for learning technology: Public and private information (PAPI) for learners (PAPI Learner). IEEE P1484.2/D7, 2000-11-28. Available on-line: http://edutool.com/papi

Quan-Haase, A. (2005). Trends in online learning communities. SIGGROUP Bulletin, 25(1), 1-6.
Steeples, C., & Goodyear, P. (1999). Enabling professional learning in distributed communities of practice: Descriptors for multimedia objects. Journal of Network and Computer Applications, 22, 133-145.

Veerman, A.L., Andriessen, J.E., & Kanselaar, G. (1998). Learning through computer-mediated collaborative argumentation. Available on-line: http://eduweb.fsw.ruu.nl/arja/PhD2.html

Vidou, G., Dieng-Kuntz, R., El Ghali, A., Evangelou, C.E., Giboin, A., Jacquemart, S., & Tifous, A. (2006). Towards an ontology for knowledge management in communities of practice. In Proceedings of the 6th International Conference on Practical Aspects of Knowledge Management (PAKM06), 30 Nov.-1 Dec. 2006, Vienna, Austria.

Wenger, E. (1998). Communities of practice: Learning, meaning and identity. Cambridge University Press.

Wenger, E., & Snyder, W. (2000). Communities of practice: The organizational frontier. Harvard Business Review, 78, 139-145.
Chapter XVII
Computer-Aided Personalised System of Instruction for Teaching Mathematics in an Online Learning Environment Willem-Paul Brinkman Delft University of Technology, The Netherlands Andrew Rae Brunel University, UK Yogesh Kumar Dwivedi Swansea University, UK
Abstract

This paper presents a case study of a university's discrete mathematics course with over 170 students who had access to an online learning environment (OLE) that included a variety of online tools, such as videos, self-tests, discussion boards, and lecture notes. The course is based on the ideas of the Personalised System of Instruction (PSI), modified to take advantage of an OLE. Students' learning is initially examined over a period of 2 years and compared with that in a more traditionally taught part of the course. To examine students' behaviour, learning strategies, attitudes, and performance, both qualitative and quantitative techniques were used in a mixed-methodology approach, including in-depth interviews (N=9), controlled laboratory observations (N=8), surveys (N=243), diary studies (N=10), classroom observations, recording of online usage behaviour, and learning assessments. In addition, students' attitude and performance in 2 consecutive years where PSI was applied to the entire course
provides further understanding that is again in favour of PSI in the context of an OLE. This chapter aims to increase understanding of whether PSI, supported by an OLE, could enhance student appreciation and achievement, as the findings suggest.
INTRODUCTION

As Online Learning Environments (OLEs), such as WebCT® and Blackboard®, are becoming more widely used, the role of teachers changes as they adapt to their new mode of teaching (Coppola, Hiltz & Rotter, 2002). It remains a challenge, however, for teachers to use these technologies effectively (Hiltz & Turoff, 2002), and to benefit from the suggested advantages of OLEs over traditional classroom learning. These include being more learner-centred, providing flexibility as to the time and location of learning, being cost-effective for learners, and potentially serving a global audience (Zhang, Zhao, Zhou & Nunamaker, 2004). This paper arises from the experience obtained in delivering a mathematics module for both Computer Science and Information Systems undergraduate students at a UK-based university. The first term of the module, which focuses on discrete mathematics, makes extensive use of OLE tools, such as online self-tests, video clips and a discussion board, whereas the second term, which focuses on statistics, is taught by a more traditional lecturer-based approach. Comparing the data obtained in the two terms during the academic years 2003-2004 and 2004-2005 gives an insight into how students perceive these OLE tools and into how they affect students' learning strategy and learning outcomes. The teaching method used in the first term is based on the principles of the Keller Plan (Keller & Sherman, 1974), also known as the Personalised System of Instruction (PSI). Although these principles were already published in the sixties (Keller, 1968), the observations
presented here suggest that they can be highly relevant when teaching is supported by an OLE. The principles of PSI can be summarized as "(1) mastery learning, (2) self-pacing, (3) a stress on the written word, (4) student proctors, and (5) the use of lectures to motivate rather than to supply essential information" (Keller & Sherman, 1974, p. 24). PSI has been applied to courses in various areas, such as psychology (Kinsner & Pear, 1988; Pear & Crone-Todd, 2002), physics (Austin & Gilbert, 1973; Green, 1971), mathematics (Abbott & Falstrom, 1975; Brook & Thomson, 1982; Rae, 1993; Watson, 1986), and computer science (Koen, 2005). PSI has received extensive attention in the literature. For example, ten years after its introduction, Kulik, Kulik and Cohen (1979) could already base their meta-analysis on 72 different papers, and today PSI is still a topic that receives research attention. In all these years teachers have successfully used PSI, although they have often made some modifications so that it fits into their academic environment (Hereford, 1979). The trend towards high marks has been a recurring observation. The original PSI description talks of a self-paced learning approach where students have to prove mastery of learning material that is divided into small learning units. For each learning unit students receive written material, which includes the learning objectives for that unit. Students study the material on their own or in groups, and when they think that they have mastered the unit they take a test. An instructor or a student assistant, called a proctor, immediately marks this test in the presence of the student. If they answer all questions correctly, they receive the written material for the next unit. If they fail,
the marker provides them with formative feedback and asks them to study their material again before they re-take the test. Passing the test also gives students the right to attend lectures as a reward. This is possible because no essential material is taught in the lectures; only a few lectures are scheduled, and their main purpose is to motivate the students. The use of student proctors clearly has both economic and educational advantages, though care has to be taken to avoid misconduct by proctors. Proctors are not always used, as is shown by Emck and Ferguson-Hessler (1981), who reported that at the Technische Universiteit Eindhoven (The Netherlands) the proctors were replaced by a computer as early as 1970. In a mechanical engineering course, the computer randomly selected a number of questions from a question book. After the students entered their answers onto an answer card, they gave it to the test-room assistant, who fed it into the computer. Within less than a minute the test results were printed on a computer terminal. In addition to checking students' answers, computers have also been used to facilitate the testing process. For example, Pear and Crone-Todd (2002) used the computer to provide proctors, who were automatically selected by the computer, with the completed tests of other students, and afterwards to return the proctors' feedback to the students. They relied on human proctors instead of a computer because they used essay-style questions. The idea of Computer-Aided PSI (CAPSI) has also been picked up by others (Kinsner & Pear, 1988; Koen, 2005; Roth, 1993; Pelayo-Alvarez, Albert-Ros, Gil-Latorre, & Gutierrez-Sigler, 2000; Pear & Novak, 1996) and this might become even easier to implement if teachers could use an off-the-shelf or standard OLE. For example, WebCT, short for Web Course Tools, is an online course management system in which automatically marked quizzes can be set.
It also has other online tools such as a discussion board, a calendar, student homepages, email, chat rooms, online submission of coursework, and a place to
store files, such as lecture slides, which students can access online. Providing an effective teaching approach that suits such a widely used OLE could therefore be of obvious benefit to teachers. Although OLEs have been the topic of a number of publications, an OLE in combination with PSI has received limited attention (e.g. Pear, 2003). Research on standard OLEs is, however, advancing. For example, Hoskins and Van Hooff (2005) found a relation between academic achievement and the use of a discussion board. Johnson (2005) also made this observation. Additionally, she found that increased use of the OLE coincided with an increased feeling of peer alienation, i.e. the experience students might have of feeling isolated from other students. She also observed that students who felt more Course and Learning alienation (experiencing the course as irrelevant) made less active use of the OLE and also obtained lower grades. Again, this relates to Hoskins and Van Hooff's (2005) observation that their achievement-oriented students made more active use of the OLE. They therefore worry that an OLE might only engage students who are highly motivated and academically capable. This concern is supported by the findings of Wernet, Olliges, and Delicath (2000) that graduate students valued OLE tools, such as a course calendar, hyperlinks, email, and online quizzes, more highly than undergraduate students did. Other reports (Debela, 2004; Jones & Jones, 2005) on students' attitudes towards e-learning environments are more positive, as students believe that these environments can improve their learning, and also that they are more convenient and accessible. Another OLE tool is video. The use of video in a PSI-based course is not new. For example, Rae (1993) used videotapes as a key support to his written course material. On the videos, which students could watch in the university library, he solved exercises and gave short summaries of each learning unit.
He found that this approach resulted in high examination marks while he could reduce the tutorial support to an economic
level. Instead of using video to present the learning material, Koen (2005) used it to increase the feeling of being present at the university for students participating in a distance PSI-based course. He had placed video cameras in the student computer room, the room of the proctor and the room of the professor. The distance learners on this course could see these live images via their OLE. Unfortunately, in his report Koen did not provide results on the effect the streamed video had on the students, other than that he was forced to remove the cameras in the student computer room after complaints by a handful of students who did not want to be observed. Of course, video has been used in non-PSI-based courses as well; for example, in the medical field students responded positively to the use of streamed video (Green et al., 2003; Schultze-Mosgau, Zielinski, & Lochner, 2004), and in a survey of courses in economics offered via the Internet in the USA in the fall 2000 semester, Coates and Humphreys (2003) found that 18% of the 189 courses used streamed video. To summarise, although research has been done on PSI, CAPSI and OLE tools, there is currently limited understanding of their use and effect on student learning when they are combined in a CAPSI-based course supported by a standard OLE. This motivates the present study of a PSI-based module that makes substantial use of a standard OLE. The study looked at students' attitude, their learning strategy and their academic achievement in relation to OLE tools. Before presenting the results of the study, the following section provides some background information on how the module was set up; this is followed by a section describing the research approach used. The paper concludes by briefly discussing the main findings, the resulting modifications that have been made to the course, and the effect these had on students' performance and opinion.
ADAPTATION OF THE PSI APPROACH TO THE OLE

The first year module, Foundations of Computing (School of Information Systems, Computing and Mathematics, Brunel University, UK), is taught over two terms. The CAPSI-based first term focuses on discrete mathematics and looks at logic and set theory, whereas the conventionally taught second term focuses on statistics and looks at probabilities, correlations and regression analysis. In the first term the principles of PSI (Keller, 1968; Keller & Sherman, 1974) were implemented in the following way.

• Students used specially written material, divided into four modules, each module including five learning units, each with clearly stated learning objectives. Units one through four were theoretical, while the fifth unit of each module, for motivational purposes, focused on the application of those theories in computer science.
• Instructors (graduate teaching assistants) in the seminars used specially developed written diagnostic tests for each module to examine, together with the students, their understanding of the material. In contrast to the original PSI principles, these tests were not part of the formal assessment, but students were advised to demonstrate sufficient mastery before they received the written material for the next module.
• The lectures were mainly motivational, covering only the application units, and aimed to give students study advice.
Computer-assisted learning tools and video were developed to support these traditional elements of PSI. In the OLE students had access to learning tools such as online self-tests, streamed video clips and a discussion board, as well as to the written material, lecture slides and old exams. One hundred and twenty-five video clips, mostly less than five minutes in length, were used as the main medium for giving essential information (introductions, summaries and solutions to exercises). The videos were simply made and designed to give students the impression of having the lecturer at their side explaining, rather than imparting new information; experience over 25 years has shown such video clips to be remarkably effective with PSI (Rae, 1993). For 16 of the learning units (the theoretical units) videos were available in both years examined in this case study, but the videos for the other four learning units (the application units) were available only in the second year, thus providing an opportunity to study their effectiveness by comparing students' attitude and learning between the two years. Because students could only access the streamed videos on campus, they could also obtain a DVD for use at home. Although the computer support services charged students a £5 duplication cost for the DVD, students were encouraged and allowed to copy the DVD freely from each other, or to borrow it from the library. Each learning unit also had a related OLE-based self-test. A test consisted of around five questions, and completion of the test gave students access to the self-test of the following learning unit. These were multiple-choice questions marked automatically by the computer. Another OLE tool that students could use was the online discussion board. Instructors actively encouraged students to post their questions on the discussion board and told students that they would not answer questions emailed to them directly. Although instructors invited students to answer questions posted on the discussion board, they promised that they themselves would answer questions on a daily basis between Monday and Friday.
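The sequencing rule just described (completing one unit's self-test unlocks the self-test for the following unit) can be sketched as follows. This is an illustrative reconstruction, not the OLE's (WebCT's) actual implementation; the function and variable names are invented.

```python
# Sketch of the self-test gating rule: unit 1 is always open, and unit n+1
# opens once unit n's test has been completed. Names are hypothetical.

def unlocked_tests(completed: set[int], total_units: int = 20) -> list[int]:
    """Return the unit numbers whose self-tests a student may access."""
    open_units = [1]
    unit = 1
    # Walk forward while each consecutive unit's test has been completed.
    while unit in completed and unit < total_units:
        unit += 1
        open_units.append(unit)
    return open_units
```

For example, a student who has completed units 1 and 2 may access the tests for units 1, 2 and 3; a gap in the sequence blocks everything beyond it.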
This discussion-board policy had advantages for both the students and the instructors. Students could read the questions and answers previously posted, and the instructor only had to answer a question once, instead of the alternative of responding to numerous individual emails about the same question. Each week students had three contact hours: a one-hour lecture delivered by the lecturer in a large lecture theatre, a one-hour seminar in which they could take the written diagnostic tests or get help from the teaching assistant (TA), and a one-hour lab in which they could watch the videos, take the online self-test, and work on their coursework. Eight TAs ran the seminars and lab sessions, each TA being responsible for a group of between 18 and 25 students. To reduce the often-mentioned problem of procrastination in PSI (Hereford, 1979), the students had to take three formal assessments in the first term. These consisted of two pieces of coursework (projects 1 and 2), which students could do in pairs, and a one-hour mid-term test on modules 1 and 2. Furthermore, students had to demonstrate mastery of the first term material as part of a three-hour final exam at the end of the year. The element of pair work was deliberately introduced to counteract concerns that PSI might militate against the social interaction between students in which they benefit from exploring conceptual problems with peers (Sheehan, 1978). The second, more traditionally taught term centred on a weekly two-hour lecture that covered all the material. In the one-hour seminar, students again worked in groups of 18 to 25, under the supervision of a TA, on so-called problem sheets, answers to which students could also find on the OLE at the end of each week. The OLE also provided the usual tools, including a discussion board and weekly lecture slides. TAs also ran the one-hour lab session, in which students worked on exercises with the statistical software applications SPSS® and Excel®, or could take computerised self-tests developed in Mathletics (Kyle, 1999).
This environment was also used as part of the formal assessment of the second term, students taking two tests that the computer marked automatically.
Students also had to submit a statistical report as a piece of coursework, and finally to take the three-hour final exam at the end of the year.
RESEARCH APPROACH AND INSTRUMENTS

Biggs' (2003) 3P (Presage, Process and Product) model of teaching and learning was used as a starting point to structure the research into categories of factors that could influence learning. The model follows the chain in learning. It starts with the factors in place before the learning takes place, which are split into student factors, such as prior knowledge and interests, and the teaching context, such as assessment procedures, teaching sessions, and computer-assisted learning tools. These factors influence students' learning activities, or in other words their approach to learning. The 3P model sees this engagement as an essential factor that eventually determines the learning outcomes: the new skills and knowledge that students master. To study these factors, data was collected through a variety of activities, including online student surveys, interviews, a diary study, observations in class, a controlled usability test, tracking of online behaviour, and assessment results. The online student surveys (appendix Table 8 and Table 9) were delivered via the OLE and students completed them anonymously at the end of each term. A total of 243 responses were collected: 85 in the first term survey of 2004 and 60 in the same survey in 2005; 54 in the second term survey of 2004 and 44 in 2005. Please note that from now on the academic year 2003-2004 will be referred to simply as 2004, and 2004-2005 as 2005. An important part of the research was to see whether CAPSI would change students' learning activities and strategies. A good teaching context should encourage deep learning, where students focus on underlying meaning and principles; the learning environment should move students away
from surface learning (Biggs, 2003; Ramsden, 2003), where students act with the intention of passing the module with the minimal amount of effort or engagement. In a previous study on this module, Hambleton, Foster and Richardson (1998) had found that PSI could have a positive effect on students' learning. They had asked students to complete the Approaches to Studying Inventory (ASI) (Ramsden & Entwistle, 1981), an earlier instrument for measuring approach to learning. After analysing this data they found that students scored significantly higher on the 'comprehension' learning scale for the PSI-based module than for a more traditional, lecture-based module on statistics. They therefore concluded that PSI could have a positive impact on students' learning strategy. To follow up on this research, the R-SPQ-2F inventory (Biggs, Kember, & Leung, 2001) was included in the end-of-term surveys, with the exception of the first term survey of 2004. The inventory is a 20-item questionnaire that scores students on both a deep approach and a surface approach scale. Each of these scales is derived from two subscales, which the inventory provides for each approach: the students' motivation and their strategy. While the survey data provide a general insight from a sizable sample, between 25% and 48% of the population, the semi-structured interviews conducted with nine students in the summer break of 2004 provided an in-depth understanding of students' learning strategies and attitudes. These students had all passed the module in 2004 and agreed to complete the R-SPQ-2F inventory and to be interviewed by a PhD student on the premise that he would not disclose their names. The half-hour telephone interviews focused on the students' approach to learning, the teaching approach, student characteristics, and the OLE tools (appendix Table 10).
For their participation in the interview students received a £5 incentive, paid as all other incentives in this study from a university research grant that supported this research.
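The R-SPQ-2F scoring just described can be sketched in a few lines. This is a hedged reconstruction based on the published instrument: each of the 20 items is answered on a 1-5 scale, the deep and surface scales each sum ten items (hence the 10-50 range reported later), and within each scale the motive and strategy subscales split the items evenly. The item assignments below follow the published key (odd-numbered items within a scale are motive items, even-numbered items are strategy items); verify against Biggs, Kember, and Leung (2001) before reuse.

```python
# Hedged sketch of R-SPQ-2F scoring. Item-to-scale assignments follow the
# published key and should be checked against the original instrument.
DEEP = [1, 2, 5, 6, 9, 10, 13, 14, 17, 18]
SURFACE = [3, 4, 7, 8, 11, 12, 15, 16, 19, 20]

def score_rspq2f(answers: dict[int, int]) -> dict[str, int]:
    """Score a dict of {item_number: rating 1-5} into scale/subscale totals."""
    return {
        "deep": sum(answers[i] for i in DEEP),
        "surface": sum(answers[i] for i in SURFACE),
        # Odd items in each scale are motive items, even items are strategy.
        "deep_motive": sum(answers[i] for i in DEEP if i % 2 == 1),
        "deep_strategy": sum(answers[i] for i in DEEP if i % 2 == 0),
        "surface_motive": sum(answers[i] for i in SURFACE if i % 2 == 1),
        "surface_strategy": sum(answers[i] for i in SURFACE if i % 2 == 0),
    }
```

A student answering 3 ("true of me about half the time") on every item would score 30 on each scale, the midpoint of the 10-50 range.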
Another source of data was the diary study. Where the surveys and interviews provided information from only one point in time, in the diary study 10 students agreed to provide weekly information throughout the course of 2005. Collecting data in the diary study, however, proved to be more difficult. Although initially 10 students started with weekly reports, only three continued into the second term. Still, 105 weekly reports were collected. For their participation students received a £10 incentive for each term and were promised that their names would not be disclosed to the instructors. The diary study was conducted by a PhD student, who also made weekly observations in the lab. The number of students that attended the lab, and their activities, were recorded in a logbook. Another PhD student was asked in the summer of 2004 to conduct a usability study of the module's OLE. Eight masters students who had not attended the module were invited into a usability laboratory and asked to study the first learning unit of module one and to take the related online self-test. During the test, she watched the students from an observation room through a one-way mirror and recorded any comments the students made. Afterwards, she interviewed the students and asked them to highlight specific problems they had encountered. The ease of use of, and the students' satisfaction with, specific OLE tools were also examined in the usability test with a component-specific usability questionnaire (Brinkman, Haakma, & Bouwhuis, 2005). This questionnaire included six ease-of-use questions from the Perceived Usefulness and Ease-of-Use (PUEU) questionnaire (Davis, 1989) and two questions from the Post-Study System Usability Questionnaire (PSSUQ) (Lewis, 1995). The test took around two hours and students were given a £12 incentive for their participation. Student behaviour was also followed by recording web traffic.
In 2005, from four weeks into the module, access to several web pages was tracked, including the home page, the self-tests, and the main page for the video clips. All these data collection activities were set up to gain both a broad and an in-depth understanding of how students perceived the module and how they actually engaged with it. Finally, the results of the coursework and the exam gave an insight into the students' academic performance. Because data was collected both in the CAPSI-taught first term and in the traditionally taught second term, it was possible to examine the effect these different teaching contexts had on learning activities and outcomes.
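The page-tracking just mentioned amounts to tallying hits per tracked page over time. The following sketch shows the idea with an assumed line format of "week path" per hit; the actual study used the OLE's own tracking facilities, and the paths and format here are invented for illustration.

```python
# Hedged sketch of web-traffic tallying: count hits per (week, page) for a
# small set of tracked pages. Log format and paths are assumptions.
from collections import Counter

TRACKED = {"/home", "/selftest", "/videos"}  # hypothetical tracked pages

def count_hits(log_lines):
    """Tally hits per tracked page from lines like 'W12 /videos'."""
    hits = Counter()
    for line in log_lines:
        week, path = line.split()
        if path in TRACKED:  # ignore pages that were not tracked
            hits[(week, path)] += 1
    return hits
```

Plotting such weekly tallies against the assessment calendar is what produces a figure like Fig. 1 below.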
FINDINGS

Student Factors

In 2004, 176 students (73% male, 27% female) were registered for the module; in 2005, 177 (79% male, 21% female). The students had various educational backgrounds. For example, in the four end-of-term surveys conducted in 2004 and 2005, 50% of the students stated they had an A-level (short for Advanced Level), a non-compulsory qualification taken by students in England, Wales, and Northern Ireland, which is the usual university entrance qualification. Students usually take A-levels in the final two years of secondary education, after they have obtained a General Certificate of Secondary Education (GCSE), which is taken at an age of around 16. On the other hand, 17% of the students had a certificate or diploma of the Business and Technician Education Council (BTEC), 10% had a General National Vocational Qualification (GNVQ), 14% had done an Access course, and 9% had overseas qualifications. A basic requirement for the degree course, however, was that students had at least GCSE-level maths or an equivalent, for example a graduation diploma from a good US high school. For some students it had been a while since they obtained their qualification. For example, one student in the in-depth interviews mentioned: "…but things such as tables, and probability I did in GCSE.
However, I have forgotten it." Some students were therefore invited to attend workshops to refresh their knowledge of basic mathematical concepts relevant to this module, such as linear equations, linear functions, powers, and logarithms. In 2005 this invitation was based on the results of an online OLE test that students took in the first lab. In the test, the students scored on average 24.96 (SD = 10.91) points out of a maximum of 40. Students who obtained a score below 20 were advised by email to attend the workshops, which were also open to other students. Some of the students did not live on campus, and therefore spent time commuting. Attending class or doing group work was more difficult for some of these students. For example, one student mentioned in the interview: I was off campus and I had to travel by bus, which took around one and a half hours each day. This was one reason why I was not attending the lectures, coming to university means wasting three hours in travelling and in that much time I can do some work. It did stop me from going to lectures…. Almost all students in the interviews mentioned that access to a computer was vital for passing the module. Computer access outside the lab sessions seemed adequate. In the four surveys, only two responses indicated never having access to a PC or laptop outside the lab sessions, and 21 indicated having access sometimes, but the majority, 219 responses, indicated regular access. Although this seems a reassuringly high number, the percentage of students that never had access might be higher, since data from online questionnaires could be biased. On the other hand, students that lived on campus had access to computer rooms, where they could work outside normal class hours. Most interviewed students who lived off campus also stated that they had a computer at home, except for one, who stated: "I did
not have a computer at home, so I had to come to university everyday to study. So it was a waste of my time travelling." This is clearly a disadvantage. Where students in conventionally taught courses cannot study anywhere and anytime because they have to attend lectures, computer-assisted learning makes students dependent on computer access, which unfortunately also goes against the idea of anywhere, anytime learning. The time students spent on learning was also limited, as students were engaged in other activities as well. For example, one student mentioned: "I cannot spend enough time on each module, as I did not have free time left after doing my job. Also after doing my job, I would get tired and would not feel like studying."
Teaching Context

The difference in students' appreciation of the teaching context (things such as assessments, the OLE tools, and lectures) between the CAPSI-based first term and the conventionally taught second term shows that students on average were more positive about the first term's teaching context than about the second term's. Table 1 shows the average scores obtained on the end-of-term survey items that related to the teaching context. The scores of these six items were used in a MANOVA, which showed a significant main effect for term (F(df between-groups = 6, df within-groups = 204) = 26.91, p < .001). ANOVAs on the individual items revealed this effect also in the scores for overall module quality (F(1, 209) = 18.00, p < .001), the usefulness of lectures (F(1, 209) = 70.92, p < .001), the usefulness of seminars (F(1, 209) = 11.00, p < .01), and the usability of the OLE (F(1, 209) = 55.28, p < .001). Students rated all these items higher for the CAPSI-based term than for the conventionally taught term. Items related only to CAPSI also received high average ratings, such as the usefulness of the printed material (M = 2.98, SD = 0.80), the online self-tests (M = 3.32, SD = 0.70), and the video clips (M = 3.07, SD = 0.86).
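For readers unfamiliar with the F-ratios reported here, a two-group one-way ANOVA can be written out in a few lines of plain Python. This is a generic sketch of the computation, not a re-analysis of the survey data; the study itself used a statistics package.

```python
# Hedged sketch: one-way ANOVA for two groups, showing where an F(1, df)
# value comes from. Data passed in would be per-student ratings.
from statistics import mean

def one_way_anova(group_a, group_b):
    """Return (F, df_between, df_within) for a two-group one-way ANOVA."""
    grand = mean(group_a + group_b)
    n_a, n_b = len(group_a), len(group_b)
    # Between-groups sum of squares: group size times squared deviation of
    # each group mean from the grand mean.
    ss_between = (n_a * (mean(group_a) - grand) ** 2
                  + n_b * (mean(group_b) - grand) ** 2)
    # Within-groups sum of squares: deviations from each group's own mean.
    ss_within = (sum((x - mean(group_a)) ** 2 for x in group_a)
                 + sum((x - mean(group_b)) ** 2 for x in group_b))
    df_between, df_within = 1, n_a + n_b - 2
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within
```

With two groups this F-statistic equals the square of the corresponding independent-samples t-statistic, and df_within matches the second number in notation such as F(1, 209).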
In the interviews, students were also very positive about these online tools. They liked the online self-tests for testing and extending their knowledge, and they suggested including more random and more difficult questions in the tests, because, as one student put it, "I do not like to do the same question ten times." The students also found the discussion board useful, especially those who lived off campus. Some students posted messages, whereas others (sometimes called 'lurkers') just checked it frequently to keep up to date, looked at the type of questions that were posted, or read previously posted answers to questions they also had. In the interviews students were also positive about the video clips. They liked these because they helped them revise before exams, or when they had missed a lecture. One student frequently watched the video clips because he/she did not attend the seminars and tried to understand and solve the problems by watching the videos. Students also used the videos if they ran into problems, as the following diary entry shows: "The difficulties [sic] encountered this week was rules of inference, however I resolved by watching some exercise videos…". On the other hand, one student who lived off campus mentioned that he/she had not purchased the videos on DVD because the five-pound charge for the disc was too expensive. Many students in the interviews also mentioned the desire to have online access to the videos off campus. Apparently students did not regard watching videos as a normal lab session activity, which became clear from their reluctance to bring headphones (these were necessary because the computers in the lab were not equipped with speakers). The policy of bringing your own headphones did not seem to work, as illustrated by the following remark in the observation logbook for the second lab of the first term: "Student suppose [sic] to bring headphones so they can listen Video but none of them brought [sic], instead they were trying to read the videos." In fact, during all his observations, the observer never noticed a student bringing headphones. During the course of 2004 some students informally mentioned concerns about the usability of the OLE tools. A student in the interview mentioned: "…at the beginning of the year students should be given a demo of WebCT, how to use it and the things available on it. Many students did not use it because they do not know how to use it." This was therefore investigated further. A usability test was conducted in the summer break of 2004. Overall, the students in this test were positive about the set-up of the OLE tools. However, their major usability concern related to the difficulty of finding and navigating to particular items on the site. This is not uncommon for OLE-delivered courses (Engelbrecht & Harding, 2001).
Table 1. Mean rating (SD) of teaching context of the first term (N between 134 and 145) and second term (N between 91 and 98) in the 2004 and 2005 surveys

Item                         Scale                           Term 1       Term 2
Overall module quality       1 (poor) – 4 (very good)        2.98 (0.78)  2.55 (0.81)
Usefulness lectures          1 (useless) – 4 (very useful)   3.35 (0.75)  2.44 (0.83)
Usefulness seminars          1 (useless) – 4 (very useful)   2.85 (0.89)  2.46 (1.03)
Usefulness lab sessions      1 (useless) – 4 (very useful)   2.53 (0.84)  2.45 (0.90)
Usefulness discussion board  1 (useless) – 4 (very useful)   2.72 (0.72)  2.82 (0.90)
Usability OLE                1 (very low) – 5 (very high)    4.13 (0.90)  3.12 (0.87)
Table 2. Mean (SD) ease-of-use and satisfaction rating of the OLE tools by eight students in the usability test

Item                   Ease of Use   Satisfaction
Study guide            5.48 (1.09)   4.63 (1.41)
Video                  5.69 (0.94)   5.31 (1.36)
Lecture slides         5.62 (0.69)   5.43 (0.79)
Online self-test       5.48 (0.72)   5.25 (1.07)
Discussion board       5.73 (0.79)   5.06 (1.45)
OLE progress overview  5.40 (0.96)   5.38 (0.69)
Therefore, after the test, the navigation panel and the overall structure of the site were redesigned in an attempt to make them more consistent. In the usability test, participants were also asked to rate the ease of use of, and their satisfaction with, a number of OLE tools. The results are given in Table 2. Since students rated items on seven-point Likert scales, the ratings above 3.5 seem to indicate no serious usability problems with the six items.
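Each cell of Table 2 is simply the mean and standard deviation of the eight participants' 7-point ratings for one tool and one question set, which can be checked as follows. The ratings below are made up for illustration; they are not the study's data.

```python
# Illustrative computation of a Table 2-style "mean (SD)" cell from eight
# hypothetical 7-point ratings (sample standard deviation, as is usual).
from statistics import mean, stdev

video_ease = [6, 5, 7, 6, 5, 6, 5, 5]  # invented ratings, 1-7 scale
print(f"{mean(video_ease):.2f} ({stdev(video_ease):.2f})")
```

With only eight raters, such means carry wide confidence intervals, which is why the chapter treats them only as a rough screen for usability problems.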
Learning Activities

The first step in examining the students' learning activities was analysing the scores on the R-SPQ-2F questionnaire that was included in the end-of-term surveys for the first and second terms of 2005. An ANOVA was conducted to see the effect of the learning context on the two learning approach scales. This repeated-measures ANOVA took term as the between-subjects variable and the scores on the deep approach and surface approach scales as the within-subjects variable. The analysis reveals a significant effect (F(1, 98) = 46.41, p < .001) for the scores between the two scales. Examining the means showed that the students scored, on a scale from 10 to 50, 28.8 (SD = 0.73) points on the deep approach scale and 23.2 (SD = 0.58) on the surface approach scale. However, the analysis failed to find a significant effect (F(1, 98) = 1.42, p > .05) for term, or a two-way interaction (F(1, 98) = 0.66, p > .05) between approach and term. ANOVAs on
the sub-scales, motivation and strategy, produced similar outcomes. This means that survey respondents were on average more inclined to apply a deep rather than a surface approach throughout both terms. Thus the difference in the overall learning environment between the first and the second term did not seem to change the students' learning approach. The next step in the analysis was to look for possible relationships between the learning approach and elements of the teaching context. Table 3 and Table 4 show the Pearson correlations between the subscales of the learning approaches and these elements. Both surface motivation and surface strategy seem to have a negative correlation with students' interest in, and perceived lack of difficulty of, the first term subject matter. For the second term the only significant correlations between teaching context items and surface learning were a positive correlation between the perceived usefulness of the seminars and motivation, and a negative correlation between the perceived usefulness of the lectures and learning strategy. The relatively small number of significant correlations in Table 3 may suggest that the surface learning approach was less driven by the teaching context, or perhaps that these relationships are not linear, as a Pearson correlation assumes. The high number of significant correlations between the deep learning scale and teaching context items suggests the opposite for
Table 3. Pearson correlation between surface learning approach and items of the teaching context of the first term (N ranges from 58 down to 52) and second term (N ranges from 43 down to 39) in 2005

                                      Motivation          Strategy
Teaching context                      Term 1    Term 2    Term 1    Term 2
Overall module quality                 0.16     -0.21      0.08     -0.23
Lack of difficulty of subject matter  -0.26*    -0.26     -0.46**   -0.18
Interest in subject matter            -0.28*     0.00     -0.37**   -0.17
Previous familiarity subject matter   -0.23      0.22     -0.11      0.20
Usefulness lectures                   -0.03     -0.17      0.14     -0.30*
Usefulness seminars                   -0.21      0.34*    -0.25      0.28
Usefulness lab sessions                0.10      0.28      0.09      0.06
Attendance lectures                   -0.04     -0.10     -0.09     -0.28
Attendance seminars                   -0.10     -0.07     -0.08     -0.29
Attendance lab sessions               -0.19      0.09     -0.21     -0.21
Usability OLE                          0.07      0.07     -0.06     -0.01
Usefulness discussion board           -0.01      0.10      0.10      0.08

*p < .05. **p < .01.
Table 4. Pearson correlations between the deep learning approach and items of the teaching context of the first (N between 52 and 58) and second term (N between 39 and 43) in 2005

                                      Motivation          Strategy
Teaching context                      Term 1    Term 2    Term 1    Term 2
Overall module quality                 0.39**    0.38*     0.30*     0.40**
Lack of difficulty of subject matter   0.33*     0.33*     0.21      0.43**
Interest in subject matter             0.53**    0.23      0.33*     0.33*
Previous familiarity subject matter    0.46**    0.29      0.31*     0.38*
Usefulness lectures                    0.27*     0.42**    0.19      0.35*
Usefulness seminars                    0.35**    0.37*     0.22      0.25
Usefulness lab sessions                0.04      0.21      0.08      0.26
Attendance lectures                    0.31*     0.29      0.34*     0.41**
Attendance seminars                    0.01      0.36*     0.23      0.41**
Attendance lab sessions                0.17      0.41**    0.23      0.42**
Usability OLE                          0.29*     0.16      0.15      0.18
Usefulness discussion board            0.36**    0.37*     0.19      0.25

*p < .05. **p < .01.
Table 5. Correlation between learning approach and items of the teaching context of the first term in 2005 (N between 48 and 58)

                                          Surface approach       Deep approach
Teaching context                          Motivation  Strategy   Motivation  Strategy
Usefulness written study material (a)      -0.26       -0.15       0.26*       0.28*
Usefulness online self-tests (a)           -0.26       -0.09       0.35**      0.18
Usefulness Q&A video (a)                   -0.05       -0.18       0.28*       0.21
Usefulness introduction video (a)           0.15       -0.18       0.28*       0.23
Usefulness summary video (a)                0.10        0.14       0.26        0.13
Number of introduction videos watched (b)   0.04        0.04       0.10        0.22
Number of exercise videos watched (b)       0.01       -0.15       0.11        0.01
Number of summary videos watched (b)        0.02       -0.05       0.12        0.09

(a) Pearson correlation. (b) Spearman correlation. *p < .05. **p < .01.
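Table 5 mixes two correlation coefficients: Pearson for the rated usefulness items and Spearman for the viewing counts, Spearman being appropriate for count data that need not relate linearly to the approach scores. Since Spearman's coefficient is just Pearson's applied to ranks, both can be sketched in plain Python; the data passed in are illustrative, and the rank function below ignores ties (a full implementation would average tied ranks).

```python
# Hedged sketch of the two coefficients used in Table 5. Tie handling is
# omitted for brevity; real analyses should average tied ranks.
from statistics import mean

def pearson(x, y):
    """Pearson's r: covariance over the product of root sums of squares."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ssx = sum((a - mx) ** 2 for a in x) ** 0.5
    ssy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (ssx * ssy)

def spearman(x, y):
    """Spearman's rho: Pearson's r computed on the ranks of the data."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank + 1)
        return r
    return pearson(ranks(x), ranks(y))
```

For monotone but non-linear data (for example, viewing counts that grow exponentially with motivation), Spearman's coefficient reaches 1.0 where Pearson's does not, which is the reason for the split in Table 5's footnote.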
deep learning (Table 4). For both the first and second terms, there are positive correlations between the deep approach and: perceived quality of the module, perceived lack of difficulty of and students' interest in the subject matter, previous familiarity with the subject matter, perceived usefulness of the lectures and seminars, and attendance at lectures. However, there are also some distinctions between the first and second term. For example, the deep learning approach in the second term seems associated with attendance at lectures, seminars and lab sessions, whereas for the first term the deep approach is associated only with attendance at the lectures. It therefore seems that in the conventionally taught second term, classes were mainly attended by students who were intrinsically motivated and applied a deep learning strategy, while for the CAPSI-based term this factor was less important in explaining class attendance. Table 4 also shows that the usability of the OLE and the usefulness of the discussion board correlated positively with intrinsic motivation. This also seems to explain the correlations with the perceived usefulness of the written study material, the online self-tests, and the introduction and Question & Answer (Q&A) videos that were provided in the first term (Table 5). Apparently deep learners appreciate these online
tools. However, because no difference was found in the learning approaches between the terms, it is unlikely that these tools had a large impact on students' adoption of a learning approach. The ten students in the diary study spent on average 9.3 (SD = 4.85) hours per week on the module. This was spread over lectures (M = 1.7 hours, SD = 0.78), seminars (M = 1.0 hours, SD = 0.60), labs (M = 1.3, SD = 1.26) and a considerable amount of self-study (M = 5.2, SD = 4.23). Applying an ANOVA to the survey data from the two years gave more insight into the impact of the learning context on class attendance. The analysis was conducted on students' average percentage attendance at lectures, seminars, and lab sessions combined. An ANOVA with year and term as independent variables revealed a significant main effect for term (F(1, 239) = 37.38, p < .001) and for year (F(1, 239) = 19.21, p < .001), and also a significant two-way interaction (F(1, 239) = 9.64, p < .01) between term and year. Examining the means of the first term shows that attendance remained stable over the two years: in the first term of 2004 it was 79.9% (SD = 16.2), and in 2005 it was 83.2% (SD = 14.2). Attendance at the second term classes was lower, but increased: in 2004 it was 56.4% (SD = 25.2) and in 2005 it was 75.5% (SD = 22.5). This
agrees with students’ attendance observed in the lab in 2005. In the first term this was on average 70 students, significantly (F(1, 16) = 12.28, p < .01) more than the 45 students on average in the second term. The difference between terms could suggest that CAPSI may change study behaviour. Of course there could also be other factors, such as previous knowledge. For example in the interview a student mentioned: Actually the first semester math was new for me so I attended regularly all lectures in the first semester. However most of the statistical stuff in the second semester was not new for me, therefore I only attended a few lectures in the second semester. Another factor might have been the aim and style of the lectures. In the CAPSI-based term,
lectures were for motivation, while in the other term, lectures were for covering the subject matter. Because of this, lectures in the first term were shorter, around one hour, while the lectures in second term were longer, around two hours. This point was illustrated by another comment made in the interview: The first semester lecture was extremely useful and interesting. However the second semester lecture was too lengthy, for me it was too much to attend a one and a half hour lecture without any break. After an hour I get tired, I lost my concentration and interest. Also in the statistics lecture there were less interaction between the students and the lecturer, students only ask questions at the end of the lecture not during the lecture. However it would be more useful to have questions and answer during the lecture.
Figure 1. OLE usage of the module between weeks 4 and 34 of the year 2004-2005, with the assessment events on the x-axis (project 1, midterm test, project 2, Mathletics test 1, Mathletics test 2, statistical report, exam)
Still, the difference in attendance could also relate to students' tendency to follow all classes initially, and later to stay away more often once they have become more familiar with university life and the course. A similar first-to-second-term reduction was found when examining the number of messages posted on the discussion board. In 2004 staff and students together posted 330 messages, whereas in 2005 this number had risen to 763; the promotion of the discussion board in 2005 seems therefore to have been successful. However, comparing the various periods within each year, the percentages of messages were rather similar. For example, 49% of the messages were posted in the first term in 2004, compared with 47% in 2005, while the second-term percentage was 37% for 2004 and 33% for 2005 (the remaining messages were posted in the revision and exam period). As with class attendance, students were more active on the discussion board in the first term than in the second term. The discussion board, however, could not replace face-to-face meetings entirely. For example, one student wrote in his/her diary: "…even though there is webct's discussion board it is not the same as being explained a difficult problem over s [sic] few notes on the internet", referring here to the alternative of discussing it with the instructor in the seminar. Students' behaviour was also affected by assessment deadlines, or as one student in the interview said: "I work harder when I have exams and not as hard when I do not have exams." This behaviour was also observed in the web traffic. Fig. 1 shows the recorded accesses of the home page, the self-tests, and the main page of the online videos. The peaks in home page hits clearly relate to the assessment events. The use of the videos, and particularly of the online self-tests, peaked especially before the midterm test and before the final exam. During the Christmas and Easter breaks, online activity dropped, picking up again when classes resumed.
The holidays were also quiet periods on the online discussion boards. Students in the interview were positive
about the spread of coursework throughout the year, instead of only a single final exam. Even a student who applied mainly a surface approach said in the interview: “I like very much the idea of having both coursework and exams. If I do not have the coursework then I would have left everything to study at the end. Because of the coursework, which was spread in several tasks, I studied this module constantly and therefore I learnt it better than other modules with only an exam at the end of the year.” The frequency of students’ attempts at the online self-tests was relatively high for the first modules of term one, but it steadily declined, until halfway through the third module only around 25% of the students were still attempting the tests (Fig. 2). The initial access policy of having to pass previous online self-tests might have created this decline. Not all students liked this mastery policy. One student stated in his/her diary: “I find it a little overwhelming that I must get 100% [sic] in a lab test to regard that particular test as a pass!!!” Although it was never a 100% mastery policy, the policy was changed early in the year: instead of mastery, students had only to have attempted the previous self-tests. In the revision period even this condition was dropped, giving students unconditional access to prepare for the exam. At the same time, students got access to a general self-test with 20 random questions taken from all tests. Fig. 2 suggests that this attracted more attention from the students: around a third of the students in 2005 took this test. Possibly students attempted the self-tests for modules one and two as preparation for the midterm test.
After that there was no immediate assessment deadline to motivate most students into taking the other tests, and their attention might also have been drawn to the coursework deadlines of other modules, as the following diary entry clearly shows: “For weeks 7, 8 and 9 I haven’t really been attending lectures, seminars or labs. Reasons for this are that I have
Figure 2. Percentage of students that attempted the online self-tests in 2005 (M stands for Module, and U for Unit)
been too busy with work and other modules.” With the end-of-year exam approaching, students might have ignored the remaining unfinished self-tests and gone immediately to the general test. Others might have completely ignored or forgotten the self-tests and used other means to revise for the exams, or focussed all their attention on the second term material.
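The general self-test described above was assembled by drawing 20 random questions from the pooled unit tests. A minimal sketch of such a draw (the pool structure and function name are hypothetical; the chapter does not describe WebCT’s internals, only that each 2005 unit test held a standard four or five questions):

```python
import random

def build_general_test(pools, n_questions=20, seed=None):
    """Assemble a general self-test by sampling questions, without
    replacement, from the combined per-unit question pools."""
    rng = random.Random(seed)
    all_questions = [q for unit in pools.values() for q in unit]
    return rng.sample(all_questions, n_questions)

# Hypothetical pools: five questions per learning unit, roughly the
# "standard four or five questions" per test used in 2005.
pools = {f"M1 U{i}": [f"M1 U{i} Q{j}" for j in range(1, 6)] for i in range(1, 6)}
general_test = build_general_test(pools, n_questions=20, seed=42)
print(len(general_test), len(set(general_test)))  # 20 distinct questions
```

Sampling without replacement guarantees that no question appears twice in the general test, even when the combined pool is only slightly larger than the 20 questions drawn.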
Learning Outcomes

Several reports can be found in the literature about the positive impact PSI has on students’ marks. Kulik, et al. (1979) concluded in their meta-analysis of 75 comparative studies that students score on average 8% higher in a PSI course than in a conventionally taught course. The exam results of the module agree with these findings. Fig. 3 shows the exam results of the CAPSI-taught part and the more conventionally taught part of the module. In 2004 the average difference between the two is 14 points on a scale of 100, and in 2005 it is 20 points. An ANOVA with repeated measures confirms this observation. The ANOVA, with the exam marks related to the two terms of the module as dependent variables and the year of the exam as independent variable, revealed a significant main effect for the two terms of the module (F(1, 332) = 424.09, p < .001). One reason for this might have been the differences in the teaching approach, or simply that one topic was easier than the other. The analysis also showed a significant difference between the years (F(1, 332) = 13.09, p < .001). On average students scored lower in the exam in 2005 than in the previous year. This could mean that the performance of the student cohort was different, or that the overall exam was more difficult. Still, the more interesting result of the analysis was a significant (F(1, 332) = 13.69, p < .001) two-way interaction effect between the year and the topic of the questions. Whereas the exam scores for the CAPSI-taught part remained more or less stable over 2004 (M = 52.1, SD = 19.0) and 2005 (M = 48.9, SD = 19.2), the exam scores of the other part dropped from an average of 38.0 points (SD = 16.9) to 28.7 points (SD = 15.0). This could be attributed to several factors, such as a variation between the two years in the students’ academic abilities and the ability of the teaching context to adapt to this; a temporary change of lecturer in the first five weeks of the second term of 2005; or a combination of these factors.

Figure 3. Mean exam score, with a 95% confidence bar, on the first term topic (CAPSI-based) and the second term topic (traditionally taught) obtained by students in academic years 2003-2004 and 2004-2005

The impact of the video clips and online self-tests was apparent in the midterm test of the first term. In this test students had to answer four out of six questions under exam conditions. Two of the questions related to the application units, which were taught only in the lectures and were supported by OLE tests and video material only in 2005. While in 2004, 71% of the students attempted one or both of these application-related questions, in 2005 this had risen to 80%. Although it just fails to reach a significant level in a 2 × 2 chi-square analysis, χ2(1, N = 345) = 3.81, p = .051, the trend shows that these OLE tools could have an impact on students’ confidence in attempting questions. Still, providing them without testing the material in the written diagnostic test does not seem to improve the exam results. An ANOVA with repeated measures on the average score obtained in the 2004 and 2005 exams on the four questions covering the first four theoretical learning units of the first term, and the score on a question covering the application units (unit five), only revealed a significant main effect (F(1, 332) = 74.38, p < .001) for the topic of the questions. Students scored on average 10.5 (SD = 3.9) out of 20 points on the questions related to units one to four, and 8.7 (SD = 4.9) out of 20 points on the unit five question. More importantly, no significant two-way interaction effect (F(1, 332) = 0.87, p > .05) was found between questions and exam year. This means that the introduction of
the video clips and OLE self-tests in 2005 did not seem to improve students’ performance in the end-of-year examination.

The focus on students’ learning strategy therefore seems justified. In the end-of-term surveys, students were asked to indicate the grades they obtained for the midterm test and the statistics report coursework. As mentioned in a previous section, the surveys also included the R-SPQ-2F (Biggs, et al., 2001) learning approach inventory. Table 6 shows Spearman correlations between the coursework grades and the learning approach. In the first term, based on the CAPSI principles, grades correlated negatively with a surface approach: students who were motivated mainly by a fear of failure and applied a narrow-target, rote-learning strategy tended to obtain lower grades in the midterm test. However, no significant Spearman correlation was found between the midterm grade and the deep learning approach. Table 6 shows precisely the opposite pattern for the statistical report coursework grade: in the more conventionally taught term, the grade correlated positively with the deep approach.

Table 6. Spearman’s correlation between learning strategy and coursework grades

Scale               Midterm test (term 1)a   Statistical report (term 2)b
Surface approach    -0.28*                   -0.07
  Motive            -0.32*                   -0.15
  Strategy          -0.28*                   -0.02
Deep approach        0.10                     0.32**
  Motive             0.26                     0.28**
  Strategy          -0.01                     0.33**

a Obtained in the end-of-first-term survey of 2005, with N between 51 and 53; b obtained in the end-of-second-term surveys of 2004 and 2005, with N between 88 and 91. *p < .05. **p < .01.

It therefore seems that the CAPSI learning environment was less supportive when students applied a surface approach: when they picked items and rote-learnt them out of a fear of failing, they were less likely to obtain high grades with CAPSI. However, success in a CAPSI course did not seem to rely on students’ deep learning strategy. This might be explained by the behaviouristic tradition in which PSI was developed, which places the emphasis on the environment rewarding good behaviour (Keller & Sherman, 1974) and less on student comprehension and cognition. In the second term, students seemed to get higher grades because of their deep learning strategy. In short, whereas in the CAPSI environment the deep learning strategy is less important, in the traditionally taught environment it seems to be a prerequisite for academic achievement. Table 7 shows the Pearson correlations between OLE use and the exam marks for the first and second term. For both terms, the access to the home page of the OLE site, the use of the self-tests
and the discussion board correlated significantly with academic achievement. However, the Pearson correlations are relatively small and, interestingly, the use of the first term online self-tests also correlated significantly with the exam marks of the second term. Although this correlation is smaller than that between the first term exam marks and the use of the self-tests, it suggests that OLE use is perhaps mainly an indicator of students’ overall motivation and only to a small extent a direct predictor of academic achievement.
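The year-by-term interaction tests reported above come from a mixed 2 × 2 design (term as the within-subject factor, year as the between-subject factor). With two levels per factor, such an interaction is equivalent to an independent-samples t-test on each student’s term 1 minus term 2 difference, with t² equal to the interaction F. A sketch on synthetic data (the cell means and SDs mimic those reported above, but the group sizes of 170 and 164 and the absence of within-student correlation are assumptions, so the resulting F will not reproduce the reported 13.69):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic marks loosely mimicking the reported cell means and SDs.
t1_04, t2_04 = rng.normal(52.1, 19.0, 170), rng.normal(38.0, 16.9, 170)
t1_05, t2_05 = rng.normal(48.9, 19.2, 164), rng.normal(28.7, 15.0, 164)

# The year-by-term interaction reduces to comparing per-student
# within-subject differences (term 1 minus term 2) across the two years.
d04, d05 = t1_04 - t2_04, t1_05 - t2_05
t, p = stats.ttest_ind(d04, d05)
F = t ** 2  # identical to the interaction F(1, 332) of the mixed ANOVA
print(f"F(1, {len(d04) + len(d05) - 2}) = {F:.2f}, p = {p:.4f}")
```

The same F can be obtained from `scipy.stats.f_oneway(d04, d05)`, which for two groups is exactly t²; the degrees of freedom (334 students minus 2) match the F(1, 332) reported in the text.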
Table 7. Pearson correlation between exam marks related to the first and second term and OLE use (N = 334)

OLE use                                                 Term 1    Term 2
Hits on the home page of the OLE site                   0.23**    0.12*
Number of messages read on the discussion board         0.32**    0.24**
Number of messages posted on the discussion board       0.20**    0.21**
Percentage of first term online self-tests attempted    0.34**    0.22**

*p < .05. **p < .01.

Discussion

This study started from the question of how CAPSI might affect student attitude, learning and performance. From the findings it seems that students were more positive about the CAPSI-based term than about the traditionally taught term. They also found the facilities used in the CAPSI term, such as the printed material, the videos, the online self-tests and the discussion board, useful. Although the analysis of the survey data did not reveal that CAPSI changed the students’ learning approach, class attendance and the use of the discussion board were higher in the CAPSI-based first term than in the second term. Whereas lecture, seminar and lab attendance in the lecture-based term correlated with a deep learning approach, in the first (CAPSI) term the deep approach correlated only with attendance at the motivational lectures. In short, the CAPSI learning environment seems to engage students without noticeably changing their motivations or learning approach. This is promising because, in the traditionally taught term, engagement seems to be a function of the student’s deep learning approach, which coincided with high academic achievement. In the CAPSI term this link was less strong: whether or not students applied a deep learning approach did not determine their academic achievement, though students who applied a surface approach in the CAPSI-based term obtained lower marks. In other words, for students to be successful in the CAPSI-based term a deep learning approach seems less important, but applying a surface learning approach seems less effective. The correlations found in the first term between the deep learning approach and the perceived usefulness of the OLE tools (Table 5) seem to confirm the earlier report by Hoskins and van Hooff (2005). The results support their concern that the OLE might only be taken advantage of by highly motivated students. However, the Pearson correlations between OLE use and exam marks (Table 7), although significant, were relatively small. Therefore there seems to be little support for fears that an OLE would be an obstacle to student learning.
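One way to read “significant but relatively small” here is through r², the share of exam-mark variance that an OLE-use measure accounts for. A minimal sketch using the term 1 Pearson correlations reported in Table 7 (only the r values come from the chapter; the labels and formatting are illustrative):

```python
# Share of exam-mark variance accounted for by each OLE-use measure (r^2),
# using the term 1 Pearson correlations reported in Table 7.
table7_term1 = {
    "home page hits": 0.23,
    "messages read": 0.32,
    "messages posted": 0.20,
    "self-tests attempted": 0.34,
}
r_squared = {m: r * r for m, r in table7_term1.items()}
for measure, r2 in r_squared.items():
    print(f"{measure:>22}: r^2 = {r2:.1%}")
```

Even the strongest predictor, the percentage of self-tests attempted (r = 0.34), accounts for under 12% of the variance in term 1 exam marks, which supports reading OLE use mainly as a motivation indicator rather than a direct driver of achievement.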
Instead, it seems that the teaching principles, rather than the use of OLE tools, are the determining factor. Marks for the CAPSI-taught term were higher than those for the traditionally taught term. Therefore, the main
lesson learned from the study appears to be that the principles of PSI can effectively be used in an online learning environment, leading to higher marks and greater student appreciation. The findings related to the videos and the OLE self-tests suggest that students perceived them as useful, and that students might feel more comfortable taking an assessment on a topic discussed in the videos. However, these OLE tools alone do not have a large impact on student performance. It seems more likely that they are successful in combination with the diagnostic tests, bringing students from only passively watching the videos to actively studying them in order to pass the diagnostic test. This agrees with the overall observation that students’ learning activities were largely driven by assessment deadlines. The data on the online self-tests suggest that OLE tools were used more if students could clearly link them with the next upcoming assessment exercise. Kraemer (2003) also noticed this effect: completion rates of OLE material in her course were higher when students were required to go through it for a grade. It seems that the call for aligning teaching, assessment, and learning objectives (Biggs, 2003) should include online teaching tools as well. Students should see the benefit of using these OLE tools for passing their assessments; otherwise, only the highly motivated students might use them.
Limitations of the Study

As with any empirical study, this study has its limitations. The main limitation is that the results are restricted to those students who responded to the surveys, the interviews and the diary studies. These students were probably more motivated and performed better than average. For example, the average module mark of the participants in the interviews was 66.2 (SD = 11.6), compared to a class average of 52.5 (SD = 17.1). The results of the surveys could also have a positive bias towards OLE tools, as the surveys were delivered through the OLE. Students who did not like, or had problems using, the OLE tools might therefore be underrepresented in the survey responses. Another limitation is the ability to control variables. For example, for obvious pedagogical and ethical reasons, individual students could not be denied or granted OLE access in order to study the effect these OLE tools had on student learning and performance. Furthermore, the material taught in the CAPSI-based term and the traditionally taught term could be a confounding variable, as students could be more familiar with, or interested in, a particular topic, which might have influenced their learning activities. However, the study applied a mixed-methodology approach, grounding the analysis in various data sources. This has the advantage that the limitations of one data source could in some cases be overcome by using another data source. Finally, this is a case study about teaching mathematics to first-year Computer Science and Information Systems students at a UK-based university. Although this is very specific, it could be of wide interest, as mathematics is taught in many courses in different fields, including science, engineering, business studies and economics.
Further Development and Results

The findings of the study, combined with the university’s desire to run the course at an economically viable level, have led to changes in how the module was taught in the academic years 2005-2006 and 2006-2007, simply referred to as 2006 and 2007 from here on. First of all, the lecturer who provided the lectures in the first term now did so for the second term as well. Next, the second term also became CAPSI-based, with the written material broken up into ten units, each with its own written diagnostic test, online self-test, and supporting videos. The videos again consisted of an introduction and a summary clip for each unit and a number of Q&A videos. Developing the videos took around one and a half weeks of preparation, a week of shooting in a recording studio, and one and a half weeks of editing and converting to MPEG format. The 57 clips, including a general introduction video, are again available online on campus and on a separate DVD. Eighty headphones have been bought and made available for students to borrow in the lab, since it was observed that students did not bring their
Figure 4. Mean exam score, with a 95% confidence bar, obtained by students on topics taught in the first and second term in the four academic years, where in 2006 and 2007 the second term had also become CAPSI-based
own headphones. The question database for the first term has also been extended. Instead of the standard four or five questions, questions are now randomly drawn from a question database that holds 20-30 questions for each unit. As others have reported (Engelbrecht & Harding, 2001), some problems were experienced with displaying mathematical symbols in the OLE quizzes; unfortunately, some students still report that they are unable to see some symbols on their computer at home. Developing the question database also took considerable time, but it is regarded as a long-term investment. Partly because of pressure to run the module with less staff, but also because of the findings, the question database is now used for supervised tests in the lab as part of the formal assessment. Students have four attempts throughout the year to take or retake these supervised OLE tests, which replace the midterm test and the project 2 coursework in the first term, and the two Mathletics tests in the second term. For students this has the advantage that the online self-tests are directly aligned with the formal assessment; indeed, they are drawn from the same question database. For staff it removes the burden of marking, while students seem to perform equally well on OLE-based and paper-based tests (Poirier & O’Neil, 2000). Fig. 4 shows the effect these changes in the course had on the mean exam marks. Whereas in 2004 and 2005 exam marks on the term 2 topics were lower than marks on the term 1 topics, in 2006 and 2007 this is no longer the case. A repeated-measures ANOVA with the year of the exam and the term as independent variables therefore reveals a significant two-way interaction effect (F(3, 676) = 138.65, p < .001) on the exam marks. The analysis also found a significant main effect for year (F(3, 676) = 4.317, p = .005) and term (F(1, 676) = 206.72, p < .001). Detailed analysis shows that in 2007 the marks for term 2 were even significantly higher (t(180) = -16.67, p < .001) than for term 1. The results of the student surveys obtained in the four years also seem to confirm a change after term 2 became CAPSI-based. More specifically, students’ rating of the overall quality of the module in term 2 seems to increase in 2006 and 2007 (Fig. 5). An ANOVA on this survey question with year and term as independent variables shows a significant main effect
Figure 5. Mean rating of the perceived overall quality of the module (poor-fair-good-very good), with a 95% confidence bar, given by students in the survey at the end of each term
for year (F(3, 355) = 6.64, p < .001), but not for term (F(1, 355) = 0.88, p > .05). More interestingly, however, the analysis again found a significant two-way interaction effect (F(3, 355) = 7.48, p < .001) between year and term. These findings suggest that the differences found in 2004 and 2005 between the CAPSI and the traditionally taught terms cannot simply be attributed to the different topics taught in these terms. Instead, they seem to support CAPSI as an effective teaching method. In 2007, because of university policy, the OLE also changed platform, from WebCT Campus Edition to WebCT Vista. Also, following the original ideas of PSI, the conditional access rule for the online self-tests was set to 80% mastery in 2007. This means that students had to obtain at least an 80% score on an online self-test before they got access to the written material, videos, discussion board and online self-test of the next learning unit. Fig. 6, Fig. 7 and Fig. 8 provide insight into how this 80% mastery policy in 2007 relates to the 2005 and 2006 access policy, under which a student only had to have attempted a test at least once before getting access to the online self-test of the next learning unit. As Fig. 6 shows, the 80% mastery policy might have caused some students to stop taking the online self-tests in the first term. The 2007 term 1 pattern seems to drop to the 2004 line, when the question database was relatively small and its questions were not used as part of the formal assessment. A paired t-test on these percentages reveals that, on average over these 30 tests, significantly (t(29) = 3.27, p = .003) more students attempted a test in 2006 (M = 59%, SD = 15%) than in 2007 (M = 53%, SD = 21%). However, this does not mean that the 80% mastery policy was a failure; the opposite could be argued. As Fig. 7 shows, the mean scores on the online self-tests were significantly (t(29) = -4.81, p < .001) higher in 2007 (M = 85%, SD = 6%) than in 2006 (M = 69%, SD = 19%). The four dips in term 1 of 2006, one for each of the four modules in term 1, seem to suggest that allowing students to progress with a lack of understanding of the initial learning units might have limited their understanding of the more advanced learning units. Taking the percentage of students that at least attempted a test and the mean score together, by multiplying the two, gives a percentage of the understanding of a learning unit by the entire student cohort (thereby assuming that students that had not taken
Figure 6. Percentage of students that attempted the online self-tests in 2005, 2006, and 2007 (units 1-10 are taught in term 2)
Figure 7. Mean scores students obtained in their best performance on each online self-test
Figure 8. Mean score multiplied by the percentage of students that attempted the online self-test
a test would score zero on it). As Fig. 8 shows, this measure was significantly (t(29) = -2.49, p = .019) higher in 2007 (M = 45%, SD = 19%) than in 2006 (M = 41%, SD = 16%). This therefore suggests that the 80% mastery policy might be more effective than a less demanding policy. Students’ opinions about the 80% mastery policy, however, varied. For example, one student posted the following message in 2007 when he/she had a problem with the module 2 unit 3 online self-test (M2 LU 3):
Do you have any suggestion for me to easily get on this translation bit? I really found it annoying now….Could you please explain to me what I’ve done wrong for these 2 questions and I really want to move on to the next unit.

Similarly, another posted message illustrates a student’s frustration at not being able to continue with the next learning unit: “…Could you please give me the right answer for me to get the idea of it? I’ve already spent 2 week [sic] on this test so far and I really want to move on for catch up.” Afterwards, however, some students realised that the 80% mastery policy did have its advantages,
as one student in an interview with ten students at the end of term 2 in 2007 stated: “Online self tests are great. It helps me to learn and prepare for the exam. Although the conditional access was annoying sometimes, I think it is beneficial, so I keep trying.”
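The access policies compared above amount to a simple gating rule: 2005 and 2006 required at least one attempt at the previous unit’s online self-test, whereas 2007 required a best score of at least 80%. A minimal sketch of such a rule (the function name, data shapes and threshold handling are hypothetical; the chapter does not describe WebCT’s actual configuration):

```python
def may_access_unit(unit, best_scores, policy="mastery", threshold=0.8):
    """Return True if a student may open learning unit `unit` (1-based).

    best_scores[i] holds the student's best score (0.0-1.0) on the online
    self-test of unit i + 1, or None if the test was never attempted.
    policy "attempt": previous test attempted at least once (2005/2006 rule).
    policy "mastery": previous test passed at `threshold` (2007, 80% rule).
    """
    if unit == 1:
        return True  # the first unit is always open
    previous = best_scores[unit - 2]
    if policy == "attempt":
        return previous is not None
    return previous is not None and previous >= threshold

best = [0.9, 0.75, None]  # best self-test scores on units 1-3
print(may_access_unit(3, best, policy="attempt"))  # unit 2 attempted -> True
print(may_access_unit(3, best, policy="mastery"))  # 75% < 80% -> False
```

The two policies differ only in the final check: the attempt rule asks whether a previous score exists at all, while the mastery rule additionally compares it against the 80% threshold, which is what blocked the students quoted above.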
Conclusion

The main conclusion of the study is that PSI can be used to enhance both student appreciation and achievement in a course that is supported by an OLE. Students’ performance was significantly higher for the CAPSI-taught material than for the traditionally taught material. Furthermore, the study found that while a deep learning approach was significantly correlated with good grades on the traditionally taught material, this was not the case for the CAPSI material; the conclusion is that even those students who are not taking a deep approach may in some way be helped to learn mathematics by a CAPSI course.
References

Abbott, R.D., & Falstrom, P.M. (1975). Design of a Keller-plan course in elementary statistics. Psychological Reports, 36(1), 171-174.

Austin, S.M., & Gilbert, K.E. (1973). Student performance in a Keller-plan course in introductory electricity and magnetism. American Journal of Physics, 41(1), 12-18.

Biggs, J. (2003). Teaching for quality learning at university: What the student does (2nd ed.). Berkshire: SRHE & Open University Press.

Biggs, J., Kember, D., & Leung, D.Y.P. (2001). The revised two-factor study process questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71(1), 133-149.
Brinkman, W.-P., Haakma, R., & Bouwhuis, D.G. (2005). Empirical usability testing in a component-based environment: Improving test efficiency with component-specific usability measures. In R. Bastide, P. Palanque, & J. Roth (Eds.), Proceedings of EHCI-DSVIS 2004, Lecture Notes in Computer Science, 3425, 20-37. Berlin: Springer-Verlag.

Brook, R.J., & Thomson, P.J. (1982). The evolution of a Keller plan service statistics course. Programmed Learning & Educational Technology, 19(2), 135-138.

Coates, D., & Humphreys, B.R. (2003). An inventory of learning at a distance in economics. Social Science Computer Review, 21(2), 196-207.

Coppola, N.W., Hiltz, S.R., & Rotter, N.G. (2002). Becoming a virtual professor: Pedagogical roles and asynchronous learning networks. Journal of Management Information Systems, 18(4), 169-189.

Davis, F.D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.

Debela, N. (2004). A closer look at distance learning from students’ perspective: A qualitative analysis of Web-based online courses. Journal of Systemics, Cybernetics and Informatics, 2(6).

Emck, J.H., & Ferguson-Hessler, M.G.M. (1981). A computer-managed Keller plan. Physics Education, 16(1), 46-49.

Engelbrecht, J., & Harding, A. (2001). WWW mathematics at the University of Pretoria: The trial run. South African Journal of Science, 97(9-10), 368-370.

Green, B.A. (1971). Physics teaching by the Keller plan at MIT. American Journal of Physics, 39(7), 764-775.

Green, S.M., Voegeli, D., Harrison, M., Phillips, J., Knowles, J., Weaver, M., et al. (2003). Evaluating the use of streaming video to support student learning in a first-year life sciences course for student nurses. Nurse Education Today, 23(4), 255-261.

Hambleton, I.R., Foster, W.H., & Richardson, J.T.E. (1998). Improving student learning using the personalised system of instruction. Higher Education, 35(2), 187-203.

Hereford, S.M. (1979). The Keller plan within a conventional academic environment: An empirical ‘meta-analytic’ study. Engineering Education, 70(3), 250-260.

Hiltz, S.R., & Turoff, M. (2002). What makes learning networks effective? Communications of the ACM, 45(4), 56-59.

Hoskins, S.L., & van Hooff, J.C. (2005). Motivation and ability: Which students use online learning and what influence does it have on their achievement? British Journal of Educational Technology, 36(2), 177-192.

Johnson, G.M. (2005). Student alienation, academic achievement, and WebCT use. Educational Technology & Society, 8(2), 179-189.

Jones, G.H., & Jones, B.H. (2005). A comparison of teacher and student attitudes concerning use and effectiveness of web-based course management software. Educational Technology & Society, 8(2), 125-135.

Keller, F.S. (1968). “Good-bye, teacher …” Journal of Applied Behavior Analysis, 1(1), 79-89.

Keller, F.S., & Sherman, J.G. (1974). The Keller plan handbook: Essays on personalized system of instruction. Menlo Park: W.A. Benjamin.

Kinsner, W., & Pear, J.J. (1988). Computer-aided personalized system of instruction for the virtual classroom. Canadian Journal of Educational Communication, 17(1), 21-36.

Koen, B.V. (2005). Creating a sense of “presence” in a Web-based PSI course: The search for Mark
Hopkins’ log in a digital world. IEEE Transactions on Education, 48(4), 599-604.

Kraemer, E.W. (2003). Developing the online learning environment: The pros and cons of using WebCT for library instruction. Information Technology and Libraries, 22(2), 87-92.

Kulik, J.A., Kulik, C.-L.C., & Cohen, P.A. (1979). A meta-analysis of outcome studies of Keller’s personalized system of instruction. American Psychologist, 34(4), 307-318.

Kyle, J. (1999). Mathletics – A review. Maths & Stats, 10(4), 39-41.

Lewis, J.R. (1995). IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction, 7(1), 57-78.

Pear, J.J., & Crone-Todd, D.E. (2002). A social constructivist approach to computer-mediated instruction. Computers & Education, 38(1-3), 221-231.

Pear, J.J., & Novak, M. (1996). Computer-aided personalized system of instruction: A program evaluation. Teaching of Psychology, 23(2), 119-123.

Pear, J.J. (2003). Enhanced feedback using computer-aided personalized system of instruction. In W. Buskist, V. Hevern, B.K. Saville, & T. Zinn (Eds.), Essays from E-xcellence in Teaching, 3(11). Washington, DC: APA Division 2, Society for the Teaching of Psychology.

Pelayo-Alvarez, M., Albert-Ros, X., Gil-Latorre, F., & Gutierrez-Sigler, D. (2000). Feasibility analysis of a personalized training plan for learning research methodology. Medical Education, 34(2), 139-145.

Poirier, T.I., & O’Neil, C.K. (2000). Use of Web technology and active learning strategies in a quality assessment methods course. American Journal of Pharmaceutical Education, 64(3), 289-298.
Rae, A. (1993). Self-paced learning with video for undergraduates: A multimedia Keller plan. British Journal of Educational Technology, 24(1), 43-51.

Ramsden, P., & Entwistle, N.J. (1981, November). Effects of academic departments on students’ approaches to studying. British Journal of Educational Psychology, 51, 368-383.

Ramsden, P. (2003). Learning to teach in higher education (2nd ed.). London: RoutledgeFalmer.

Roth, C.H. (1993). Computer aids for teaching logic design. Frontiers in Education Conference, 188-191. IEEE.

Schultze-Mosgau, S., Zielinski, T., & Lochner, J. (2004). Web-based, virtual course units as a didactic concept for medical teaching. Medical Teacher, 26(4), 336-342.

Sheehan, T.J. (1978). Statistics for medical students: Personalizing the Keller plan. The American Statistician, 32(3), 96-99.

Watson, J.M. (1986). The Keller plan, final examinations, and long-term retention. Journal for Research in Mathematics Education, 17(1), 60-68.

Wernet, S.P., Olliges, R.H., & Delicath, T.A. (2000). Postcourse evaluations of WebCT (Web Course Tools) classes by social work students. Research on Social Work Practice, 10(4), 487-504.

Zhang, D., Zhao, J.L., Zhou, L., & Nunamaker, J.F. (2004). Can e-learning replace classroom learning? Communications of the ACM, 47(5), 75-79.
APPENDIX

Table 8. Questions of the online surveys

No | Included in survey | Question | Answer type
1 | T1-04, T1-05, T2-04, T2-05 | All things considered, how would you rate the quality of the [first/second] term of this module? | A
2 | T1-04, T1-05, T2-04, T2-05 | How would you rate the level of difficulty of [the second term of] the module in your case? | B
3 | T1-04, T1-05, T2-04, T2-05 | How would you rate your interest in [the second term of] the module? | C
4 | T1-04, T1-05, T2-04, T2-05 | How useless/useful did you find the lab sessions [in the second term]? | D
5 | T1-04, T1-05, T2-04, T2-05 | How useless/useful did you find the seminar sessions [in the second term]? | D
6 | T1-04, T1-05, T2-04, T2-05 | How useless/useful did you find the lectures [in the second term]? | D
7 | T1-04 | Roughly what proportion of the lecture/lab/seminar sessions have you attended? | E
8 | T1-05, T2-04, T2-05 | Roughly what proportion of the lecture sessions did you attend [in the second term]? | E
9 | T1-05, T2-04, T2-05 | Roughly what proportion of the lab sessions did you attend [in the second term]? | E
10 | T1-05, T2-04, T2-05 | Roughly what proportion of the seminar sessions did you attend? | E
11 | T1-05 | How useless/useful did you find the end-of-lecture questions? | D
12 | T1-04, T1-05, T2-04, T2-05 | How much of the subject matter covered in the [first/second] term did you already know? | E
13 | T1-04, T1-05, T2-04, T2-05 | What is your educational background? | F
14 | T1-04, T1-05, T2-04, T2-05 | Do you have access to a PC/laptop outside the lab sessions to work on? | G
15 | T1-05 | Do you have access to a PC/laptop with a DVD player outside the lab sessions to work on? | G
16 | T1-04, T1-05 | Which grade did you receive for the mid-semester test? | H
17 | T2-04, T2-05 | Which grade did you receive for Mathletics test 1? | H
18 | T2-04, T2-05 | Which grade did you receive for Mathletics test 2? | H
19 | T2-04, T2-05 | Which grade did you receive for the Statistical Report coursework? | H
20 | T1-04, T1-05, T2-04, T2-05 | How would you rate the usability of the WebCT environment for this module [in the second term]? | C
21 | T1-04, T1-05, T2-04, T2-05 | How useless/useful did you find the WebCT discussion board of this module [in the second term]? | D
22 | T1-04, T1-05 | How useless/useful did you find the self-tests on WebCT? | D
23 | T2-04, T2-05 | How useless/useful did you find the Mathletics self-tests (not the assessments)? | D
24 | T1-04 | How useless/useful did you find the videos? | D
25 | T1-05 | How useless/useful did you find the videos that discuss questions? | D
26 | T1-05 | How useless/useful did you find the videos that gave an introduction? | D
27 | T1-05 | How useless/useful did you find the videos that gave a summary? | D
28 | T1-05 | Roughly how many video clips that give an introduction to a module/unit have you watched? | I
29 | T1-05 | Roughly how many video clips that discuss a question have you watched? | J
30 | T1-05 | Roughly how many video clips that give a summary of a unit have you watched? | I
31 | T1-04, T1-05 | How useless/useful did you find the example questions of the mid-semester test? | D
32 | T1-04, T1-05 | How useless/useful did you find the written material (Modules 1-5)? | D
33 | T1-04, T1-05 | How useless/useful did you find the book "Discrete Mathematics with Applications" by S.S. Epp? | D
34 | T1-04, T1-05 | How useless/useful did you find the book "Computer Science: An Overview" by J. Glenn Brookshear? | D
35 | T1-04, T1-05, T2-04, T2-05 | How useless/useful did you find the feedback [regarding the assessment of your coursework / you received on your Statistical Report coursework]? | D
36 | T2-04, T2-05 | How useless/useful did you find the Lecture Handouts used in the second semester? | D
37 | T2-04, T2-05 | How useless/useful did you find the Lab Session Notes used in the second term? | D
38 | T2-04, T2-05 | How useless/useful did you find the Seminar Problem Sheets in the second term? | D
39 | T2-04, T2-05 | How useless/useful did you find the example exam questions on WebCT? | D
40 | T2-04, T2-05 | How useless/useful did you find the Study Guide? | D
41 | T2-04, T2-05 | How useless/useful did you find the template document you could use to create your Statistical Report? | D
42 | T1-04, T1-05 | Any comments that you would like to make about the module? | K

Note: Questions 43-62 were taken from the R-SPQ-2F inventory (Biggs, et al., 2001). Question phrases within [ ] were adapted to the context of the term. T1-04 = term 1 in 2004; T1-05 = term 1 in 2005; T2-04 = term 2 in 2004; T2-05 = term 2 in 2005.
Table 9. Answer types used in the online surveys

Answer type | Answer options
A | 1) poor; 2) fair; 3) good; 4) very good; 5) not applicable.
B | 1) very difficult; 2) difficult; 3) average; 4) easy; 5) very easy; 6) not applicable.
C | 1) very low; 2) low; 3) average; 4) high; 5) very high; 6) not applicable.
D | 1) useless; 2) some parts useless, some parts useful; 3) useful; 4) very useful; 5) not applicable.
E | 1) 0-20%; 2) 21-40%; 3) 41-60%; 4) 61-80%; 5) 81-100%; 6) not applicable.
F | a) A levels; b) BTEC; c) GNVQ; d) Access; e) Other.
G | 1) never; 2) sometimes; 3) regularly; 4) not applicable.
H | 1) F; 2) E; 3) D; 4) C; 5) B; 6) A; 7) not applicable.
I | 1) 0-4; 2) 5-9; 3) 10-14; 4) 15 or more.
J | 1) 0-4; 2) 5-9; 3) 10-14; 4) 15-19; 5) 20-24; 6) 25-29; 7) 30 or more.
K | Open answer.
Table 10. Interview questions

Category: Students' approach to learning
1 | Many students have different ways of approaching their studies. There is no single best approach: some people like going to lectures, seminars, and labs regularly, while others like to work on their own at home. Some people spread their learning across the year; others prefer to focus their learning activities in the period before an assessment. Can you tell me how you approached your learning of this module, and why you approached it like this? Let's start with how you began the year and progress through the year up to the days of the exams.
2 | What were the reasons for you to attend a lecture/seminar/lab session?
3 | What were your motivations to start studying this module? Or what were the things that stopped you from studying?
4 | How much time did you spend studying on this module, and why did you spend this amount of time on the module?

Category: Teaching approach
5 | In the first semester, [name lecturer] only covered one part of the material (unit 5) in his lectures; the other parts, units 1-4, were covered by seminars and lab sessions. In the second semester, [name lecturer] covered all topics in her lectures, while the seminars and lab sessions focused more on details and practical issues. Please tell me how you perceived this approach to teaching.
6 | This module was assessed via both 50% coursework and 50% exams. Coursework included 6 tasks (Tarski, mid-term exam, Project two, Mathletics tests 1 & 2, and the Statistical Report). How do you perceive this approach to assessment?
7 | Why do you like or dislike this approach, and what suggestions would you like to make?
8 | Were you clear about the assessment process before you undertook the assessments?

Category: Student characteristics
9 | Level-one students come from various educational backgrounds; for instance, some might have studied mathematics at GCSE and A level and some may not. What was your educational background?
10 | If you look at your background and the material in the module, which things did you already know and which things were new to you?
11 | Students also differ in terms of where they stayed during their study period, on campus or off campus. Students who live off campus have different travelling times to the university. What was your situation?
12 | How might this have affected your study?
13 | Some students only had access to a PC and the Internet at the university; others also had access to these facilities at home. What was your situation?
14 | How might this have affected your study?
15 | Some students were engaged in other activities during their study period, such as work, hobbies, sport, or other studies. What was your situation?
16 | How might this have affected your study?
17 | Some students have many friends and study and do coursework in groups, whilst others do their study alone. What was your situation?
18 | How might this have affected your study?

Category: OLE tools
19 | Different modules have different ways of supporting student learning activities. This module offered WebCT and printed material (lectures, problem sheets, and lab notes) at the beginning of the semester. Different students used these resources and facilities differently. How would you explain your way of using them, and why did you use WebCT? Which facilities did you particularly use, which didn't you use, and why?
20 | Did you use any of these facilities?
21 | Why did you use them?
22 | Do you have any suggestions to improve these facilities?

Category: Closing question
23 | Is there anything else you would like to mention that was not covered by the previous questions?
Chapter XVIII
Social Software for Sustaining Interaction, Collaboration, and Learning in Communities of Practice

Sandy el Helou, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Denis Gillet, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Christophe Salzmann, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Yassin Rekik, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
ABSTRACT

The École Polytechnique Fédérale de Lausanne is developing eLogbook, a Web 2.0 social software designed for sustaining interaction, collaboration, and learning in online communities. This chapter describes the 3A model on which eLogbook is based, as well as the main services that the latter provides. The proposed social software has several innovative features that distinguish it from other classical online collaboration solutions. It offers a high level of flexibility and adaptability so that it can fulfill the requirements of various communities of practice. It also provides community members with ubiquitous access and awareness through its different interfaces. Finally, eLogbook strengthens usability and acceptability thanks to its personalization and contextualization mechanisms.
Copyright © 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

Since 2000, the École Polytechnique Fédérale de Lausanne (EPFL) has been deploying eMersion, a Web-based environment for sustaining remote and virtual experimentation activities in higher engineering education (Gillet, 2005). The eMersion environment provides students and educators with services covering the main needs for carrying out collaborative hands-on activities, such as controlling and enabling access to experimentation resources, storing and sharing experimental data, managing tasks and activities, as well as supporting and monitoring the learning process. Evaluations performed over four years showed a great acceptance of the eMersion environment by students, teaching assistants, and professors (Nguyen-Ngoc, 2004). These results are very encouraging, since the use of eMersion is completely optional: the students always have the possibility of carrying out their experiments within the university campus in a traditional face-to-face way (Salzmann, Gillet, Scott, & Quick, 2008). The evaluations of the eMersion environment demonstrate clearly that the key service for the acceptance of the learning modality and the appropriation of the environment by the students is a shared electronic notebook called eJournal, which was introduced to support collaboration and interaction between the members of a learning community (Farkas, Nguyen Ngoc, & Gillet, 2005). This tool allows flexible integration and collaborative usage of laboratory resources to support knowledge building and sharing. In the context of the Palette European integrated project (palette.ercim.org), the eJournal and its associated features are currently being enhanced and extended in order to address the needs of a broader range of online communities, especially communities of practice (CoPs), to effectively support mediated interaction, collaboration, and learning in management, education, and engineering. CoPs can be defined as groups of people who
share a concern or a passion for something they do and learn how to do it better as they interact regularly. "Because its constituent terms specify each other, the term 'community of practice' should be viewed as a unit" (Wenger, 1998, p. 72). As an example, the students who conduct lab experiments within the control laboratory course taught at EPFL, and the teaching assistants and educators who guide them, altogether form a laboratory-oriented CoP. Another example is ePreP, a non-profit CoP involved in the Palette project. The ePreP CoP gathers educators from French and international institutions who share practices, through the use of information and communication technology (ICT), for the development of a first higher-education cycle preparing students for the competitive entrance exams to the French "Grandes Écoles". In a similar vein, medical doctors discussing their practices and sharing case studies also constitute a CoP. Extending the eJournal to support all types of CoPs is motivated by the fact that the latter have been recognized as effective environments for supporting learning in professional organizations and educational institutions (La Contora, 2003). In both academic and professional contexts, CoPs represent an interesting alternative to formal and institutional learning and training. CoPs allow bypassing organizations' boundaries and building virtual communities of actors sharing common interests and goals. While formal learning focuses mainly on information delivery, CoPs focus on participation and collaboration and help members capitalize on and share knowledge, acquire collaboration and cooperation skills, and develop argumentation and negotiation capabilities (La Contora, 2003). This chapter describes our work on extending the eJournal concepts into an innovative framework for sustaining interaction, collaboration, and learning for all types of communities of practice.
The first step towards this objective was to develop a generic framework for modeling the structure and behavior of CoPs. Then, a Web 2.0 application, namely eLogbook, was implemented based on the proposed model. The chapter is organized as follows. Section 2 gives a short overview of the eLogbook precursor, the eJournal tool, and its main services and features. Section 3 presents the 3A model proposed for CoPs; the five main concepts of this model, which are Actors, Activities, Assets, Events, and Protocols, are detailed. Section 4 describes the eLogbook application, giving an overview of its main functionalities based on the 3A model, its awareness services, and its multiple interfaces. Section 5 concludes the chapter and presents the current state of implementation, as well as future work perspectives.

Figure 1. The eJournal user interface
THE eJOURNAL TOOL

The eJournal is more than a digital asset management system (Natu, 2003), an ePortfolio (Carroll, 2005), or an electronic laboratory notebook (Talbott, 2005). It can be defined as an assets-based interaction system. Its core feature is designed as a mailbox, a familiar metaphor
for users. Instead of simple emails, the eJournal contains digital assets of various types. Contrary to a mailbox, which belongs to a single person, the eJournal is shared by the members of a team. The team members can either tag or annotate the assets at creation time or later. Some context-related tags and metadata are also automatically added when the assets are created. In addition to the mailbox-like Asset area (see bottom part of Figure 1), the eJournal integrates contextual and awareness information in the workspace (see top part of Figure 1). The idea behind this design is that the users should not have to look for basic context and awareness information elsewhere. They should not even have to think about finding such information: it should be implicitly obtained while manipulating assets. As an example, the Team area provides awareness about the user's role and rights in the given context, as well as indications regarding the possible presence of other team members. The Activity area provides information regarding pending tasks. The Folder area provides means to filter the context-oriented assets to be displayed. The Category column in the Asset area is used to summarize user- and system-defined metadata. The eJournal differs from typical digital asset management (DAM) systems in many aspects. First, the eJournal was initially designed for e-learning applications where the process of creating and manipulating assets by annotating, linking, tagging, and rating them has more value than the assets themselves. DAM systems are typically designed for digital-repository applications (pictures, movies, documents, etc.) where the value lies only in the assets. In addition, the eJournal is a pivotal service used to build more comprehensive systems integrating other asset-oriented components/services, while DAM systems are usually closed due to rights management constraints. One could also compare the eJournal with forums or blogs supporting group work. Forums and blogs are driven by comments, some of which are augmented by assets; the eJournal is driven by assets, some of which are augmented by comments. Interaction within the eJournal is mostly asynchronous, since many of the actions performed do not require other components or users to be active or online at the same time. For this reason, the eJournal user interface provides only simple synchronous awareness indicators (for example, an indication of the number of online members instead of the full list of their names). These indicators may trigger interest in more detailed or additional information in some contexts.
THE eLOGBOOK 3A MODEL

Despite the wide acceptance of the eJournal tool by students and educators at EPFL, this tool has a strict limitation: the eJournal was designed to be used in formal learning contexts where the community is governed by fixed and predefined rules. In the context of the Palette European project, however, the requirements have changed, as the CoPs involved have varying structures ranging from flat, to semi-structured, to fully structured and hierarchical CoPs with predefined rules. This led to the development of eLogbook, a new Web 2.0 social software based on a generic model flexible enough to meet the needs of various types of academic and professional communities. The first step towards this objective was to propose an adequate model for interaction and collaboration in CoPs. Several models of CoPs already exist (La Contora, 2003; Wenger, 1999), and almost all of them agree on the basic concepts. By studying these models, certain common limitations were identified, which further motivated the development of a new model. The main limitations underlined in the study are listed below:

1. The majority of the studied models are very detailed and complex. They focus on a detailed presentation of all the structural abstractions, interaction processes, and relations in CoPs. This complexity makes it very difficult to translate these models into usable and functional environments.
2. In the majority of the studied models, two extremes dominate. On one hand, there are models that focus only on the structural aspects of CoPs. On the other hand, models that take into account CoP behavior lack dynamic flexibility and are too rigid to support CoP dynamics and changes in behavior over time (Dourish, 1992).
3. The existing models do not take into account heterogeneous communities involving human actors and non-human software agents. In laboratory-oriented CoPs, for example, the role of experimentation equipment and tools is as relevant as that of human actors.
4. None of the existing models allows dynamic allocation of accessibility and visibility rights over assets and activities.
5. Lastly, the existing models do not consider the concept of CoP memory. Indeed, since the majority of models focus on a structural representation of the community, it is often impossible to gain an insight into the lifecycle of CoPs and their productions.

Figure 2. Main concepts of the 3A model

Building on the results of our study of existing models, on the Palette participatory design approach, and on the deconstruction of the eMersion environment, we developed a new model better adapted to CoPs. When developing it, we mainly focused on three aspects: simplicity, extensibility and adaptability to various contexts and situations, as well as a flexible representation of the CoP life cycle in terms of behavior and composition. The proposed 3A model takes its roots from Activity Theory (Ryder, 1999) and Actor-Network Theory. It is a generic model designed at the right level of abstraction to be easily implemented and translated into an actual collaboration environment. It is based on three main structural concepts and two behavioral ones (see Figure 2). The three structural concepts are Actors, Assets, and Activities (from which the name 3A derives). The two behavioral concepts are Events and Protocols. All these concepts are detailed in the next paragraphs.
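To make the abstraction level of the model concrete, the three structural concepts and the Event concept can be sketched as a minimal data model. This is an illustrative sketch only; all class and field names below are assumptions for exposition, not taken from the actual eLogbook implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Actor:
    """Any entity able to trigger events: a user, a tool, or a service."""
    name: str
    is_human: bool = True  # software agents and lab equipment are actors too

@dataclass
class Asset:
    """A resource produced and shared by actors (document, image, thread...)."""
    title: str
    creator: Actor
    tags: list = field(default_factory=list)

@dataclass
class Activity:
    """A common objective, with roles and deliverables, possibly nested."""
    objective: str
    roles: dict = field(default_factory=dict)        # actor name -> role
    deliverables: list = field(default_factory=list)
    sub_activities: list = field(default_factory=list)

@dataclass
class Event:
    """Persistent record of an action: who did what to which entity, and when."""
    actor: Actor
    action: str      # e.g. "create", "tag", "rate", "link"
    target: object   # an Actor, an Activity, or an Asset
    timestamp: datetime = field(default_factory=datetime.now)

# Example: a student posts a report into a laboratory activity.
alice = Actor("Alice")
lab = Activity("Mechanical Engineering Laboratory")
lab.roles[alice.name] = "Student"
report = Asset("Prelab report", creator=alice, tags=["prelab"])
event = Event(alice, "create", report)
```

The point of the sketch is that every manipulation of an actor, activity, or asset yields an `Event`, so the community's memory is simply the accumulated event list.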
The Concept of Actor

In the 3A model, any entity capable of triggering an event or performing an action that can be perceived directly or indirectly by the community is considered an actor. Typically, a user who creates a community space is an actor. In addition, and according to the above definition, actors are not limited to "human" users but can also consist of software agents, tools, and services. For example, a simulation tool capable of producing assets, posting them in common repositories, and accessing and modifying them in an automatic way is treated as an actor (Salzmann, Yu, El Helou, & Gillet). An external Web service exchanging information with eLogbook on behalf of a user is also treated as an actor. It is important to clearly distinguish between tools or services that play the role of an actor within the community by directly triggering actions, and tools and services that are simply used by users to perform their own activities. In many CoP models, such tools are called resources. In the 3A model, only tools that perform direct actions in the workspace are considered actors. Actors should be allowed to perform LTR actions over themselves and others. LTR is an acronym introduced to refer to three main actions (linking, tagging, and rating) judged useful for sustaining collaboration and communication. Social tagging helps in locating experts on a topic within the community. Defining bidirectional and unidirectional links between people (e.g., "friend", "colleague") is commonly used today in groupware applications to describe and maintain social ties or relations. Finally, rating is also interesting, as it explicitly expresses the impression actors have of one another and therefore motivates them to become active in their community and convey a good image of themselves.
As an example, allowing students to anonymously provide direct feedback by rating and tagging teaching assistants' work triggers immediate reactions and improvements from the assistants, who change the manner in which they interact with students. In the eMersion environment, traditional evaluation forms were filled in at the end of the semester and included comments such as "The teaching assistant doesn't answer questions clearly" or "My group didn't understand the motivation behind this experiment". If those comments had been discovered earlier, some problems could have been handled in a better way during the semester and, as a result, the learning experience would have improved sooner.
The Concept of Activity

An activity is the formalization of a common objective to be achieved by a group of actors, such as discussing topics or completing tasks. All the members of a community are considered to take part in a main activity whose general goal is simply the one behind the existence of the community. Then, under the umbrella of this main activity, members of a community can form sub-activities in which all the members, or a group of them, collaborate to accomplish specific sub-objectives. The concept of activity encapsulates two other sub-concepts: roles and deliverables. Roles are useful for organizing an activity by assigning rights and distributing tasks and duties among its members. Deliverables are used to define or specify tangible objectives and to set concrete deadlines for their completion and evaluation. Another useful feature needed for structuring CoP activities is allowing actors to link their activities together. The frequency of using roles, deliverables, and hierarchical activity structures varies from one CoP to another. On one hand, there are CoPs with well-defined structures, a general goal, and fine-grained objectives. On the other hand, there are emerging CoPs with no particular predefined structure, where members interact and communicate in order to fulfill the general goal of sharing their practices, but without detailed tangible objectives to meet. For instance, laboratory-oriented CoPs are in general structured communities with concrete and specific objectives to reach. In this case, it is necessary to define a hierarchy and to classify and organize work into different activities and sub-activities. Each of these activities has specific objectives mapped into deliverables, and specific roles distributed among the actors involved. Other types of CoPs might start with the creation of one main flat activity for the purpose of sharing information about their common interests and practices. Later on, the same CoP's structure might evolve, leading to the creation of other sub-activities with particular sub-tasks or sub-topics. An application following the 3A model is expected to be flexible enough to allow dynamic restructuring and easy reconfiguration, so as to support the evolution of the community in nature, behavior, and composition. It is important to allow actors to perform LTR actions in addition to the traditional CRUD (i.e., create, read, update, delete) actions. First, allowing users to define appropriate community-specific links between their different activities helps in structuring the community and connecting its different topics together. Second, the social tagging phenomenon is widespread in today's Web applications. As far as the community is concerned, social tagging helps it build its own vocabulary and classify its different activities accordingly, thus allowing easy navigation through them. Finally, rating is also a useful feature that indicates the global and relative importance of activities, thus encouraging activeness and participation.
The Concept of Asset

An asset is any kind of resource produced and shared by community actors during their collaboration to reach their predefined goals. In Activity Theory, an asset is equivalent to an artifact mediating the relation among the community members and between them and their final product (Bannon & Bødker, 1991). It is created and transformed throughout the collaboration. The simplest and most traditional form of asset is the text document. The proposed definition goes beyond this form of exchange and includes other resources such as images, audio and video files, discussion threads, and wiki pages. As is the case for activities, actors should be able to tag, rate, and link assets together. To start with, an asset can be augmented and annotated by user-defined tags. Again, tagging is useful for building a common vocabulary for the CoP, and allows keyword-based clustering, filtering, and navigation. Linking assets is also useful to add contextual meaning and relate assets together. For example, a graphical asset can serve as an "illustration" for another textual asset. Other commonly used links between two assets are "reply", "analysis", and "comment".
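The LTR actions over assets described above can be sketched as three plain operations on an asset store. The class, method, and identifier names below are hypothetical, chosen only to illustrate the link/tag/rate triad; they are not the eLogbook API.

```python
from collections import defaultdict

class AssetStore:
    """Illustrative store supporting the LTR actions: link, tag, rate."""

    def __init__(self):
        self.tags = defaultdict(set)      # asset id -> set of tags
        self.links = defaultdict(list)    # asset id -> [(link type, target id)]
        self.ratings = defaultdict(list)  # asset id -> list of numeric ratings

    def tag(self, asset_id, tag):
        # Tagging builds the CoP's shared vocabulary and enables filtering.
        self.tags[asset_id].add(tag)

    def link(self, source_id, link_type, target_id):
        # Typed links ("illustration", "reply", ...) add contextual meaning.
        self.links[source_id].append((link_type, target_id))

    def rate(self, asset_id, rating):
        # Ratings indicate the relative importance of an asset.
        self.ratings[asset_id].append(rating)

    def average_rating(self, asset_id):
        scores = self.ratings[asset_id]
        return sum(scores) / len(scores) if scores else None

# Example: a diagram is linked as an illustration of a rated, tagged report.
store = AssetStore()
store.tag("report.pdf", "prelab")
store.link("diagram.png", "illustration", "report.pdf")
store.rate("report.pdf", 4)
store.rate("report.pdf", 5)
```

The same three operations apply uniformly to actors and activities in the model, which is what makes LTR a cross-cutting complement to CRUD.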
The Role of Events

In order to model the dynamic and relational aspect of a community, the concept of events is introduced. An event is a pivotal concept that connects assets, actors, and activities altogether. In fact, an actor initiates the creation of an event every time he/she performs an action over one or more entities. Actions performed are either organizational or operational. Organizational actions include structuring the activities of the community by defining common objectives, managing the associated roles, and scheduling the related deliverables. Operational actions encompass all other kinds of non-administrative collaborative actions, such as the manipulation of shared assets or the submission of deliverables. Events can be defined as the persistent representation of all actions performed by actors, stamped with their time and context of execution. They make it possible to situate and store the
actions and operations performed during the activities' lifecycle. This being said, combining activity-related, asset-related, and actor-related events makes it possible to build an incremental representation of the community dynamics and behavior. This can be used to provide actors with advanced awareness information, such as activity work progress, actors' activeness, and asset circulation, which is indispensable for sustaining collaboration in CoPs.
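As a rough illustration of how a persistent event log can be aggregated into the awareness indicators mentioned above, consider the sketch below. The event record structure and field names are assumptions for exposition, not eLogbook's actual schema.

```python
from collections import Counter
from datetime import datetime

# Each event records who did what, to which entity, and when.
events = [
    {"actor": "Alice", "action": "create", "target": "report.pdf",
     "time": datetime(2008, 3, 1)},
    {"actor": "Bob", "action": "tag", "target": "report.pdf",
     "time": datetime(2008, 3, 2)},
    {"actor": "Alice", "action": "rate", "target": "diagram.png",
     "time": datetime(2008, 3, 3)},
]

# Actor activeness: how many actions each actor has performed.
activeness = Counter(e["actor"] for e in events)

# Asset circulation: how many actions an asset has been involved in.
circulation = Counter(e["target"] for e in events)
```

Because events are persistent and timestamped, the same log can be replayed incrementally, so indicators like these always reflect the community's full history rather than a snapshot.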
The Role of Protocols

Protocols consist of a set of CoP-specific rules and policies governing interaction and collaboration between members. They define what actions each and every actor is allowed to perform over other actors' information, activities, and assets. An example is whether or not an actor is allowed to access the resources of a particular activity, check the profile of another actor, or edit a shared asset. To start with, it has been shown through the participatory design approach adopted in Palette that CoP members require an easy mechanism for controlling who has access to their own profile information, their assets, and their activities. Keeping some information private, or sharing it among some or all CoP and non-CoP members, should be made possible. Moreover, dynamic "reconfiguration" of the access rules over assets, activities, and actor information should be kept easy. At first sight, the privacy requirement can be seen as contradicting the very basic principle of CoPs, which is of course information sharing. Nevertheless, even within CoPs, there exists a need to keep some information private, sometimes temporarily and sometimes even permanently. For instance, students taking part in Learn-Nett, a Palette CoP (Daele et al., 2006), expressed their need for a private subspace where they can discuss the course topics with their peers, outside the tutor's reach.
Furthermore, means to define conditional as well as unconditional access rules should be provided. The right to access an asset, an activity and/or an actor's profile can be assigned unconditionally to a specific actor. Still, it should also be possible to grant access rights subject to particular conditions, such as remaining a member of a particular activity or maintaining a particular role within it. For instance, the author of a document decides at time t to grant conditional access to an asset, allowing actors to check the asset if and only if they are members of a particular activity. This conditional grant creates a dynamic access link between the concerned activity and asset. As soon as a new member joins the activity at time t+1, s/he is able to access the asset. However, if at time t+2 an actor is no longer a member of this activity, then s/he can no longer see the asset in his/her workspace, nor access the new tags, links, rates and awareness statistics related to it. The same applies to rights over activities, which can also be defined in a conditional or unconditional way. For instance, an activity creator can decide that actors will have access to his/her specific activity X as long as they maintain a specific role in another activity Y. So, for example, a teaching assistant has access to "Prelab Evaluation" only as long as s/he is a member of "Mechanical Engineering Laboratory" with the role "Teaching Assistant". When this condition is no longer valid, the assistant loses his/her access rights over "Prelab Evaluation". For the time being, the two conditions considered by the model are being a member of a specific activity and maintaining a particular role in an activity. Further conditions deemed useful, such as an access right expiry date, could be incorporated in the future.
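A conditional grant of the kind described, valid only while the grantee keeps a given membership or role, can be sketched as a rule re-evaluated at access time rather than a stored permission. The names here are illustrative assumptions, not eLogbook's actual API:

```python
# Membership/role state is consulted at access time, so revoking a
# membership at t+2 implicitly revokes the conditional grant made at t.
members = {"Mechanical Engineering Laboratory": {"carol": "Teaching Assistant"}}

def has_role(actor, activity, role=None):
    """True if the actor belongs to the activity (and holds `role`, if given)."""
    assigned = members.get(activity, {}).get(actor)
    return assigned is not None and (role is None or assigned == role)

# An unconditional grant would be a plain set of actor names; a conditional
# grant is a predicate checked on every access.
access_rules = {
    "prelab_evaluation.doc": lambda actor: has_role(
        actor, "Mechanical Engineering Laboratory", "Teaching Assistant"),
}

def can_access(actor, asset):
    rule = access_rules.get(asset)
    return rule is not None and rule(actor)

print(can_access("carol", "prelab_evaluation.doc"))   # True
del members["Mechanical Engineering Laboratory"]["carol"]
print(can_access("carol", "prelab_evaluation.doc"))   # False
```

Representing the condition as a predicate rather than a cached right is what makes the "dynamic access link" between activity and asset automatic: no explicit revocation step is needed when a membership or role changes.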
In addition, through the observation and analysis of the needs of Palette CoPs, it was noticed that there are some general implicit or explicit conventions about who can do what. For that reason, it is a good design choice to set default rights over activities and assets, such as
allowing by default all members of an activity to post assets in it. However, actors should be able to easily adjust all these settings in order to address specific needs. For example, in the context of an EPFL laboratory-oriented CoP of students, the teacher wishes to give the right of rating or evaluating a submitted asset in the "Mechanical Engineering Laboratory Session 1" activity only to the members having the role "Assistant" and not to those with the role "Student". Consequently, the convention-based but extremely flexible protocols, with the possibility of assigning various conditional and unconditional access rules, are able to fulfill general as well as specific CoP requirements.
Implementation of the 3A Model: The eLogbook Web 2.0 Social Software
The eLogbook social software is a collaborative Web-based environment aiming at sustaining and strengthening collaboration and coordination in laboratory-oriented CoPs. It can serve simultaneously as a task management system, an asset repository and a social network connecting people. This section gives an overview of eLogbook functions based on the 3A model discussed in the previous section. Then, the underlying eLogbook awareness services are summarized. Finally, the different eLogbook interfaces are described.
Functional Description of eLogbook

Asset Repository
eLogbook offers a collaborative space where assets can be created and shared among different users. The main functions are based on the 3A model and can be summarized as follows:
• Assets can be created, updated, deleted, rated, tagged, and linked together. Assets can also be submitted for a particular activity deliverable. The tags and links created can be kept private or shared with others, helping to build a community vocabulary. In both cases, tag-based asset filtering and search are possible.
• Three types of rights can be granted over assets: "reader", "editor" and "owner". The owner possesses all rights over an asset: s/he can read it, edit it, and has the exclusive right of disseminating it or granting rights over it to other actors. Granted rights can be unconditional or conditional (activity- or role-dependent, as mentioned earlier in the 3A model description).
Activity Management Space
The eLogbook social software provides a task-oriented workspace where common objectives can be set, tasks organized and roles distributed among community members. Since it is based on the 3A model, the following features are available:
• Activities can be created, updated, deleted, tagged, rated and linked together.
• Invitations to activities can be either unconditional or conditional (activity- or role-dependent, as mentioned earlier in the 3A model description).
• One or more role(s), objective(s) and deliverable(s) are associated with each activity. Each member taking part in an activity has an assigned role, which not only serves as a "label" for him/her but also defines the set of rights this member has over the activity and its resources. On the creation of every activity, two roles are generated by default: the "Administrator" and the "Member". The first has the right to perform all operational and administrative actions over the activity, while the second
can only perform operational actions, such as tagging the activity or posting an asset in it. To achieve flexibility, users can always define their own roles and the corresponding sets of rights. Moreover, it is possible to copy the role structure from one activity to another. A detailed description of the available activity access rules is given below, followed by an illustrative example. A private activity is secret to everyone except the people explicitly invited by the administrator. A public activity is one to which all registered members are automatically invited to join; those who enter eLogbook as guests also know about its existence. "Publicizing" an activity is as simple as setting one of its roles to be the public role. It is worth mentioning that at any point in time, the administrator of an activity can decide to change its scope by completely or partially opening it to the public, or by "privatizing" it. An open activity allows all invited people to access its resources, whereas a closed one allows that exclusively to its joined members. In both cases, it is only when users accept to join the activity that they acquire the rights associated with the
role they have been assigned. When the context allows it, it is preferable to opt for the open option rather than the closed one, because allowing users to see what is happening within an activity gives them a better idea of what it is about and whether or not it would be beneficial for them to join it and collaborate with its members. The concept of public open activities is similar to public forums, where users can read others' contributions but cannot add their own unless they register for the forum. Table 1 summarizes the different access rules available for activities. A good illustrative example is how the CoP ePreP represented its structure within eLogbook and organized its different projects and interests. The steps for creating a suitable environment for sustaining collaboration for ePreP using the proposed social software are listed below: •
First, the main existential objective of ePreP is formalized in a mother activity called "the ePreP CoP", described as follows: "ePreP aims at sharing practices and exchanging thoughts related to the development of a first higher education cycle preparing students
Table 1. Types of access rules for activities
• Who knows about its existence? Public: all registered users and eLogbook guest visitors. Private: only explicitly invited people.
• Can those actors view its resources even before joining? Open: yes. Closed: no.
• Can they perform CRUD and LTR actions over it? In all cases, only after joining, and depending on the rights associated with the assigned role (guests cannot join before registering for eLogbook).
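The access-rule matrix of Table 1 amounts to two orthogonal booleans per activity, public/private and open/closed, which can be sketched as follows (illustrative functions, not the eLogbook data model):

```python
def knows_about(activity, user_kind):
    """Public activities are visible to all registered users and guests;
    private ones only to explicitly invited people."""
    if activity["public"]:
        return user_kind in ("registered", "guest", "invited")
    return user_kind == "invited"

def can_view_before_joining(activity, user_kind):
    """Open activities expose their resources to anyone who knows of them;
    closed ones only to members who have joined."""
    return activity["open"] and knows_about(activity, user_kind)

# An "open private" sub-activity, like the ePreP projects described below:
wikiprepas = {"public": False, "open": True}
assert knows_about(wikiprepas, "registered") is False   # private: hidden
assert can_view_before_joining(wikiprepas, "invited") is True  # open: browsable
```

Keeping the two flags independent is what yields the four activity scopes of the table, and lets an administrator "publicize" or "privatize" an activity by flipping a single flag.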
for the competitive entrance exams to the French Grandes Écoles through the use of information and communication technology (ICT)."
• As a next step, for each ePreP project, an open private sub-activity is created. Only explicitly invited users know about these projects (because the activities are private), and the invited people can look over each activity and get an idea of what people are doing within it, even before deciding to join (because the activities are open). The three activities are "Coopération Internationale", "Wikiprépas", and "Plate-forme francophone".
• In each of them, three roles are maintained: the "Administrator" role that exists by default, and two others created by an ePreP coordinator, "Observer" with view-only rights and "Active Member" with the rights to perform operational actions. These roles were defined only once and then copied from one sub-activity to the others.
• All members of the "ePreP" main activity are "conditionally" invited to take either of the two roles available in each sub-activity. Consequently, for each project, ePreP members decide either to take the role "Observer" or the role "Active Member", depending on their interest.
Social Networking
By allowing actors to exchange ideas on different topics through the exchange of assets and participation in collaborative activities, and by allowing them to express their perception of and relation with one another through tagging, rating and linking, eLogbook helps build and maintain social and professional ties between CoP members.
Events Logging
The eLogbook environment keeps track of the history of the community. All events occurring within eLogbook are logged not only in chronological order but also, and more importantly, in a contextual fashion: every action is traced within the context in which it occurred. Moreover, the use of event logging to provide awareness takes into account the privacy of each actor and of each subgroup. Actors are kept aware only of actions and events that concern the actors, activities, and assets they are related to.
Awareness Services
Dourish and Bellotti (1992) define awareness as "an understanding of the activities of others, which provides a context for one's own activity". This definition highlights the need for providing awareness services in Computer Supported Cooperative Work (CSCW). The eLogbook awareness services classify awareness information based on the 3A model. Thus, three main interconnected awareness types can be identified. Asset-related awareness corresponds to notifications of events related to the dissemination (i.e. granting conditional or unconditional access) and manipulation of exchanged assets (e.g. rating, linking, tagging, updating, modification and submission), and to related statistics (e.g. average rate and activeness level). Actor-related awareness includes informing each actor of the status of other actors to whom s/he is directly related (through user-defined semantic links) or indirectly connected (through shared activities and assets). It also covers making a user aware of what tags, links and rates have been "put" on him/her and on actors connected to him/her. Activity- or task-related awareness includes notifying actors of all events occurring within their activities. Creating new deliverables or new roles,
updating or deleting existing ones, and inviting new members to join the activity are examples of such activity-related events. Reminders of submission and validation deadlines are also considered.
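The three awareness types and the privacy-preserving notification rule can be sketched as a simple classification and filter over logged events (the dictionaries and field names are illustrative assumptions):

```python
def awareness_type(event):
    """Classify a logged event into one of the three 3A awareness types,
    according to the kind of entity it primarily concerns."""
    kinds = {"asset": "asset-related",
             "actor": "actor-related",
             "activity": "activity-related"}
    return kinds[event["target_kind"]]

def notify(event, actors):
    """Only actors related to the event's context receive the notification,
    preserving the privacy of each actor and subgroup."""
    return [a for a in actors if event["context"] in a["activities"]]

alice = {"name": "alice", "activities": ["Lab Session 1"]}
bob = {"name": "bob", "activities": ["ePreP"]}
event = {"target_kind": "asset", "context": "Lab Session 1"}
recipients = notify(event, [alice, bob])   # only alice is concerned
```

Filtering at notification time, rather than broadcasting the whole log, is what keeps awareness compatible with the privacy rules established by the protocols.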
The Multiple Interfaces of eLogbook
In order to enhance the acceptance and appropriation of eLogbook by CoP members, multiple interfaces and access means are provided, each addressing specific needs. Initially, three main interfaces are provided: the context-sensitive Web interface, the email interface and the RSS view.
The eLogbook Context-Sensitive Web Interface
The context-sensitive graphical Web user interface maps the 3A interaction model (Gillet, El Helou, Rekik, & Salzmann, 2007). It integrates three lateral areas corresponding to actors, activities and assets, located on the left, top and right parts of the Web page, respectively (see Figure 3). These areas are scrollable lists (white arrows) in which elements can be added using the corresponding "+" signs. The idea behind this interface is to avoid classical list-based views and to use adequate graphical metaphors in order to display all assets, actors, and activities related to a central context chosen by the user. Choosing a context and shifting from one context to another are done by simply double-clicking on one of the entities appearing in the three main areas. When an actor, an activity or an asset is selected as the focal element or context, it is moved to the center and the surrounding areas are updated to show the entities related to it and the nature of the relationships. The color of the focal element corresponds to the area from which it was selected, to better identify its type (i.e. actor, activity or asset). Figure 3 shows the context-sensitive view when the focal element is an activity. In this case,
the center area of the view shows important information (e.g. name, description, deliverables, rates, tags) and allowed actions (editing, deletion, role management) related to the central activity. In the peripheral areas, one can see the members of this activity with their role(s) specified, the assets posted in it, as well as its related activities. Embedded indicators display the relationships between the focal element and the listed entities. Possible related actions that the current user is allowed to perform are also accessible through icons. Awareness "cues" of various types are seamlessly incorporated in every region through the use of symbolic icons, colors, and the display order of information. As an example, the upper left icon in any actor's rectangle embeds presence awareness information, indicating whether or not the actor is online. The color of the bullet that appears next to the role in the same rectangle indicates whether or not the user has joined the activity and accepted the role assigned to him/her. Moving the mouse over any icon triggers the appearance of a text box explaining what information the icon conveys and giving more details.
Figure 3. The context-sensitive Web interface

The eLogbook Email-Based Interface
The email-based eLogbook interface enables users to trigger collaborative actions (e.g. posting an asset) and request awareness information (e.g. recent activity invitations) by interacting with eLogbook via email using a simple syntax (see Figure 4). The motivations behind such a lightweight interface and how it actually works are discussed below. Providing an email-based interface has several advantages (Gillet, Man Yu, El Helou, Rekik, Berastegui, & Salzmann, 2007). First of all, CoPs face difficulties in selecting and adopting new environments without inducing disturbances (de Moor, 2004). It is believed that by creating a bridge between eLogbook and popular tools already familiar to CoPs, a smooth appropriation of the eLogbook model and advanced collaboration features will take place. To validate this assertion, an email-based interface has been designed and is briefly described below. Second, such an interface helps make eLogbook services ubiquitously accessible: to use it, users only need an email client installed on their desktop, laptop or mobile device, and built-in email clients are very common in computers, pocket PCs and even smartphones. Another important factor is that the communication cost induced by using an email-based interface is lower than that of the Web-based one. Moreover, the mail-based interface supports offline information management: users can store emails on their devices and afterwards manage joint activities and access shared assets without connecting to the Internet. Finally, providing an email-based pull-awareness mechanism for requesting information is an unobtrusive, user-controlled way of disseminating awareness information.

Figure 4. Body of the email with the subject "create new asset"

eLogbook handles emails sent from registered users according to the following steps:
Step 1: Sender Identification. A check is performed on whether the email sender is indeed a registered eLogbook user. If so, step 2 is initiated; otherwise, the mail is ignored.
Step 2: Action Identification. The content of the mail (subject and body) is parsed, and the action to be performed and the entities involved are identified. In case of an ambiguous request, an error message is sent back to the user.
Step 3: Protocol Checking. A check is performed to make sure that the sender is allowed to perform the requested action, based on the access rights granted over the entities involved. For example, if the user wishes to create a sub-activity of an already existing activity, s/he must have administrative rights over the latter.
Step 4: Confirmation Request. If the sender is allowed to perform the requested action, an email is sent back requesting confirmation. This step is important for two reasons. First, it serves security purposes, making sure that the corresponding eLogbook user was indeed the one who sent the request. Second, it ensures that the user indeed wishes to perform the action as it was interpreted by eLogbook. Each confirmation request has an expiry date and contains a unique reference number.
Step 5: Action Execution. If the user replies to the confirmation request email sent by eLogbook, the requested action is executed and the user is notified.
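The five-step handling of an incoming email can be sketched as a small pipeline. The subject grammar, function names and confirmation format below are assumptions for illustration, not eLogbook's actual syntax:

```python
import uuid

registered = {"alice@example.org": "alice"}
pending = {}   # confirmation reference -> parsed request

def handle_email(sender, subject, body):
    # Step 1: sender identification -- unknown senders are silently ignored.
    user = registered.get(sender)
    if user is None:
        return None
    # Step 2: action identification -- here a trivial "verb object ..." subject.
    parts = subject.lower().split(maxsplit=2)
    if len(parts) < 3 or parts[0] not in ("create", "post", "request"):
        return "error: ambiguous request"
    action = " ".join(parts[:2])       # e.g. "create new"
    # Step 3: protocol checking (stubbed: a real check would consult the
    # sender's access rights over the entities involved).
    if action not in ("create new", "post new", "request recent"):
        return "error: not allowed"
    # Step 4: confirmation request with a unique reference number.
    ref = uuid.uuid4().hex
    pending[ref] = (user, subject, body)
    return f"confirm {ref}"

def confirm(ref):
    # Step 5: action execution once the user replies to the confirmation.
    return pending.pop(ref, None) is not None

reply = handle_email("alice@example.org", "create new asset", "lab notes")
ok = confirm(reply.split()[1])
```

The two-phase confirm step mirrors the security rationale in the text: the action only runs once the reference number issued in step 4 comes back from the same mailbox.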
The eLogbook RSS Feeds
RSS (Really Simple Syndication, or Rich Site Summary) is an XML application based on a combination of push and pull technology. RSS feeds are made of channels, each of which is composed of a title, a link and a description. A channel may also include other elements, such as an image and/or a rating, and can contain any number of items, each holding at least a title or a description. By subscribing to RSS feeds of interest, the user can check, on a single screen, all relevant news coming from different sites. Access to eLogbook news via RSS feeds is provided, as this approach has several benefits, in particular for mobile users (El Helou, Gillet, Salzmann, & Rekik, 2007). It allows the delivery of information in an unobtrusive way, by transmitting it via a familiar interface where users expect to find updates and intentionally check for them. Moreover, the format of RSS feeds is particularly useful for mobile users subject to device constraints: news for which users willingly subscribe is sent in a compact way, and optional fields (such as images) are skipped for mobile users. At this time, only news related to recently created activities is available via RSS (see Figure 5). In the future, the possibility to subscribe to a particular activity and get only related information will be implemented. A general RSS news page, to which each user can subscribe and which will filter and rank events based on their relevance for a particular user in a particular context, will also be considered.
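A minimal feed of the kind described, a channel carrying title, link and description plus compact items, can be built with the standard XML library. The URLs and item content are placeholders, not actual eLogbook feeds:

```python
import xml.etree.ElementTree as ET

# Channel: required title, link and description elements.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "eLogbook: recent activities"
ET.SubElement(channel, "link").text = "https://example.org/elogbook/feed"
ET.SubElement(channel, "description").text = "Recently created activities"

# Item: each holds at least a title or a description; optional fields
# (e.g. images) are simply omitted for mobile readers.
item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "New activity: Wikiprepas"
ET.SubElement(item, "description").text = "An open private sub-activity of ePreP"

feed = ET.tostring(rss, encoding="unicode")
```

Because optional elements are separate child nodes, producing the compact mobile variant is just a matter of not emitting them.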
Concluding Remarks and Perspectives
This paper presented the 3A model and eLogbook, a Web 2.0 social software based on it. The proposed 3A model focuses on a representation of
Figure 5. eLogbook news read from Google RSS reader on a Simulated BlackBerry Device
the structures as well as the behaviors of CoPs. It consists mainly of three fundamental entities (actors, activities and assets), where the events generated by actors over any of these entities are governed by protocols. eLogbook is developed at EPFL with the aim of sustaining collaborative learning and knowledge building within CoPs. Contrary to traditional learning modalities, CoPs are non-formal structures, often virtual, governed by changing rules and often built dynamically by community actors. The eLogbook environment has been designed to fulfill the requirements of this specific context and to make it possible for a community to dynamically build its own vocabulary, formalize its own objectives, manage its activities and resources according to its own rules, capitalize on its knowledge and maintain a trace of its life cycle.
eLogbook presents some innovative features that differentiate it from classical collaboration workspaces. First, eLogbook is flexible and adaptable, so that it can fit the requirements of various CoPs. Second, eLogbook is usable and efficient, thanks to its personalization and contextualization mechanisms. Third, eLogbook provides ubiquitous access to its services through multiple views augmented with awareness information. eLogbook is being developed following the participatory design approach adopted within Palette (Daele et al., 2006), and so it is continuously evolving. On one hand, additional features needed to satisfy emerging CoP needs are being implemented. On the other hand, existing features are being modified based on the technical and pedagogical evaluators' feedback on their usability, acceptability and usefulness with respect to CoP needs. In parallel to the implementation of eLogbook, other issues related to interaction and collaboration in CoPs are being investigated. The first issue is awareness. Our goal here is to develop a personalized awareness mechanism that ranks and filters entities and related notifications according to their importance for a particular user in a particular context, while respecting users' privacy. The ranking and filtering will affect the order in which entities are displayed in the context-sensitive Web interface. Similarly, the selection and order of appearance of information sent via mail or RSS feeds will be adapted. The second issue we are working on is coupling eLogbook with other services in order to better respond to CoP needs. One example is the possible interaction with efficient knowledge management tools. The idea is to help communities build their own ontologies and benefit from knowledge extraction and classification services.
Another aspect to be considered is the interaction with services offering voice/video-based synchronous interaction, widely used by CoPs (Salzmann, Yu, El Helou, & Gillet, 2008). The idea is to reinforce the activity
of community members by encouraging dialogue and argumentation, increasing motivation, and building mutual trust.
Acknowledgment
The elements presented in this paper result from various e-learning projects and activities carried out with the support of the Board of the Swiss Federal Institutes of Technology and of the European Union in its Sixth Framework Programme (ProLEARN Network of Excellence and Palette Integrated Project).
References
Bannon, L., & Bodker, S. (1991). Beyond the interface: Encountering artifacts in use. In Carroll, J. M. (Ed.), Designing interaction: Psychology at the human-computer interface (pp. 227-253). Cambridge: Cambridge University Press.
Carroll, N. L., & Calvo, R. A. (2005). Certified assessment artifacts for ePortfolios. Proceedings of the Third International Conference on Information Technology and Applications, 2, 130-135.
Daele, A., Erpicum, M., Esnault, L., Pironet, F., Platteaux, H., Vandeput, E., et al. (2006). An example of participatory design methodology in a project which aims at developing individual and organisational learning in communities of practice. Proceedings of the First European Conference on Technology Enhanced Learning (EC-TEL'06), Greece, 2006.
De Moor, A., & Van Den Heuvel, W. (2004). Web service selection in virtual communities. Proceedings of the 37th Annual Hawaii International Conference on System Sciences (HICSS'04), Big Island, Hawaii, January 5-8, 2004.
Dourish, P. (1992). Applying reflection to CSCW design. Position paper for the workshop "Reflection and Metalevel Architectures", European Conference on Object-Oriented Programming, Utrecht, Netherlands, 1992. Retrieved May 1, 2007, from http://www.laputan.org/pub/utrecht/dourish.text
Dourish, P., & Bellotti, V. (1992). Awareness and coordination in shared workspaces. Proceedings of the ACM Conference on Computer Supported Cooperative Work (CSCW'92), Toronto, Canada, November 1992 (pp. 107-114).
El Helou, S., Gillet, D., Salzmann, Ch., & Rekik, Y. (2007). Feed-oriented awareness services for eLogbook mobile users. Proceedings of the 2nd International Conference on Interactive Mobile and Computer Aided Learning (IMCL), Jordan, April 17-21, 2007.
Farkas, G. J., Nguyen Ngoc, A. V., & Gillet, D. (2005). The electronic laboratory journal: A collaborative and cooperative learning environment for Web-based experimentation. Computer Supported Cooperative Work (CSCW), 14(3), 189-216.
Gillet, D., Nguyen Ngoc, A. V., & Rekik, Y. (2005). Collaborative Web-based experimentation in flexible engineering education. IEEE Transactions on Education, 48(4), 696-704.
Gillet, D., El Helou, S., Rekik, Y., & Salzmann, Ch. (2007). Context-sensitive awareness services for communities of practice. Proceedings of the 12th International Conference on Human-Computer Interaction (HCI 2007), Beijing, July 22-27, 2007.
Gillet, D., Man Yu, C., El Helou, S., Rekik, Y., Berastegui, A., & Salzmann, Ch. (2007). Tackling acceptability issues in communities of practice by providing a lightweight e-mail-based interface to eLogbook: A Web 2.0 collaborative activity and asset management system. Proceedings of the 2nd International Workshop on Building Technology Enhanced Learning Solutions for Communities of Practice (TEL-CoPs'07), Crete, Greece, September 17, 2007.
LaContora, J. M., & Mendonca, D. J. (2003). Communities of practice as learning and performance support systems. Proceedings of the International Conference on Information Technology: Research and Education, New Jersey Institute of Technology, Newark, NJ, USA, 2003.
Natu, S., & Mendonca, J. (2003). Digital asset management using a native XML database implementation. Proceedings of the 4th Conference on Information Technology Curriculum (CITC4 '03), Lafayette, Indiana, USA, October 16-18, 2003 (pp. 237-241). New York, USA: ACM Press.
Nguyen-Ngoc, A. V., Gillet, D., & Sire, S. (2004). Evaluation of a Web-based learning environment for hands-on experimentation. In Aung, W., et al. (Eds.), Innovations 2004: World Innovations in Engineering Education and Research (pp. 303-315). New York, USA: iNEER in cooperation with Begell House Publishing.
Ryder, M. (1999). Spinning Webs of significance: Considering anonymous communities in activity systems. Retrieved December 8, 2007, from http://carbon.cudenver.edu/~mryder/iscrat_99.html
Salzmann, Ch., Yu, C. M., El Helou, S., & Gillet, D. (2008). Live interaction in social software with application in collaborative learning. Proceedings of the 3rd International Conference on Interactive Mobile and Computer Aided Learning (IMCL), Jordan, April 16-18, 2008.
Salzmann, Ch., Gillet, D., Scott, P., & Quick, K. (2008). Remote lab: Online support and awareness analysis. Proceedings of the 17th IFAC World Congress, Seoul, Korea, July 6-11, 2008.
Talbott, T., Peterson, M., Schwidder, J., & Myers, J. D. (2005). Adapting the electronic laboratory notebook for the semantic era. Proceedings of the 2005 International Symposium on Collaborative Technologies and Systems (pp. 136-143).
Wenger, E. (1999). Communities of practice: Learning, meaning, and identity. Cambridge, UK: Cambridge University Press.
Chapter XIX
Multimedia Authoring for Communities of Teachers Agnès Guerraz INRIA Rhône-Alpes, France Cécile Roisin INRIA Rhône-Alpes, France Jan Mikáč INRIA Rhône-Alpes, France Romain Deltour INRIA Rhône-Alpes, France
Abstract
One way of providing technological support for communities of teachers is to help participants to produce, structure and share information. As this information becomes more and more multimedia in nature, the challenge is to build multimedia authoring and publishing tools that meet the requirements of the community. In this paper we analyze these requirements and propose a multimedia authoring model and a generic platform on which specific community-oriented authoring tools can be realized. The main idea is to provide template-based authoring tools while keeping rich composition capabilities and smooth adaptability. The model is based on a component-oriented approach integrating logical, time and spatial structures homogeneously. Templates are defined as constraints on these structures.
Introduction
We are involved in a multidisciplinary project whose aim is to support the activities of communities of practice (CoPs) in pedagogical environments. This project will provide tools for document production and for document reuse in heterogeneous applications. The objective is to reduce the current limitations caused by the proliferation of data sources deploying a variety of modalities, information models, and encoding syntaxes. This will enhance the applicability and performance of document technologies within pedagogically consistent scenarios. In this paper, we focus on the authoring needs of teacher communities and propose a new authoring model, LimSee3. In the educational context, there exists a large variety of authoring tools; see (Brusilovsky, 2003) for an extensive review. The main objective of these systems is to provide adaptive educational hypermedia thanks to well-structured, hyperlinked content elements that are mostly static. In Hoffman and Herczeg (2006), the created documents are video-centric, providing a way to add timed hot-spots embedding additional media and interaction facilities in the resulting hypervideo. The time structure is therefore straightforwardly given by the video media, while the time model of our approach (given by the SMIL time model) is much more general. In our project, we want to provide educators with a way to take advantage of multimedia synchronization to offer more lively pedagogical material. But it is worth noting that multimedia brings a higher order of complexity for authors. In order to reduce this complexity, we propose a multimedia authoring model that provides authoring services similar to those of form-based hypermedia systems (Grigoriadou & Papanikolaou, 2006). The LimSee3 project aims at defining a document model dedicated to adaptive and evolutive multimedia authoring tools, for different categories of authors and applications, to easily generate
documents in standard formats. Our approach is to focus on the logical structure of the document while keeping some semantics of proven technologies such as SMIL (SMIL). This provides better modularity, facilitates the definition of document templates, and improves manipulation and reusability of content. The LimSee3 authoring process is shown in Figure 1: a document is created from a template by adding content in an application-guided way. The obtained LimSee3 document can be exported into one or several presentation documents suitable for rendering.

This paper is organized as follows: the next section presents a scenario example that will be developed throughout the paper and thereby analyzes CoP requirements for authoring multimedia documents. We then define the main concepts on which multimedia authoring tools are based, and we classify existing approaches in the light of these concepts. After that, we introduce the LimSee3 document model and show how it can be used for the development of authoring tools tuned for specific CoPs. The last section presents the current state of our development and our perspectives.
A LEARNING-ORIENTED EXAMPLE OF AUTHORING

Multimedia Storytelling for Enhanced Learning

Educators have integrated practice into their curricula to different degrees; Figure 2 shows this continuum and how LimSee3 can be naturally used to enhance the authoring of multimedia documents. Edward Bilodeau (2003) showed that moving towards full immersion requires substantial changes to course design. Careful consideration must be given to the optimal location on this continuum for student learning to occur. Using templates in the LimSee3 authoring tool supports the production process along this
Figure 1. The authoring process in LimSee3
Figure 2. Continuum of immersion into practice (Adapted from Hogan, 2002) and LimSee3 use
continuum. It gives teachers and writers a way of making things simpler and faster, focuses on pedagogical issues, and produces practical units of learning (UoL). Researchers such as Dolores Durkin (1961), Margaret Clark (1976), Regie Routman (1988; 1991), and Kathy Short (1995) have found evidence that children who are immersed in rich, authentic literary experiences become highly engaged in literature and develop literary awareness. Their studies revealed that positive and meaningful experiences with books and written language
play a critical role in the development of literacy skills. Other researchers have found that students acquired reading and thinking strategies in literature-based programs that included teacher-led comprehension instruction (Baumann, 1997; Block, 1993; Goldenberg, 1992/1993).
Storytelling in the Learning Process

Stories are basic: we start telling our children about the world by telling them stories. Stories
are memorable: their narrative structure and sequence make them easy to remember. "What happens next?" is a very basic form of interest and engagement. Stories are everywhere: very rarely in real life do we set out to convey ideas in terms of hierarchies, taxonomies, or bullet points. Instead, we tell stories. Teaching is one of the professional activities that habitually communicate by means of stories and that also use elaborated language codes. We want to go deeper into the multimedia process, taking advantage of creating multimedia in the immersion process. As an example, U-Create (Sauer, Osswald, Wielemans, & Stifter, 2006) is an authoring tool centered on 2-D and 3-D scenes, targeted at non-programmers who want to easily produce story documents. The tool is based on predefined structural elements (from story and scene to action, stageSet, and asset) and associated dedicated GUIs. In this paper we consider a group of teachers working together (a CoP in our terminology) to create and share course materials based on tale storytelling.
Tale Learning Example

Little Red Riding Hood Example

Little Red Riding Hood is a well-known and well-loved fairy tale dating back more than three centuries: it was first published by Charles Perrault in his Histoires ou Contes du temps passé in 1697 and is based on oral European tales. Since then, Little Red Riding Hood has been retold in a variety of forms and styles, as Big Books and lift-the-flap books, as poems and plays, and while some details may have changed, many of the essential elements have stayed the same. Little Red Riding Hood makes a great literature teaching unit theme for elementary school. A general synopsis follows:
Act I.1 At the edge of a forest, Little Red Riding Hood goes off to take a basket to her ill grandmother; her mother warns her not to dawdle in the woods or to talk to strangers.
Act I.2 A place inside the forest. Woodcutters can be heard chopping wood. Little Red Riding Hood comes out of some bushes. As she pauses to pick some flowers, the wolf catches sight of her. On the path he stops her and makes up a story about a shortcut to grandmother’s house. When he challenges her to see who will get there first, she agrees, and both of them run off in different directions as the woodcutters resume their work.
Act II.1 The chorus explains that the wolf has not eaten for three days and was able to get to grandmother's house first. The wolf, pretending to be Little Red Riding Hood, manages to get into the house and swallow grandmother. He takes her place in the bed before Little Red Riding Hood arrives. In several questions she expresses her surprise at how different grandmother looks now, and the wolf swallows her.
Act II.2 Some hunters and woodcutters, who have been tracking the wolf, come by and enter the house. They find the wolf asleep and open his belly to let grandmother and Little Red Riding Hood out. After they sew the wolf up again, he repents and is permitted to live in the forest as long as he lives up to his promise to be good.

In the learning process, it is possible to exploit this story through different approaches (Franc-parler.org, 2006), for instance:
Figure 3. LimSee3 template and document links
Story and the time: (a) after reading the tale and explaining difficult points and misunderstandings, work on the chronology from drawings; (b) try to build familiarity with the terms "before" and "later" along this time line.

Oral expression: (a) drawing images and explaining them; (b) playing dialogues without written support; (c) reciting a rhyme or poem, singing a song.

A variety of resources, ranging from texts, illustrations, and media presentations to computer-based interactive materials for students, are available for use in the classroom. Based on these materials, a teacher can propose:

• Story: tell the story, watch and comment on movies
• Songs: organize some spoken drill-type activities
• Handcraft activities: propose drawing, folding, coloring of sceneries, puppets, and so forth
• Play: study and put on stage a personalized version of the story
Basically, the units of learning exchanged in this CoP of teachers are multimedia story documents composed of sequences of story steps whose data elements are heterogeneous and multimedia. The challenges are to enrich information with the synchronization of data elements (for instance, an activity with the corresponding material) and to provide a document structure enabling knowledge sharing and reusability (of stories). The CoP of teachers needs templates for making things simpler and faster, for staying focused on pedagogical issues, and for producing practical units of learning. Figure 3 shows the structural link between LimSee3 template and LimSee3 document contents. At the lower level, a narrative part inside the template corresponds to text literature and/or illustration and/or audio storytelling. At a higher level, a template walkthrough corresponds to a sequence of screenshots. The first level is offered by the BNF (BNF, 2001), which, for instance, provides textual contents and illustrations. To fully instantiate upper levels, we show a possible making of the tale with a logic modeling [template
and document] from which we can extract levels to enhance associated authoring. From the continuum of immersion into practice, represented inside the rectangle of Figure 2, we learn that the greater the degree of authenticity of the learning activities, the more the students will be able to integrate into the practice. Different programs and courses benefit from different levels of immersion. Moving towards full immersion requires substantial changes to course design. Teachers need authoring tools to set up these types of pedagogical materials. Careful consideration must be given to the optimal location on this continuum for student learning to occur. In this CoP, a number of teachers will create templates to build this type of very specialized tool. Such modeling will naturally emerge from CoP work, using LimSee3 inside this continuum (see Figure 2).
Basic Requirements for CoP-Oriented Learning

In order to be useful, the cooperative services to be provided to the CoPs must have the two following basic features: (i) an authoring tool for stories dedicated to teachers; (ii) an access tool to read the existing stories. Looking more closely at the ways in which CoP participants produce multimedia information, we can identify some requirements for the authoring and presentation platform:

1. Simple and efficient authoring paradigms, because CoP members are not (always) computer science technicians.
2. Easy and rapid handling of the authoring tool, because new members can join CoPs.
3. Modular and reusable content, because multimedia information results from a co-construction process between members.
4. Evolutive structuring of documents, because of the dynamic nature of CoP objectives.
5. Use of standard formats, because CoPs need portability, an easy publishing process, and platform independence.
Basically, our approach proposes a template mechanism to cope with requirements 1 and 2, a component-based structuring enabling requirements 3 and 4, and relies on proven standard technologies to ensure the last requirement. Before presenting our authoring model in detail, the next section introduces the main concepts and approaches of multimedia authoring on which this work is based.
MULTIMEDIA DOCUMENTS AND MULTIMEDIA AUTHORING

In traditional text-oriented document systems, the communication mode is characterized by the spatial nature of information layout and the eye's ability to actively browse parts of the display. The reader is active while the rendering itself is passive. This active-passive role is reversed in audio-video communications: active information flows to a passive listener or viewer. As multimedia documents combine time, space, and interactivity, the reader is both active and passive. Such documents contain different types of elements, such as video, audio, still pictures, text, synthesized images, and so on, some of which have an intrinsic duration. The time schedule is defined by a time structure synchronizing these media elements. Interactivity is provided through hypermedia links that can be used to navigate inside the same document and/or between different documents. Due to this time dimension, building an authoring tool is a challenging task because the WYSIWYG paradigm, used for classical documents, is
Figure 4. Multiview authoring in LimSee2
not relevant anymore: it is not possible to specify a dynamic behavior and immediately see its result. In past years, numerous researchers have presented various ways of authoring multimedia scenarios, focusing on the understanding and the expressive power of synchronization between media components: approaches can be classified as absolute-based (Adobe, 2004), constraint-based (Buchanan & Zellweger, 1993; Jourdan et al., 1998), event-based (Sénac, Diaz, Léger, & de Saqui-Sannes, 1996), and hierarchical models (SMIL; Van Rossum, Jansen, Mullender, & Bulterman, 1993). Besides, to cope with the inherent complexity of this kind of authoring, several tools (Adobe, 2004; Microsoft, n.d.; Hua, Wang, & Li, 2005) have proposed limited but quite simple solutions for the same objective.
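To make the hierarchical family concrete, the following is a minimal standard SMIL fragment in which nested time containers define the synchronization; the media file names are invented for illustration:

```xml
<!-- Minimal SMIL example of hierarchical time composition.
     The seq container plays its children one after the other;
     the par container plays its children in parallel.
     Media file names are invented for this illustration. -->
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <body>
    <seq>
      <img src="title.png" dur="3s"/>
      <par>
        <video src="scene1.mpg"/>
        <audio src="narration1.mp3"/>
      </par>
    </seq>
  </body>
</smil>
```

Here the time structure is entirely determined by the nesting of containers, which is what hierarchical models exploit.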
Dedicated authoring, template-based authoring, and reduced synchronization features are the main techniques to provide reasonable editing facilities. But we can notice that these tools generally also provide scripting facilities to enrich the authoring capabilities and therefore lose some of their ease of use. Besides timelines, script languages, and templates, intermediate approaches have been proposed through "direct manipulation" and multiview interface paradigms. The IBM XMT authoring tool (IBM, n.d.) and SMIL tools such as LimSee2 (LimSee2) and GRiNS (Oratrix GRiNS) are good examples. In LimSee2, the time structure of SMIL is represented in a hierarchical timeline, as shown in Figure 4. Time bars can be moved or resized to finely author the timing
scenario. This kind of manipulation has proven very useful for efficiently manipulating the complex structures representing time in multimedia XML documents. However, even if XMT and SMIL are well-established languages, these tools are too complex for most users because they require a deep understanding of the semantics of the language (e.g., the SMIL timing model). Moreover, these models generally put the time structure at the heart of the document, whereas it does not always reflect the logical structure as the author conceives it. Our approach instead sets this logical dimension as the master structure of the document, which is a tree of modular components, each one specifying its own time and spatial structures. Additionally, the document can be constrained by a dedicated template mechanism. A template document is a kind of reusable document skeleton that provides a starting point to create document instances. Domain-specific template systems are a user-friendly authoring solution but require a dedicated, hardly extensible transformation process to output the rendering format. We chose, on the contrary, to tightly integrate the template syntax in the document: the template is itself a document constrained by a schema-like syntax. The continuum between template and document permits editing templates like any other document, within the same environment, and enables an evolutive authoring of document instances under the control of templates. There is no need to define a dedicated language for each different use case. We believe that the combination of document structuring and template definition will considerably help CoPs in (i) reusability of materials, (ii) optimization of the composition and life cycle of documents, (iii) development and transmission of knowledge, and (iv) drawing global communities together effectively.
THE LimSee3 AUTHORING LANGUAGE

Main Features

In the LimSee3 project, we define a structured authoring language independent of any publication language. Elements of the master structure are components that represent semantically significant objects. For instance, a folktale can be seen as a sequence of scenes. Each scene is composed of several media objects and describes a phase of the story (departure from home, encountering the wolf, ...). Components can be authored independently, integrated in the document structure, extracted for reusability, constrained by templates, or referenced by other components.

The different components of a multimedia document are often tightly related to one another: when they are synchronized or aligned in space, when one contains an interactive link to another, and so on. Our approach, which is close to the one proposed in Silva, Rodrigues, Soares, and Muchaluat Saade (2004), is for each component to abstract its dependencies on external components by giving them symbolic names. This abstraction layer facilitates the extraction of a component from its context, thus enhancing modularity and reusability.

Finally, the goal is to rely on existing proven technologies, in both contexts of authoring environments and multimedia representation. The timing and positioning models are wholly taken from SMIL. Using XML (XML, 2006) provides excellent structuring properties and enables the use of many related technologies. Among them are XPath (XPath, 1999), used to provide fine-grained access to components, and XSLT (XSLT, 1999), used in templates for structural transformation and content generation. The authoring language is twofold: it consists of a generic document model for the representa-
Figure 5. Example 1 - A simple scene LimSee3 document
Figure 6. Example 2 - A LimSee3 object with an external dependency relation
Figure 7. Example 3 - A scene template
Figure 8. The LimSee3 three-layer architecture
tion of multimedia documents, and it defines a dedicated syntax to represent templates for these documents. In this section, we describe the main features of the LimSee3 language and we illustrate their syntax with short excerpts of the storytelling example.
Document Model

A document is no more than a document element wrapping the root of the object hierarchy and a head element containing metadata. This greatly facilitates the insertion of the content of a docu-
Figure 9. Authoring with LimSee3
ment in a tree of objects, or the extraction of a document from a subtree of objects. A compound object is a tree structure composed of nested objects. Each compound object is defined by the object element with the type attribute set to compound. It contains a children element that lists child objects, a timing element that describes its timing scenario, a layout element that describes its spatial layout, and a related element that abstracts out dependencies on external objects. The value of the localId attribute uniquely identifies the component in the scope of its parent object, thereby also implicitly defining a global identifier id when combined with the localIds of its ancestors. In Example 1, the first child of object scene1 has the local id title and hence is globally identified as scene1.title. The timing model, and similarly the positioning model, is taken from SMIL 2.1. The timing
element defines a SMIL time container. The timing scenario of a component is obtained by composition of the timed inclusions defined by the timeRef elements, whose refId attributes are set to local ids of children. A media object is a simple object that wraps a media asset, i.e., an external resource (such as an image, a video, an audio track, or a text) referenced by its URI. It is defined by the object element with the type attribute set to one of text, image, audio, video, or animation. The URI of the wrapped media asset is put into the src attribute. Example 2 shows an image media object with local id right-button that wraps the media asset identified by the relative URI ./medias/right-arrow.png. Area objects, inspired by the SMIL area element, can be associated with media objects. They are used, for instance, to structure the content of a media object or to add a timed link to a media object. An area is defined as an object element
Figure 10. Instantiating a template document by drag-and-drop
with the type attribute set to area. For instance, in Example 2 the media object right-button has a child area, which defines a hyperlink. Relations of dependency between objects are described independently of their semantics in the document. External dependencies are declared with ref elements grouped inside the related child element of objects. The value of refId of a ref element is the id of the related element, and the value of localId is a symbolic name used within the object to refer to the related object. For instance, in Example 2, the object right-button provides a clickable image that links to the object story.scene2 by first declaring the relation in a ref element and then using this external object, locally named target, to set the value of the src attribute of the link, using attribute and value-of elements taken from XSLT.
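The XML listings of Examples 1 and 2 are not reproduced in this excerpt, so the following is a hedged reconstruction of what such a document fragment might look like. The element and attribute names (object, children, timing, timeRef, layout, related, ref, localId, refId, type, src) follow the text above; the concrete content, the link element, and the value-of expression are illustrative assumptions:

```xml
<!-- Hedged sketch of a compound scene containing a media object with
     an external dependency. The link element and the value-of select
     expression are assumed; the rest follows the names in the text. -->
<object localId="scene1" type="compound">
  <children>
    <!-- first child: globally identified as scene1.title -->
    <object localId="title" type="text" src="./medias/scene1-title.txt"/>
    <!-- media object with an external dependency relation -->
    <object localId="right-button" type="image"
            src="./medias/right-arrow.png">
      <related>
        <!-- "target" is a symbolic local name for story.scene2 -->
        <ref localId="target" refId="story.scene2"/>
      </related>
      <children>
        <!-- an area object adds a hyperlink to the wrapped image;
             attribute/value-of are borrowed from XSLT -->
        <object type="area">
          <link>
            <attribute name="src">
              <value-of select="$target"/>
            </attribute>
          </link>
        </object>
      </children>
    </object>
  </children>
  <!-- the timing scenario composes timed inclusions of the children;
       time container semantics come from SMIL 2.1 -->
  <timing type="par">
    <timeRef refId="title"/>
    <timeRef refId="right-button"/>
  </timing>
  <layout/>
</object>
```

Because the dependency is expressed through the symbolic name target, the right-button object can be extracted from this scene and reused elsewhere by simply re-binding the ref element.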
Templates

Template nodes aim at guiding and constraining the editing of the document. In order to have better control and easy GUI setup, the language includes two kinds of template nodes: media placeholders and repeatable structures. A placeholder is a template node that defines a reserved place for a media object. It is represented by an object element whose type and src attributes are not (yet) set. It specifies the kind of media resources it accepts in a special template:types attribute (the values can be text, img, audio, video, animation, or a list of these types). The author can also specify content that will be displayed to invite the user to edit the media zone, with the template:invite element (of any media type). For instance, Example 3 shows a media placeholder
Figure 11. Modifying the timeline
title for a text, with a textual invitation. During the authoring process, placeholders are filled with media objects inserted by the user. A repeatable structure, represented by the template:model element, is a possibly complex template node that defines a homogeneous list of objects. Each item of the list matches the model. The cardinality of the list can be specified with the min and max attributes. Example 3 shows a tale scene template named tale-scene: this complex model is composed of several placeholders (title, questions), an embedded model (illustrations), and the navigation object. Finally, our model makes it possible to lock parts of a document with the locked attribute, to prevent authors from editing these parts. This allows, for instance, more strongly guiding inexperienced users by restricting their access to only the parts of the document that make sense to them.
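Since the listing of Example 3 is omitted above, the following is a hedged sketch of what such a tale-scene template could look like. The names template:model, template:types, template:invite, min, max, and locked come from the text (including the invitation string "Fill in the title of this scene"); the overall arrangement is an assumption made for illustration:

```xml
<!-- Hedged sketch of a tale-scene template in the spirit of Example 3.
     Only the element/attribute names and the invitation text are taken
     from the chapter; the arrangement is illustrative. -->
<template:model localId="tale-scene" min="1">
  <!-- placeholder: an object with no type/src yet; accepts only text -->
  <object localId="title" template:types="text">
    <template:invite type="text">Fill in the title of this scene</template:invite>
  </object>
  <object localId="questions" template:types="text"/>
  <!-- embedded repeatable structure for the illustrations -->
  <template:model localId="illustrations" min="0" max="5">
    <object template:types="img video"/>
  </template:model>
  <!-- locked part that inexperienced authors cannot edit -->
  <object localId="navigation" type="compound" locked="true">
    <!-- navigation buttons, omitted -->
  </object>
</template:model>
```

A dedicated GUI can render each placeholder as a drop zone and each repeatable structure as an extensible list, which is exactly what the drag-and-drop authoring of Figure 10 relies on.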
AUTHORING WITH LimSee3

The LimSee3 model defined in the previous section is intended to be the internal kernel of our authoring system, as can be seen in Figure 8. This model is hidden from the user by several abstraction layers: experienced users interact with a full-featured generic platform (the LimSee3 core) that enables them to finely tune all document properties, though at the cost of some technical overhead, while basic users interact with template-specific simplified interfaces that
Figure 12. Synchronizing a video with a story document
allow them to author documents with less effort. In the case of a teachers' CoP that wants to create and share multimedia materials for tale telling, the benefits that could be achieved with LimSee3 are the following:

• To describe the author's vocabulary by structuring basic media into author-defined logical multimedia structures ("tale scenes" viewed as collections of "illustrations" and "narrations," rather than mere "pictures" and "texts")
• To adopt the author's vocabulary in the authoring process by leveraging the logical coherence between the document to produce and the way to produce it (the template structure reflects the logic of the presentation, not its technical needs)
• To facilitate document reuse in the CoP by easily extracting, adapting, and merging documents, and applying alternate layouts for different pedagogical purposes
Figure 9 shows the different steps of the production of tale tellings by the members of the teachers' CoP. First, an experienced author creates a tale story from scratch using the LimSee3 core platform (flow (1) in Figure 9) in order to define the logical structure of this pedagogical material, which will allow fruitful use in the classroom. This multimedia tale can then be refined thanks to input from the other teachers of the CoP. When a consensus is reached, this teacher can use the LimSee3 core to extract a template document from this instance (flow (2) in Figure 9). The main structure of the document, in this case a sequence
of scenes, can be constrained by template nodes such as repeatable structures. The result of this step is a dedicated authoring interface that other teachers can use (flow (3) in Figure 9) to create new multimedia tale stories. This is a typical example of participative design leading to the development of a dedicated tool based on the LimSee3 generic platform. Figures 10 and 11 illustrate this last step of authoring with a dedicated GUI:

• Figure 10 shows how the placeholders defined in the template structure can lead to simple drag-and-drop authoring actions.
• Figure 11 illustrates the advantages gained from the separation of logical, spatial, and time information. It allows the authoring and rendition of several scenarios of the same content: thanks to direct manipulation in the timeline view, an author has defined a sequential display of the illustrations instead of the default parallel one.
Finally, the proposed application can evolve to take into account new needs of the CoP members. For instance, a teacher may want to record his/her course, using a camera that films him/her while giving a talk illustrated with the multimedia tale document. In order to easily synchronize the video with the different parts of the tale document, the authoring tool is enriched with a simple control panel, as can be seen in the left part of Figure 12.
CONCLUSION

The LimSee3 model leads to the development of authoring tools that fit the requirements stated at the beginning of the paper. The LimSee3 core is currently under development as cross-platform, open-source Java software: we provide this generic platform with widgets to manipulate all the elements defined in the model (documents, compound
objects, timing and layout details, relations, ...). It provides features based on proven authoring paradigms such as multiple views, a timeline, a structure tree, and a 2-D canvas. The model presented in this paper develops a practice-based approach to multimedia authoring dedicated to communities where collaborative and participative design is of high importance. It improves reusability with template definitions and with the homogeneous structuring of documents. In the context of Palette, we will use this model to develop dedicated authoring tools for pedagogical CoPs.
REFERENCES

Adobe. (2004). Adobe Authorware 7 and Macromedia Director MX 2004. Retrieved from http://www.adobe.com/products/

Baumann, J. F., & Ivey, G. (1997). Delicate balances: Striving for curricular and instructional equilibrium in a second-grade, literature/strategy-based classroom. Reading Research Quarterly, 32(3), 244.

Bilodeau, E. (2003, November 7-8). Using communities of practice to enhance student learning. Paper presented at the EGSS Conference 2003, McGill University, Montreal.

Block, C. C. (1993). Strategy instruction in a literature-based reading program. Elementary School Journal, 94(2), 139-151.

BNF. (2001). Autour du Petit Chaperon rouge. Retrieved from http://expositions.bnf.fr/contes/pedago/chaperon

Brusilovsky, P. (2003). Developing adaptive educational hypermedia systems: From design models to authoring tools. In Authoring tools for advanced technology learning environments: Toward cost-effective adaptive, interactive and intelligent educational software. Kluwer Academic Publishers.
Buchanan, M. C., & Zellweger, P. T. (1993). Automatic temporal layout mechanisms. In ACM Multimedia (pp. 341-350).
Clark, M. (1976). Young fluent readers: What can they teach us? Heinemann.
LimSee2. (2003-2006). Retrieved from http://limsee2.gforge.inria.fr/
Durkin, D. (1961). Children who read before grade one. The Reading Teacher, 14, 163-166.
Microsoft. (n.d.). MS Producer for PowerPoint. Retrieved from http://www.microsoft.com/office/powerpoint/producer/prodinfo/
Franc-parler.org. (2006). La communauté mondiale des professeurs de français. Franc-parler.org, un site de l'Organisation internationale de la Francophonie. Retrieved from http://www.francparler.org/parcours/conte.htm

Goldenberg, C. (1992/1993). Instructional conversations: Promoting comprehension through discussion. The Reading Teacher, 46(4), 316-326.

Grigoriadou, M., & Papanikolaou, K. (2006). Authoring personalised interactive content. In Proceedings of the First International Workshop on Semantic Media Adaptation and Personalization (SMAP'06) (pp. 80-85).

Hoffmann, P., & Herczeg, M. (2006). Interactive narrative systems: Hypervideo vs. storytelling - integrating narrative intelligence into hypervideo (LNCS 4326, pp. 37-48).

Hogan, K. (2002). Pitfalls of community-based learning: How power dynamics limit adolescents' trajectories of growth and participation. Teachers College Record, 104(3), 586-624.

Hua, X., Wang, Z., & Li, S. (2005). LazyCut: Content-aware template-based video authoring. ACM Multimedia.

IBM. (n.d.). Authoring in XMT. Retrieved from http://www.research.ibm.com/mpeg4/Projects/AuthoringXMT/

Jourdan, M., Layaïda, N., Roisin, C., Sabry-Ismail, L., & Tardif, L. (1998, September). Madeus: An authoring environment for interactive multimedia documents. In ACM Multimedia (pp. 267-272). Bristol, UK.
Oratrix GRiNS. (n.d.). Retrieved from http://www.oratrix.com/

Palette. (n.d.). Retrieved from http://palette.ercim.org/

Routman, R. (1988). Transitions: From literature to literacy. Heinemann.

Routman, R. (1991). Invitations: Changing as teachers and learners K-12. Heinemann.

Sauer, S., Osswald, K., Wielemans, X., & Stifter, M. (2006). Story authoring - U-Create: Creative authoring tools for edutainment applications (LNCS 4326, pp. 163-168).

Sénac, P., Diaz, M., Léger, A., & de Saqui-Sannes, P. (1996). Modeling logical and temporal synchronization in hypermedia systems. IEEE Journal on Selected Areas in Communications, 14(1), 84-103.

Short, K. (1995). Research and professional resources in children's literature: Piecing a patchwork quilt. International Reading Association.

Silva, H., Rodrigues, R. F., Soares, L. F. G., & Muchaluat Saade, D. C. (2004). NCL 2.0: Integrating new concepts to XML modular languages. ACM DocEng.

SMIL. (n.d.). SMIL 2.1. Retrieved from http://www.w3.org/TR/SMIL2/

Van Rossum, G., Jansen, J., Mullender, K., & Bulterman, D. C. A. (1993). CMIFed: A presentation environment for portable hypermedia documents. ACM Multimedia.
XML. (2006). Extensible markup language (XML) 1.1. Retrieved from http://www.w3.org/TR/xml11
XPath. (1999). XML path language (XPath) 1.0. Retrieved from http://www.w3.org/TR/xpath XSLT. (1999). XSL transformations 1.0. Retrieved from http://www.w3.org/TR/xslt
Compilation of References
AACSB. (1999). Corporate universities emerge as pioneers in market-driven education. Newsline, Fall 1999.

Abbott, R. D., & Falstrom, P. M. (1975). Design of a Keller-plan course in elementary statistics. Psychological Reports, 36(1), 171-174.

Abernathy, D. J. (1998). The WWW of distance learning: Who does what and where? Training and Development, 52(1), 29-30.

Ackerman, M. S., Halverson, C. A., Erickson, Th., & Kellogg, W. A. (Eds.) (2008). Resources, co-evolution and artifacts: Theory in CSCW. Springer Series: Computer Supported Cooperative Work.

Ackerman, P. L. (1987). Individual differences in skill learning: An integration of psychometric and information processing perspectives. Psychological Bulletin, 102, 3-27.

Ackermann, F. (1996). Participants' perceptions on the role of facilitators using group decision. Group Decision and Negotiation, 5, 93-112.

ADA Accessibility Guidelines. Retrieved March 22, 2006, from http://www.usdoj.gov/crt/508/report2/standards.htm

Adobe. (2004). Adobe Authorware 7 and Macromedia Director MX 2004. Retrieved from http://www.adobe.com/products/

Agarwal, R., & Day, A. E. (1998). The impact of the Internet on economic education. Journal of Economic Education, Spring, 99-110.
Aggarwal, A. K., & Legon, R. (2006). Case study: Web-based education diffusion. International Journal of Web-Based Learning and Teaching Technologies, 1(1), 49-72. Ajzen, I., & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. Prentice Hall, Englewood Cliffs, NJ. Alavi, M. (1994). Computer-mediated collaborative learning: An empirical evaluation. MIS Quarterly, 18(2), 159-174. Alavi, M., Marakas, G.M., & Yoo, Y. (2002). A comparative study of distributed learning environments on learning outcomes. Information Systems Research, 13(4), 404-415. Alavi, M., Wheeler, B. C., & Valacich, J. S. (1995). Using IT to reengineer business education: An exploratory investigation of collaborative tele-learning. MIS Quarterly, 19(3), 293-312. Albanese, R., & Van Fleet, D.V. (1985). Rational behavior in groups: The free-riding tendency. Academy of Management Review, 10(2), 244-255. Al-Khaldi, M. A., & Al-Jabri, I. M. (1998). The relationship of attitudes to computer utilization: New evidence from a developing nation. Computers in Human Behavior, 14(1), 23-42. Allen, I. E., & Seaman, J. (2004). Entering the mainstream: The quality and extent of online education in the United States, 2003 and 2004. Needham, MA: Sloan-C. Retrieved December 4, 2005, from http://www.sloan-c.org/resources/entering_mainstream.pdf
Allen, I. E., & Seaman, J. (2007). Online nation. Sloan Consortium and Babson Survey Research Group. Allinson, C. W., & Hayes, J. (1990). The validity of the Learning Styles Questionnaire. Psychological Reports, 67, 859-866. Ala-Mutka, K., Uimonen, T., & Jarvinen, H-M. (2004). Supporting students in C++ programming courses with automatic program style assessment. Journal of Information Technology Education, 3(2) (Ed. Linda Knight). Information Sciences Institute, Santa Rosa, USA. American Council on Education (2003). Building a stronger higher education community: Connecting with our members. 2003 Annual Report. Retrieved November 15, 2007 from http://www.acenet.edu/AM/Template.cfm?Section=Search&section=annual_reports_past_&template=/CM/ContentDisplay.cfm&ContentFileID=1058 Anderson, P. (2007). What is Web 2.0? Ideas, technologies and implications for education. JISC TechWatch report. Available at: http://www.jisc.ac.uk/whatwedo/services/services_techwatch/techwatch/techwatch_ic_reports2005_published.aspx (Last accessed on 18th February 2008) Anderson, T. (2004). Toward a theory of online learning. In T. Anderson & F. Elloumi (Eds.), Theory and practice of online learning (chapter 2). Athabasca University. http://cde.athabascau.ca/online_book/index.html. Last accessed 15 April 2007. Anderson, T., Rourke, L., Garrison, D. R., & Archer, W. (2001). Assessing teaching presence in a computer conferencing context. Journal of Asynchronous Learning Networks, 5(2). Retrieved September 2005 from: http://www.aln.org/alnweb/journal/jaln-vol5issue2v2.htm Anson, R., Bostrom, R., & Wynne, B. (1995). An experiment assessing group support system and facilitator effects on meeting outcomes. Management Science, 41(2), 189-208. Arbaugh, J. B., & Hwang, A. (2006). Does “teaching presence” exist in online MBA courses? The Internet and Higher Education, 9(1), 9-21.
Ardichvili, A., Page, V., & Wentling, T. (2003). Motivation and barriers to participation in online knowledge-sharing communities of practice. Journal of Knowledge Management, 7(1), 64-77. Arina, T. (2007a). Serendipity 2.0: Missing third places of learning. Keynote at the EDEN 2007 Annual Conference “New Learning 2.0? Emerging Digital Territories, Developing Continuities, New Divides,” 13-16 June, 2007, Naples, Italy. Arina, T. (2007b). Serendipity 2.0: Missing third places of learning, posted by Arina June 23. Tarina. Retrieved on 20 December, 2007 from http://tarina.blogging.fi/2007/06/ Armstrong, D.L., & Cole, P. (1995). Managing distances and differences in geographically distributed work groups. In S. Jackson & M. Ruderman (Eds.), Diversity in work teams: Research paradigms for a changing workplace (pp. 187-215). Washington, DC: American Psychological Association. ASTD. (2006). State of the industry report (Online). Available at www.astd.org Ausburn, L. J., & Ausburn, F. B. (1978). Cognitive styles: Some information and implications for instructional design. Educational Communication and Technology, 26(4), 337-354. Austin, S.M., & Gilbert, K.E. (1973). Student performance in a Keller-plan course in introductory electricity and magnetism. American Journal of Physics, 41(1), 12-18. Baker, A. (2006). E-strategies for empowering learners. E-Portfolio Conference, Oxford, England, 11-13 October 2006. Retrieved on 20 December, 2007 from http://www.eife-l.org/news/ep2006 Baker, R. S., Boilen, M., Goodrich, M. T., Tamassia, R., & Stibel, B. A. (1999). Testers and visualizers for teaching data structures. In Proceedings of the ACM 30th SIGCSE Tech. Symposium on Computer Science Education, 261-265. Baklavas, G., Economides, A.A., & Roumeliotis, M. (1999). Evaluation and comparison of Web-based testing
tools. In Proceedings WebNet-99, World Conference on WWW and Internet, 81-86, AACE. Bandy, K.E., & Young, J.I. (2002). Assessing cognitive change in a computer-supported collaborative decision-making environment. Information Technology, Learning, and Performance Journal, 20(2), 11-23. Bannon, L., & Bodker, S. (1991). Beyond the interface: Encountering artifacts in use. In J. M. Carroll (Ed.), Designing interaction: Psychology at the human-computer interface (pp. 227-253). Cambridge: Cambridge University Press. Banzato, M. (2006). Blog e didattica. Dal web publishing alle comunità di blog per la classe in rete [Blogs and teaching: From web publishing to class blog communities]. TD Tecnologie Didattiche, 38(2). Barrett, H. (2004). My “online portfolio adventure”: Versions of my online portfolios developed using different systems or online publishing tools. Retrieved on 20 December, 2007 from http://electronicportfolios.org/myportfolio/versions.html Barrett, H. (2006). Authentic assessment with electronic portfolios using common software and Web 2.0 tools. Retrieved on 20 December, 2007 from http://electronicportfolios.org/web20.html Bauer, A. (2002). Using computers in the classroom to support the English language arts standards. Retrieved January 19, 2005 from http://eric.ed.gov Baumann, J. F., & Ivey, G. (1997). Delicate balances: Striving for curricular and instructional equilibrium in a second-grade, literature/strategy-based classroom. Reading Research Quarterly, 32(3), 244. Baxter, G., Elder, A., & Glaser, R. (1996). Knowledge-based cognition and performance assessment in the science classroom. Educational Psychologist, 31(2), 133-140. Baxter, H. (2007). An introduction to online communities. Retrieved on 13 July 2007 from http://www.providersedge.com/docs/km_articles/An_Introduction_to_Online_Communities.pdf
Beauclair, R.A. (1989). An experimental study of GDSS support application effectiveness. Journal of Information Science, 15(6), 321-332. Bechhofer, S., Horrocks, I., Goble, C., & Stevens, R. (2001). OilEd: A reason-able ontology editor for the Semantic Web. Proceedings of the Joint German/Austrian Conference: Advances in Artificial Intelligence (KI-01). Becker, H.J. (2001). Who’s wired and who’s not: Children’s access to and use of computer technology. The Future of Children and Computer Technology, 2(10), 44-75. Ben-Ari, M. (1998). Constructivism in computer science education. SIGCSE Bulletin, 30(1), 257-261. Benbasat, I., & Lim, L.H. (1993). The effects of group, task, context, and technology variables on the usefulness of group support systems. Small Group Research, 24. Benford, S. D., Burke, K. E., & Foxley, E. (1993). A system to teach programming in a quality controlled environment. The Software Quality Journal, 177-197. Benn, N., Shum, B.S., & Domingue, J. (2005). Integrating scholarly argumentation, texts and community: Towards an ontology and services. Tech Report KMI-05-5, http://kmi.open.ac.uk/publications/pdf/kmi-05-5.pdf Berners-Lee, T., Hendler, J., & Lassila, O. (2001, May). The Semantic Web: A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities. Scientific American, 284(5), 34-43. Bevan, N., Kirakowski, J., & Maissel, J. (1991, September). What is usability? In Proceedings of the 4th International Conference on HCI. Biggs, J. (2003). Teaching for quality learning at university: What the student does (2nd ed). Berkshire, SRHE & Open University Press. Biggs, J., Kember, D., & Leung, D.Y.P. (2001). The revised two-factor study process questionnaire: R-SPQ-2F. British Journal of Educational Psychology, 71(1), 133-149.
Bilodeau, E. (2003, November 7-8). Using communities of practice to enhance student learning. Paper presented at EGSS Conference 2003, McGill University, Montreal. Bissell, J., White, S., & Zivin, G. (1971). Sensory modalities in children’s learning. In G. S. Lesser (Ed.), Psychology and educational practice. Scott, Foresman, & Company, Glenview, IL. Blau, J.R., & Goodman, N. (Eds.) (1995). Social roles & social institutions. New Brunswick: Transaction Publishers. Block, C. C. (1993). Strategy instruction in a literature-based reading program. Elementary School Journal, 94(2), 139-151. Bloom, B.S., Englehart, M.D., Furst, E. J., Hill, W.H., & Krathwohl, D.R. (1956). A taxonomy of educational objectives: Handbook I. The cognitive domain. New York: McKay. Blumenstein, M., Green, S., Nguyen, A., & Muthukkumarasamy, V. (2004). An experimental analysis of GAME: A generic automated marking environment. In Proceedings of the 9th annual SIGCSE conference on Innovation and technology in computer science education, 67-71. BNF. (2001). Autour du Petit Chaperon rouge [Around Little Red Riding Hood]. Retrieved from http://expositions.bnf.fr/contes/pedago/chaperon Bonaiuti, G. (2006). Learning 2.0. Il futuro dell’apprendimento in rete tra formale e informale [Learning 2.0: The future of networked learning between formal and informal]. I quaderni di Form@re. Trento: Erickson. Bond-Raacke, J. (2006). Students’ attitudes towards introduction of course Web site. Journal of Instructional Psychology, 33(4), 251-255. Bosco, A. (2007). EVAINU research: New virtual learning environments for educational innovation at university. Journal of Cases on Information Technology, 9(2), 49-60. Boster, F.J., Meyer, G. S., Roberto, A.J., & Inge, C. C. (2002). A report on the effect of the unitedstreaming
application on educational performance. Farmville, VA: Longwood University. Boudreau, M.-C., & Robey, D. (2005). Enacting integrated information technology: A human agency perspective. Organization Science, 16(1), 3-18. Boulos, M., & Wheeler, S. (2007). The emerging Web 2.0 social software: An enabling suite of sociable technologies in health and health care education. Health Information and Libraries Journal, 24, 2-23. Boyatzis, R. E., & Kram, K. E. (1999). Reconstructing management education as lifelong learning. Selections, 16(1), 17-27. Boyle, A., & O’Hare, D. (2003). Finding appropriate methods to assure quality computer-based development in UK Higher Education. In Proceedings of the 7th computer-assisted assessment conference, 67-82, Loughborough University, United Kingdom. Braak, J. V. (2001). Factors influencing the use of computer mediated communication by teachers in secondary schools. Computers & Education, 36(1), 41-57. Bainbridge, W.L., Lasley, T.J., & Sundre, S.M. (2004). Policy initiatives to improve urban schools: An agenda. Retrieved on October 26, 2004 from http://www.schoolmatch.com/articles/SESjUNE03.htm Bransford, J., Brown, A., & Cocking, R. (1999). How people learn: Brain, mind, experience and school. National Academy of Sciences. http://www.nap.edu/html/howpeople1. Last accessed 15 April 2007. Braun, S., Schmidt, A., Walter, A., & Zacharias, V. (2007). The ontology maturing approach for collaborative and work integrated ontology development: Evaluation results and future directions. Proceedings of the International Workshop on Emergent Semantics and Ontology Evolution, 12 Nov. 2007, Bexco, Busan, Korea. Brinkman, W.-P., Haakma, R., & Bouwhuis, D.G. (2005). Empirical usability testing in a component-based environment: improving test efficiency with component-
specific usability measures. In R. Bastide, P. Palanque, and J. Roth (Eds.), Proceedings of EHCI-DSVIS 2004, Lecture Notes in Computer Science, 3425, 20-37. Berlin: Springer-Verlag.
Brusilovsky, P. (1999). Adaptive and intelligent technologies for Web-based education. In C. Rollinger and C. Peylo (Eds.), [Special issue]. Künstliche Intelligenz, 4, 19-25.
Brook, R.J., & Thomson, P.J. (1982). The evolution of a Keller plan service statistics course. Programmed Learning & Educational Technology, 19(2), 135-138.
Brusilovsky, P. (2003). Adaptive navigation support in educational hypermedia: The role of learner knowledge level and the case for meta-adaptation. British Journal of Educational Technology, 34(4), 487-497.
Brown, A. L., Bransford, J. D., Ferrara, R. A., & Campione, J. C. (1983). Learning, remembering, and understanding. In Handbook of child psychology: Cognitive development (pp. 77-166). Wiley. Brown, J. D. (1997). Computers in language testing: Present research and some future directions. Language Learning & Technology, 1(1), 44-59. Bruckman, A. (2002). The future of e-learning communities. Communications of the ACM, 45(4), 60-63. Brunet, P., Feigenbaum, B. A., Harris, K., Laws, C., Schwerdtfeger, R., & Weiss, L. (2005). Accessibility requirements for systems design to accommodate users with vision impairments. IBM Systems Journal, 44(3), 445-467. Frey, B. S., & Osterloh, M. (2002). Successful management by motivation: Balancing intrinsic and extrinsic incentives. Berlin Heidelberg: Springer-Verlag. Brusilovsky, P., & Eklund, J. (1998). A study of user model based link annotation in educational hypermedia. Journal of Universal Computer Science, 4(4), 429-448. Brusilovsky, P., & Vassileva, J. (2003). Course sequencing techniques for large-scale Web-based education. International Journal of Continuing Engineering Education and Lifelong Learning, 13(1-2), 75-94. Brusilovsky, P. (1995). Intelligent tutoring systems for World-Wide Web. In R. Holzapfel (Ed.), Poster proceedings of Third International WWW Conference, Darmstadt (pp. 42-45). Brusilovsky, P. (1996). Methods and techniques of adaptive hypermedia. User Modeling and User-Adapted Interaction, 6(2-3), 87-129.
Brusilovsky, P. (2003). Developing adaptive educational hypermedia systems: From design models to authoring tools. In Authoring tools for advanced technology learning environments: Toward cost-effective adaptive, interactive and intelligent educational software. Kluwer Academic Publishers. Brusilovsky, P., Eklund, J., & Schwarz, E. (1998). Web-based education for all: A tool for developing adaptive courseware. Computer Networks and ISDN Systems, 30(1-7), 291-300. Buchanan, M. C., & Zellweger, P. T. (1993). Automatic temporal layout mechanisms (pp. 341-350). ACM Multimedia. Burke, J. (1994). Education’s new challenge and choice: Instructional technology--Old byway or superhighway? Leadership Abstracts, 7(10), 22-39. Calvani, A. (2005). Reti, comunità e conoscenza: costruire e gestire dinamiche collaborative [Networks, communities and knowledge: Building and managing collaborative dynamics]. I quaderni di Form@re. Trento: Erickson. Carlson, R. D. (1994). Computer-adaptive testing: A shift in the evaluation paradigm. Journal of Educational Technology Systems, 22(3), 213-224. Carmel, E. (2006). Building your information systems from the other side of the world: How Infosys manages time zone differences. MISQ Executive, 5(1), 43-53. Carr, S. (2000). As distance education comes of age the challenge is keeping students. Chronicle of Higher Education, February 11. Carroll, N.L., & Calvo, R.A. (2005). Certified assessment artifacts for ePortfolios. Proceedings of the Third
International Conference on Information Technology and Applications, 2005, 2, 130-135. Cedefop Glossary (2000). In: Making learning visible. Cedefop, Thessaloniki. Chang, K. T., & Lim, J. (2006). The role of interface elements in Web-mediated interaction and group learning: Theoretical and empirical analysis. International Journal of Web-Based Learning and Teaching Technologies, 1(1), 1-28. Chau, P. Y. K. (2001). Influence of computer attitude and self-efficacy on IT usage behavior. Journal of End User Computing, 13(1), 26-33. Chen, E. K. Y. (1983). Multinational corporations and technology diffusion in Hong Kong manufacturing. Applied Economics, 15(3), 309-312. Chen, L. I., & Thielemann, J. (2001). Understanding the “digital divide”: The increasing technological disparity in America; implications for educators. In D. Willis & J. Price (Eds.), Technology and Teacher Education Annual - 2001. Charlottesville, VA: Association for Advancement of Computing in Education, 2685-2690. Chen, W., & Mizoguchi, R. (1999). Communication content ontology for learner model agent in multi-agent architecture. In Proc. AIED99 Workshop on Ontologies for Intelligent Educational Systems. Available online: http://www.ei.sanken.osaka-u.ac.jp/aied99/a-papers/WChen.pdf. Chute, A. G., Thompson, M. M., & Hancock, B. W. (1999). The McGraw-Hill handbook of distance learning. New York: McGraw-Hill. Cigognini, M.E., Mangione, G.R., & Pettenati, M.C. (2007b). E-learning design in (in)formal learning. TD41-Tecnologie Didattiche. Ortona: Menabò Edizioni. Retrieved on 20 December, 2007 from http://www.tdmagazine.itd.cnr.it/ Cigognini, M.E., Mangione, G.R., Pettenati, M.C., Fini, A., & Sartini, A. (2007a). Le social software pour la construction de la connaissance dans l’apprentissage
collaborative [Social software for knowledge construction in collaborative learning]. Journal International des Sciences de l’Information et de la Communication (ISDM), 29(499), TICE Méditerranée 2007. Retrieved on 20 December, 2007 from http://isdm.univ-tln.fr/PDF/isdm29/CIGOGNINI.pdf Cisco: http://www.cisco.com Cito Group: http://www.cito.nl/ Clariana, R. B. (1997). Considering learning style in computer-assisted learning. British Journal of Education Technology, 28(1), 66-68. Clark, M. (1976). Young fluent readers: What can they teach us? Heinemann. Clawson, V.K., Bostrom, R.P., & Anson, R. (1993). The role of the facilitator in computer-supported meetings. Small Group Research, 24(4), 547-565. Coates, D., & Humphreys, B.R. (2003). An inventory of learning at a distance in economics. Social Science Computer Review, 21(2), 196-207. Cohen, V. L. (1997). Learning styles in a technology-rich environment. Journal of Research on Computing in Education, 29(4), 339-350. Coley, R., Cradler, J., & Engel, P. (1997). Computers and classrooms: The status of technology in U.S. schools. Princeton, NJ: Educational Testing Service, Policy Information Center. Collier, P. (2001). A differentiated model of role identity acquisition. Symbolic Interaction, 24(2), 217-235. Colton, D., Fife, L., & Thompson, A. (2006). A Web-based automatic program grader. Information Systems Education Journal, 4(114). http://isedj.org/4/114/. ISSN: 1545-679X. (Also appears in The Proceedings of ISECON 2006: §3522. ISSN: 1542-7382.) CompTIA: http://www.comptia.org/certification/ Conklin, J. (2006). Dialogue mapping: Building shared understanding of wicked problems. John Wiley & Sons.
Conklin, J., & Begeman, M. (1989). gIBIS: A tool for all reasons. Journal of the American Society for Information Science, 40(3), 200-213. Conner, M. L. (2004). Informal learning. Ageless Learner. Retrieved on 20 December, 2007 from http://agelesslearner.com/intros/informal.html Connolly, T., Jessup, L.M., & Valacich, J.S. (1990). Effects of anonymity and evaluative tone on idea generation in computer-mediated groups. Management Science, 36, 97-120.
Cross, J. (2006). Informal learning for free-range learners. Internet Time Group LLC. Retrieved on 20 December, 2007 from http://www.jaycross.com/informal_book/nutshell.html Crowder, N. A. (1959). Automatic tutoring by means of intrinsic programming. In E. Galanter (Eds.), Automatic teaching: The state of the art. New York: Wiley. Cuban, L. (2001). Oversold and underused: Computers in the classroom. Cambridge, MA: Harvard University Press.
Coppola, N., Hiltz, S.R., & Rotter, N. (2004). Building trust in virtual teams. IEEE Transactions on Professional Communication, 47(2), 95-104.
Curry, L. (1991). Patterns of learning style across selected medical specialties. Educational Psychology, 11, 247-277.
Coppola, N.W., Hiltz, S.R., & Rotter, N.G. (2002). Becoming a virtual professor: pedagogical roles and asynchronous learning networks. Journal of Management Information Systems, 18(4), 169-189.
Daele, A., Erpicum, M., Esnault, L., Pironet, F., Platteaux, H., Vandeput, E., et al. (2006). An example of participatory design methodology in a project which aims at developing individual and organisational learning in communities of practice. Proceedings of the First European Conference on Technology Enhanced Learning (EC-TEL’06), Greece, 2006.
Cotton, K. (1991). Computer-assisted instruction. Retrieved June 11, 2003, from http://www.nwrel.org/scpd/sirs/5/cu10.html Cox, P. W., & Gall, B. G. (1981). Field dependence-independence and psychological differentiation. Educational Testing Service, Princeton. Cradler, R., & Cradler. (1999). Just in time: Technology innovation challenge grant year 2 evaluation report for Blackfoot School District No. 55. San Mateo, CA: Educational Support Systems. Craig, K.T., & Shepherd, M. (2001). Collaborative technology in the classroom: A review of the GSS research and a research framework. Information Technology and Management, 2(4), 395-418. Cranor, L., Langheinrich, M., Marchiori, M., Presler-Marshall, M., & Reagle, J. (2002). The platform for privacy preferences 1.0 (P3P1.0) specification. World Wide Web Consortium (W3C). http://www.w3.org/TR/P3P/ Creahan, T. A., & Hoge, B. (1998). Distance learning: Paradigm shift or pedagogical drift? Presentation at Fifth EDINEB Conference, September, 1998, Cleveland, Ohio.
Daft, R. L. (1978). A dual-core model of organizational innovation. Academy of Management Journal, 21(2), 193-210. Daft, R., & Lengel, R. (1986). Organizational information requirements, media richness, and structural design. Management Science, 32, 554-572. Damanpour, F. (1987). The adoption of technological, administrative, and ancillary innovations: Impact of organizational factors. Journal of Management, 13(4), 675. Damanpour, F. (1991). Organizational innovation: A meta-analysis of effects of determinants and moderators. Academy of Management Journal, 34(3), 555-590. Damanpour, F., & Gopalakrishnan, S. (1998). Theories of organizational structure and innovation adoption: The role of environmental change. Journal of Engineering and Technology Management, 15(1), 1-24.
Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340.
Deci, E. L., & Ryan, R. M. (1987). The support of autonomy and the control of behavior. Journal of Personality and Social Psychology, 53(6), 1024-1037.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1992). Extrinsic and intrinsic motivation to use computers in the workplace. Journal of Applied Social Psychology, 22(14), 1111-1132.
Deloitte Research. (2002). From e-learning to enterprise learning, becoming a strategic organization. Retrieved from www.dc.com/research
Davis, F.D., 1989, Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319-340. De Bra, P., Aerts, A., Berden, B., De Lange, B., Rousseau, B., Santic, T., Smits, D., & Stash, N. (2003). AHA! The Adaptive Hypermedia Architecture. In Proceedings of the ACM Hypertext Conference, Ottingham, UK. Retrieved July 22, 2007, from http://wwwis.win.tue. nl/~debra/ht03/pp401-debra.pdf De Bra, P., Aerts, A., Smits, D., & Stash, N. (2002). AHA! Version 2.0, More adaptation flexibility for authors. In Proceedings of the e-Learn --World Conference on ELearning in Corporate, Government, Healthcare, and Higher Education, Association for the Advancement of Computing in Education, (pp. 240-246). De Leenheer, P., & Meersman, R. (2007). Towards community-based evolution of knowledge-intensive systems. In Proceedings of Ontologies, Databases, and Applications of Semantics. De Moor A., & Van Den Heuvel W. (2004). Web service selection in virtual communities. Proceeding of the 37th Annual Hawaii International Conference on System Sciences (HICSS’04), Big Island, Hawaii, January 5-8, 2004. de Moor, A., & Aakhus, M. (2006). Argumentation support: from technologies to tools. Commun. ACM Vol. 49, No 3 (Mar. 2006), pp. 93-98. Debela, N. (2004). A closer look at distance learning from students’ perspective: a qualitative analysis of Web-based online courses. Journal of Systemics, Cybernetics and Informatics, 2(6).
Dennis, A., & Valacich, J. (1999). Rethinking media richness: Towards a theory of media synchronicity. In Proceedings of the 32nd Hawaii International Conference on System Sciences. Los Alamitos: IEEE Computer Society. Dennis, A., Wixom, B.H., & Tegarden, D. (2005). Systems analysis and design with UML version 2.0 (2nd ed.). John Wiley & Sons. Dennis, A.R., & Valacich, J.S. (1991). Electronic versus nominal group brainstorming. Working paper, University of Arizona, Tucson. Dennis, A.R. (1991). Parallelism, anonymity, structure and group size in electronic meetings. Unpublished doctoral dissertation, University of Arizona, Tucson. Dennis, A.R., & Kinney, S.T. (1998). Testing media richness theory in the new media: The effects of cues, feedback, and task equivocality. Information Systems Research, 9(3), 256-274. Dennis, A.R., & Wixom, B.H. (2002). Investigating the moderators of the group support systems use with meta-analysis. Journal of Management Information Systems, 18(3), 235-257. Dennis, A.R., George, J.E., Jessup, L.M., Nunamaker, J.F., & Vogel, D.R. (1988). Information technology to support electronic meetings. MIS Quarterly, 12(4), 591-624. Dennis, A.R., Tyran, C.K., Vogel, D.R., & Nunamaker, J.F. (1997). Group support systems for strategic planning. Journal of Management Information Systems, 14(1), 155-184.
Dennis, A.R., Valacich, J.S., & Nunamaker, J.F. (1990). An experimental investigation of the effects of group size in an electronic meeting environment. IEEE Transactions on Systems, Man, and Cybernetics, 20(5), 1049-1057. DeSanctis, G., & Gallupe, R.B. (1987). A foundation for the study of group decision support systems. Management Science, 33(5), 589-609. DeSanctis, G., & Poole, M.S. (1991). Understanding the differences in collaborative system use through appropriation analysis. In J.F. Nunamaker, Jr. (Ed.), Proceedings of the Twenty-Fourth Annual Hawaii International Conference on System Sciences, 3 (pp. 750-757). Los Alamitos, CA: IEEE Computer Society Press. Dewar, R. D., & Dutton, J. E. (1986). The adoption of radical and incremental innovations: An empirical analysis. Management Science, 32(11), 1422-1433. Dewey, J. (1933). How we think (rev. ed.). Boston: D.C. Heath. Dewey, J. (1938). Experience and education (7th printing, 1967). New York: Collier. Dickson, P. (2000, June). Understanding the trade winds: The global evolution of production, consumption, and the Internet. Journal of Consumer Research, 27, 115-122. Diehl, M., & Stroebe, W. (1987). Productivity loss in brainstorming groups: Towards the solution of a riddle. Journal of Personality and Social Psychology, 53(3), 497-509. Dillenbourg, P., Baker, M., Blaye, A., & O’Malley, C. (1996). The evolution of research on collaborative learning. In P. Reinmann & H. Spada (Eds.), Learning in humans and machines: Towards an interdisciplinary learning science (pp. 189-205). Oxford: Pergamon. Dillenbourg, P. (1999). What do you mean by collaborative learning? In P. Dillenbourg (Ed.), Collaborative learning: Cognitive and computational approaches. Oxford: Elsevier.
the 1st International Semantic Web Working Symposium (SWWS’01). Disability Rights Commission (2003). W3C argues with DRC Web accessibility findings. Retrieved April 5, 2006 from http://www.usabilitynews.com/news/article1664.asp Dividing lines (2001). Technology counts 2001: The new divides: Looking beneath the numbers to reveal digital inequities. Retrieved on October 26, 2004, from http://counts.edweek.org/sreports/tc01/tc01article.cfm?slug=35divideintro.h20 Dolog, P., & Schäfer, M. (2005). Learner modelling on the Semantic Web. Workshop on Personalisation on the Semantic Web (PerSWeb05), July 24-30, Edinburgh, UK. Domingue, J., Motta, E., Shum, S.B., Vargas-Vera, M., Kalfoglou, Y., & Farnes, N. (2001). Supporting ontology driven document enrichment within communities of practice. In Proceedings of the 1st International Conference on Knowledge Capture (K-CAP-01) (pp. 30-37). New York, USA: ACM Press. Douce, C., Livingstone, D., Orwell, J., Grindle, S., & Cobb, J. (2005). A technical perspective on ASAP: Automated system for assessment of programming. In Proceedings of the 9th International Conference on Computer Aided Assessment. Doucette, D. (1994). Transforming teaching and learning using information technology. Community College Journal, 65(2), 18-24. Dourish, P. (1992). Applying reflection to CSCW design. Position paper for Workshop “Reflection and Metalevel Architectures,” European Conference on Object-Oriented Programming, Utrecht, Netherlands, 1992. Retrieved May 1, 2007 from http://www.laputan.org/pub/utrecht/dourish.text Dourish, P., & Bellotti, V. (1992). Awareness and coordination in shared workspaces. Proceedings of the ACM
Conference on Computer Supported Cooperative Work (CSCW’92), Toronto, Canada, November 1992 (pp. 107-114).
Dunn, R. S., & Dunn, K. J. (1979). Learning styles/teaching styles: Should they... can they... be matched? Educational Leadership, 36, 238-244.
Dunn, R., & Griggs, S. (1998). Learning styles: Quiet revolution in American secondary schools, National Association of Secondary School Principals, Reston, VA.
Downes, S. (2005). E-learning 2.0. eLearn Magazine, 10 (October 2005). New York: ACM. Downs, G. W., & Mohr, L. B. (1976). Conceptual issues in the study of innovation. Administrative Science Quarterly, 21, 700-714.
Durkin, D. (1961). Children who read before grade one. The Reading Teacher, 14, 163-166. Eason, K. D., & Damodaran, L. (1981). The needs of the commercial users. In M. J. Coombs & J. L. Alty (Eds.), Computer skills and the user interface. Academic Press, New York, NY.
Drummond, R. J. (1987). Review of Learning Styles Inventory. In D. J. Keyser & R. C. Sweetland (Eds.), Test critiques. Kansas City, MO: Test Corporation of America, 308-312.
Economides, A.A. (2005a). Computer adaptive testing quality requirements. In Proceedings E-Learn 2005, World Conference on E-Learning in Corporate, Government, Healthcare, & Higher Education, 288-295, AACE.
Duggan, S., & Barich, S. (2001). The knowledge economy and corporate e-learning. CA: The Silicon Valley World Internet Centre.
Economides, A.A. (2005b). Personalized feedback in CAT. WSEAS Transactions on Advances in Engineering Education, 3(2), 174-181.
Duin, H., & Hansen, C. (1994). Reading and writing on computer networks as social construction and social interaction. In Selfe, C. & Hilligoss, S. (Eds.) Literacy and computers: The complications of teaching and learning with technology, (pp. 89-112). New York: The Modern Language Association.
Eduventures http://www.eduventures.com
Duncan, R. B. (1976). The ambidextrous organization: Designing dual structures for innovation. In R. H. Kilmann, L. R. Pondy, & D. P. Slevin (Eds.), The Management of Organization: Strategy and Implementation (Vol. 1, pp. 167-188). New York: North-Holland.

Dunkel, P. (1999). Considerations in developing or using second/foreign language proficiency computer-adaptive tests. Language Learning & Technology, 2(2), 77-93.

Dunn, R. (1996). How to implement and supervise a learning style program. Alexandria, VA: Association for Supervision and Curriculum Development.
Edyburn, D., Higgins, K., & Boone, R. (2005). Handbook of special education technology research and practice. Whitefish Bay, WI: Knowledge by Design, Inc.

Ekstrom, R. B., French, J. W., Harman, H. H., & Dermen, D. (1999). Manual for Kit of Factor-Referenced Cognitive Tests 1976. Princeton, NJ: Educational Testing Service.

El Helou, S., Gillet, D., Salzmann, Ch., & Rekik, Y. (2007). Feed-oriented awareness services for eLogbook mobile users. In Proceedings of the 2nd International Conference on Interactive Mobile and Computer Aided Learning (IMCL), Jordan, April 17-21, 2007.

Emck, J. H., & Ferguson-Hessler, M. G. M. (1981). A computer-managed Keller plan. Physics Education, 16(1), 46-49.
Compilation of References
Engelbrecht, J., & Harding, A. (2001). WWW mathematics at the University of Pretoria: The trial run. South African Journal of Science, 97(9-10), 368-370.
Ferrell, B. G. (1983). A factor analytic comparison of four learning-style instruments. Journal of Educational Psychology, 75, 33-39.
Englefield, P., Paddison, C., Tibbits, M., & Damani, I. (2005). A proposed architecture for integrating accessibility test tools. IBM Systems Journal, 44(3), 537-556.
Fichman, R. G. (2001). The role of aggregation in the measurement of it-related organizational innovation. MIS Quarterly, 25(4), 427-455.
e-Portfolio (2006). eStrategies for empowering learners. ePortfolio Conference, Oxford, England, 11-13 October 2006. Retrieved on 20 December, 2007 from http://www.eife-l.org/news/ep2006
Fichman, R. G. (2004). Going beyond the dominant paradigm for information technology innovation research: Emerging concepts and methods. Journal of the Association for Information Systems, 5(8), 314-355.
Eseryel, D., Ganesan, R., & Edmonds, G.S. (2002). Review of computer-supported collaborative work systems. Educational Technology & Society, 5(2), 130-136.
Finneran, K. (2000). Let them eat pixels. Issues in Science and Technology, 1(3), 1-4.
European Commission (2000). Memorandum on lifelong learning. Brussels: Commission of the European Communities. Retrieved on 20 December, 2007 from http://ec.europa.eu/education/policies/lll/life/memoen.pdf

Evangelou, C. E., Karacapilidis, N., & Tzagarakis, M. (2006). On the development of knowledge management services for collaborative decision making. Journal of Computers, 1(6), 19-28.

Evans, C., & Gibbons, N. J. (2007). The interactivity effect in multimedia learning. Computers & Education, 49, 1147-1160.

Fallows, J. (2006). Homo Conexus. Technology Review. Retrieved on 20 December, 2007 from http://www.technologyreview.com/read_article.aspx?id=17061&ch=infotech

Farkas, G. J., Nguyen Ngoc, A. V., & Gillet, D. (2005). The electronic laboratory journal: A collaborative and cooperative learning environment for Web-based experimentation. Computer Supported Cooperative Work (CSCW), 14(3), 189-216.

Farnham-Diggory, S. (1990). "Schooling": The developing child. Cambridge, MA: Harvard University Press.

FastTest Pro. http://www.assess.com/Software/FTP16Main.htm
Fischer, G., Lemke, A. C., McCall, R., & Morch, A. (1991). Making argumentation serve design. Human Computer Interaction, 6(3-4), 393-419.

Fjermestad, J. (1998). An integrated framework for group support system. Journal of Organizational Computing and Electronic Commerce, 8(2), 83-107.

Fjermestad, J., & Hiltz, S. R. (1998-1999). An assessment of group support systems experimental research: Methodology and results. Journal of Management Information Systems, 15(3), 7-149.

Fleming, N. D., & Bonwell, C. C. (2002). VARK pack Version 4.1.

Fleming, N. D., & Mills, C. (1992). Helping students understand how they learn. Madison, WI: Magma Publications.

Flouris, G., Manakanatas, D., Kondylakis, H., Plexousakis, D., & Antoniou, G. (2008). Ontology change: Classification and survey. Knowledge Engineering Review (KER), to appear.

Flouris, G. (2007). On the evolution of ontological signatures. In Proceedings of the Workshop on Ontology Evolution (OnE-07).

Foreman, J., & Widmayer, S. (2000). How online course management systems affect the course. Journal of Interactive Instruction Development, Fall, 16-19.
Franc-parler.org. (2006). La communauté mondiale des professeurs de français [The worldwide community of teachers of French]. Franc-parler.org, un site de l'Organisation internationale de la Francophonie. Retrieved from http://www.francparler.org/parcours/conte.htm

Gabel, T., Sure, Y., & Voelker, J. (2004). KAON - Ontology management infrastructure. SEKT informal deliverable, 3(1).

Gadzella, B. M. (1995). Differences in academic achievement as a function of scores on hemisphericity. Perceptual and Motor Skills, 81, 153-154.

Gallis, H., Kasbo, J. P., & Herstad, J. (2001). The multidevice paradigm in know-mobile - Does one size fit all? In S. Bjørnestad, R. E. Moe, A. I. Mørch, & A. L. Opdahl (Eds.), Proceedings of the 24th Information Systems Research Seminar in Scandinavia, 3, 491-504.

Gallupe, R. B., Dennis, A. R., Cooper, W. H., Valacich, J. S., Nunamaker Jr., J. F., & Bastianutti, L. (1992). Group size and electronic brainstorming. Academy of Management Journal, 35(2), 350-369.

Gao, T., & Lehman, J. D. (2003). The effects of different levels of interaction on the achievement and motivational perceptions of college students in a Web-based learning environment. Journal of Interactive Learning Research, 14(4), 367-387.

Garcia, O., Nagaragan, S. V., & Croll, P. (2003). Towards an automatic marking system for object-oriented programming education. In M. Hamza (Ed.), Proceedings of Software Engineering and Applications. Marina Del Rey, USA: Acta Press.

Gärdenfors, P. (1992). Belief revision: An introduction. In P. Gärdenfors (Ed.), Belief Revision (pp. 1-20). Cambridge University Press.
Garrison, D. R., & Archer, W. (2000). A transactional perspective on teaching-learning: A framework for adult and higher education. Oxford, UK: Pergamon.

Garrison, D. R. (2006). Online community of inquiry review: Understanding social, cognitive and teaching presence. Invited paper presented to the Sloan Consortium Asynchronous Learning Network Invitational Workshop, Baltimore, MD, August.

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence and computer conferencing in distance education. American Journal of Distance Education, 15(1), 7-23.

Garrison, D. R., & Cleveland-Innes, M. (2003). Critical factors in student satisfaction and success: Facilitating student role adjustment in online communities of inquiry. In J. Bourne & J. Moore (Eds.), Elements of Quality Online Education: Into the Mainstream, Vol. 4 in the Sloan-C Series (pp. 29-38). Needham, MA: Sloan-C.

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in higher education. Internet and Higher Education, 11(2), 1-14.

Garrison, R., & Cleveland-Innes, M. (2005). Facilitating cognitive presence in online learning: Interaction is not enough. American Journal of Distance Education, 19(3), 133-148.

Garrison, R., Cleveland-Innes, M., & Fung, T. (2004). Student role adjustment in online communities of inquiry: Model and instrument validation. Journal of Asynchronous Learning Networks, 8(2), 61-74. Retrieved December 2004 from http://www.sloan-c.org/publications/jaln/v8n2/pdf/v8n2_garrison.pdf
Garrison, D. R., & Anderson, T. (2003). E-Learning in the 21st Century: A framework for research and practice. London: Routledge/Falmer.
Georgiadou, E., Triantafillou, E., & Economides, A.A. (2006). Evaluation parameters for computer adaptive testing. British Journal of Educational Technology, 37(2), 261-278.
Garrison, D. R., & Arbaugh, J. B. (2007). Researching the community of inquiry framework: Review, issues, and future directions. Internet and Higher Education, 10(3), 157-172.
Georgiadou, E., Triantafillou, E., & Economides, A.A. (2007). A review of item exposure control strategies for computerised adaptive testing developed from 1983
to 2005. Journal of Technology, Learning, and Assessment, 5(8).

Georgouli, K. (2004). WASA: An intelligent agent for Web-based self-assessment. In Kinshuk, D. Sampson, & P. Isaias (Eds.), Cognition and Exploratory Learning in Digital Age (CELDA 2004) (pp. 43-50). Lisbon, December. ISBN: 972-98947-7-9.

Gephart, M., Marsick, V., Van Buren, M., & Spiro, M. (1996, December). Learning organizations come alive. Training & Development, 50(12), 34-45.

Gill, G. (2006). Asynchronous discussion groups: A use-based taxonomy with examples. Journal of Information Systems Education, 17(4), 373-383.

Gillet, D., El Helou, S., Rekik, Y., & Salzmann, Ch. (2007). Context-sensitive awareness services for communities of practice. In Proceedings of the 12th International Conference on Human-Computer Interaction (HCI 2007), Beijing, July 22-27, 2007.

Gillet, D., Man Yu, C., El Helou, S., Berastegui, A., Salzmann, Ch., & Rekik, Y. (2007). Tackling acceptability issues in communities of practice by providing a lightweight e-mail-based interface to eLogbook: A Web 2.0 collaborative activity and asset management system. In Proceedings of the 2nd International Workshop on Building Technology Enhanced Learning Solutions for Communities of Practice (TEL-CoPs'07), Crete, Greece, September 17, 2007.

Gillet, D., Nguyen Ngoc, A. V., & Rekik, Y. (2005). Collaborative Web-based experimentation in flexible engineering education. IEEE Transactions on Education, 48(4), 696-704.

Giouroglou, H., & Economides, A. (2004). State-of-the-art and adaptive open-closed items in adaptive foreign language assessment. In Proceedings of the 4th Hellenic Conference with International Participation: Information and Communication Technologies in Education, Athens, 747-756.

Giouroglou, H., & Economides, A. A. (2005). The development of the adaptive item language assessment (AILA)
for mixed-ability students. In Proceedings of E-Learn 2005 World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education (pp. 643-650). AACE.

GMAT. http://www.gmat.org, http://www.mba.com, http://www.gmat-mba-prep.com/, http://www.800score.com/gmat-home.html

Goldenberg, C. (1992/1993). Instructional conversations: Promoting comprehension through discussion. The Reading Teacher, 46(4), 316-326.

Gomez, E. A., Wu, D., Passerini, K., & Bieber, M. (2006). Introducing computer-supported team-based learning: Preliminary outcomes and learning impacts. In M. Khosrow-Pour (Ed.), Emerging trends and challenges in information technology management: Proceedings of the 2006 Information Resources Management Association Conference (pp. 603-606). Hershey, PA: Idea Group Publishing.

Gomez, E. A., & Bieber, M. (2005). Towards active team-based learning: An instructional strategy. In Proceedings of the Eleventh Americas Conference on Information Systems (AMCIS) (pp. 728-734). Omaha, NE.

Gomez, E. A., Wu, D., Passerini, K., & Bieber, M. (2006, April 19-21). Computer-supported learning strategies: An implementation and assessment framework for team-based learning. Paper presented at the ISOneWorld Conference, Las Vegas, NV.

Gopal, A., Bostrom, R., & Chin, W. (1993). Applying adaptive structuration theory to investigate the process of group support systems use. Journal of Management Information Systems, 9(3), 45-69.

Gordon, T. F., & Karacapilidis, N. (1997). The Zeno argumentation framework. In Proceedings of the 6th International Conference on Artificial Intelligence and Law. New York: ACM Press.

Gray, B. (2004). Informal learning in an online community of practice. Journal of Distance Education, 19(1), 20-35.
GRE. http://www.ets.org, http://www.800score.com/greindex.html

Green, B. A. (1971). Physics teaching by the Keller plan at MIT. American Journal of Physics, 39(7), 764-775.

Green, S. M., Voegeli, D., Harrison, M., Phillips, J., Knowles, J., Weaver, M., et al. (2003). Evaluating the use of streaming video to support student learning in a first-year life sciences course for student nurses. Nurse Education Today, 23(4), 255-261.
Guskin, A. E. (1994). Reducing student costs & enhancing student learning, Part II: Restructuring the role of faculty. Change, 26(5), 16-25.

Gustafsson, A., Ekdahl, F., Falk, K., & Johnson, M. (2000). Linking customer satisfaction to product design: A key to success for Volvo. Quality Management Journal, 7(1), 27-38.

Guttentag, S., & Eilers, S. (2004). Roofs or RAM? Technology in urban schools. Retrieved on October 26, 2004, from http://www.horizonmag.com/4/roofram.htm
Grigoriadou, M., & Papanikolaou, K. (2006). Authoring personalised interactive content. In Proceedings of First International Workshop on Semantic Media Adaptation and Personalization (SMAP’06), (pp. 80-85).
Hambleton, R. K., Zaal, J. N., & Pieters, J. P. (2000). Computerized adaptive testing: Theory, applications, and standards. Boston, MA: Kluwer.
Grossman, L. (2006). Time's Person of the Year: You. Time, Wednesday, Dec. 13, 2006. Retrieved on 20 December, 2007 from http://www.time.com/time/
Hackman, J.R. (1968). Effects of task characteristics on group products. Journal of Experimental Social Psychology, 4, 162-187.
Gruber, T. R. (1993). A translation approach to portable ontology specifications. Available at: http://ksl-web.stanford.edu/KSL_Abstracts/KSL-92-71.html
Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate Data Analysis, Prentice-Hall, Inc., Upper Saddle River, New Jersey.
Grudin, J. (1991). Obstacles to user involvement in software product development, with implications for CSCW. International Journal of Man-Machine Studies, 34(3), 435-452.
Halasz, F. (1988). Reflections on note cards: Seven issues for the next generation of hypermedia systems. Communications of the ACM, 31(7), 836-852.
Guangzuo, C. (2004). OntoEdu: Ontology-based education grid system for e-learning. GCCCE 2004, Hong Kong.

Guiller, J., & Durndell, A. (in press). Students' linguistic behaviour in online discussion groups: Does gender matter? Computers in Human Behavior.

Guizzardi, G., Ferreira Pires, L., & van Sinderen, M. (2005). Ontology-based evaluation and design of domain-specific visual modeling languages. In Proceedings of the 14th International Conference on Information Systems Development, Karlstad, Sweden.

Gunawardena, C. N., & Boverie, P. E. (1992). Impact of learning styles on instructional design for distance education. Paper presented at the World Conference of the International Council of Distance Education, Bangkok, Thailand.
Hall, D. J., Cegielski, C. G., & Wade, J. N. (2006). Theoretical value belief, cognitive ability, and personality as predictors of student performance in object-oriented programming environments. Decision Sciences Journal of Innovative Education, 4(2), 237-257.

Hambleton, I. R., Foster, W. H., & Richardson, J. T. E. (1998). Improving student learning using the personalised system of instruction. Higher Education, 35(2), 187-203.

Hamid, A. A. (2001). E-learning: Is it the 'E' or the learning that matters? The Internet and Higher Education, 4(3-4), 311-316.

Hare, A. P. (1981). Group size. American Behavioral Scientist, 24, 695-708.

Harreld, J. B. (1998). Building faster, smarter organizations. In D. Tapscott, A. Lowy & N. Klym (Eds.),
Blueprint to the digital economy: Creating wealth in the era of e-business. New York: McGraw Hill.

Harrison, A. W., & Rainer, R. K. J. (1992). The influence of individual differences on skill in end-user computing. Journal of Management Information Systems, 9(1), 93-111.

Haythornthwaite, C., Kazmer, M. M., Robins, J., & Shoemaker, S. (2000). Community development among distance learners. Journal of Computer-Mediated Communication, 6(1).

Hearn, G., & Scott, D. (1998). Students staying home. Futures, 30(7), 731-737.

Helmick, M. T. (2007). Interface-based programming assignments and automatic grading of Java programs. In Proceedings of the 12th Annual SIGCSE Conference on Innovation and Technology in Computer Science Education (ITiCSE 2007), Dundee, Scotland, UK, June 25-27, 2007 (pp. 63-67). ACM. ISBN 978-1-59593-610-3.

Hereford, S. M. (1979). The Keller plan within a conventional academic environment: An empirical 'meta-analytic' study. Engineering Education, 70(3), 250-260.

Hevner, A. R., March, S. T., Park, J., & Ram, S. (2004). Design science in information systems research. MIS Quarterly, 28(1), 75-105.

Hew, K. F., & Cheung, W. S. (2007). Attracting student participation in asynchronous online discussions: A case study of peer facilitation. Computers & Education (doi:10.1016/j.compedu.2007.11.002).

Hewitt, J., & Brett, C. (2007). The relationship between class size and online activity patterns in asynchronous computer conferencing environments. Computers & Education, 49, 1258-1271.

Higgins, C., Symeonidis, P., & Tsintsifas, A. (2001). The marking system for CourseMaster. In Proceedings of the 6th Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE), 46-50.
Hiltz, S. R., & Turoff, M. (1993). The network nation: Human communication via computer. Cambridge, MA: The MIT Press.

Hiltz, S. R., & Turoff, M. (2002). What makes learning networks effective? Communications of the ACM, 45(4), 56-59.

Hiltz, S. R. (1995). Teaching in a virtual classroom. International Journal of Educational Telecommunications, 1(2), 185-198.

Hoadley, C. M., & Kilner, P. G. (2005). Using technology to transform communities of practice into knowledge-building communities. SIGGROUP Bulletin, 25(1), 31-40.

Hoffman, D., & Novak, T. (1996). Marketing in hypermedia computer-mediated environments: Conceptual foundations. Journal of Marketing, 60(3), 50-68.

Hoffmann, P., & Herczeg, M. (2006). Interactive narrative systems: Hypervideo vs. storytelling - integrating narrative intelligence into hypervideo (LNCS 4326, pp. 37-48).

Hogan, D., & Kwiatkowski, R. (1998). Emotional aspects of large group teaching. Human Relations, 51(11), 1403-1417.

Hogan, K. (2002). Pitfalls of community-based learning: How power dynamics limit adolescents' trajectories of growth and participation. Teachers College Record, 104(3), 586-624.

Holliday, W. G. (1976). Teaching verbal chains using flow diagrams and text. AV Communications Review, 24(1), 63-78.

Hoskins, S. L., & van Hooff, J. C. (2005). Motivation and ability: Which students use online learning and what influence does it have on their achievement? British Journal of Educational Technology, 36(2), 177-192.

Howell, S. L., Williams, P. B., & Lindsay, N. K. (2003). Thirty-two trends affecting distance education: An informed foundation for strategic planning. Online Journal of Distance Learning Administration, 6(3).
http://www.westga.edu/~distance/ojdla/fall63/howell63.html (last accessed January 26, 2008)

http://www.topshareware.com/Cisco-Practice-Tests-from-Boson-download-10944.htm

Hua, X., Wang, Z., & Li, S. (2005). LazyCut: Content-aware template-based video authoring. ACM Multimedia.

Huang, W., Luce, T., & Lu, Y. (2005). Virtual team learning in online MBA education: An empirical investigation. Issues in Information Systems, VI(1).

Huber, G. (1984). Issues in the design of group decision support systems. MIS Quarterly, 8(3), 195-204.

Huber, G. P. (1990). A theory of the effects of advanced information technologies on organizational design, intelligence, and decision making. Academy of Management Review, 15, 47-71.

Hull, F., & Hage, J. (1982). Organizing for innovation: Beyond Burns and Stalker's organic type. Sociology, 16(4), 564-577.

Hunter, J. E. (1986). Cognitive ability, cognitive aptitude, job knowledge, and job performance. Journal of Vocational Behavior, 29, 340-362.

Hunter, J., & Schmidt, F. (1990). Methods of meta-analysis: Correcting error and bias in research findings. Beverly Hills, CA: Sage.

Huynh, M. Q., Umesh, U. N., & Valacich, J. S. (2003). E-learning as an emerging entrepreneurial enterprise in universities and firms. Communications of the Association for Information Systems, 12, 48-68.

Hwang, Y., & Yi, M. Y. (2003). Predicting the use of web-based information systems: Self-efficacy, enjoyment, learning goal orientation, and the technology acceptance model. International Journal of Human-Computer Studies, 59(4), 431-449.

IBM. (n.d.). Authoring in XMT. Retrieved from http://www.research.ibm.com/mpeg4/Projects/AuthoringXMT/
Ice, P., Arbaugh, B., Diaz, S., Garrison, D. R., Richardson, J., Shea, P., & Swan, K. (2007). Community of Inquiry framework: Validation and instrument development. The 13th Annual Sloan-C International Conference on Online Learning, Orlando, November.

Igbaria, M., & Iivari, J. (1995). The effects of self-efficacy on computer usage. OMEGA International Journal of Management Science, 23(6), 587-605.

IMS LIP (2001). IMS learner information package specification. The Global Learning Consortium. Available online: http://www.imsglobal.org/profiles/index.html

Web Accessibility Initiative (WAI). Available online at http://www.w3.org/WAI/

Institute for Higher Education Policy. (2000). Quality on the line: Benchmarks for success in Internet distance education. Washington, D.C.

International Society for Technology in Education (2004). Available at: http://www.iste.org/standardsl

Isabella, L., & Waddock, S. (1994). Top management team certainty: Environmental assessments, teamwork, and performance implications. Journal of Management, Winter.

Iskold, A. (2007). The evolution of personal publishing. Read Write Web (post and comment, 11 December 2007). Retrieved on 20 December, 2007 from http://www.readwriteweb.com/archives/the_evolution_of_personal_publ.php

Issroff, K., & Scanlon, E. (2002). Using technology in higher education: An activity theory perspective. Journal of Computer Assisted Learning, 18, 77-83.

Jackson, D., & Usher, M. (1997). Grading student programming using ASSYST. In Proceedings of the 28th ACM SIGCSE Technical Symposium on Computer Science Education, pages 335-339.

Jacobson, R. L. (1993). New computer technique seen producing a revolution in educational testing. Chronicle of Higher Education, 40(4), pp. 22-23.
James, J. (1951). A preliminary study of size determinant in small group interaction. American Sociological Review, 16, 474-477.
Jourdan, M., et al. (1998). Madeus, an authoring environment for interactive multimedia documents. ACM Multimedia.
James, W. B., & Blank, W. E. (1993). Review and critique of available learning-style instruments for adults. New Directions for Adult and Continuing Education, 39(Fall).
Jourdan, M., Layaïda, N., Roisin, C., Sabry-Ismail, L., & Tardif, L. (1998, September). Madeus: An authoring environment for interactive multimedia documents. In ACM Multimedia, Bristol, UK, September 1998, pp. 267-272.
Jarvenpaa, S., & Todd, P. (1997). Consumer reactions to electronic shopping on the World Wide Web. International Journal of Electronic Commerce, 1(2), 59-88.

Jiang, M., & Ting, E. (2000). A study of factors influencing students' perceived learning in a Web-based course environment. International Journal of Educational Telecommunications, 6(4), 317-338.

Johnson, D., & Johnson, R. (1999). What makes cooperative learning work. Japan Association for Language Teaching, pp. 23-36.

Johnson, D. W., Johnson, R. T., & Smith, K. A. (1991). Cooperative learning: Increasing college faculty instructional productivity. ASHE-ERIC Higher Education Report No.

Johnson, G. M. (2005). Student alienation, academic achievement, and WebCT use. Educational Technology & Society, 8(2), 179-189.

Johnson, R. S. (2002). Using data to close the achievement gap: How to measure equity in our schools. Thousand Oaks, CA: Corwin Press.

Jonassen, D. H., & Carr, C. S. (2000). Mindtools: Affording multiple representations for learning. In S. P. Lajoie (Ed.), Computers as cognitive tools II: No more walls: Theory change, paradigm shifts and their influence on the use of computers for instructional purposes (pp. 165-196). Mahwah, NJ: Erlbaum.

Jones, G. H., & Jones, B. H. (2005). A comparison of teacher and student attitudes concerning use and effectiveness of web-based course management software. Educational Technology & Society, 8(2), 125-135.
Juedes, D. W. (2003). Experiences in Web-based grading. In 33rd ASEE/IEEE Frontiers in Education Conference.

Kalyanpur, A., Parsia, B., Sirin, E., Cuenca-Grau, B., & Hendler, J. (2005). Swoop: A 'Web' ontology editing browser. Journal of Web Semantics, 4(2), 144-153.

Kanwar, M., & Swenson, D. (2000). Canadian sociology. Iowa: Kendall/Hunt Publishing Company.

Kapsalis, A. G. (2004). Pedagogic psychology (3rd ed.). Kiriakidis S.A.

Kaput, J. (1992). Technology and mathematics education. In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 515-556). New York, NY: Macmillan Publishing Company.

Karacapilidis, N., & Papadias, D. (2001). Computer supported argumentation and collaborative decision making: The HERMES system. Information Systems, 26(4), 259-277.

Karacapilidis, N., Loukis, E., & Dimopoulos, S. (2005). Computer-supported G2G collaboration for public policy and decision making. Journal of Enterprise Information Management, 18(5), 602-624.

Kariya, S. (2003). Online education expands and evolves. IEEE Spectrum, 40(5), 49-51.

Karvounarakis, G., Magkanaraki, A., Alexaki, S., Christophides, V., Plexousakis, D., Scholl, M., & Tolle, K. (2004). RQL: A functional query language for RDF. In The Functional Approach to Data Management: Modelling, Analyzing and Integrating Heterogeneous Data,
(pp. 435-465). P. M. D. Gray, L. Kerschberg, P. J. H. King, & A. Poulovassilis (Eds.), LNCS Series, Springer-Verlag.
characteristics of language production. The Modern Language Journal, 79, 457-476.
Katz, D., & Kahn, R. (1978). The social psychology of organizations. New York: Wiley.
Kerr, N. L., & Bruun, S. E. (1981). Alternative explanations for the social loafing effect. Personality and Social Psychology Bulletin, 7, 224-231.
Kay, J., Kummerfeld, B., & Lauder, P. (2002). Personis: A server for user modelling. In Proceedings of the 2nd International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems (AH'2002), 201-212.

Kay, J. (2001). Learner control. User Modeling and User-Adapted Interaction, 11, 111-127.

Kear, K. (2004). Peer learning using asynchronous discussion systems in distance education. Open Learning, 19(2), 151-164.

Kear, K., & Heap, N. (1999). Technology-supported group work in distance learning. Active Learning, 10, 21-26.

Keefe, J. W. (1985). Assessment of learning style variables: The NASSP task force model. Theory Into Practice, 24, 138-144.

Keefe, J. W. (1987). Learning style theory and practice. Reston, VA: National Association of Secondary School Principals.

Keefe, J. W., & Monk, J. S. (1990). Learning style profile examiner's manual. Reston, VA: National Association of Secondary School Principals.

Keller, F. S. (1968). "Good-bye, teacher ..." Journal of Applied Behavior Analysis, 1(1), 79-89.

Keller, F. S., & Sherman, J. G. (1974). The Keller plan handbook: Essays on personalized system of instruction. Menlo Park, CA: W. A. Benjamin.

Kendall, D., Murray, J., & Linden, R. (2000). Sociology in our times (2nd ed.). Ontario: Nelson Thompson Learning.

Kerlinger, F. N. (1986). Foundations of behavioral research (3rd ed.). Holt, Rinehart & Winston.

Kern, R. (1995). Restructuring classroom interaction with networked computers: Effects on quantity and
Kerres, M. (2006). Micro-learning as a challenge for instructional design. In T. Hug & M. Lindner (Eds.), Didactics of Microlearning. Muenster: Waxmann.

Khalifa, H., Davison, R., & Kwok, R. C.-W. (2002). The effects of process and content facilitation restrictiveness on GSS-mediated collaborative learning. Group Decision and Negotiation, 11(5), 345-361.

Khalifa, M., & Kwok, R. C.-W. (1999). Remote learning technologies: Effectiveness of hypertext and GSS. Decision Support Systems, 26, 195-207.

Kimmel, P. D., Weygandt, J. J., & Kieso, D. E. (2004). Financial accounting: Tools for business decision making. New York, NY: John Wiley & Sons, Inc.

Kinsner, W., & Pear, J. J. (1988). Computer-aided personalized system of instruction for the virtual classroom. Canadian Journal of Educational Communication, 17(1), 21-36.

Kirkman, B. L., Rosen, B., Gibson, C. B., Tesluk, P. E., & McPherson, S. (2002). Five challenges to virtual team success: Lessons from Sabre, Inc. The Academy of Management Executive, 16(3).

Kirschner, P., Buckingham Shum, S., & Carr, C. (2003). Visualizing argumentation: Software tools for collaborative and educational sense-making. London, UK: Springer Verlag.

Kiser, K. (1999). 10 things we know so far about online training. Training, 36(11), 66-74.

Klausmeier, H. J., & Loughlin, L. J. (1961). Behaviors during problem solving among children of low, average,
and high intelligence. Journal of Educational Psychology, 52, 148-152.

Kline, P. (1993). The handbook of psychological testing. London: Routledge.

Knoop, R. (1994). Work values and job satisfaction. Journal of Psychology, 128(6), 683.

Knuttila, M. (2002). Introducing sociology: A critical perspective. Don Mills, Ontario: Oxford University Press.

Koedinger, K., & Anderson, J. (1999). PUMP algebra project: AI and high school math. Pittsburgh, PA: Carnegie Mellon University, Human Computer Interaction Institute. Retrieved February 24, 2003, from http://act.psy.cmu.edu/awpt/awpt-home.html

Koen, B. V. (2005). Creating a sense of "presence" in a Web-based PSI course: The search for Mark Hopkins' log in a digital world. IEEE Transactions on Education, 48(4), 599-604.

Kolb, D. A., Rubin, I. M., & McIntyre, J. M. (1979). Organizational psychology: An experiential approach (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.

Konstantinidis, G., Flouris, G., Antoniou, G., & Christophides, V. (2007). Ontology evolution: A framework and its application to RDF. In Proceedings of the Joint ODBIS & SWDB Workshop on Semantic Web, Ontologies, Databases (SWDB-ODBIS-07).

Kopp, S. F. (2000). The role of self-esteem. LukeNotes, 4(2). Retrieved September, 2005 from http://www.sli.org/page80.html

Koschmann, T. (1996). Paradigm shifts and instructional technology: An introduction. In T. Koschmann (Ed.), CSCL: Theory and practice of an emerging paradigm. Mahwah, NJ: Lawrence Erlbaum.

Kozaki, K., Sunagawa, E., Kitamura, Y., & Mizoguchi, R. (2007). A framework for cooperative ontology construction based on dependency management of modules. In Proceedings of the International Workshop on Emergent Semantics and Ontology Evolution, 12 Nov. 2007, Bexco, Busan, Korea.
Kraemer, E. W. (2003). Developing the online learning environment: The pros and cons of using WebCT for library instruction. Information Technology and Libraries, 22(2), 87-92.

Ktoridou, D., Zarpetea, P., & Yiangou, E. (2002). Integrating technology in EFL. Retrieved November 22, 2004 from the World Wide Web: http://www.uncwil.edu/cte/et/articles/Ktoridou3/

Kuhl, D. (2002). Investigating online learning communities. U.S. Department of Education Office of Educational Research and Improvement (OERI).

Kulik, C.-L. C., Kulik, J. A., & Shwalb, B. J. (1986). The effectiveness of computer-based adult education: A meta-analysis. Journal of Educational Computing Research, 2(2), 235-252.

Kulik, J. A., Kulik, C.-L. C., & Cohen, P. A. (1980). Effectiveness of computer-based college teaching: A meta-analysis of findings. Review of Educational Research, 50(4), 525-544.

Kulik, J. A., Kulik, C.-L. C., & Cohen, P. A. (1979). A meta-analysis of outcome studies of Keller's personalized system of instruction. American Psychologist, 34(4), 307-318.

Kwok, R. C.-W., Ma, J., & Vogel, D. R. (2002). Effects of group support systems and content facilitation on knowledge acquisition. Journal of Management Information Systems, 19(3), 185-230.

Kwok, R. C.-W., Lee, J.-N., Huynh, M. Q., & Pi, S.-M. (2002). Role of GSS on collaborative problem-based learning: A study on knowledge externalization. European Journal of Information Systems, 11, 98-107.

Kyle, J. (1999). Mathletics - A review. Maths & Stats, 10(4), 39-41.

LaContora, J. M., & Mendonca, D. J. (2003). Communities of practice as learning and performance support systems. In Proceedings of the International Conference on Information Technology: Research and Education, New Jersey Institute of Technology, Newark, NJ, USA, 2003.
Compilation of References
Lang, T. K., & Hall, D. (2006). Academic motivation profile in business classes. Academic Exchange Quarterly, Fall 2005, 145-151.
Larsen, R. E. (1992). Relationship of learning style to the effectiveness and acceptance of interactive video instruction. Journal of Computer-Based Instruction, 19(1), 17-21.
Laurillard, D. (2002). Rethinking university teaching: A conversational framework for the effective use of educational technology. New York: RoutledgeFalmer.
Lavooy, M. J., & Newlin, M. H. (2003). Computer mediated communication: Online instruction and interactivity. Journal of Interactive Learning Research, 14(2), 157-165.
Leeper, J. D. (2004). Choosing the correct statistical test. Retrieved February 27, 2004.
Leidner, D., & Fuller, M. (1996). Improving student processing and assimilation of conceptual information: GSS-supported collaborative learning vs. individual constructive learning. In Proceedings of the 29th Hawaii International Conference on System Sciences (HICSS-29), Big Island, Hawaii, pp. 293-302.
Leidner, D.E., & Jarvenpaa, S.L. (1995). The use of information technology to enhance management school education: A theoretical view. MIS Quarterly, 19(3), 265-291.
Leinonen, P., Järvelä, S., & Lipponen, L. (2003). Individual students' interpretations of their contribution to the computer-mediated discussions. Journal of Interactive Learning Research, 14(1), 99-122.
Lévy, P. (1996). L'intelligenza collettiva. Per un'antropologia del cyberspazio. Milano: Feltrinelli.
Lewis, J.R. (1995). IBM computer usability satisfaction questionnaires: Psychometric evaluation and instructions for use. International Journal of Human-Computer Interaction, 7(1), 57-78.
Li, F. W. B., & Lau, R. W. H. (2006). On-demand e-learning content delivery over the Internet. International Journal of Distance Education Technologies, 4(1), 46-55.
Lifelong learning trends: A profile of continuing higher education (7th ed.). (2002, April). University Continuing Education Association.
Liguorio, B., & Hermas, H. (2005). Identità dialogiche nell'era digitale. I quaderni di Form@re. Trento: Erickson.
Lilley, M., & Barker, T. (2003). An evaluation of a computer-adaptive test in a UK university context. In Proceedings of the 7th Computer-Assisted Assessment Conference, 171-182. United Kingdom: Loughborough University.
Lilley, M., Barker, T., & Britton, C. (2004). The development and evaluation of a software prototype for computer-adaptive testing. Computers & Education, 43, 109-123.
LimSee2. (2003-2006). Retrieved from http://limsee2.gforge.inria.fr/
Lipnack, J., & Stamps, J. (1997). Virtual teams: Reaching across space, time, and organizations with technology. New York: John Wiley and Sons.
Litchfield, B. C., Driscoll, M. P., & Dempsey, J. V. (1990). Presentation sequence and example difficulty: Their effect on concept and rule learning in computer-based instruction. Journal of Computer-Based Instruction, 17, 35-40.
Litzinger, M. E., & Osif, B. (1993). Accommodating diverse learning styles: Designing instruction for electronic information sources. In L. Shitaro (Ed.), What is good instruction now? Library instruction for the 90s. Ann Arbor, MI: Pierian Press.
Liu, Y., & Ginther, D. (Fall 1999). Cognitive styles and distance education. Online Journal of Distance Learning Administration, II(III).
Lonergan, M. (2001). Preparing urban teachers to use technology for instruction. (ERIC Document Reproduction Service ED 460 190).
López, J. M., Millán, E., Pérez, J. L., & Triguero, F. (1998). Design and implementation of a Web-based tutoring tool for linear programming problems. In Proceedings of the workshop Intelligent Tutoring Systems on the Web at ITS'98, 4th International Conference on Intelligent Tutoring Systems. Retrieved July 22, 2007, from http://www.lcc.uma.es/~eva/investigacion/papers/its98wsh.ps
Luck, M., & Joy, M. (1999). A secure online submission system. Software: Practice and Experience, 29(8), 721-740.
Luyben, P. D., Hipworth, K., & Pappas, T. (2003). Effects of CAI on academic performance and attitudes of college students. Teaching of Psychology, 30(2), 154-158.
Macdonald, J. (2003). Assessing online collaborative learning: Process and product. Computers and Education, 40, 377-391.
Malhotra, Y., & Galletta, D. (2003, January 6-9). Role of commitment and motivation in knowledge management systems implementation: Theory, conceptualization, and measurement of antecedents of success. In Proceedings of the 36th Annual Hawaii International Conference on System Sciences, pp. 1-10. IEEE.
Mandal, C., Sinha, V. L., & Reade, C. M. P. (2004). A Web-based course management tool and web services. Electronic Journal of E-Learning, 2(1).
Markus, M. (1994). Electronic mail as the medium of managerial choice. Organization Science, 5, 502-527.
Markus, M.L. (1994). Finding a happy medium: Explaining the negative effects of electronic communication on social life at work. ACM Transactions on Information Systems, 12(2), 119-149.
Marshall, C., & Shipman, F. (1997). Spatial hypertext and the practice of information triage. In Proceedings of the ACM HT97, Southampton, UK, available online
from http://www.csdl.tamu.edu/~shipman/papers/ht97viki.pdf
Marsick, V.J., & Watkins, K.E. (1999). Facilitating learning organizations: Making learning count. Aldershot, U.K. and Brookfield, VT: Gower.
Martz, W. B., Reddy, V., & Sangermano, K. (2004). Assessing the impact of Internet testing: Lower perceived performance. In C. Howard, K. Schenk, & R. Discenza (Eds.), Distance Learning and University Effectiveness: Changing Educational Paradigms for Online Learning. Hershey, PA: Idea Group, Inc.
Martz, W. B., & Shepherd, M. M. (2007). Managing distance education for success. International Journal of Web-Based Learning and Teaching Technologies, 2(2), 50-59.
Martz, W. B., Jr., & Reddy, V. (2005). Five factors for operational success in distance education. In C. Howard (Ed.), Encyclopedia of Online Learning and Technology.
Maudet, N., & Moore, D. J. (1999). Dialogue games for computer supported collaborative argumentation. In Proceedings of the 1st Workshop on Computer Supported Collaborative Argumentation (CSCA99).
McCrea, F. K., Gay, R., & Bacon, R. (2000). Riding the big waves: A white paper on the B2B e-learning industry (pp. 1-51). San Francisco: Thomas Weisel Partners.
McGrath, J.E. (1984). Groups: Interaction and performance. Englewood Cliffs, NJ: Prentice Hall.
McLeod, P.L. (1992). An assessment of empirical literature on electronic support of group work: Results of a meta-analysis. Human-Computer Interaction, 7(3), 257-280.
McLuhan, M. (1995). Understanding media: The extensions of man. Cambridge: The MIT Press.
MCSE. http://www.microsoft.com/learning/mcp/mcse/ and http://www.sybex.com/sybexbooks.nsf/AdditionalContent/2946OnlineDemo?OpenDocument#
Mead, J., Gray, S., Hamer, J., James, R., Sorva, J., Clair, C. S., & Thomas, L. (2006). A cognitive approach to identifying measurable milestones for programming skill acquisition. SIGCSE Bulletin, 38(4), 182-194.
Meijer, R.R., & Nering, M.L. (1999). Computerized adaptive testing: Overview and introduction. Applied Psychological Measurement, 23(3), 187-194.
Merchant, S., Kreie, J., & Cronan, T. (2001). Training end users: Assessing the effectiveness of multimedia CBT. Journal of Computer Information Systems, 41(3), 20-25.
Merit Education (2003). Six higher education mega trends: What they mean for distance learning. http://www.meriteducation.com/six-mega-trends-higher-education-continued.html (accessed January 29, 2005)
Merrill, M. D. (2002). First principles of instruction. ETR&D, 50(3), 43-59.
Meyer, K. A. (2003). Face-to-face versus threaded discussions: The role of time and higher-order thinking. Journal of Asynchronous Learning Networks, 7(3), 55-65.
Michaelsen, L., Fink, D., & Knight, A. (2002). Team-based learning: A transformative use of small groups in college teaching. Sterling, VA: Stylus Publishing.
Microsoft. (n.d.). MS Producer for PowerPoint. Retrieved from http://www.microsoft.com/office/powerpoint/producer/prodinfo/
Mooij, T., & Smeets, E. (2001). Modeling and supporting ICT implementation in secondary schools. Computers & Education, 36(3), 265-281.
Moor, A., & Aakhus, M. (2006, March). Argumentation support: From technologies to tools. Communications of the ACM, 49(3), 93-98.
Mouza, C., Kaplan, D., & Espinet, I. (2000). A Web-based model for online collaboration between distance learning and campus students (IR020521). Office of Educational Research and Improvement, U.S. Department of Education.
National Center for Education Statistics. (2004). Technology in schools: Suggestions, tools, and guidelines for assessing technology in elementary and secondary education. Retrieved on October 26, 2004, at: http://nces.ed.gov/pubs2003/tech_schools/index.asp
Natu, S., & Mendonca, J. (2003). Digital asset management using a native XML database implementation. In Proceedings of the 4th Conference on Information Technology Curriculum (CITC4 03), Lafayette, Indiana, USA, October 16-18, 2003 (pp. 237-241). New York, USA: ACM Press.
NEA (2002). The promise and the reality of distance education. NEA Higher Education Research Center, 8(3).
Nguyen-Ngoc, A.V., Gillet, D., & Sire, S. (2004). Evaluation of a Web-based learning environment for hands-on experimentation. In W. Aung et al. (Eds.), Innovations 2004: World Innovations in Engineering Education and Research (pp. 303-315). New York, USA: iNEER in cooperation with Begell House Publishing.
Nickell, G. S., & Pinto, J. N. (1986). The computer attitude scale. Computers in Human Behavior, 2, 301-306.
Nielsen, J. (2001). Beyond ALT text: Making the Web easy to use for users with disabilities: 75 best practices for websites and intranets, based on usability studies with people using assistive technology. Retrieved from http://www.nngroup.com/reports/accessibility
Nielsen, J. (1993). Usability engineering. San Francisco: Morgan Kaufmann.
Moran, A. (1991). What can learning styles research learn from cognitive psychology? Educational Psychology, 11(3/4), 239-246.
Noble, D. F. (1999). Digital diploma mills. Retrieved November 28, 2002, from http://www.firstmonday.dk/issues/issue3_1/noble/index.html
Morgan, G. (2003). Faculty use of course management systems. University of Wisconsin System. Boulder, CO.
Nord, W. R., & Tucker, S. (1987). Implementing routine and radical innovation. Lexington, MA: Lexington Books.
Norris, D., Mason, J., & Lefrere, P. (2003). Transforming e-knowledge: A revolution in the sharing of knowledge. Ann Arbor, Michigan: Society for College and University Planning.
Novacek, V., Handschuh, S., Maynard, D., Laera, L., Kruk, S. R., Volkel, M., Groza, T., & Tamma, V. (2007). D2.3.8v1 Report and prototype of dynamics in the ontology lifecycle. Deliverable 2.3.8 of the Knowledge Web FP6-507482 Project, available at: http://knowledgeweb.semanticweb.org/semanticportal/sewView/frames.jsp
Noy, N., Chugh, A., Liu, W., & Musen, M. (2006). A framework for ontology evolution in collaborative environments. In Proceedings of the 5th International Semantic Web Conference, Athens, GA, USA.
Noy, N., Fergerson, R., & Musen, M. (2000). The knowledge model of Protégé-2000: Combining interoperability and flexibility. In Proceedings of the 12th International Conference on Knowledge Engineering and Knowledge Management: Methods, Models, and Tools (EKAW-00), 17-32.
and problem solving. Phye, G. D., & Andre, T. (Eds.), New York: Academic Press, 21-48.
O'Malley, C. (1995). Computer supported collaborative learning. Berlin: Springer-Verlag.
O'Reilly, T. (2005). What is Web 2.0: Design patterns and business models for the next generation of software. Retrieved on 20 December, 2007 from http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html
Offir, B., Lev, Y., & Bezalel, R. (2007). Surface and deep learning processes in distance education: Synchronous versus asynchronous systems. Computers & Education, doi:10.1016/j.compedu.2007.10.009.
Olea, J., Revuelta, J., Ximenez, M.C., & Abad, F.J. (2000). Psychometric and psychological effects of review on computerized fixed and adaptive tests. Psicologica, 21, 157-173.
Olson, G.M., & Olson, J.S. (2000). Distance matters. Human-Computer Interaction, 15, 139-178.
Noy, N.F., & Musen, M.A. (2003). The PROMPT suite: Interactive tools for ontology merging and mapping. International Journal of Human-Computer Studies, 59(6), 983-1024.
Ong, C.-S., Lai, J.-Y., & Wang, Y.-S. (2004). Factors affecting engineers' acceptance of asynchronous e-learning systems in high-tech companies. Information & Management, 41(6), 795-804.
Nunamaker, J.F., Applegate, L.M., & Konsynski, B.R. (1988). Computer-aided deliberation: Model management and group decision support. Operations Research, 36(6), 826-848.
Oratrix GRiNS. Retrieved from http://www.oratrix.com/
Nunnally, J. (1967). Psychometric theory. New York: McGraw-Hill.
Nunnally, J. (1978). Psychometric theory. New York: McGraw-Hill.
Nurmi, R. (1999). Knowledge-intensive firms. In J. W. Cortada & J. A. Woods (Eds.), The knowledge management yearbook. Boston: Butterworth-Heinemann.
O'Boyle, M. W. (1986). Hemispheric laterality as a basis for learning: What we know and don't know. In Cognitive classroom learning: Understanding, thinking,
Orlikowski, W. J. (1992). The duality of technology: Rethinking the concept of technology in organizations. Organization Science, 3(3), 398-427.
Orlikowski, W. J. (1993). CASE tools as organizational change: Investigating incremental and radical changes in systems development. MIS Quarterly, 17(3), 309-340.
Orwant, J. (1995). Heterogeneous learning in the Doppelgänger user modeling system. User Modeling and User-Adapted Interaction, 4(2), 107-130. Available online: ftp://ftp.media.mit.edu/pub/orwant/doppelganger/learning.ps.gz (last accessed June 21, 2005).
Paiva, A., & Self, J. (1995). TAGUS - A user and learner modeling workbench. User Modeling and User-Adapted Interaction, 4(3), 197-228.
Palette. Retrieved from http://palette.ercim.org/
Palfrey, J., & Gasser, U. (2008). Born digital. New York: Basic Books (to appear).
Palfrey, J., Gasser, U., & Weinberger, D. (2007). Born digital. John Palfrey's blog, Berkman Center at Harvard Law School. Retrieved on 20 December, 2007 from http://blogs.law.harvard.edu/palfrey/2007/10/28/borndigital/
Palmer, J. (2002). Web site usability, design and performance metrics. Information Systems Research, 13(2), 151-167.
PAPI (2000). Draft standard for learning technology: Public and Private Information (PAPI) for learners (PAPI Learner). IEEE P1484.2/D7, 2000-11-28. Available online: http://edutool.com/papi
Passerini, K., & Granger, M.J. (2000). Information technology-based instructional strategies. Journal of Informatics Education & Research, 2(3).
Petrucco, C. (2007). Il castello e il villaggio - Social software e LMS: integrare o abbattere? Retrieved on 20 December, 2007 from http://didaduezero.blogspot.com/
Pettenati, M.C., Cigognini, M.E., & Sorrentino, F. (2007). Methods and tools for developing personal knowledge management skills in the connectivist era. In EDEN 2007 Annual Conference, New Learning 2.0? Emerging Digital Territories, Developing Continuities, New Divides, 13-16 June 2007, Naples, Italy.
Pettenati, M. C., & Ranieri, M. (2006). Informal learning theories and tools to support knowledge management in distributed CoPs. In TEL-CoPs'06: 1st International Workshop on Building Technology Enhanced Learning Solutions for Communities of Practice, held in conjunction with the 1st European Conference on Technology Enhanced Learning, Crete, Greece, October 2, 2006.
Pettenati, M.C., & Cigognini, M.E. (2007). Social networking theories and tools to support connectivist learning activities. International Journal of Web-Based Learning and Teaching Technologies (IJWLTT), 2(3), 39-57, July-September 2007, Idea Group Inc.
Pear, J.J. (2003). Enhanced feedback using computer-aided personalized system of instruction. In W. Buskist, V. Hevern, B. K. Saville, & T. Zinn (Eds.), Essays from E-xcellence in Teaching, 3(11). Washington, DC: APA Division 2, Society for the Teaching of Psychology.
Pettenati, M.C., & Ranieri, M. (2006). Dal sé alle reti: nuove pratiche di social network per la collaborazione in rete. In G. Bonaiuti (Ed.), Learning 2.0. Trento: Erickson.
Pear, J.J., & Crone-Todd, D.E. (2002). A social constructivist approach to computer-mediated instruction. Computers & Education, 38(1-3), 221-231.
Pettenati, M.C., Cigognini, M.E., Mangione, G.R., & Guerin, E. (2008). Personal knowledge management skills for lifelong-learners 2.0. In Social Software and Developing Community Ontologies. IGI Publishing (to appear).
Pear, J.J., & Novak, M. (1996). Computer-aided personalized system of instruction: A program evaluation. Teaching of Psychology, 23(2), 119-123.
Phillips, M. C. (1998). Increasing students' interactivity in an online course. Journal of Online Education, 2(3), 31-43.
Pelayo-Alvarez, M., Albert-Ros, X., Gil-Latorre, F., & Gutierrez-Sigler, D. (2000). Feasibility analysis of a personalized training plan for learning research methodology. Medical Education, 34(2), 139-145.
Phillips, G. M., & Santoro, G. M. (1989). Teaching group discussions via computer-mediated communication. Communication Education, 39, 151–161.
Peters, T. J., & Waterman, R. H., Jr. (1982). In search of excellence. New York: Harper and Row.
Phillips, J. J. (1997). Handbook of training evaluation and measurement methods (3rd ed.). Houston, TX: Gulf Publishing.
Picciano, A. G. (2002). Beyond student perceptions: Issues of interaction, presence, and performance in an online course. Journal of Asynchronous Learning Networks, 6(1), 20-41.
Pierce, J. L., & Delbecq, A. (1977). Organization structure, individual attitudes and innovation. Academy of Management Review, 2(1), 27.
Pinar, W. (2004). What is curriculum theory? Mahwah, NJ: Lawrence Erlbaum Associates.
Pinsonneault, A., & Kraemer, K. (1990). The effect of electronic meetings on group processes and outcomes: An assessment of the empirical research. European Journal of Operational Research, 46, 143-161.
Pinsonneault, A., & Kraemer, K.L. (1989). The impact of technological support on groups: An assessment of the empirical research. Decision Support Systems, 5, 197-216.
Pisan, Y., Richards, D., Sloane, A., Koncek, H., & Mitchell, S. (2003). Submit! A Web-based system for automatic program critiquing. In Proceedings of the Fifth Australasian Computing Education Conference (ACE 2003), 59-68.
Plessers, P., & De Troyer, O. (2005). Ontology change detection using a version log. In Proceedings of the 4th International Semantic Web Conference (ISWC-05), Galway, Ireland, 6-10 November 2005.
Poirier, T.I., & O'Neil, C.K. (2000). Use of Web technology and active learning strategies in a quality assessment methods course. American Journal of Pharmaceutical Education, 64(3), 289-298.
Poole, M. S., & DeSanctis, G. (1989). Understanding the use of group decision support systems: The theory of adaptive structuration. In C. Steinfield & J. Fulk (Eds.), Theoretical Approaches to Information Technologies in Organizations. Beverly Hills, CA: Sage.
Poole, M.S., & DeSanctis, G. (1987). Group decision making and group decision support systems: A 3-year plan for the GDSS research project. University of Minnesota Working Paper, MISRC-WP-88-02.
Poole, M.S., & DeSanctis, G. (1990). Understanding the use of group decision support systems: The theory of adaptive structuration. In J. Fulk & C.W. Steinfield (Eds.), Organizations and Communication Technology, 173-193. Newbury Park, CA: Sage.
Quan-Haase, A. (2005). Trends in online learning communities. SIGGROUP Bulletin, 25(1), 1-6.
Rae, A. (1993). Self-paced learning with video for undergraduates: A multimedia Keller plan. British Journal of Educational Technology, 24(1), 43-51.
Rainer, R. K. J., & Miller, M. D. (1996). An assessment of the psychometric properties of the computer attitude scale. Computers in Human Behavior, 12(1), 93-105.
Ramsden, P. (2003). Learning to teach in higher education (2nd ed.). London: RoutledgeFalmer.
Ramsden, P., & Entwistle, N.J. (1981, November). Effects of academic departments on students' approaches to studying. British Journal of Educational Psychology, 51, 368-383.
Ranieri, M. (2006). Formazione e cyberspazio. Divari ed opportunità nel mondo della rete. Pisa: ETS.
Ranieri, M. (2005). E-learning: modelli e strategie didattiche. Trento: Erickson.
Rao, V.S., & Monk, A. (1999). The effects of individual differences and anonymity on commitment to decisions. Journal of Social Psychology, 139(4), 496-515.
Ready, K.J., Hostager, T.J., Lester, S., & Bergmann, M. (2004). Beyond the silo approach: Using group support systems in organization behavior classes to facilitate students' understanding of individual and group behavior in electronic meetings. Journal of Management Education, 28(6), 770-789.
Reek, K. A. (1989). The TRY system, or how to avoid testing student programs. In Proceedings of SIGCSE, 112-116.
Rosenberg, M. (2001). E-learning: Strategies for delivering knowledge in the digital age. Toronto: McGraw-Hill.
Rehabilitation Act. (1998). Retrieved on March 30, 2006 from http://www.usdoj.gov/crt/508/archive/oldinfo.html
Roth, C.H. (1993). Computer aids for teaching logic design. In Frontiers in Education Conference (pp. 188-191). IEEE.
Reigeluth, C.M. (1999). Instructional-Design theories and models: An overview of their current status. Hillsdale: Lawrence Erlbaum Associates.
Routman, R. (1988). Transitions: from literature to literacy. Heinemann.
DeRouin, R. E., Fritzsche, B. A., & Salas, E. (2005). E-learning in organizations. Journal of Management, 31(6), 920-940.
Rice, P.L. (1973, December). Making meetings count. Business Horizons.
Ridgway, J., McCusker, S., & Pead, D. (2004). Literature review of e-assessment. http://www.futurelab.org.uk/download/pdfs/research/lit_reviews/futurelab_review_10.pdf (last accessed 15 April 2007).
Riding, R., & Buckle, C. F. (1990). Learning styles and training performance. Sheffield: Training Agency.
Riding, R., & Cheema, I. (1991). Cognitive styles: An overview and integration. Educational Psychology, 11(3), 193-215.
Rieber, L. R. (1991). Animation, incidental learning, and continuing motivation. Journal of Educational Psychology, 83(3), 318.
Roblyer, M.D. (2006). Integrating educational technology into teaching (4th ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Roever, C. (2001). Web-based language testing. Language Learning & Technology, 5(2), 84-94.
Rogers, E. M. (1995). Diffusion of innovations (4th ed.). New York: The Free Press.
Rogers, P., & Lea, M. (2005). Social presence in distributed group environments: The role of social identity. Behavior & Information Technology, 24(2), 151-158.
Routman, R. (1991). Invitations: Changing as teachers and learners K-12. Heinemann.
Roy, M. H., & Elfner, E. (2002). Analyzing student satisfaction with instructional technology techniques. Industrial and Commercial Training, 34(7), 272-277.
Rudner, L.M. (2002). An examination of decision-theory adaptive testing procedures. Conference of the American Educational Research Association, New Orleans, LA, April 1-5.
Rudner, L.M. (2006). An on-line interactive computer adaptive testing tutorial. Retrieved on August 02, 2006, from http://edres.org/scripts/cat/catdemo.htm
Russell, T. (2001). http://www.nosignificantdifference.org/ accessed on January 26, 2008.
Russell, T. (2002). The "No Significant Difference Phenomenon" Website. Retrieved September 14, 2003, from http://teleeducation.nb.ca/nosignificantdifference
Ryder, M. (1999). Spinning webs of significance: Considering anonymous communities in activity systems. Retrieved December 8, 2007 from http://carbon.cudenver.edu/~mryder/iscrat_99.html
Saikkonen, R., Malmi, L., & Korhonen, A. (2001). Fully automatic assessment of programming exercises. In Proceedings of the 6th Annual Conference on Innovation and Technology in Computer Science Education (ITiCSE), 133-136.
Salzmann, Ch., Gillet, D., Scott, P., & Quick, K. (2008). Remote lab: Online support and awareness analysis. To
be presented in the 17th IFAC World Congress, Seoul, Korea, July 6-11, 2008.
Salzmann, Ch., Yu, C. M., El Helou, S., & Gillet, D. (2008). Live interaction in social software with application in collaborative learning. To be presented in the 3rd International Conference on Interactive Mobile and Computer Aided Learning (IMCL), Jordan, April 16-18, 2008.
Sambamurthy, V., & DeSanctis, G. (1990). An experimental evaluation of GDSS effects on group performance during stakeholder analysis. In Proceedings of the Twenty-Third Hawaii International Conference on System Sciences, 4, 79-88.
Sambamurthy, V., & Poole, S.M. (1992). The effects of variations in capabilities of GDSS designs on management in cognitive conflicts in groups. Information Systems Research, 3(3), 224-251.
Sampson, D., Karagiannidis, C., & Cardinali, F. (2002). An architecture for Web-based e-learning promoting re-usable adaptive educational e-content. Educational Technology and Society, 5(4), 27-36.
Sampson, D., Karagiannidis, C., Schenone, A., & Cardinali, F. (2002). Integrating a knowledge-on-demand personalised learning environment in e-learning and e-working settings. Educational Technology & Society Journal, 5(2).
Schultze-Mosgau, S., Zielinski, T., & Lochner, J. (2004). Web-based, virtual course units as a didactic concept for medical teaching. Medical Teacher, 26(4), 336-342.
Sclater, N., & Howie, K. (2003). User requirements of the ultimate online assessment engine. Computers & Education, 40, 285-306.
Sénac, P., Diaz, M., Léger, A., & de Saqui-Sannes, P. (1996). Modeling logical and temporal synchronization in hypermedia systems. IEEE Journal of Selected Areas on Communications, 14(1), 84-103.
Servage, L. (2005). Strategizing for workplace e-learning: Some critical considerations. The Journal of Workplace Learning, 17(5-6), 304-317.
Sewall, T. J. (1986). The measurement of learning styles: A critique of four assessment tools. ERIC Document 261 247.
Sharda, R., Romano, N.C. Jr., Lucca, J.A., Weiser, M., Scheets, G., Chung, J.-M., & Sleezer, C.M. (2004). Foundation for the study of computer-supported collaborative learning requiring immersive presence. Journal of Management of Information Systems, 20(4), 31-63.
Shaw, G.J. (1998). User satisfaction in group support systems research: A meta-analysis of experimental results. In Proceedings of the Thirty-First Annual Hawaii International Conference on System Science, 360-369.
Sauer, S., Osswald, K., Wielemans, X., & Stifter, M. (2006). Story authoring - U-Create: Creative authoring tools for edutainment applications (LNCS 4326, pp. 163-168).
Shaw, M. (1981). Group dynamics: The psychology of small group behavior (3rd ed.). New York: McGraw-Hill.
Sausner, R. (2003). Thinking about going virtual? Better bone up on the for-profits to see what you're up against. University Business, July. http://universitybusiness.com/page.cfm?p=311 (accessed January 29, 2005).
Shea, P., Pickett, A., & Pelz, W. (2004). Enhancing student satisfaction through faculty development: The importance of teaching presence. In Elements of Quality Online Education: Into the Mainstream. Needham, MA: SCOLE (ISBN 0-9677741-6-0).
Sawyer, J.E., Ferry, D.L., & Kydd, C. (2001). Learning about and from group support systems. Journal of Management Education, 25(3), 352-371. Schlechter, T. M. (1990). The relative instructional efficiency of small group computer-based training. Journal of Educational Computing Research, 6(3), 329-341.
Sheehan, T.J. (1978). Statistics for medical students: Personalizing the Keller plan. The American Statistician, 32(3), 96-99.
Shen, J., Cheng, K., Bieber, M., & Hiltz, S. R. (2004). Traditional in-class examination vs. collaborative online
examination in asynchronous learning networks: field evaluation results. In Proceedings of AMCIS 2004.
Smith, I. M. (1964). Spatial ability. San Diego, CA: Knapp.
Shipman, F.M., & McCall, R. (1994). Supporting knowledge-base evolution with incremental formalization. In Proceedings of CHI'94 Conference, 285-291. April 24-28, 1994, Boston, MA.
So, H.-J., & Brush, T. A. (2007). Student perceptions of collaborative learning, social presence and satisfaction in a blended learning environment: Relationships and critical factors. Computers & Education, doi:10.1016/ j.compedu.2007.05.009.
Shipman, F.M., & Marshall, C.C. (1994). Formality considered harmful: Issues, experiences, emerging themes, and directions. Technical Report ISTL-CSA-94-0802, Xerox Palo Alto Research Center, Palo Alto, CA, 1994.
Solomon, L., & Wiederhorn, L. (2000). Progress of Technology in the Schools: 1999 Report On 27 States. Milken Exchange on Education and Technology. Milken Family Foundation. Santa Monica, CA.
Shneiderman, B. (1998). Designing the user interface: Strategies for effective human-computer interaction. Reading, MA: Addison-Wesley.
Sonnier, I. L. (1991). Hemisphericity: A key to understanding individual differences among teachers and learners. Journal of Instructional Psychology, 18(1), 17-22.
Shneiderman, B. (2000). Universal usability. Communications of the ACM, 43(5), 84-91.
Specht, M., & Oppermann, R. (1998). ACE - Adaptive Courseware Environment. The New Review of Hypermedia and Multimedia, 4(1), 141-161.
Short, K. (1995). Research and professional resources in children's literature: Piecing a patchwork quilt. International Reading Association.
Siemens, G. (2006). Knowing knowledge. Retrieved on 20 December, 2007 from http://www.knowingknowledge.com
Siemens, G. (2004). Connectivism: A learning theory for a digital age. elearnspace.org. Retrieved on 20 December, 2007 from http://www.elearnspace.org/Articles/connectivism.htm
Silva, H., Rodrigues, R. F., Soares, L. F. G., & Muchaluat Saade, D. C. (2004). NCL 2.0: Integrating new concepts to XML modular languages. ACM DocEng.
Simonson, M. (2007). Course management systems. The Quarterly Review of Distance Education, 8(1), xii-ix.
Sims, R. R., Veres, J. G., Watson, P., & Buckner, K. E. (1986). The reliability and classification stability of the Learning Styles Inventory. Educational and Psychological Measurement, 753-760.
SMIL. (n.d.). SMIL 2.1. Retrieved from http://www.w3.org/TR/SMIL2/
Specht, M. (2000). ACE Adaptive Courseware Environment. In P. Brusilovsky, O. Stock, & C. Strapparava (Eds.), Proceedings of the International Conference on Adaptive Hypermedia and Adaptive Web-based Systems AH2000 (pp. 380-383).
Statsoft (2002). http://www.statsoftinc.com/textbook/stfacan.html. Accessed on November 30, 2002.
Steeples, C., & Goodyear, P. (1999). Enabling professional learning in distributed communities of practice: Descriptors for multimedia objects. Journal of Network and Computer Applications, 22, 133-145.
Stewart, R., Narendra, V., & Schmetzke, A. (2005). Accessibility and usability of online library databases. Library Hi Tech, 23(2), 265-286.
Straetmans, G.J.M., & Eggen, T.J.H.M. (1998). Computerized adaptive testing: What it is and how it works. Educational Technology, January-February, 82-89.
Strijbos, J.W., Martens, R.L., & Jochems, W.M.G. (2003). Designing for interaction: Six steps to designing computer-supported group-based learning. Computers and Education, 41, 1-22.
Compilation of References
Sumner, M., & Hostetler, D. (2002). A comparative study of computer conferencing and face-to-face communications in systems design. Journal of Interactive Learning Research, 13(3), 277-291.
Sure, Y., Erdmann, M., Angele, J., Staab, S., Studer, R., & Wenke, D. (2002). OntoEdit: Collaborative ontology development for the Semantic Web. In Proceedings of the 1st International Semantic Web Conference (ISWC-02).
Svetcov, D. (2000). The virtual classroom vs. the real one. Forbes, 50-52.
Swan, K. (2003). Developing social presence in online discussions. In S. Naidu (Ed.), Learning and teaching with technology: Principles and practices (pp. 147-164). London: Kogan Page.
Talbott, T., Peterson, M., Schwidder, J., & Myers, J. D. (2005). Adapting the electronic laboratory notebook for the semantic era. In Proceedings of the 2005 International Symposium on Collaborative Technologies and Systems (pp. 136-143).
Tan, B. C. Y., Raman, K. S., & Wei, K. K. (1994). An empirical study of task dimension of group support systems. IEEE Transactions on Systems, Man and Cybernetics, 24(7), 1054-1060.
Tempich, C., Pinto, H. S., Sure, Y., & Staab, S. (2005). An argumentation ontology for dIstributed, loosely-controlled and evolvinG engineering processes of ontologies (DILIGENT). In The 2nd European Semantic Web Conference, Greece (pp. 241-256).
Tennant, M. (1988). Psychology and adult learning. London: Routledge.
Tennyson, R. D., & Rothen, W. (1977). Pre-task and on-task adaptive design strategies for selecting numbers of instances in concept acquisition. Journal of Educational Psychology, 69, 586-592.
Thatcher, J., Burks, M., Swierenga, S., Waddell, C., Regan, B., Bohman, P., Henry, S. L., & Urban, M. (2002). Constructing accessible Web sites. Birmingham, UK: Glasshaus.
Thomas, E. J., & Fink, C. (1963). Effects of group size. Psychological Bulletin, 60, 371-384.
Thorndike, E. L. (1932). Fundamentals of learning. New York: Teachers College Press.
TOEFL. http://www.ets.org, http://www.toefl.org, http://toeflpractice.ets.org/
Tosh, D. (2005). A concept diagram for the Personal Learning Landscape. Retrieved on 20 December, 2007 from http://elgg.net/dtosh/weblog/398.html
Tosh, D. (2007). The future VLE (Virtual Learning Environment). Scott’s workblog, post by Scott Wilson on 13 Nov 2007. Retrieved on 20 December, 2007 from http://www.cetis.ac.uk/members/scott/blogview?entry=20071113120959
Tosh, D., & Werdmuller, B. (2004). Creation of a learning landscape: Weblogging and social networking in the context of e-portfolios. Retrieved on 20 December, 2007 from http://eradc.org/papers/Learning_landscape.pdf
Toulmin, S. (1958). The uses of argument. Cambridge: Cambridge University Press.
Traupel, L. (2004). Redefining distance to market your company. http://www.theallineed.com/ad-online-business-3/online-business-028.htm (accessed January 29, 2005).
Triantafillou, E., Georgiadou, E., & Economides, A. A. (2007a). Applying adaptive variables in computerised adaptive testing. Australasian Journal of Educational Technology (AJET), 23(3).
Triantafillou, E., Georgiadou, E., & Economides, A. A. (2007b). The role of user model in CAT: Exploring adaptive variables. Technology, Instruction, Cognition and Learning: An International, Interdisciplinary Journal of Structural Learning, 5(1), 69-89.
Triantafillou, E., Georgiadou, E., & Economides, A. A. (2008). The design and evaluation of a computerized adaptive test on mobile devices. Computers & Education, 50.
Tucker, L. F., & MacCallum, R. (1997). Unpublished manuscript available at http://quantrm2.psy.ohio-state.edu/maccallum/book/ch6.pdf. Accessed on November 30, 2002.
Turner, J. (1990). Role change. Annual Review of Sociology, 16, 87-110.
Turoff, M., & Hiltz, S. R. (1995). Software design and future of the virtual classroom. Journal of Information Technology in Teacher Education, 4(2), 197-215.
Tyran, C. K., & Shepherd, M. (2001). Collaborative technology in the classroom: A review of the GSS research and a research framework. Information Technology and Management, 2(4), 395-418.
Tzitzikas, Y., Christophides, V., Flouris, G., Kotzinos, D., Markkanen, H., Plexousakis, D., & Spyratos, N. (2007). Emergent knowledge artifacts for supporting trialogical e-learning. International Journal of Web-Based Learning and Teaching Technologies (accepted).
U.S. News and World Report (2001). Best online graduate programs, October.
Ukens, L. (2001). What smart trainers should know: The secrets of success from the world’s foremost experts. San Francisco: John Wiley & Sons, Inc.
Universal Usability Guide. Retrieved from http://www.universalusability.org/
Urdan, T. A., & Weggan, C. C. (2000). Corporate e-learning: Exploring a new frontier. New York: Hambrecht & Co.
Valacich, J. S., Dennis, A. R., & Connolly, T. (1994). Idea generation in computer-based groups: A new ending to an old story. Organizational Behavior and Human Decision Processes, 57(3), 448-467.
Valenti, S., Cucchiarelli, A., & Panti, M. (2002). Computer-based assessment systems evaluation via the ISO9126 quality model. Journal of Information Technology Education, 1(3).
Valenti, S., Cucchiarelli, A., & Panti, M. (2001). A framework for the evaluation of test management systems. Current Issues in Education, 4(6).
Vallerand, R. J. (1997). Toward a hierarchical model of intrinsic and extrinsic motivation. Advances in Experimental Social Psychology, 27, 271-360.
Valo, A., Hyvonen, E., & Komurainen, V. (2005). A tool for collaborative ontology development for the Semantic Web. In Proceedings of the International Conference on Dublin Core and Metadata Applications 2005, Madrid, Spain.
van Gelder, T. (2003). Enhancing deliberation through computer supported argument visualization. In P. Kirschner, S. Buckingham Shum, & C. Carr (Eds.), Visualizing argumentation: Software tools for collaborative and educational sense-making (pp. 97-115). London: Springer Verlag.
Van Rossum, G., Jansen, J., Mullender, K., & Bulterman, D. C. A. (1993). CMIFed: A presentation environment for portable hypermedia documents. ACM Multimedia.
Vassileva, J. (1994). A new approach to authoring of adaptive courseware for engineering domains. In Proceedings of the International Conference on Computer Assisted Learning in Science and Engineering CALISCE’94, Paris (pp. 241-248).
Vassileva, J., & Deters, R. (1998). Dynamic courseware generation on the WWW. British Journal of Educational Technology, 29(1), 5-14.
Veerman, A. L., Andriessen, J. E., & Kanselaar, G. (1998). Learning through computer-mediated collaborative argumentation. Available online: http://eduweb.fsw.ruu.nl/arja/PhD2.html
Venkatesh, V. (1999). Creation of favorable user perceptions: Exploring the role of intrinsic motivation. MIS Quarterly, 23(2), 239-260.
Verdejo, M. F. (1993). Interaction and collaboration in distance learning through computer mediated technologies. In T. T. Liao (Ed.), Advanced Educational Technology: Research Issues and Future Potential. New York: Springer.
Vidou, G., Dieng-Kuntz, R., El Ghali, A., Evangelou, C. E., Giboin, A., Jacquemart, S., & Tifous, A. (2006). Towards an ontology for knowledge management in communities of practice. In Proceedings of the 6th International Conference on Practical Aspects of Knowledge Management (PAKM06), 30 Nov.-1 Dec. 2006, Vienna, Austria.
Vogel, D. R., Davison, R. M., & Shroff, R. H. (2001). Sociocultural learning: A perspective on GSS-enabled global education. Communications of the AIS, 7(9), 1-41.
Vrandečić, D., Pinto, S., Sure, Y., & Tempich, C. (2005). The DILIGENT knowledge process. Journal of Knowledge Management, 9, 85-96.
W3C/WAI (1997). The World Wide Web Consortium (W3C) Web Accessibility Initiative (WAI).
Wagner, E. D., & Reddy, N. L. (1987). Design considerations in selecting teleconferencing for instruction. The American Journal of Distance Education, 1(3), 49-56.
Wagner, G. R., Wynne, B. E., & Mennecke, B. E. (1993). Group support systems facilities and software. In L. M. Jessup & J. S. Valacich (Eds.), Group Support Systems: New Perspectives (pp. 8-55). New York: Macmillan.
Wainer, H. (1990). Computerized adaptive testing: A primer. New Jersey: Lawrence Erlbaum Associates, Publishers.
Wainer, H., Dorans, D. J., Eignor, D., Flaugher, R., Green, B. F., Mislevy, R. J., Steinberg, L., & Thissen, D. (2000). Computerized adaptive testing: A primer (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
Waldrop, J., & Stern, S. (2003). Disability status: 2000. Census 2000 Brief. Retrieved from http://www.census.gov/prod/2003pubs/c2kbr-17.pdf
Wang, X. (2007). What factors promote sustained online discussions and collaborative learning in a Web-based course? International Journal of Web-Based Learning and Teaching Technology, 2(1), 17-38.
Wang, X., & Teles, L. (1998). Online collaboration and the role of the instructor in two university credit courses. In T. W. Chan, A. Collins, & J. Lin (Eds.), Global Education on the Net: Proceedings of the Sixth International Conference on Computers in Education, 1, 154-161. Beijing and Heidelberg: China Higher Education Press and Springer-Verlag.
Wang, X. C., Hinn, D. M., & Kanfer, A. G. (2001). Collaborative learning for learners with different learning styles. Journal of Research on Technology in Education, 34(1), 75-85.
Watson, J. M. (1986). The Keller plan, final examinations, and long-term retention. Journal for Research in Mathematics Education, 17(1), 60-68.
Watson, R. T., DeSanctis, G., & Poole, M. S. (1988). Using a GDSS to facilitate group consensus: Some intended and unintended consequences. MIS Quarterly, 12(3), 463-478.
Watt, J. H., Walther, J. B., & Nowak, K. L. (2002). Asynchronous videoconferencing: A hybrid communication prototype. In Proceedings of the 35th Hawaii International Conference on System Sciences. Los Alamitos: IEEE Press.
Waxman, H. C., Connell, M. L., & Gray, J. (2002, December). Meta-analysis: Effects of educational technology on student outcomes. North Central Regional Education Laboratory.
Weber, G., & Brusilovsky, P. (2001). ELM-ART: An adaptive versatile system for Web-based instruction. International Journal of Artificial Intelligence in Education, 12(4), Special Issue on Adaptive and Intelligent Web-based Educational Systems, 351-384.
Weber, G., & Specht, M. (1997). User modeling and adaptive navigation support in WWW-based tutoring systems. In A. Jameson, C. Paris, & C. Tasso (Eds.), Proceedings of the Sixth International Conference on User Modeling, UM97 (pp. 289-300).
Webster, J., & Hackley, P. (1997). Teaching effectiveness in technology-mediated distance learning. Academy of Management Journal, 40(6), 1282-1309.
Weir, L. (2005). Raising the awareness of online accessibility. THE Journal, 32(10).
Weiss, D. J., & Kingsbury, G. G. (1984). Application of computerized adaptive testing to educational problems. Journal of Educational Measurement, 21(4), 361-375.
Welch, R. E., & Frick, T. W. (1993). Computerized adaptive testing in instructional settings. Educational Technology Research & Development, 41(3), 47-62.
Wenger, E. (1987). Artificial intelligence and tutoring systems: Computational and cognitive approaches to the communication of knowledge (pp. 13-25).
Wenger, E. (1998). Communities of practice: Learning, meaning, and identity. Cambridge, UK: Cambridge University Press.
Wenger, E. (1999). Communities of practice: Learning, meaning and identity. Cambridge, UK: Cambridge University Press.
Wenger, E., & Snyder, W. (2000). Communities of practice: The organizational frontier. Harvard Business Review, 78, 139-145.
Wenger, E., McDermott, R., & Snyder, W. M. (2002). Cultivating communities of practice. Boston: Harvard Business School Press.
Wernet, S. P., Olliges, R. H., & Delicath, T. A. (2000). Postcourse evaluations of WebCT (Web Course Tools) classes by social work students. Research on Social Work Practice, 10(4), 487-504.
Wherry, R. J., & South, J. C. (1977). A worker motivation scale. Personnel Psychology, 30(4), 613-636.
White, B. Y., & Frederiksen, J. R. (1998). Inquiry, modeling, and metacognition: Making science accessible to all students. Cognition and Instruction, 16(1), 3-188.
WHO (2002). World Health Organization: Future trends and challenges in rehabilitation. Retrieved Feb 18, 2006 from http://www.who.int/ncd/disability/trends.htm
Wiley, D. A. (2000). Connecting learning objects to instructional design theory: A definition, a metaphor, and a taxonomy. In D. A. Wiley (Ed.), The instructional use of learning objects (online version). Retrieved July 22, 2007, from http://reusability.org/read/chapters/wiley.doc
Wilkinson, H. E., Orth, C. D., & Benfari, R. C. (1986). Motivation theories: An integrated operational model. SAM Advanced Management Journal, 51(4), 24.
Wilkinson, I. A. G., & Fung, I. Y. Y. (2002). Small-group composition and peer effects. International Journal of Educational Research, 37, 425-447.
Williams, S., & Pury, C. (2002). Student attitudes toward participation in electronic discussions. International Journal of Educational Technology, 3(1), 1-15.
Wilson, D., Varnhagen, S., Krupa, E., Kasprzak, S., Hunting, V., & Taylor, A. (2003). Instructors’ adaptation to online graduate education in health promotion: A qualitative study. Journal of Distance Education, 18(2), 1-15.
Wilson, J., & Jessup, L. M. (1995). A field experiment on GSS anonymity and group member status. In Proceedings of the 28th Annual Hawaii International Conference on System Sciences, 212-221.
Wilson, S., Olivier, B., Jeyes, S., Powell, A., & Franklin, T. (2004). A technical framework to support e-learning. Technical report, JISC. http://www.jisc.ac.uk/uploaded_documents/Technical Framework feb04.doc. Last accessed 15 April 2007.
Wilson, T. D. (2002). The nonsense of knowledge management. Information Research, 8(1), 144. Retrieved on 20 December, 2007 from http://InformationR.net/ir/8-1/paper144.html
Winn, W. D. (1982). The role of diagrammatic representation in learning sequence, identification, and classification as a function of verbal and spatial ability. Journal of Research in Science Teaching, 19(1), 79-89.
Wise, S. L., & Kingsbury, G. G. (2000). Practical issues in developing and maintaining a computerized adaptive testing program. Psicologica, 21, 135-155.
Wolfe, R. A. (1994). Organizational innovation: Review, critique and suggested research. Journal of Management Studies, 31(3), 405-431.
Wu, A. (2003). Supporting electronic discourse: Principles of design from a social constructivist perspective. Journal of Interactive Learning Research, 14(2), 167-184.
Wu, D., & Hiltz, S. R. (2004). Predicting learning from asynchronous online discussions. Journal of Asynchronous Learning Networks (JALN), 8(2), 139-152.
Wu, D., Bieber, M., Hiltz, S. R., & Han, H. (2004, January). Constructivist learning with participatory examinations. In Proceedings of the 37th Hawaii International Conference on System Sciences (HICSS-37), Big Island, CD-ROM.
XML. (2006). Extensible Markup Language (XML) 1.1. Retrieved from http://www.w3.org/TR/xml11
XPath. (1999). XML Path Language (XPath) 1.0. Retrieved from http://www.w3.org/TR/xpath
XSLT. (1999). XSL Transformations 1.0. Retrieved from http://www.w3.org/TR/xslt
Young, R., Shermis, M. D., Brutten, S. R., & Perkins, K. (1996). From conventional to computer-adaptive testing of ESL reading comprehension. System, 24(1), 23-40.
Zahorian, S. A., Lakdawala, V. K., Gonzalez, O. R., Starsman, S., & Leathrum, J. F., Jr. (2001). Question model for intelligent questioning systems in engineering education. In Proceedings of the 31st ASEE/IEEE Frontiers in Education Conference (pp. T2B7-12). IEEE.
Zeginis, D., Tzitzikas, Y., & Christophides, V. (2007). On the foundations of computing deltas between RDF models. In Proceedings of the 6th International Semantic Web Conference (ISWC-07).
Zhang, D. (2004). Virtual mentor and the lab system—Toward building an interactive, personalized, and intelligent e-learning environment. Journal of Computer Information Systems, 44(3), 35-43.
Zhang, D., & Nunamaker, J. F. (2003). Powering e-learning in the new millennium: An overview of e-learning and enabling technology. Information Systems Frontiers, 5(2), 207-218.
Zhang, D., Zhao, J. L., Zhou, L., & Nunamaker, J. F. (2004). Can e-learning replace classroom learning? Communications of the ACM, 47(5), 75-79.
Zmud, R. W. (1982). Diffusion of modern software practices: Influence of centralization and formalization. Management Science, 28(12), 1421-1431.
About the Contributors
Nikos Karacapilidis holds a professor position at the University of Patras (management information systems). His research interests lie in the areas of intelligent Web-based information systems, technology-enhanced learning, e-collaboration, knowledge management systems, group decision support systems, computer-supported argumentation, enterprise information systems and the Semantic Web. He has recently been appointed editor-in-chief of the Advances in Web-based Learning (AWBL) book series, published by IGI Global (http://www.igi-pub.com/bookseries/details.asp?id=432). More detailed information about his publications, research projects and professional activities can be found at http://www.mech.upatras.gr/~nikos/.

***

Dimitris Andreou is currently a software engineer at the Institute of Computer Science at FORTH. He holds a BSc in applied informatics from the University of Macedonia, and an MSc in computer science from the University of Crete. He is interested in object-oriented programming, graph theory, and designing effective programming interfaces.

Willem-Paul Brinkman is assistant professor at Delft University of Technology, The Netherlands, working in the man-machine interaction section. He is also an associate researcher in the School of Information Systems, Computing and Mathematics at Brunel University, UK, where he taught the module Foundations of Computing as a lecturer from 2003 to 2007. He obtained his PhD degree at the Technische Universiteit Eindhoven, The Netherlands, and also holds a Postgraduate Certificate in Teaching and Learning in Higher Education (PGCert). His research interests lie in the areas of human-computer interaction and computer-assisted learning. Dr. Brinkman is currently a member of the executive committee of the European Association of Cognitive Ergonomics (EACE), and was a member of the programme committee of the BCS HCI conference between 2005 and 2008.
Vassilis Christophides studied electrical engineering at the National Technical University of Athens (NTUA), Greece. He received his DEA in computer science from the University of Paris VI and his PhD from the Conservatoire National des Arts et Metiers (CNAM) of Paris, France. He is an associate professor at the Department of Computer Science, University of Crete, and an affiliated researcher at the Information Systems and Software Technology Laboratory of the Institute of Computer Science, Foundation for Research and Technology-Hellas (FORTH-ICS). His main research interests include Semantic Web and peer-to-peer information management systems, semistructured and XML/RDF data
models and query languages, as well as description and composition languages for e-services. He has published over 60 articles in international conferences and journals and has served on numerous conference program committees (ACM SIGMOD, VLDB, EDBT, WWW, ISWC, ICWE, ICWS, ECDL). He received the 2004 SIGMOD Test of Time Award and the Best Paper Award at the 2nd and 6th International Semantic Web Conferences, in 2003 and 2007.

Elisabetta Cigognini is a PhD student in Telematics and Information Society at the Electronics and Telecommunications Department of the University of Florence. She works in the domain of “personal knowledge management skills acquisition for lifelong learners in the Knowledge Society”. In 2003 she received her degree in communication sciences at IULM University of Milan. In 2004 she was granted the post-graduate master’s title in “e-Learning Project Management and Design” by the University of Florence; since then she has worked as an e-learning contractor, instructional designer and e-tutor in both corporate and academic contexts. Her main research interests concern personal knowledge management skills, personal learning environments, instructional design, collaborative working environments, learning and knowledge management, e-learning and e-knowledge.

Martha Cleveland-Innes is associate professor in the Center for Distance Education at Athabasca University. She is an award-winning scholar in distance and higher education with an active research and publication program. Her current research interests are the lived experience of online learners and instructors, affective outcomes and emotional presence in online learning, and leadership in open and distance higher education. Her awards include the 2005 Canadian Association for Distance Education Excellence in Research Award; she was also a finalist for the Best Paper Award at the 2006 conference of the European Distance Education Network.
She teaches Research Methods and Leadership and Project Management in distance and distributed learning.

Yogesh Kumar Dwivedi is a lecturer in the School of Business and Economics, Swansea University. He obtained his PhD on ‘Investigating consumer adoption, usage and impact of broadband: UK households’ and an MSc in information systems from Brunel University. His research interests include the adoption, usage and impact of telecommunication technologies, the Internet and e-commerce. He has co-authored more than 20 papers in academic journals and international conferences. He is a member of the Association for Information Systems (AIS) and a lifetime member of the Global Institute of Flexible Systems Management, New Delhi.

Anastasios Economides is an associate professor of computer networks at the University of Macedonia, Thessaloniki, Greece. He holds an MSc and a PhD in computer engineering from the University of Southern California, Los Angeles. His research interests include mobile networks and applications. He has published over 100 peer-reviewed papers.

Sandy El Helou is a PhD student at the École Polytechnique Fédérale de Lausanne (EPFL). She received her BE degree in computer engineering (with emphasis on communication) from the Lebanese American University in 2006. Her research interests lie in the field of computer-supported cooperative work (CSCW), in the modeling, design, development and evaluation of social software applications and personalized awareness services for sustaining collaboration and cooperation in online communities.
She is currently involved in Palette, a European project aiming at developing interoperable Web services to sustain individual as well as organizational learning in CoPs.

Christina Evangelou received her Diploma from the Mechanical Engineering and Aeronautics Department, University of Patras, Greece, in 2001, and her PhD from the same department in 2005. She is currently a post-doctoral research collaborator in the Knowledge Multimedia Laboratory of the Informatics and Telematics Institute, Centre for Research and Technology Hellas. Her research work focuses on collaborative and computer-mediated decision support systems, knowledge management, multicriteria decision aid, ontologies and semantic representation.

Giorgos Flouris is currently a researcher at the Institute of Computer Science in FORTH. He holds a BSc in mathematics from the University of Athens and an MSc and a PhD in computer science from the University of Crete. He has worked as a post-doctoral research fellow at the Istituto della Scienza e delle Tecnologie della Informazione (ISTI) of CNR in Italy, under an ERCIM “Alain Bensoussan” postdoctoral fellowship. His research interests lie in the areas of knowledge representation and reasoning, belief revision, the Semantic Web and ontology evolution. Giorgos has published more than 25 research papers in peer-reviewed workshops, conferences and journals, and has received a number of scholarships and awards, including a Best Paper Award at STAIRS-06. He is currently involved in the EU projects KP-Lab and CASPAR and has organized, or been involved in the organization of, several workshops, conferences and journal issues.

Randy Garrison is currently the director of the Teaching & Learning Centre and a full professor in the Faculty of Education at the University of Calgary. Dr. Garrison has co-authored a book titled E-Learning in the 21st Century, in which he provides a framework and core elements for online learning.
He has most recently co-authored a book titled Blended Learning in Higher Education, which is organized around the Community of Inquiry framework. Dr. Garrison won the 2004 Canadian Society for Studies in Higher Education Award for distinguished contribution to research in higher education and the 2005 Canadian Association for Distance Education Excellence in Research Award.

Denis Gillet is MER (associate professor) at the École Polytechnique Fédérale de Lausanne (EPFL). He received his PhD in information systems from the EPFL in 1995. His research interests include distributed e-learning systems, sustainable interaction systems, computer-supported collaborative learning, real-time and ubiquitous Internet services, as well as hierarchical control systems. Dr. Gillet received the 2001 iNEER (International Network for Engineering Education and Research) Recognition Award for Innovations and Accomplishments in Distance and Flexible Learning Methodologies for Engineering Education. He is involved in many national and European projects dealing with technology-enhanced learning and distributed systems, including the ProLEARN Network of Excellence (http://www.prolearn-project.org) and the Palette Integrated Project (http://palette.ercim.org).

George Gkotsis is a PhD student at the Mechanical Engineering and Aeronautics Department, University of Patras. He works as a research collaborator in the E-learning Sector of the Research Academic Computer Technology Institute, Patras, Greece. He holds an MSc from the Computer Engineering and Informatics Department, University of Patras (2005). His research interests are in collaboration, argumentation systems, knowledge management and visualization, Web engineering and hypertext.
Dianne Hall is an associate professor of MIS at Auburn University. She received her doctorate at Texas A&M University, where she also taught and consulted for several years. Her work has appeared in both academic and practitioner journals such as Decision Support Systems, Communications of the ACM, Communications of the AIS, Journal of Financial Services Marketing, Knowledge Management Research and Practice, and the Journal of Information Technology Theory and Application, as well as in several books. Her current research interests include applications of information technologies in support of knowledge-based processes, as well as multiple-perspective and value-based decision-making.

Nikos Karousos holds a Diploma (1998) and an MSc (2000) from the Department of Computer Engineering and Informatics, University of Patras, Greece, and is currently a PhD student working on the provision of hypertext services. He works in the E-Learning Sector of the Research Academic Computer Technology Institute, Patras, Greece. His research is focused on the areas of hypertext/hypermedia, web services, developer support, WWW technologies and knowledge management.

Terry Kidd is the director of instructional development and support services at the University of Texas Health Science Center School of Public Health. He has presented at international conferences on designing technology-rich learning environments, web-based instruction, and issues dealing with faculty and staff development. His research interests include designing technology-based learning environments, instructional design strategies for web-based instruction, and socio-cultural aspects of information and communication technology as they relate to social change and community development. He is in the final stages of completing a doctoral degree at Texas A&M University, focusing on educational technology.

Ellen Kinsel completed her master’s degree in distance education from Athabasca University in 2004.
She has extensive experience as an online student and in online instruction design. Ellen is currently working for Odyssey Learning Systems, providing support and training for instructors and administrators using the Nautikos learning and content management system.

Dimitris Kotzinos is an assistant professor at the Department of Geomatics and Surveying at the TEI of Serres and an affiliated researcher at the Information Systems and Software Technology Laboratory of the Institute of Computer Science, Foundation for Research and Technology-Hellas (FORTH-ICS). He holds a PhD on the application of digital map technologies to the development of Internet-based Advanced Traveler Information Systems (ATIS) from the Department of Production and Management, Technical University of Crete, Greece (2001), an MSc in transportation from the Civil Engineering Department, Ohio State University, Columbus, USA (1996), and a BSc in computer science from the Department of Computer Science, University of Crete, Greece (1994). His main research interests include the development of methodologies, algorithms and tools for web-based information systems, portals and web services, especially their application in the fields of e-learning, geographic information portals and real-time Advanced Traveler Information Systems (ATIS). He has published over 25 papers in various journals, conferences and workshops and serves as a program committee member and reviewer for various conferences and journals.
Teresa Lang is an assistant professor of accounting at Columbus State University. She earned her doctorate at Auburn University while working as a full-time instructor. She is a licensed CPA with 15 years of experience with “Big”, medium, and local accounting firms. She also holds the designations of CISA and CITP. Her research has appeared in several academic journals such as Journal of Computer Information Systems, Omega, and Academic Exchange Quarterly. Her current research interests include technology in education, privacy and control issues related to data management, and IT auditing.

John Lim is associate professor in the School of Computing at the National University of Singapore. Concurrently, he heads the Information Systems Research Lab. Dr. Lim graduated with First Class Honors in electrical engineering and an MSc in MIS from the National University of Singapore, and a PhD from the University of British Columbia. His current research interests include e-commerce, collaborative technology, negotiation support, IT and education, and IS implementation. He has published in MIS and related journals including Journal of Management Information Systems, Journal of Global Information Management, Decision Support Systems, International Journal of Human Computer Studies, Organizational Behavior and Human Decision Processes, Behaviour and Information Technology, International Journal of Web-based Learning and Teaching Technologies, Journal of Database Management, and Small Group Research.

Amit Kumar Mandal received his Master of Technology from IIT Kharagpur in 2006. His areas of interest include Internet and Web technologies, networking, neural networks, computer graphics and AI. He took up his first professional appointment at Verizon in 2006, where he continues to work.

Chittaranjan Mandal is actively involved in developing technologies for electronic learning.
He has developed tools for web-based course management, automatic evaluation and advanced content delivery. He received his PhD in 1997 from IIT Kharagpur, where he is also an associate professor with the School of Information Technology and the Department of Computer Science and Engineering. He served as a reader at Jadavpur University prior to joining IIT Kharagpur, and has been an Industrial Fellow of Kingston University since 2000. His research interests include formal modelling, high-level design and web technologies. He has over fifty refereed conference and journal publications. Ben Martz is a professor and chair of the Business Informatics Department at Northern Kentucky University. His teaching interests include groupware, team-based problem solving and entrepreneurship. He received his BBA in marketing from the College of William and Mary, and his MS and PhD in business, with an emphasis in MIS, from the University of Arizona. Ben was one of the founding members, as well as president and COO, of Ventana Corporation, a technology spin-off firm from the University of Arizona. Ben has published his research in MIS Quarterly, Decision Support Systems, The Journal of Management Information Systems and the Decision Sciences Journal of Innovative Education. Dora Nousia is a senior computer engineer educated at the University of Patras and currently director of the eLearning Sector of the Research Academic Computer Technology Institute (CTI), Greece. She has designed and managed large pilot projects on the utilization of ICT in schools in Greece and abroad, and she is a consultant on various aspects of the application of eLearning across the entire educational system. She leads projects related to innovative software development for communities, collaboration and distance learning.
Emiel Owens received his doctorate from the University of Houston with an emphasis in research methods and statistical techniques in education. Dr. Owens did further graduate work at the University of Texas School of Public Health in the fields of biostatistics and behavioral science. He has held research positions at the University of Texas Health Science Center, Baylor College of Medicine, the VA Hospital, and M.D. Anderson Hospital. Dr. Owens has also held teaching positions at Prairie View A&M, the University of Houston at Clear Lake, and the University of St. Thomas. He is currently an associate professor in the College of Education at Texas Southern University, where he teaches courses in educational research and statistics. His current research interest is in modeling longitudinal data. Maria Chiara Pettenati has been a senior researcher at the Telematics Laboratory of the Electronics and Telecommunications Department of the University of Florence since late 2004. Until 2004 she held a post-doctoral research position in the same laboratory. In 2000 she received her PhD in Telematics and Information Society from the University of Florence with a dissertation titled “Design and Development of a Web-based Environment for Teaching and Learning”. Between 1996 and 1999 she was a visiting PhD student at the EPFL (Swiss Federal Institute of Technology, Lausanne, CH). Her main research interests concern collaborative working and learning environments, e-knowledge and network-mediated knowledge, trust intermediation architectures, information interoperability architectures, and the Web of Data. Andrew Rae (BSc (Rand), MA, PhD (Cantab), MA (London)), after research in pure mathematics at Cambridge University, lectured on mathematics at Warwick, Cambridge and London universities before joining the mathematics department at Brunel University, where he has published in group theory and has set up a self-paced learning module for first-year information technology students.
Over 25 years, enrolment on this module has grown from 12 to 165 students, while supporting texts, videos and, more recently, web-based materials have been produced to a high standard. Chris Reade is head of the Department of Informatics and Operations Management in the Business School at Kingston University. He has a research background in mathematics and computing, and the foundations of computing in particular, having been a lecturer in computer science for many years. His research areas include functional programming, formal methods and modelling, programming language technology and web technologies. He has also done work in formal foundations for geometric modelling. At Kingston, he has developed interests in web technologies and applications, and he has been collaborating with colleagues at IIT Kharagpur for several years in the areas of formal modelling and web technologies for e-Learning. Yassin Rekik received his PhD in computer science from the École Polytechnique Fédérale de Lausanne (EPFL). He is a senior research associate at EPFL and a professor at the University of Applied Sciences Western Switzerland (HES-SO). His research interests include Web-based learning, mobile learning, collaborative and group-oriented learning, and online experimentation and laboratory activities. Dr. Rekik is currently involved in several national and international initiatives and projects, in particular the European Network of Excellence ProLEARN.
Chrysostomos Roupas received the Diploma degree in electrical engineering from the Aristotle University of Thessaloniki in 2003, and the MSc degree in information systems from the University of Macedonia in 2006. His research interests are in the design and development of software systems. Christophe Salzmann is a senior research associate at the École Polytechnique Fédérale de Lausanne (EPFL). He received his MS degree in computer science from the University of Florida in 1999 and his PhD degree from the EPFL in 2005. His research interests include Web technologies, real-time control, and real-time interaction over the Internet with an emphasis on QoS. Morgan Shepherd is an associate professor of information systems at the University of Colorado at Colorado Springs. Morgan spent 10 years in industry, most of that time with IBM, where his last position was as a technical network designer. He earned his PhD from the University of Arizona in 1995 and has been teaching in the I/S department at the University of Colorado at Colorado Springs since then. His primary teaching emphasis is in telecommunications at the graduate and undergraduate levels. He has also taught numerous courses on computer literacy, web design, and systems analysis and design, and has been teaching courses via distance education for several years. His primary research emphasis is on making distributed groups productive and applying this research to business as well as education. His research has appeared in the Journal of Management Information Systems, Decision Sciences Journal of Innovative Education, and Journal of Computer Information Systems. Holim Song is an assistant professor of instructional technology in the College of Education at Texas Southern University in Houston, Texas. He conducts research in the areas of multimedia integration in online learning environments.
His research interests include the design and sequencing of online interactions, instructional strategies for web-based instruction, the impact of multimedia in online education, and the design and integration of multimedia learning objects to enhance the instructional quality of face-to-face learning environments. He has a wide range of expertise in education, including secondary education and second language acquisition. He completed the MEd degree in second language education and the EdD degree in instructional technology at the University of Houston. Manolis Tzagarakis holds a PhD in computer engineering & informatics and is currently a researcher at the Research Academic Computer Technology Institute in Patras, Greece. His research interests are in the areas of hypertext and hypermedia, knowledge management systems, collaboration support systems, technology-enhanced learning, and group decision support systems. He was the program chair of the ACM Hypertext 2005 conference and the workshop chair for the ACM Hypertext 2004 conference, and has served on the program committees of several conferences and workshops. More detailed information can be found at http://tel.cti.gr/tzag/. Yannis Tzitzikas is currently an assistant professor in the Computer Science Department at the University of Crete (Greece) and an associate researcher in the Information Systems Lab at FORTH-ICS (Greece). Before joining the University of Crete and FORTH-ICS, he was a postdoctoral fellow at the University of Namur (Belgium) and an ERCIM postdoctoral fellow at ISTI-CNR (Pisa, Italy) and at the VTT Technical Research Centre of Finland. He conducted his undergraduate and graduate studies (MSc, PhD) in the Computer Science Department at the University of Crete. In parallel, he was a member of the Information Systems Lab of FORTH-ICS for about 8 years, where he conducted basic and applied research around semantic-network-based information systems within several EU-funded research projects. His research interests fall in the intersection of the following areas: information systems, information indexing and retrieval, conceptual modeling, knowledge representation and reasoning, and collaborative distributed applications. The results of his research have been published in more than 40 papers in refereed international conferences and journals, and he has received two best paper awards (at CIA’2003 and ISWC’07). Xinchun Wang is an assistant professor of linguistics at California State University, Fresno. She received her PhD in applied linguistics from Simon Fraser University in Canada. Her research interests include applied linguistics, cross-linguistic speech perception and production, computer assisted language learning (CALL), and web-based interactive learning and collaboration. She has published research articles in related journals such as System and the International Journal of Web-based Learning and Teaching Technologies. Yinping Yang has been a PhD candidate in the Department of Information Systems, School of Computing, at the National University of Singapore since August 2003. During her candidature, Yinping has published in various international journals and conferences, such as the Journal of Global Information Management, Behavior and Information Technology, the IFIP WC8 International Conferences on Decision Support Systems, the International Conference on Human-Computer Interaction, the Workshop on Information Technologies and Systems, and the Hawaii International Conference on System Sciences. Her research work has centered on the conceptualization, design, implementation and industrial acceptance of web-based negotiation support systems. She has also looked into issues related to e-collaboration and group decision support systems. Yingqin Zhong is currently a PhD candidate in the Department of Information Systems, School of Computing at the National University of Singapore.
She received her Bachelor of Computing (Honors) in June 2003 and her MSc in MIS in 2005 from the National University of Singapore. Her primary research interests include cultural issues in e-collaboration, IT and education, and the adoption of collaborative learning technology. Her papers have been published in the Information Resources Management Journal, International Journal of Web-based Learning and Teaching Technologies, Encyclopedia of Information Science and Technology, Hawaii International Conference on System Sciences, International Conference on Human-Computer Interaction, Annual Americas Conference on Information Systems, and the Information Resource Management Association international conference.
Index
A accessibility 235 accessible design 235 ACE 133 activity 5, 7, 13, 29, 39, 51, 61, 65, 82, 112, 113, 121, 124, 125, 126, 139, 145, 205, 222, 224, 229, 246, 248, 260, 265, 266, 279, 284, 305, 306, 307, 308, 309, 310, 311, 313, 315, 316, 321 actors 305 AHA 134 ambidextrous theory 58, 59 anonymity 79, 80, 82, 84, 85, 90, 97, 99, 102, 103, 197, 199, 267 argumentative collaboration 245–257 argumentative collaboration, challenges 248 argumentative collaboration framework 249 ASSYST 170 asynchronous learning networks (ALNs) 81
B black-box testing 171, 176
C change service 147 cognition 37 cognitive presence 3, 6 cognitive styles 37 collaboration 146 collaboration tools xvi, 258 collaborative learning 81, 220 collaborative learning, student attitude toward 18, 20 community of inquiry model 2 computer-mediated communications (CMC) 219 computer attitudes 40 computer mediated communicative tasks 16
computer supported collaborative work 259 connectivist learning environment 112 constructivist learning theory 219 CoPe_it! 245–257 course management systems 35–52 course management systems (CMS) xv, 233 C programming xiv–xviii, 168–185
D DCG 133 desktop conferencing 81 distance education 14, 29, 36, 51, 68, 71, 72, 73, 74, 75, 76, 77, 78 distance education industry xii, 71, 72
E e-learning xv, 13, 53, 55, 56, 61, 68, 69, 110, 127, 130, 140, 166, 167, 185, 200, 233 e-learning, and motivation 61 e-learning 2.0 115 École Polytechnique Fédérale de Lausanne xvii–xviii, 300–316 eJournal 302 eLogbook 303, 308 eLogbook, and events logging 310 eLogbook, email-based interface 311 eMersion 301
F face-to-face classroom 4, 12, 218, 219, 230 formal learning 110, 111, 124, 127, 301, 303
G group support systems (GSS) xii, 79, 99, 100, 101, 102, 105, 106
Copyright © 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
H
high-level system architecture, diagram 172
I
ILESA 134
informal e-learning 111, 112
innovation literature 57
intelligent tutoring 132, 133, 134
InterBook 133
K
knowledge artifacts 142, 143, 144, 165, 167, 249
knowledge flow 110, 112, 118, 120, 123, 124, 126, 256
KOD 134, 135
L
learner 1, 2, 3, 4, 5, 6, 7, 8, 10, 11, 12, 13, 17, 18, 20, 39, 47, 52, 53, 54, 55, 56, 77, 96, 97, 98, 103, 114, 115, 116, 118, 120, 121, 123, 124, 126, 127, 128, 131, 132, 135, 136, 143, 145, 146, 147, 162, 163, 187, 215, 219, 220, 221, 224, 234, 258, 260, 261, 262, 263, 265, 266, 267, 268, 272, 274, 282
learner profile 259, 262, 263, 265, 266, 267, 268
learners x
learning, and storytelling 319
learning, in a connectivist environment 113
learning motivation 233, 235, 237, 238, 239, 240, 241, 242
learning styles 35, 37, 39, 42, 43, 44, 45, 46, 49, 51, 52, 57, 68, 83, 103, 132, 236, 242, 263
lifelong learning 77, 110, 111, 112, 113, 116, 118, 124, 125, 126
Little Red Riding Hood example 320
M
mathematics, in an online environment 271–299
motivation 37, 48, 51, 53, 55, 56, 57, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 113, 118, 121, 122, 144, 169, 187, 192, 193, 205, 219, 225, 226, 228, 229, 230, 231, 232, 233, 235, 237, 238, 239, 240, 241, 242, 276, 280, 283, 287, 305, 315
motivation, and e-learning 61
multimedia authoring 317–333
multimedia storytelling 318
N
namespaces 144, 145, 148, 151, 152, 154, 157, 158
O
online collaboration 15, 16, 17, 18, 20, 25, 28, 29, 119, 120, 129, 259, 300
online collaboration, and personalization 258–270
online interactive learning x, 15
online learning environment xvii
online learning environment (OLE) 15, 271
P
perceptual response 38, 39
Personalised System of Instruction (PSI) 271, 272
personalization services 258–270
personalized e-learning 130
personal learning environment 116
process oriented collaboration 24
product oriented collaboration 18, 24
R RDF KB 144 Registry service 147 richness 236 role 4 role acquisition 5
S security 122, 169, 170, 174, 179, 186, 188, 189, 197, 199, 267, 313 self-test 275, 277, 280, 284, 287, 288, 289 side-effects 146, 147, 148, 149, 150, 151, 163 small groups learning 220 social network 111, 122, 124, 128, 308 social networking theories xiii, 110 social presence 2, 3, 7, 12, 14, 16, 30 storytelling 319, 320, 321, 326, 332 storytelling, multimedia 318 student interaction 15, 76, 82 student learning, automation of 169 student performance 36, 39, 40, 45, 46, 47, 48, 49, 51, 214, 288
T teaching presence 1, 2, 3, 4, 6, 7, 8, 10, 11, 12, 13, 14 team-based learning xv, 218, 219, 220, 221, 222, 223, 224, 225, 226, 228, 229, 230, 231
team-based learning, framework 225 team-based learning, Web-supported 221 trust 21, 26, 32, 73, 113, 122, 224, 225, 226, 228, 230, 231, 267, 315 TRY 170
U units of learning (UoL) 319, 321
V Versioning service versioning service videoconferencing visual information
147, 155, 156, 157 154 81, 103, 193 39
W WebBoard 220, 221, 222, 224, 231 white-box testing 171, 172